From: Marco Colombo <pgsql(at)esiway(dot)net>
To: Alex Stapleton <alexs(at)advfn(dot)com>
Cc: pgsql-general(at)postgresql(dot)org, vinita bansal <sagivini(at)hotmail(dot)com>, Scott Marlowe <smarlowe(at)g2switchworks(dot)com>
Subject: Re: RAMFS with Postgres
Date: 2005-07-26 14:08:31
Message-ID: 1122386912.3470.62.camel@Frodo.esi
Lists: pgsql-general
On Fri, 2005-07-22 at 15:56 +0100, Alex Stapleton wrote:
> On 21 Jul 2005, at 17:02, Scott Marlowe wrote:
>
> > On Thu, 2005-07-21 at 02:43, vinita bansal wrote:
> >
> >> Hi,
> >>
> >> My application is database intensive. I am using 4 processes since I
> >> have 4 processors on my box. There are times when all 4 processes
> >> write to the database at the same time, and times when all of them
> >> read at once. The database is definitely not read-only. Out of the
> >> entire database, there are a few tables which are accessed most of
> >> the time, and they are the ones which seem to be the bottleneck. I
> >> am trying to get as much performance improvement as possible by
> >> putting some of these tables in RAM, so that they don't have to be
> >> written to/read from hard disk as they will be directly available
> >> in RAM. Here's where Slony comes into the picture, since we'll have
> >> to maintain a copy of the database somewhere before running our
> >> application (everything in RAM will be lost if there's a power
> >> failure or anything else goes wrong).
> >>
> >> My concern is how good Slony is. How much time does it take to
> >> replicate the database? If the time taken to replicate is much more
> >> than the performance improvement we are getting by putting tables
> >> in memory, then there's no point in going for such a solution. Do I
> >> have an alternative?
> >>
> >
> > My feeling is that you may be going about this the wrong way. Most
> > likely the issue so far has been I/O contention. Have you tested your
> > application using a fast, battery-backed caching RAID controller on
> > top of, say, a 10-disk RAID 1+0 array? Or even RAID 0 with another
> > machine as the Slony slave?
>
> Isn't that slightly cost prohibitive? Even basic memory has
> enormously fast access/throughput these days, and for a fraction of
> the price.
We are comparing a RAM + network solution vs. a RAM + disk solution. RAM
alone is not enough, since the OP wants 100% safety of data. Then you
need a network solution, and it has to be synchronous if you want 100%
safety. No network is going to beat a directly attached disk array on a
performance/price basis.
> > Slony, by the way, is quite capable, but using a RAMFS master and a
> > disk-drive-based slave is kind of a recipe for disaster in ANY
> > replication system under heavy load, since it is quite possible that
> > the master could get very far ahead of the slave, given that Slony
> > replication is asynchronous. At some point you could have more data
> > waiting to be replicated than your RAMFS can hold, and have some
> > problems.
> >
> > If a built-in RAID controller with battery-backed caching isn't
> > enough, you might want to look at a large, external storage array.
> > Many hosting centers offer these as a standard part of their package,
> > so rather than buying one, you might want to just rent one, so to
> > speak.
>
> Again with the *money*. RAM = Cheap. Disks = Expensive. At least when
> you look at speed/$. You're right about replicating to disk and to RAM
> though; that is pretty likely to result in horrible problems if you
> don't keep load down. For some workloads, though, I can see it
> working. As long as the total amount of data doesn't get larger than
> your RAMFS, it could probably survive.
Ever heard of the page cache? If your data fits in a RAMFS, it would fit
in the OS cache just the same, so for reads the effect is exactly the
same. If writes are the problem, just disable fsync. That is still safer
than a RAMFS, even if not 100% safe.
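For reference, that is a one-line change in postgresql.conf (a sketch of
the trade-off, not a recommendation: with fsync off, an OS crash or power
failure can leave the cluster corrupted, so you give up durability while
still keeping the data on a real disk most of the time):

    # postgresql.conf -- trade durability for write speed
    fsync = off    # commits no longer wait for the WAL to reach the platter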
Face it: if you want 100% safety (losing nothing in case of a power
failure), you need to synchronously write to _some_ disk platter. Where
that disk is attached is a matter of convenience. _If_ disk write
throughput _is_ the problem, you have to fix it. Be it on the local host
or on a remote replica server, the disk system has to be fast enough.
Consider:
1) PostgreSQL -> RAM -> disk
2) PostgreSQL -> RAM -> network ----------------> network -> RAM -> disk
No matter whether you choose 1) or 2), the "disk" part has to be fast enough.
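If you want a rough number for how fast "fast enough" is on your
hardware, a quick sketch like the following (the path and write count are
placeholders; point it at the disk that would hold the data) times
synchronous 8 kB writes, which is roughly what a single small commit
costs:

    import os, time

    # Rough sketch: time synchronous 8 kB writes (the default PostgreSQL
    # page size) to estimate how many small commits per second the disk
    # can sustain. PATH is a placeholder -- put it on the data disk.
    PATH = "/tmp/syncwrite_test.tmp"
    BLOCK = b"\0" * 8192
    N = 1000

    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
    start = time.time()
    for _ in range(N):
        os.write(fd, BLOCK)   # O_SYNC: each write waits for stable storage
    elapsed = time.time() - start
    os.close(fd)
    os.unlink(PATH)

    print("%d synchronous 8 kB writes in %.2f s (~%.0f writes/s)"
          % (N, elapsed, N / elapsed))

On a bare 7200 rpm disk you will typically see only a few hundred such
writes per second; a battery-backed write cache raises that figure
dramatically, which is exactly Scott's point about the RAID controller.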
.TM.
--
Marco Colombo
Technical Manager
ESI s.r.l.
Colombo(at)ESI(dot)it