From: Bill Moran <wmoran(at)potentialtech(dot)com>
To: William Yu <wyu(at)talisys(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Maximum Possible Insert Performance?
Date: 2003-11-24 13:38:31
Message-ID: 3FC209D7.5080303@potentialtech.com
Lists: pgsql-performance
William Yu wrote:
> My situation is this. We have a semi-production server where we
> pre-process data and then upload the finished data to our production
> servers. We need the fastest possible write performance. Having the DB
> go corrupt due to power loss/OS crash is acceptable because we can
> always restore from last night and re-run everything that was done since
> then.
>
> I already have fsync off. Short of buying more hardware -- which I will
> probably do anyways once I figure out whether I need more CPU, memory or
> disk -- what else can I do to max out the speed? Operation mix is about
> 50% select, 40% insert, 10% update.
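For the archives: the durability-for-speed trade being described usually comes down to a few postgresql.conf settings. A sketch with 7.4-era parameter names, not a tuned config -- and only sane on a box where losing the DB is acceptable, as above:

```
fsync = false              # don't flush WAL to disk on commit
checkpoint_segments = 16   # more WAL segments between checkpoints = less frequent checkpoint I/O
wal_buffers = 64           # more WAL buffering in shared memory before writes
```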
In line with what Tom Lane said, you may want to look at the various
in-memory databases available (I'm not familiar with any particular one
to recommend, though). If you can fit the whole database in RAM, that
would work great; if not, you may be able to split the DB up and put
the most heavily used tables in the memory database.
I have also seen a number of tutorials on how to put a database on a
RAM disk. That helps, but it's still not as fast as a database server
that's designed to keep all its data in RAM.
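For what it's worth, the RAM-disk approach those tutorials describe is usually just a tmpfs mount with a cluster initdb'd onto it. A sketch assuming Linux, run as root, with size and paths you'd adjust for your system:

```shell
# Create a RAM-backed filesystem and init a throwaway cluster on it.
# Everything here vanishes on reboot -- restore from backup as you describe.
mount -t tmpfs -o size=512M tmpfs /mnt/pgram
chown postgres:postgres /mnt/pgram
su - postgres -c "initdb -D /mnt/pgram/data"
su - postgres -c "pg_ctl -D /mnt/pgram/data start"
```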
--
Bill Moran
Potential Technologies
http://www.potentialtech.com