From: Dustin Sallings <dustin(at)spy(dot)net>
To: "Anjan Dave" <adave(at)vantage(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: can't handle large number of INSERT/UPDATEs
Date: 2004-10-26 02:29:19
Message-ID: D6DB28FD-26F6-11D9-AD86-000A957659CC@spy.net
Lists: pgsql-performance
On Oct 25, 2004, at 13:53, Anjan Dave wrote:
> I am dealing with an app here that uses pg to handle a few thousand
> concurrent web users. It seems that under heavy load, the INSERT and
> UPDATE statements to one or two specific tables keep queuing up, to
> the count of 150+ (one table has about 432K rows, other has about
> 2.6Million rows), resulting in ‘wait’s for other queries, and then
> everything piles up, with the load average shooting up to 10+.
It depends on your requirements and all that, but I had a similar
issue in one of my applications and made the problem disappear entirely
by serializing the transactions into a separate thread (actually, a
thread pool) responsible for performing them. This reduced the load on
both the application server and the DB server.
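
To make that concrete, here's a minimal sketch of the idea in Java.
The class name, the pool size of one, and the connection-per-write
setup are all just illustrative assumptions, not the actual code from
my app:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: funnel every write to the hot tables through
// one small, fixed-size pool instead of letting each web thread run
// its own transaction against them.
public class WriteSerializer {
    // A single thread fully serializes the writes; raise the pool
    // size if some write concurrency is acceptable.
    private final ExecutorService pool = Executors.newFixedThreadPool(1);
    private final String url;

    public WriteSerializer(String jdbcUrl) {
        this.url = jdbcUrl;
    }

    // Web threads call this and get a Future back instead of
    // blocking on the database themselves.
    public Future<?> submitWrite(String sql, Object... args) {
        return pool.submit(() -> {
            try (Connection c = DriverManager.getConnection(url);
                 PreparedStatement ps = c.prepareStatement(sql)) {
                for (int i = 0; i < args.length; i++) {
                    ps.setObject(i + 1, args[i]);
                }
                ps.executeUpdate();
            }
            return null;
        });
    }

    // e.g. serializer.submitWrite(
    //     "UPDATE accts SET balance = balance + ? WHERE id = ?",
    //     amount, id);
}

The point is that the one or two hot tables only ever see a bounded
number of concurrent writers; in a real deployment you'd hand it a
pooled DataSource instead of going through DriverManager every time.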
Not a direct answer to your question, but I've found that when someone
has trouble scaling a database application, much of the performance win
often comes from being a little smarter about how and when the database
is accessed.
--
SPY My girlfriend asked me which one I like better.
pub 1024/3CAE01D5 1994/11/03 Dustin Sallings <dustin(at)spy(dot)net>
| Key fingerprint = 87 02 57 08 02 D0 DA D6 C8 0F 3E 65 51 98 D8 BE
L_______________________ I hope the answer won't upset her. ____________