From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: User concurrency thresholding: where do I look?
Date: 2007-07-19 17:45:15
Message-ID: 14009.1184867115@sss.pgh.pa.us
Lists: pgsql-performance
Alvaro Herrera <alvherre(at)commandprompt(dot)com> writes:
> Tom Lane wrote:
>> AFAIK you'd get hard failures, not slowdowns, if you ran out of lock
>> space entirely;
> Well, if there still is shared memory available, the lock hash can
> continue to grow, but it would slow down according to this comment in
> ShmemInitHash:
Right, but there's not an enormous amount of headroom in shared memory
beyond the intended size of the hash tables. I'd think that you'd start
seeing hard failures not very far beyond the point at which performance
impacts became visible. Of course this is all speculation; I quite
agree with varying the table-size parameters to see if it makes a
difference.
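
For anyone repeating the experiment: the lock table's intended size is
derived from max_locks_per_transaction * (max_connections +
max_prepared_transactions), so varying the table-size parameters means
touching those settings. A hedged starting point (illustrative values
only, not a recommendation) might look like:

    # postgresql.conf -- each lock slot consumes shared memory, and a
    # server restart is required for these settings to take effect.
    max_locks_per_transaction = 256    # default is 64
    max_connections = 200
    max_prepared_transactions = 0      # also multiplies the lock table size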
Josh, what sort of workload is being tested here --- read-mostly,
write-mostly, a mixture?
> However I was talking to Josh Drake yesterday and he told me that
> pg_dump was spending some significant amount of time in LOCK TABLE when
> there are lots of tables (say 300k).
I wouldn't be too surprised if there are some O(N^2) effects when a single
transaction holds that many locks, because of the linked-list proclock
data structures. This would not be relevant to Josh's case though.
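
(pg_dump takes an ACCESS SHARE lock on every table it dumps, so a
300k-table dump means 300k locks held by one transaction. The toy C
sketch below is my illustration, not the actual PROCLOCK code; it just
shows why a per-acquisition walk of a linked list of already-held locks
turns into quadratic total work.)

    /*
     * Toy illustration: if each new lock acquisition must walk a
     * per-transaction linked list of the locks already held, acquiring
     * N locks costs 1 + 2 + ... + N = N(N+1)/2 list steps overall,
     * i.e. O(N^2) -- noticeable once N reaches the hundreds of
     * thousands, as in the 300k-table pg_dump case.
     */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct LockNode
    {
        int lock_id;
        struct LockNode *next;
    } LockNode;

    static LockNode *held_locks = NULL;

    /* Append one lock, scanning the whole list first (the O(N) step). */
    static void
    acquire_lock(int lock_id)
    {
        LockNode **tail = &held_locks;

        while (*tail != NULL)
            tail = &(*tail)->next;   /* walk all previously held locks */

        LockNode *n = malloc(sizeof(LockNode));
        n->lock_id = lock_id;
        n->next = NULL;
        *tail = n;
    }

    int
    main(void)
    {
        /* 300k tables, as in the pg_dump case discussed above. */
        for (int i = 0; i < 300000; i++)
            acquire_lock(i);
        puts("done (total list steps grow quadratically in N)");
        return 0;
    }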
regards, tom lane