From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Edmund Dengler <edmundd(at)eSentire(dot)com>, pgsql-general(at)postgresql(dot)org, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [HACKERS] INHERITS and planning
Date: 2005-06-15 21:29:26
Message-ID: 1118870966.3645.104.camel@localhost.localdomain
Lists: pgsql-general, pgsql-hackers
On Fri, 2005-06-10 at 02:10 -0400, Tom Lane wrote:
> What I see in the profile is
>
>   %   cumulative   self                self     total
>  time   seconds   seconds     calls   s/call   s/call  name
>  42.04     15.58     15.58      9214    0.00     0.00  list_nth_cell
>  20.29     23.10      7.52  34524302    0.00     0.00  SHMQueueNext
>   8.34     26.19      3.09     29939    0.00     0.00  LockCountMyLocks
>   5.64     28.28      2.09   2960617    0.00     0.00  AllocSetAlloc
>   2.37     29.16      0.88      2354    0.00     0.00  AllocSetCheck
>   2.29     30.01      0.85    302960    0.00     0.00  hash_search
>   2.13     30.80      0.79   2902873    0.00     0.00  MemoryContextAlloc
Looks bad... but how does it look at 1000 inherited relations? My
feeling is that we should not be optimizing for anything above 1000
relations; that many partitions is very unwieldy.
If you really do need that many, you can go to the trouble of grouping
them into two levels of nesting: a root table, multiple month tables,
and then each month table with multiple day tables, etc. (see the
sketch below).
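For example, a minimal sketch of the two-level layout -- the table
names, columns, and date ranges here are illustrative only:

    -- Root table: applications query this one.
    CREATE TABLE measurement (
        logtime  timestamptz NOT NULL,
        value    integer
    );

    -- One child table per month, inheriting from the root...
    CREATE TABLE measurement_2005_06 (
        CHECK (logtime >= '2005-06-01' AND logtime < '2005-07-01')
    ) INHERITS (measurement);

    -- ...and one grandchild table per day, inheriting from its month table.
    CREATE TABLE measurement_2005_06_15 (
        CHECK (logtime >= '2005-06-15' AND logtime < '2005-06-16')
    ) INHERITS (measurement_2005_06);

A query that can be pointed at a single month table then plans against
roughly 31 children instead of the full set.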
> What I'm more interested in at the moment are the next two entries,
> SHMQueueNext and LockCountMyLocks --- it turns out that almost all the
> SHMQueueNext calls are coming from LockCountMyLocks, which is invoked
> during LockAcquire. This is another O(N^2) loop, and it's really a
> whole lot nastier than the rangetable ones, because it executes with the
> LockMgrLock held.
ISTM that having LockAcquire as a stateless call isn't much use here.
Surely caching the number of locks held, so that repeated calls can
avoid the recount entirely, is the way to go... (a rough sketch of the
idea follows below).
> I spent a little time trying to see if we could avoid doing
> LockCountMyLocks altogether, but it didn't look very promising.
Or is that what you meant?
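Something like this, purely as an illustration of the caching idea --
none of these structures or function names exist in the backend:

    /*
     * Illustrative only: maintain the count incrementally instead of
     * rescanning the proclock list inside every LockAcquire.
     */
    typedef struct MyLockStateSketch
    {
        int  nLocksHeld;        /* cached count of locks this backend holds */
    } MyLockStateSketch;

    static MyLockStateSketch myLockState = { 0 };

    static void
    lock_granted(void)
    {
        myLockState.nLocksHeld++;       /* O(1) on each grant... */
    }

    static void
    lock_released(void)
    {
        myLockState.nLocksHeld--;       /* ...and on each release */
    }

    static int
    count_my_locks(void)
    {
        /* what LockCountMyLocks would become: no SHMQueueNext walk */
        return myLockState.nLocksHeld;
    }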
> What
> I am thinking though is that we could implement LockCountMyLocks as
> either a scan through all the proclocks attached to the target proc
> (the current way) or as a scan through all the proclocks attached to
> the target lock (proclocks are threaded both ways). There is no hard
> upper bound on the number of locks a proc holds, whereas there is a
> bound of MaxBackends on the number of procs linked to a lock. (Well,
> theoretically it could be 2*MaxBackends considering the possibility
> of session locks, but that could only happen if all the backends are
> trying to vacuum the same relation.) So it seems like it might be a win
> to scan over the per-lock list instead. But I'm very unsure about
> what the *average* case is, instead of the worst case.
Changing that behaviour would affect all the other call sites, so I'm
not sure I'd want an optimization of this rare case to have such a
far-reaching effect.
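For reference, the two scan directions you describe look roughly like
this -- simplified, illustrative structures only, not the actual
PROCLOCK/SHM_QUEUE layout:

    struct ProcSketch;                      /* stands in for a backend's proc */

    typedef struct ProcLockSketch
    {
        struct ProcSketch     *proc;        /* backend holding the lock */
        struct ProcLockSketch *procNext;    /* chain: other locks of this proc */
        struct ProcLockSketch *lockNext;    /* chain: other holders of this lock */
    } ProcLockSketch;

    typedef struct LockSketch
    {
        ProcLockSketch *holders;            /* head of the per-lock chain */
    } LockSketch;

    /*
     * Count one proc's holds by walking the per-lock chain, which is
     * bounded by MaxBackends entries, instead of the per-proc chain,
     * which has no hard upper bound.
     */
    static int
    count_via_lock_chain(const LockSketch *lock, const struct ProcSketch *proc)
    {
        int n = 0;
        const ProcLockSketch *pl;

        for (pl = lock->holders; pl != NULL; pl = pl->lockNext)
            if (pl->proc == proc)
                n++;
        return n;
    }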
Best Regards, Simon Riggs