From: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: drop/truncate table sucks for large values of shared buffers
Date: 2015-07-02 13:58:07
Message-ID: 5595436F.6030009@iki.fi
Lists: pgsql-hackers
On 07/02/2015 04:33 PM, Simon Riggs wrote:
> On 2 July 2015 at 14:08, Heikki Linnakangas <hlinnaka(at)iki(dot)fi> wrote:
>
>> On 06/27/2015 07:45 AM, Amit Kapila wrote:
>>
>>> Sometime back on one of the PostgreSQL blogs [1], there was a
>>> discussion about the performance of drop/truncate table for
>>> large values of shared_buffers, and it seems that as the value
>>> of shared_buffers increases, the performance of drop/truncate
>>> table becomes worse. I think those are not often-used operations,
>>> so it never became a priority to look into improving them if possible.
>>>
>>> I have looked into it and found that the main reason for such
>>> behaviour is that those operations traverse the whole of
>>> shared_buffers, which seems unnecessary, especially for
>>> not-so-big tables. We can optimize that path by looking up, in
>>> the buffer mapping table, the pages that exist in
>>> shared_buffers, for the case when the table size is less than
>>> some threshold (say 25%) of shared_buffers.
>>>
>>> The attached patch implements the above idea, and I found that
>>> performance doesn't dip much with the patch even with large values
>>> of shared_buffers. I have also attached the script and sql file used
>>> to gather the performance data.
>>>
>>
>> I'm marking this as "returned with feedback" in the commitfest. In
>> addition to the issues raised so far, ISTM that the patch makes dropping a
>> very large table with small shared_buffers slower
>> (DropForkSpecificBuffers() is O(n) where n is the size of the relation,
>> while the current method is O(shared_buffers)).
>
> There are no unresolved issues with the approach, nor is it true that it
> is slower. If you think there are some, you should say what they are, not
> act high-handedly to reject a patch without reason.
Oh, I missed the NBuffers / 4 threshold.
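For anyone who hasn't read the patch: the fast path only kicks in when the
relation is smaller than NBuffers / 4, and then probes the buffer mapping
table once per block instead of sweeping all of shared_buffers. A minimal
sketch of the idea, not the patch's actual code (locking and error handling
elided; apart from the existing bufmgr/buf_internals functions, the variable
names are made up):

    if (nblocks < NBuffers / 4)
    {
        BlockNumber blk;

        /*
         * Small relation: one buffer mapping table probe per block,
         * O(nblocks), instead of the usual O(NBuffers) sweep.
         */
        for (blk = 0; blk < nblocks; blk++)
        {
            BufferTag   tag;
            uint32      hash;
            int         buf_id;

            INIT_BUFFERTAG(tag, rnode.node, forkNum, blk);
            hash = BufTableHashCode(&tag);
            /* the real code must hold the mapping partition lock here */
            buf_id = BufTableLookup(&tag, hash);
            if (buf_id >= 0)
                InvalidateBuffer(GetBufferDescriptor(buf_id));
        }
    }
    else
    {
        /* large relation: fall back to the existing full scan */
        DropRelFileNodeBuffers(rnode, forkNum, 0);
    }

With the threshold, the probe loop does at most NBuffers / 4 lookups, so the
O(n)-in-relation-size objection above doesn't apply.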
(The total_blocks calculation is prone to overflowing, btw, if you have
a table close to 32 TB in size. But that's trivial to fix.)
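To spell that out: BlockNumber is a uint32 and blocks are 8 kB by default,
so a relation approaching 32 TB has close to 2^32 blocks, and summing the
fork sizes into a 32-bit variable can wrap around. Widening the accumulator
is the whole fix; a sketch, assuming the patch sums smgrnblocks() across
the relation's forks (reln being its SMgrRelation):

    uint64      total_blocks = 0;   /* not BlockNumber: wraps at 2^32 */
    ForkNumber  fork;

    for (fork = 0; fork <= MAX_FORKNUM; fork++)
        total_blocks += (uint64) smgrnblocks(reln, fork);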
>> I concur that we should explore using a radix tree or something else that
>> would naturally allow you to find all buffers for relation/database X
>> quickly.
>
> I doubt that it can be managed efficiently while retaining optimal memory
> management. If it can, it's going to be very low priority (or it should be).
>
> This approach works, so let's do it now. If someone comes up with a better
> way later, great.
*shrug*. I don't think this is ready to be committed. I can't stop you
from working on this, but as far as this commitfest is concerned, case
closed.
- Heikki