From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Peter Geoghegan <peter(at)2ndquadrant(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: sortsupport for text
Date: 2012-06-14 19:32:25
Message-ID: CA+Tgmobz4fXCSkicTf4M06xrjtrwO9f52GjGMcd=Oe7qz_2VRw@mail.gmail.com
Lists: pgsql-hackers
On Thu, Jun 14, 2012 at 3:24 PM, Peter Geoghegan <peter(at)2ndquadrant(dot)com> wrote:
> On 14 June 2012 19:28, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> I thought that doubling repeatedly would be overly aggressive in terms
>> of memory usage. Blowing the buffers out to 8kB because we hit a
>> string that's a bit over 4kB isn't so bad, but blowing them out to
>> 256MB because we hit a string that's a bit over 128MB seems a bit
>> excessive.
>
> That's pretty much what all popular dynamic array implementations do,
> from C++'s std::vector to Python's list (it's a misnomer). Having to
> allocate 256MB for a buffer to contain a string a bit over 128MB may
> seem excessive, until you later get a 250MB string. Even if doubling
> is generally excessive, which I doubt, that's beside the point, which
> is that expanding the array by some constant proportion results in
> each insertion taking amortized constant time.
Yeah, but *it doesn't matter*. If you test this on strings that are
long enough that they get pushed out to TOAST, you'll find that it
doesn't measurably improve performance, because the overhead of
detoasting so completely dominates any savings on the palloc side that
you can't pick them out of the inter-run noise. Risking eating up an
extra 100MB of memory that we don't really need in order to obtain a
performance optimization that is far too small to measure does not
make sense. The case with std::vector is not analogous; they don't
have any way of knowing what other overhead you are incurring between
insertions into the vector, so it's reasonable to suppose that the
cost of the vector insertions themselves might be material. Here we
know that it doesn't matter, so the application of Knuth's first law
of optimization is appropriate.
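
Concretely, each comparison on this path has roughly the shape below (a
simplified sketch with invented names, not the actual patch, reusing the
grow_buffer sketch above). Nearly all of the per-comparison cost is in
the possible detoast, the memcpy, and the strcoll; the occasional buffer
enlargement is noise by comparison:

/*
 * Sketch of a sortsupport comparator for text; illustrative names.
 * strcoll() wants NUL-terminated strings, hence the copies.
 */
typedef struct
{
    char       *buf1;
    char       *buf2;
    Size        buflen1;
    Size        buflen2;
} TextSortBuffers;

static int
text_cmp_sketch(Datum x, Datum y, SortSupport ssup)
{
    TextSortBuffers *tsb = (TextSortBuffers *) ssup->ssup_extra;
    text       *tx = DatumGetTextPP(x); /* may detoast: the dominant cost */
    text       *ty = DatumGetTextPP(y);
    Size        lenx = VARSIZE_ANY_EXHDR(tx);
    Size        leny = VARSIZE_ANY_EXHDR(ty);
    int         result;

    grow_buffer(&tsb->buf1, &tsb->buflen1, lenx + 1);
    grow_buffer(&tsb->buf2, &tsb->buflen2, leny + 1);

    memcpy(tsb->buf1, VARDATA_ANY(tx), lenx);
    tsb->buf1[lenx] = '\0';
    memcpy(tsb->buf2, VARDATA_ANY(ty), leny);
    tsb->buf2[leny] = '\0';

    result = strcoll(tsb->buf1, tsb->buf2);

    /* free any detoasted copies */
    if ((Pointer) tx != DatumGetPointer(x))
        pfree(tx);
    if ((Pointer) ty != DatumGetPointer(y))
        pfree(ty);

    return result;
}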
>> Suppose we expand the buffer a
>> kilobyte at a time from an initial size of 1kB all the way out to
>> 256MB. That's 256,000 palloc calls, so we must be sorting at least
>> 256,000 datums, at least 128,000 of which are longer than 128MB. I
>> think the cost of calling memcpy() and strcoll() repeatedly on all
>> those long datums - not to mention repeatedly detoasting them - is
>> going to bludgeon the palloc overhead into complete insignificance.
>
> I fail to understand how this sortsupport buffer fundamentally differs
> from a generic dynamic array abstraction built to contain chars. That
> being the case, I see no reason not to just do what everyone else does
> when expanding dynamic arrays, and no reason why we shouldn't make
> essentially the same time-space trade-off here as others do elsewhere.
>
> Another concern is that it seems fairly pointless to have two buffers.
> Wouldn't it be more sensible to have a single buffer that was
> partitioned to make two logical, equally-sized buffers, given that in
> general each buffer is expected to grow at exactly the same rate?
Sure, but it would be making the code more complicated in return for
no measurable performance benefit. We generally avoid that.
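
For what it's worth, if the doubling-versus-increment question ever did
become measurable, there's an obvious middle ground: double while the
buffer is small, and past some cap grow only to what's needed, rounded
up to a coarse block. A sketch, with GROW_CAP and choose_buflen invented
here rather than taken from either version of the patch:

/*
 * Illustrative compromise growth policy: geometric growth for small
 * buffers (few repallocs, little absolute waste), near-exact growth
 * for large ones (bounded waste).
 */
#define GROW_CAP    (8 * 1024 * 1024)   /* arbitrary: stop doubling at 8MB */

static Size
choose_buflen(Size current, Size needed)
{
    if (needed <= current)
        return current;
    if (needed < GROW_CAP)
    {
        Size        newlen = Max(current, 1024);

        while (newlen < needed)
            newlen *= 2;
        return newlen;
    }
    /* past the cap, round up to the next whole megabyte */
    return TYPEALIGN(1024 * 1024, needed);
}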
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company