From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Jeff Davis <pgsql(at)j-davis(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Sorting Improvements for 8.4
Date: 2007-12-03 11:51:49
Message-ID: 1196682709.4228.32.camel@ebony.site
Lists: pgsql-hackers
On Fri, 2007-11-30 at 12:07 -0800, Jeff Davis wrote:
> On Tue, 2007-11-27 at 18:03 +0000, Simon Riggs wrote:
> > 5. DYNAMIC RUN HANDLING (in Final Merge)
> >
> > Another way of addressing a) is to simply make better use of memory
> > itself. Let's look at that in more detail:
> >
> > The number of runs that can be merged at once is currently fixed,
> > based upon available memory. This rests on the underlying assumption
> > that all runs will be concurrently active during the final merge,
> > which may not always be true.
> >
> > If we have random data then almost all runs will overlap with all other
> > runs, i.e. the min and max values are sufficiently wide that the runs do
> > all overlap. In many cases, data arrives in somewhat sorted order, e.g.
> > financial data is fairly regular with some late payers but not many, and
> > those trail off with a fairly tight decay. In the somewhat sorted case
> > we find that the actual overlap is less than total, so there are many
> > later runs that don't overlap the earlier ones. In the best case we
> > would find that runs 1 and 2 overlap, runs 2 and 3 overlap, then runs
> > 3 and 4 overlap.
>
> I have previously spoken with Len Shapiro, a professor at Portland
> State University, about sorting.
>
> He suggests that PostgreSQL should implement forecasting, which is
> similar to what you're describing. Forecasting does not require that
> entire runs be disjoint; it works by tracking the maximum value in the
> last block read from each run. That tells you which run you will need
> more blocks from soonest.
>
> I'm still looking into the problem to understand it better, but the
> algorithm is in Knuth Vol 3.
>
> I can look at it in more detail, but have you already looked into this
> idea? Is there a reason we don't do this currently?
Interesting, I hadn't read that part.
Knuth's Algorithm F covers how to do a P-way merge using 2P + 2 buffers.
My ideas cover how to do a P-way merge when you don't have enough memory
for that many buffers.
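
To make the forecasting idea concrete, here's a minimal sketch in C.
This is illustration only, not Knuth's code and not ours: it assumes
fixed-size blocks, single integer keys, and invented names like
next_run_to_read. The point is just that the run whose current block
has the smallest maximum key is the one whose buffer empties first.

#include <stdio.h>

#define NRUNS 3

/* Per-run state for forecasting during a P-way merge. */
typedef struct
{
    int last_block_max;     /* max key in the block most recently read */
    int exhausted;          /* no more blocks left in this run */
} RunState;

/*
 * Forecasting: the run whose current block has the smallest
 * maximum key is the one whose buffer will empty first, so it
 * is the run we will need more blocks from soonest.
 */
static int
next_run_to_read(const RunState runs[], int nruns)
{
    int best = -1;

    for (int i = 0; i < nruns; i++)
    {
        if (runs[i].exhausted)
            continue;
        if (best < 0 || runs[i].last_block_max < runs[best].last_block_max)
            best = i;
    }
    return best;            /* -1 once every run is exhausted */
}

int
main(void)
{
    /* Max keys of the blocks currently buffered from each of 3 runs. */
    RunState runs[NRUNS] = {{40, 0}, {17, 0}, {90, 0}};

    printf("read next from run %d\n", next_run_to_read(runs, NRUNS));
    return 0;
}
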
The current sort code makes two assumptions, among others:
1. minimizing number of runs is always worth it
2. there is a single fixed maximum size of P, depending upon memory
I'm challenging both of those. Only runs that overlap need to be merged
simultaneously, so if the runs don't overlap then it's OK to allow more
runs to be formed. And if it's OK to allow more runs, then we can reduce
the heap size to get better CPU efficiency.
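
To illustrate the overlap point, here's a rough sketch (mine, purely
illustrative; nothing like this exists in tuplesort.c today) that
computes, from each run's min and max key, the largest number of runs
whose key ranges ever overlap. That number, rather than the total
number of runs, is the fan-in the final merge really needs:

#include <stdio.h>

typedef struct
{
    int min_key;   /* smallest key in the run */
    int max_key;   /* largest key in the run */
} RunRange;

/*
 * Maximum number of runs whose key ranges overlap at any one point,
 * i.e. the number of tapes the final merge actually has to keep open
 * at once.  For intervals, the maximum always occurs at some run's
 * min_key, so an O(n^2) scan is enough for a sketch.
 */
static int
max_concurrent_runs(const RunRange runs[], int nruns)
{
    int result = 0;

    for (int i = 0; i < nruns; i++)
    {
        int active = 0;

        for (int j = 0; j < nruns; j++)
            if (runs[j].min_key <= runs[i].min_key &&
                runs[i].min_key <= runs[j].max_key)
                active++;
        if (active > result)
            result = active;
    }
    return result;
}

int
main(void)
{
    /* "Somewhat sorted" input: each run overlaps only its neighbours. */
    RunRange runs[] = {{0, 30}, {25, 60}, {55, 90}, {85, 120}};

    printf("effective fan-in: %d\n",
           max_concurrent_runs(runs, 4));   /* prints 2 */
    return 0;
}

With somewhat-sorted input, like the financial-data case above, that
count stays small even when the total number of runs is large.
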
So Algorithm F is somewhat orthogonal to what I've proposed, though it
may still be interesting. What we do now is fairly close, but you should
look at the code in tuplesort.c and logtape.c to see how well it
matches. That might lead to an increase in the limit on the number of
runs mergeable at any one time.
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com