From: "Dann Corbit" <DCorbit(at)connx(dot)com>
To: "Jim C. Nasby" <jnasby(at)pervasive(dot)com>
Cc: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Luke Lonergan" <llonergan(at)greenplum(dot)com>, "Simon Riggs" <simon(at)2ndquadrant(dot)com>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Merge algorithms for large numbers of "tapes"
Date: 2006-03-09 02:05:01
Message-ID: D425483C2C5C9F49B5B7A41F8944154757D5EF@postal.corporate.connx.com
Lists: pgsql-hackers
> -----Original Message-----
> From: Jim C. Nasby [mailto:jnasby(at)pervasive(dot)com]
> Sent: Wednesday, March 08, 2006 5:44 PM
> To: Dann Corbit
> Cc: Tom Lane; Luke Lonergan; Simon Riggs; pgsql-hackers(at)postgresql(dot)org
> Subject: Re: [HACKERS] Merge algorithms for large numbers of "tapes"
>
> On Wed, Mar 08, 2006 at 03:35:53PM -0800, Dann Corbit wrote:
> > I think I did not explain it clearly enough. Suppose that you have a
> > set of rows you need to sort. Instead of loading the whole row into
> > memory, just load the columns (or parts of columns) that are being
> > sorted. I hope that it is more clear now.
>
> The issue is that there is a non-trivial amount of overhead in going
> back to disk to get the raw data, and then you have to parse that into
> a valid in-memory tuple. A worst-case scenario is if you're sorting all
> the data that you've been asked to retrieve, i.e.:
>
> SELECT a, b, c ... ORDER BY b, a, c;
>
> That case is almost guaranteed to take longer if you try and do it with
> just pointers.
>
> But there is the other case:
>
> SELECT a, b, c, big_honking_text_field ... ORDER BY a, b, c;
>
> In this example it's entirely possible that leaving the big_honking
> field out of the actual sorting would be a big win. Especially if your
> temporary space was on a different set of spindles.
>
> Regarding your suggestion of testing different kinds of sorts, that's
> certainly a good idea if it can be done without a huge amount of work
> coding each one up. Ultimately, it might make the most sense to support
> multiple sort algorithms (at least for now) and let the planner decide
> which one to use. That would at least get us a lot more real-world data
> than any other method would.
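
To make the key/pointer idea above concrete: sort small fixed-size
(key, pointer) records instead of whole rows, and dereference the
pointer only when emitting output. A minimal sketch in C; SortEntry,
the 8-byte key prefix, and all the names here are illustrative, not
PostgreSQL's actual tuple machinery:

    #include <stdlib.h>
    #include <string.h>

    typedef struct
    {
        char    key[8];     /* leading bytes of the sort key */
        void   *row;        /* pointer back to the full row */
    } SortEntry;

    static int
    compare_entries(const void *a, const void *b)
    {
        const SortEntry *ea = (const SortEntry *) a;
        const SortEntry *eb = (const SortEntry *) b;

        /* a real implementation must break prefix ties with the full key */
        return memcmp(ea->key, eb->key, sizeof(ea->key));
    }

    void
    sort_by_key_prefix(SortEntry *entries, size_t n)
    {
        /* only the small records move through the sort; the wide columns
         * are touched once per row, when the caller dereferences ->row */
        qsort(entries, n, sizeof(SortEntry), compare_entries);
    }
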
I typically do it something like this:

    void Introspective_Sort_Hunks(void);    /* forward declaration */

    void
    MSD_Radix_Sort_Hunks(void)
    {
        /*
         * We might have to bail for many reasons:
         * - the early part of the key may be identical for all rows;
         * - we may not have a binning algorithm for this data type;
         * - we may also only partially sort with MSD radix sort.
         */
        if (Set_Is_Too_Small_Or_Otherwise_Bail())
            Introspective_Sort_Hunks();
        else
            MSD_Radix_Alg();    /* cookie cutter of data stream into sorted hunks */
    }

    void
    Introspective_Sort_Hunks(void)
    {
        if (Set_Is_Too_Small_Or_Otherwise_Bail())
            Ford_Johnson_Variant();     /* near-optimal sort of very small sets */
        else
            Introspective_Alg();    /* cookie cutter of data stream into sorted hunks */
    }

    Queue_based_hunk_merge();   /* finally, merge the sorted hunks */
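
The "cookie cutter" inside MSD_Radix_Alg() can be a counting pass plus
a scatter. A sketch of one 256-way pass on a single key byte,
continuing the illustrative SortEntry above (the recursion and
fallback policy is only described in the comment):

    #define NBINS 256

    void
    msd_radix_pass(const SortEntry *in, size_t n, int depth, SortEntry *out)
    {
        size_t  count[NBINS] = {0};
        size_t  start[NBINS];
        size_t  i;

        for (i = 0; i < n; i++)     /* count the population of each bin */
            count[(unsigned char) in[i].key[depth]]++;

        start[0] = 0;               /* prefix sums give each bin's offset */
        for (i = 1; i < NBINS; i++)
            start[i] = start[i - 1] + count[i - 1];

        for (i = 0; i < n; i++)     /* scatter the entries into their bins */
            out[start[(unsigned char) in[i].key[depth]]++] = in[i];

        /*
         * Each bin is now a hunk whose entries share key[0..depth].  Small
         * bins would be handed to Introspective_Sort_Hunks(); large ones
         * recurse with depth + 1 until the key prefix is exhausted.
         */
    }
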
Now, you might have a merge that makes choices on entry, similar to the
way that my sorts make choices on entry. You will notice that my sorts
decide internally which algorithm to use. Certainly, this is a simple
approach, and it can generalize in many ways.
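
And for the last step, one plausible shape for Queue_based_hunk_merge()
(my assumptions here, not the code I actually run): keep one cursor per
sorted hunk in a binary min-heap keyed on its current entry, emit the
minimum, and advance that hunk:

    typedef struct
    {
        SortEntry  *data;   /* the hunk's sorted entries */
        size_t      n;      /* number of entries in the hunk */
        size_t      pos;    /* next entry to emit */
    } Hunk;

    static int
    hunk_cmp(const Hunk *a, const Hunk *b)
    {
        return memcmp(a->data[a->pos].key, b->data[b->pos].key,
                      sizeof(a->data[a->pos].key));
    }

    static void
    sift_down(Hunk **heap, int nheap, int i)
    {
        for (;;)
        {
            int     smallest = i;
            int     l = 2 * i + 1;
            int     r = 2 * i + 2;

            if (l < nheap && hunk_cmp(heap[l], heap[smallest]) < 0)
                smallest = l;
            if (r < nheap && hunk_cmp(heap[r], heap[smallest]) < 0)
                smallest = r;
            if (smallest == i)
                return;
            {
                Hunk   *tmp = heap[i];

                heap[i] = heap[smallest];
                heap[smallest] = tmp;
            }
            i = smallest;
        }
    }

    void
    queue_based_hunk_merge(Hunk **heap, int nheap,
                           void (*emit) (const SortEntry *))
    {
        int     i;

        for (i = nheap / 2 - 1; i >= 0; i--)    /* heapify the hunk cursors */
            sift_down(heap, nheap, i);

        while (nheap > 0)
        {
            Hunk   *top = heap[0];

            emit(&top->data[top->pos++]);       /* smallest remaining entry */
            if (top->pos == top->n)             /* hunk exhausted: shrink heap */
                heap[0] = heap[--nheap];
            sift_down(heap, nheap, 0);
        }
    }

With k hunks this does O(log k) comparisons per emitted row, which is
why the merge order matters more than the raw number of runs.
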
> --
> Jim C. Nasby, Sr. Engineering Consultant jnasby(at)pervasive(dot)com
> Pervasive Software http://pervasive.com work: 512-231-6117
> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461