From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Ron Peacetree <rjpeace(at)earthlink(dot)net>
Cc: Dann Corbit <DCorbit(at)connx(dot)com>, pgsql-hackers(at)postgresql(dot)org, pgsql-performance(at)postgresql(dot)org
Subject: Re: [PERFORM] A Better External Sort?
Date: 2005-09-27 01:42:18
Message-ID: 6141.1127785338@sss.pgh.pa.us
Lists: pgsql-hackers pgsql-performance
Ron Peacetree <rjpeace(at)earthlink(dot)net> writes:
> Let's start by assuming that an element is <= in size to a cache line and a
> node fits into L1 DCache. [ much else snipped ]
So far, you've blithely assumed that you know the size of a cache line,
the sizes of L1 and L2 cache, and that you are working with sort keys
that you can efficiently pack into cache lines. And that you know the
relative access speeds of the caches and memory so that you can schedule
transfers, and that the hardware lets you get at that transfer timing.
And that the number of distinct key values isn't very large.
I don't see much prospect that anything we can actually use in a
portable fashion is going to emerge from this line of thought.
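To make the portability objection concrete, here is a minimal sketch (not from the original mail) of what merely discovering the L1 cache line size looks like in practice. The _SC_LEVEL1_DCACHE_LINESIZE sysconf name is a glibc extension rather than POSIX, so code like this already needs an #ifdef and a hard-coded fallback guess, which is exactly the kind of platform-specific assumption being criticized above:

    /*
     * Hypothetical helper: try to ask the OS for the L1 D-cache line
     * size, and fall back to a guess when the interface isn't there.
     */
    #include <stdio.h>
    #include <unistd.h>

    static long
    guess_cache_line_size(void)
    {
    #ifdef _SC_LEVEL1_DCACHE_LINESIZE
        long sz = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);

        if (sz > 0)
            return sz;          /* the OS actually told us */
    #endif
        return 64;              /* blind guess when it won't say */
    }

    int
    main(void)
    {
        printf("assumed L1 D-cache line size: %ld bytes\n",
               guess_cache_line_size());
        return 0;
    }

Line size is only the first of the listed assumptions; cache capacities, relative access latencies, and transfer scheduling are even less visible from portable C.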
regards, tom lane