From: Hannu Krosing <hannu(at)skype(dot)net>
To: "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com>
Cc: Greg Stark <gsstark(at)mit(dot)edu>, Luke Lonergan <llonergan(at)greenplum(dot)com>, Dann Corbit <DCorbit(at)connx(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Simon Riggs <simon(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Merge algorithms for large numbers of "tapes"
Date: 2006-03-09 08:37:01
Message-ID: 1141893421.3810.5.camel@localhost.localdomain
Lists: pgsql-hackers
On one fine day, Wed, 2006-03-08 at 20:08, Jim C. Nasby wrote:
> But it will take a whole lot of those rewinds to equal the amount of
> time required by an additional pass through the data.
I guess that missing a sector read also implies a "rewind": if you
don't process the data read from a "tape" fast enough, you have to
wait a whole disk revolution (roughly equal to the seek time on
modern disks) before you get the next chunk of data.
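
To put rough numbers on that, here is a back-of-envelope sketch in C;
the 7200 RPM and ~8.5 ms average seek figures are assumed typical
values for a current drive, not measurements:

    #include <stdio.h>

    int
    main(void)
    {
        /* assumed parameters for a typical current SATA drive */
        double  rpm = 7200.0;
        double  avg_seek_ms = 8.5;

        /* one full platter revolution takes 60 s / rpm */
        double  revolution_ms = 60.0 * 1000.0 / rpm;

        printf("one revolution: %.2f ms\n", revolution_ms); /* ~8.33 ms */
        printf("average seek:   %.2f ms\n", avg_seek_ms);

        /*
         * Same order of magnitude: missing a rotation while merging
         * costs about as much as an extra seek.
         */
        return 0;
    }
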
> I'll venture a
> guess that as long as you've got enough memory to still read chunks back
> in 8k blocks, it won't be possible for a multi-pass sort to
> outperform a one-pass sort. Especially if you also had the ability to
> do pre-fetching (not something to fuss with now, but certainly a
> possibility in the future).
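
A rough sketch of why (the cost model and all numbers below are
illustrative only, not PostgreSQL's actual costing): with M bytes of
sort memory and B-byte read blocks, one pass can merge up to M/B runs,
so extra passes only appear once the run count exceeds that fan-in,
and each extra pass rereads and rewrites the whole data set:

    #include <stdio.h>

    /*
     * Illustrative merge cost model: with mem_bytes of sort memory
     * and block_bytes per input buffer, each pass can merge up to
     * mem_bytes / block_bytes runs, and every pass reads and writes
     * the whole data set once.
     */
    static int
    merge_passes(long runs, long mem_bytes, long block_bytes)
    {
        long    fanin = mem_bytes / block_bytes;
        long    capacity = 1;
        int     passes = 0;

        if (fanin < 2)
            return -1;          /* not enough memory to merge at all */
        while (capacity < runs)
        {
            capacity *= fanin;
            passes++;
        }
        return passes;
    }

    int
    main(void)
    {
        long    runs = 64;      /* e.g. 1 GB of data in 16 MB runs */

        /* 1 MB of memory, 8 kB blocks -> 128-way merge: one pass */
        printf("1 MB:  %d pass(es)\n",
               merge_passes(runs, 1024L * 1024, 8192));

        /*
         * 64 kB of memory -> 8-way merge: two passes, i.e. the data
         * is read and written twice, doubling the sequential I/O.
         */
        printf("64 kB: %d pass(es)\n",
               merge_passes(runs, 64L * 1024, 8192));
        return 0;
    }
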
>
> In any case, what we really need is at least good models backed by good
> drive performance data.
And filesystem performance data, since postgres uses the OS's native
filesystems.
--------------
Hannu