From: | "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com> |
---|---|
To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | Luke Lonergan <llonergan(at)greenplum(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgreSQL(dot)org |
Subject: | Re: Merge algorithms for large numbers of "tapes" |
Date: | 2006-03-08 17:49:04 |
Message-ID: | 20060308174904.GD45250@pervasive.com |
Lists: pgsql-hackers
On Wed, Mar 08, 2006 at 11:20:50AM -0500, Tom Lane wrote:
> "Jim C. Nasby" <jnasby(at)pervasive(dot)com> writes:
> > If we do have to fail to disk, cut back to 128MB, because having 8x that
> > certainly won't make the sort run anywhere close to 8x faster.
>
> Not sure that follows. In particular, the entire point of the recent
> changes has been to extend the range in which we can use a single merge
> pass --- that is, write the data once as N sorted runs, then merge them
> in a single read pass. As soon as you have to do an actual merge-back-
> to-disk pass, your total I/O volume doubles, so there is definitely a
> considerable gain if that can be avoided. And a larger work_mem
> translates directly to fewer/longer sorted runs.
But do fewer/longer sorted runs translate into not merging back to disk?
I thought that was controlled by whether we had to be able to rewind
the result set.
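
To make the I/O argument concrete, here is a rough back-of-envelope
sketch in Python (not PostgreSQL code; the run-size estimate, the fixed
merge fan-in, and the names are illustrative assumptions only):

    import math

    def sort_io_volume(data_mb, work_mem_mb, merge_fanin):
        """Estimate total sort I/O in MB under simplifying assumptions.

        Assumes replacement selection yields runs averaging ~2x work_mem
        (Knuth's result for random input) and that up to merge_fanin runs
        can be combined per merge pass; in reality the fan-in also
        depends on work_mem.
        """
        if data_mb <= work_mem_mb:
            return 0  # fits in memory, nothing spills to disk

        runs = max(1, math.ceil(data_mb / (2 * work_mem_mb)))
        # Passes that write the full data set: initial run formation plus
        # any intermediate merge passes.  The final merge only reads,
        # since its output streams to the caller.
        intermediate_merges = max(0, math.ceil(math.log(runs, merge_fanin)) - 1)
        write_passes = 1 + intermediate_merges
        read_passes = write_passes  # everything written gets read back
        return (write_passes + read_passes) * data_mb

    # 8 GB of data, hypothetical fan-in of 6: 128 MB of work_mem forces
    # an intermediate merge-back-to-disk pass (~32 GB of I/O), while
    # 1 GB of work_mem leaves few enough runs for a single merge pass
    # (~16 GB).
    for wm in (128, 1024):
        print(wm, "MB work_mem ->", sort_io_volume(8192, wm, 6), "MB of I/O")

Under this simplified model, a bigger work_mem pays off exactly when it
drops the run count low enough for a single merge pass; beyond that
point the extra memory buys comparatively little I/O savings.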
--
Jim C. Nasby, Sr. Engineering Consultant jnasby(at)pervasive(dot)com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461