From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com>
Cc: Luke Lonergan <llonergan(at)greenplum(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: Merge algorithms for large numbers of "tapes"
Date: 2006-03-08 16:20:50
Message-ID: 15725.1141834850@sss.pgh.pa.us
Lists: pgsql-hackers
"Jim C. Nasby" <jnasby(at)pervasive(dot)com> writes:
> If we do have to fail to disk, cut back to 128MB, because having 8x that
> certainly won't make the sort run anywhere close to 8x faster.

Not sure that follows.  In particular, the entire point of the recent
changes has been to extend the range in which we can use a single merge
pass --- that is, write the data once as N sorted runs, then merge them
in a single read pass.  As soon as you have to do an actual merge-back-
to-disk pass, your total I/O volume doubles, so there is definitely a
considerable gain if that can be avoided.  And a larger work_mem
translates directly to fewer/longer sorted runs.

			regards, tom lane
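
To make the I/O arithmetic above concrete, here is a back-of-envelope
sketch in C.  It is not code from PostgreSQL's tuplesort.c; the function
name sort_io_volume and the merge fan-in of 256 are illustrative
assumptions.  It assumes each initial run is roughly work_mem in size
(replacement selection would give runs closer to twice that) and that
every merge pass short of the final one rereads and rewrites the whole
data set:

/*
 * Back-of-envelope model of external-sort I/O volume (an illustration,
 * not PostgreSQL's tuplesort.c).  Assumptions: each initial sorted run
 * is roughly work_mem in size, and each intermediate merge pass rereads
 * and rewrites the entire data set.
 */
#include <stdio.h>
#include <math.h>

static double
sort_io_volume(double data_bytes, double work_mem_bytes, int merge_fanin)
{
	double		nruns = ceil(data_bytes / work_mem_bytes);	/* initial runs */
	double		io = 2.0 * data_bytes;	/* write runs once, read once to merge */

	/* Each merge-back-to-disk pass adds a full read plus a full write. */
	while (nruns > merge_fanin)
	{
		io += 2.0 * data_bytes;
		nruns = ceil(nruns / merge_fanin);
	}
	return io;
}

int
main(void)
{
	const double mb = 1024.0 * 1024.0;
	const double gb = 1024.0 * mb;
	const double data = 100.0 * gb;		/* 100GB to sort */
	const int	fanin = 256;			/* illustrative merge order */

	printf("work_mem=128MB: %.0f GB total I/O\n",
		   sort_io_volume(data, 128.0 * mb, fanin) / gb);
	printf("work_mem=1GB:   %.0f GB total I/O\n",
		   sort_io_volume(data, 1.0 * gb, fanin) / gb);
	return 0;
}

Under these assumptions, sorting 100GB with work_mem = 128MB produces
800 initial runs, forcing one merge-back-to-disk pass and about 400GB of
total I/O, while work_mem = 1GB produces 100 runs, which merge in a
single read pass for about 200GB --- exactly the doubling described above.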