Re: [HACKERS] sort on huge table

From: Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: t-ishii(at)sra(dot)co(dot)jp, pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] sort on huge table
Date: 1999-10-18 06:08:59
Message-ID: 199910180608.PAA01008@srapc451.sra.co.jp
Lists: pgsql-hackers

>Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp> writes:
>> I have done the 2GB test on current (with your fixes). This time the
>> sorting query worked great! I saw lots of temp files, but the total
>> disk usage was almost same as before (~10GB). So I assume this is ok.
>
>I have now committed another round of changes that reduce the temp file
>size to roughly the volume of data to be sorted. It also reduces the
>number of temp files --- there will be only one per GB of sort data.
>If you could try sorting a table larger than 4GB with this code, I'd be
>much obliged. (It *should* work, of course, but I just want to be sure
>there are no places that will have integer overflows when the logical
>file size exceeds 4GB.) I'd also be interested in how the speed
>compares to the old code on a large table.
>
>Still need to look at the memory-consumption issue ... and CREATE INDEX
>hasn't been taught about any of these fixes yet.

I tested with a 1GB+ table (which has one segment file) and a 4GB+
table (which has four segment files), and got the same error message
for both:

ERROR: ltsWriteBlock: failed to write block 131072 of temporary file
Perhaps out of disk space?

Of course there is plenty of disk space, and no physical errors were
reported. It seems the error is raised when the temp file hits 1GB?
--
Tatsuo Ishii
