From: Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp>, pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] sort on huge table
Date: 1999-10-14 01:34:03
Message-ID: 199910140134.KAA10375@ext16.sra.co.jp
Lists: pgsql-hackers
> > The current sorting code will fail if the data volume exceeds whatever
> > the maximum file size is on your OS. (Actually, if long is 32 bits,
> > it might fail at 2gig even if your OS can handle 4gig; not sure, but
> > it is doing signed-long arithmetic with byte offsets...)
>
> > I am just about to commit code that fixes this by allowing temp files
> > to have multiple segments like tables can.
>
> OK, committed. I have tested this code using a small RELSEG_SIZE,
> and it seems to work, but I don't have the spare disk space to try
> a full-scale test with > 4Gb of data. Anyone care to try it?
I will test it with my 2GB table. Creating a 4GB table would probably be
possible, but I don't have enough sort space for that :-) I ran my
previous test on 6.5.2, not on current. I hope current is stable
enough for my testing.
> I have not yet done anything about the excessive space consumption
> (4x data volume), so plan on using 16+Gb of diskspace to sort a 4+Gb
> table --- and that's not counting where you put the output ;-)
Talking about -S: I used the default, since setting -S seems to
consume too much memory. For example, when I set it to 128MB, the
backend process grew to over 512MB and was killed because it ran out
of swap space. Perhaps the 4x factor applies to -S as well?
---
Tatsuo Ishii