From: Ivan Richwalski <ivan(at)seppuku(dot)net>
To: pgsql-admin(at)postgresql(dot)org
Subject: Solaris 2.6 and large tables
Date: 1998-08-25 21:38:22
Message-ID: v04011701b208c8c70d13@[207.67.69.17]
Lists: pgsql-admin
I've been working with some really large tables in Postgres
(both 6.2.1 and, in preparing to upgrade, 6.3.2) running on
Solaris 2.6 on an Ultra1. When the amount of data in any one
table reaches 2 GB, the postgres client connection just hangs.
Any subsequent connections hang until the first offending
process is killed, at which point the other connections complete
successfully. Other queries work unless one tries to access
the data that was added at the end of the table, in which case
that connection also hangs until it too is killed.
I have been able to work around the problem by compiling 6.3.2
with the Solaris options for 64-bit file support. To make the change,
I added the flags to the sparc_solaris-gcc template file, and changed
the couple of occurrences of "fseek" in src/backend/utils/sort/psort.c
to "fseeko", just in case. It compiles, all of the regression tests
do the expected things, and the "table.1" file gets created when the
table exceeds the 2 GB mark. Everything appears to be functioning
fine, and I was wondering if anyone else has had any experience with
similar situations.
Ivan Richwalski
--
"Look, a huge distracty thing." -- Tom Servo
Next message: Bruce Momjian, 1998-08-26 03:02:36, "Re: [ADMIN] Solaris 2.6 and large tables"
Previous message: Jose David Martinez Cuevas, 1998-08-25 15:22:14, "Re: [SQL] copy probs"