From: Andrew Sullivan <andrew(at)libertyrms(dot)info>
To: Jeff <jeff(dot)brickley(at)motorola(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: large file limitation
Date: 2002-01-18 19:39:35
Message-ID: 20020118143935.G26828@mail.libertyrms.com
Lists: pgsql-general
On Thu, Jan 10, 2002 at 01:10:35PM -0800, Jeff wrote:
> handle files larger than 2GB. I then dumped the database again and
> noticed the same situation. The dump files truncate at the 2GB limit.
The same thing happened to us recently.
> I suppose I need to recompile Postgres now on the system now that it
> accepts large files.
Yes.
> Is there any library that I need to point to manually or some
> option that I need to pass in the configuration? How do I ensure
> Postgres can handle large files (>2GB)
Yes. It turns out that gcc (and maybe other C compilers; I don't
know) doesn't enable 64-bit file offsets by default. You need to add
a CFLAGS setting. The necessary flags can be found with
CFLAGS="`getconf LFS_CFLAGS`"
(I stole that from the Python guys:
<http://www.python.org/doc/current/lib/posix-large-files.html>).
Note that this will _not_ compile the binary as a 64-bit binary, so
using "file" to check it will still report a 32-bit binary.
Everything I've read about the subject suggests that gcc-compiled
64-bit binaries on Solaris are sort of flakey, so I've not tried it.
Hope this is helpful.
A
--
----
Andrew Sullivan 87 Mowat Avenue
Liberty RMS Toronto, Ontario Canada
<andrew(at)libertyrms(dot)info> M6K 3E3
+1 416 646 3304 x110