From: Andrew Sullivan <andrew(at)libertyrms(dot)info>
To: PostgreSQL general list <pgsql-general(at)postgresql(dot)org>
Subject: Re: large file limitation
Date: 2002-01-19 18:46:50
Message-ID: 20020119134650.B8903@mail.libertyrms.com
Lists: pgsql-general
On Fri, Jan 18, 2002 at 08:51:47PM -0500, Tom Lane wrote:
>
> The backend is fine with it, but "pg_dump >outfile" will choke when
> it gets past 2Gb of output (at least, that is true on Solaris).
Right. Sorry if I wasn't clear about that; I know that Postgres
itself never writes a file bigger than 1 Gig, but pg_dump and
pg_restore can easily pass that limit.
> I imagine "pg_dump | split" would do as a workaround, but don't have
> a Solaris box handy to verify.
It will. If you check 'man largefiles' on Solaris (7, anyway; I don't
know about other versions), it will tell you which basic Solaris system
binaries are large-file aware. /usr/bin/split is one of them, as is
/usr/bin/compress. We are working in a hosted environment, and I
didn't completely trust the hosts not to drop one of the files when
sending them to tape, or I would have used split instead of
recompiling.
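The "pg_dump | split" workaround can be sketched as below. A synthetic
1 MiB stream stands in for pg_dump output so the pipeline shape is
testable without a database; the database name "mydb" and the chunk
sizes are illustrative assumptions, not from the thread. With a real
database the producer would be `pg_dump mydb`, and a chunk size like
`-b 1024m` keeps each piece well under the 2 GB limit.

```shell
mkdir -p /tmp/dumpdemo && cd /tmp/dumpdemo

# Stand-in for: pg_dump mydb | split -b 1024m - mydb.dump.
# split never writes more than -b bytes per output file, so no single
# file crosses the largefile limit; suffixes aa, ab, ... keep the order.
head -c 1048576 /dev/zero | split -b 300k - mydb.dump.

ls mydb.dump.*            # pieces: mydb.dump.aa ab ac ad

# Restore by reassembling the pieces in shell-glob (i.e. suffix) order:
# cat mydb.dump.* | psql mydb
cat mydb.dump.* | wc -c   # the stream round-trips intact: 1048576
```

Because the suffixes sort lexically, a plain glob reassembles the dump
in the right order; nothing about the dump format itself has to change.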
A
--
----
Andrew Sullivan 87 Mowat Avenue
Liberty RMS Toronto, Ontario Canada
<andrew(at)libertyrms(dot)info> M6K 3E3
+1 416 646 3304 x110