From: Jan Wieck <janwieck(at)yahoo(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Jan Wieck <janwieck(at)yahoo(dot)com>, Andrew Sullivan <andrew(at)libertyrms(dot)info>, Jeff <jeff(dot)brickley(at)motorola(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: large file limitation
Date: 2002-01-19 01:56:06
Message-ID: 200201190156.g0J1u6a07441@saturn.janwieck.net
Lists: pgsql-general
Tom Lane wrote:
> Jan Wieck <janwieck(at)yahoo(dot)com> writes:
> >>> I suppose I need to recompile Postgres on the system now that it
> >>> accepts large files.
> >>
> >> Yes.
>
> > No. PostgreSQL is totally fine with that limit, it will just
> > segment huge tables into separate files of 1G max each.
>
> The backend is fine with it, but "pg_dump >outfile" will choke when
> it gets past 2Gb of output (at least, that is true on Solaris).
>
> I imagine "pg_dump | split" would do as a workaround, but don't have
> a Solaris box handy to verify.
>
> I can envision building 32-bit-compatible stdio packages that don't
> choke on large files unless you actually try to do ftell or fseek beyond
> the 2G boundary. Solaris' implementation, however, evidently fails
> hard at the boundary.
Meaning what? That even if he'd recompiled PostgreSQL to
support large files, "pg_dump >outfile" would still choke
... duh!
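For reference, the "pg_dump | split" workaround Tom mentions can be sketched as below. This is a minimal sketch: the database name "mydb" and the chunk prefix "mydb.dump." are placeholders, and the 1000 MB chunk size is chosen just to stay safely under a 2 GB file-size limit.

```shell
# Dump the database through split so no single output file
# exceeds the filesystem/stdio 2GB limit (chunks of 1000MB).
pg_dump mydb | split -b 1000m - mydb.dump.

# To restore, concatenate the chunks in order back into psql.
# split names chunks alphabetically (mydb.dump.aa, .ab, ...),
# so the shell glob yields them in the right order.
cat mydb.dump.* | psql mydb
```

The key point is that split, not pg_dump, does the file I/O, so each individual file stays below the limit even though the total dump does not.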
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck(at)Yahoo(dot)com #