Re: large file limitation

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jan Wieck <janwieck(at)yahoo(dot)com>
Cc: Andrew Sullivan <andrew(at)libertyrms(dot)info>, Jeff <jeff(dot)brickley(at)motorola(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: large file limitation
Date: 2002-01-19 01:51:47
Message-ID: 11359.1011405107@sss.pgh.pa.us
Lists: pgsql-general

Jan Wieck <janwieck(at)yahoo(dot)com> writes:
>>> I suppose I need to recompile Postgres on the system now that it
>>> accepts large files.
>>
>> Yes.

> No. PostgreSQL is totally fine with that limit; it will just
> segment huge tables into separate files of 1GB max each.

The backend is fine with it, but "pg_dump >outfile" will choke once
it gets past 2GB of output (at least, that is true on Solaris).

I imagine "pg_dump | split" would do as a workaround, but I don't have
a Solaris box handy to verify that.
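
Something along these lines ought to do it (untested; the database name
and the chunk size are only examples):

    # dump in pieces small enough to stay under the 2GB stdio limit
    pg_dump mydb | split -b 1000m - mydb.dump.

    # later, reassemble the pieces and feed them back in
    cat mydb.dump.* | psql mydb

split writes mydb.dump.aa, mydb.dump.ab, and so on, which sort back
into the right order for cat.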

I can envision a 32-bit-compatible stdio package that doesn't choke on
large files unless you actually try to do ftell or fseek beyond the
2GB boundary.  Solaris' implementation, however, evidently fails hard
right at the boundary.
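
For what it's worth, Solaris' getconf can report the compile flags that
make 32-bit code large-file aware (64-bit off_t, ftello/fseeko).  Whether
simply rebuilding pg_dump with those flags picked up is enough I haven't
checked, so treat this as a sketch:

    # report the flags for large-file-aware 32-bit compilation
    getconf LFS_CFLAGS        # typically -D_FILE_OFFSET_BITS=64 etc.
    getconf LFS_LDFLAGS
    getconf LFS_LIBS

    # hypothetical rebuild with those flags so stdio offsets are 64-bit
    CFLAGS="`getconf LFS_CFLAGS`" ./configure
    gmake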

regards, tom lane
