Re: pg_dump and large files - is this a problem?

From: Peter Eisentraut <peter_e(at)gmx(dot)net>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
Cc: Philip Warner <pjw(at)rhyme(dot)com(dot)au>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Giles Lean <giles(at)nemeton(dot)com(dot)au>
Subject: Re: pg_dump and large files - is this a problem?
Date: 2002-10-23 21:50:34
Message-ID: Pine.LNX.4.44.0210231933410.928-100000@localhost.localdomain
Lists: pgsql-hackers

Bruce Momjian writes:

> I think you are right that we have to avoid off_t and use long if we
> can't find a proper 64-bit seek function, but what are the failure modes
> of doing so? What exactly happens with larger files?

First we need to decide what we want to happen, and only then think about
how to implement it. Given sizeof(off_t) > sizeof(long) and no fseeko(),
we have the following options:

1. Disable access to large files.

2. Seek in some other way.

What's it gonna be?
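
Just to make option 2 concrete, here is a rough sketch (untested, and
pg_fseeko is only an illustrative name, not an existing function):
emulate a 64-bit absolute seek by chaining long-sized relative fseek()
calls.

    #include <stdio.h>
    #include <limits.h>
    #include <sys/types.h>

    /*
     * Sketch of option 2: emulate a 64-bit seek using only fseek(),
     * whose offset argument is a plain long.  Only SEEK_SET is
     * handled; the other whence values are left out for brevity.
     */
    static int
    pg_fseeko(FILE *fp, off_t offset, int whence)
    {
        if (whence != SEEK_SET || offset < 0)
            return -1;

        /* rewind, then step forward in LONG_MAX-sized chunks */
        if (fseek(fp, 0L, SEEK_SET) != 0)
            return -1;

        while (offset > (off_t) LONG_MAX)
        {
            if (fseek(fp, LONG_MAX, SEEK_CUR) != 0)
                return -1;
            offset -= (off_t) LONG_MAX;
        }

        return fseek(fp, (long) offset, SEEK_CUR);
    }

The obvious downside is that this only papers over the seek itself:
ftell() still returns a long, so reporting the current position in a
large file remains broken on such a platform.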

--
Peter Eisentraut peter_e(at)gmx(dot)net
