Re: pg_dump and large files - is this a problem?

From: "Mario Weilguni" <mario(dot)weilguni(at)icomedias(dot)com>
To: "Philip Warner" <pjw(at)rhyme(dot)com(dot)au>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "PostgreSQL Development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_dump and large files - is this a problem?
Date: 2002-10-03 13:18:44
Message-ID: 4D618F6493CE064A844A5D496733D667039106@freedom.icomedias.com
Lists: pgsql-hackers

>My limited reading of off_t stuff now suggests that it would be brave to
>assume it is even a simple 64 bit number (or even 3 32 bit numbers). One
>alternative, which I am not terribly fond of, is to have pg_dump write
>multiple files - when we get to 1 or 2GB, we just open another file, and
>record our file positions as a (file number, file position) pair. Low tech,
>but at least we know it would work.
>
>Unless anyone knows of a documented way to get 64 bit uint/int file
>offsets, I don't see we have much choice.

How common is fgetpos64? Linux supports it, but I don't know about other
systems.

http://hpc.uky.edu/cgi-bin/man.cgi?section=all&topic=fgetpos64

Regards,
Mario Weilguni
