From: Jeremiah Jahn <jeremiah(at)cs(dot)earlham(dot)edu>
To: postgres list <pgsql-general(at)postgresql(dot)org>
Subject: pg_dump 2 gig file size limit on ext3
Date: 2002-12-06 06:06:25
Message-ID: 1039154784.7905.138.camel@bluejay.goodinassociates.com
Lists: pgsql-general
I have the strangest thing happening: I can't finish a pg_dump of my db.
It says that I have reached the maximum file size at 2 GB. I'm running
this on a system with Red Hat 8.0 (the problem existed on 7.3 as well)
with an ext3 RAID array. The size of the db is roughly 4 GB. I'm using
7.2.2; I tried 7.2.1 earlier today and got the same problem. I don't
think I can really split the data into different tables, since I use
large objects. Anyone out there have any ideas why this is happening?

To test the filesystem, I took the 2 GB dump and concatenated it onto
itself just to see what would happen, and the resulting 4.2 GB file was
fine, so this really seems to be a problem with pg_dump itself. With -Ft,
pg_dump just crashes with some sort of "failed to write: tried to write
221 of 256" error, or something like that; the resulting file is about
1.2 GB. -Fc stops at the 2 GB limit. Do I need to recompile this with
some 64-bit setting or something? I'm currently using the default
Red Hat build.
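In case it helps anyone searching the archives: the 2 GB ceiling is the classic 32-bit file-size limit, and a common workaround with plain-text dumps is to pipe pg_dump's output through a compressor or split(1) so no single output file ever crosses the limit. A sketch, assuming a plain-format dump; the database name mydb, the chunk size, and the file names are illustrative:

```shell
# Compress on the fly; the dump never exists as one huge plain file.
pg_dump mydb | gzip > mydb.sql.gz

# Or split the dump into ~1 GB pieces (mydb.part.aa, mydb.part.ab, ...).
pg_dump mydb | split -b 1000m - mydb.part.

# Restore by concatenating the pieces back into psql.
cat mydb.part.* | psql mydb
```

Note this only works for the plain-text format; the -Ft and -Fc archive formats can't be split and reassembled this way, since pg_restore needs to seek within a single archive file.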
thanx for any ideas,
-jj-
--
I hope you're not pretending to be evil while secretly being good.
That would be dishonest.