From: | "Johnson, Shaunn" <SJohnson6(at)bcbsm(dot)com> |
---|---|
To: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: file size issue? |
Date: | 2002-03-25 21:14:37 |
Message-ID: | 73309C2FDD95D11192E60008C7B1D5BB0452E14C@snt452.corp.bcbsm.com |
Lists: pgsql-general
--I think you've answered at least half of my question, Andrew.
--I'd like to figure out whether Postgres reaches a point where
it will no longer index or vacuum a table based on its size (your
answer tells me 'No' - it will keep going until it is done, splitting
each table into 1 Gig segments).
--And if THAT is true, then why am I getting failures when
vacuuming or indexing a table just after it reaches 2 Gig?
--And if it's an OS (or any other) problem, how can I factor
out Postgres? (A quick test for that is sketched below.)
--Thanks!
-X
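
One quick way to factor Postgres out is to ask the OS for a file
just past 2 Gig directly, with no Postgres involved at all. A
minimal sketch, assuming Python is on the box; the test path is a
placeholder, so point it at the same filesystem that holds PGDATA:

    import errno
    import os

    TEST_PATH = "/tmp/pg_bigfile.test"    # placeholder: use the PGDATA filesystem
    TARGET = 2 * 1024 ** 3 + 1024 * 1024  # just past the 2 Gig mark

    try:
        with open(TEST_PATH, "wb") as f:
            f.seek(TARGET - 1)            # seek beyond 2 Gig
            f.write(b"\0")                # one byte forces the OS to honor the size
        print("OS wrote", os.path.getsize(TEST_PATH), "bytes without complaint")
    except (IOError, OSError) as e:
        if e.errno == errno.EFBIG:
            print("EFBIG: the 2 Gig limit is in the OS/filesystem, not Postgres")
        else:
            raise
    finally:
        if os.path.exists(TEST_PATH):
            os.remove(TEST_PATH)

If that fails, the limit sits below Postgres; if it succeeds, the
hunt moves back to how Postgres (or the client tools) were built,
e.g. large-file support at compile time.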
[snip]
> Has anyone seen whether this is a problem with the OS or with the
> way Postgres handles large files (or whether I should recompile it
> with some new options)?
What do you mean "postgres handles large files"? The filesize
problem isn't related to the size of your table, because postgres
splits files at 1 Gig.
If it were an output problem you might see something like this, but
you said you were vacuuming.
A
[snip]
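
The 1 Gig splitting is easy to see on disk. A sketch of how to look,
assuming a 7.1-or-later server where pg_class has a relfilenode
column; the data directory path and OIDs below are placeholders:

    import glob
    import os

    PGDATA = "/usr/local/pgsql/data"  # placeholder: your real data directory
    DB_OID = "16384"                  # placeholder: the database's directory under base/
    FILENODE = "16385"                # placeholder: SELECT relfilenode FROM pg_class
                                      #              WHERE relname = 'yourtable'

    seg0 = os.path.join(PGDATA, "base", DB_OID, FILENODE)
    # The first 1 Gig lives in the bare file; overflow goes to .1, .2, ...
    for path in sorted([seg0] + glob.glob(seg0 + ".[0-9]*")):
        print(path, os.path.getsize(path), "bytes")

A table just past 2 Gig should show up as two 1 Gig segments plus a
small .2 file; no single table file ever reaches the 2 Gig mark.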