From: "Gordon A(dot) Runkle" <gar(at)no-spam-integrated-dynamics(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: RE: 7.1b6 - pg_xlog filled fs, postmaster won't start
Date: 2001-03-21 20:24:24
Message-ID: 99b2f0$2832$1@news.tht.net
Lists: pgsql-general
In article
<8F4C99C66D04D4118F580090272A7A234D3340(at)sectorbase1(dot)sectorbase(dot)com>,
"Mikheev, Vadim" <vmikheev(at)sectorbase(dot)com> wrote:
>> Is it OK to delete the files from pg_xlog? What will be the result?
> It's not Ok. Though you could remove files numbered from 000000000000000
> to 0000000000012 (in hex), if any.
OK, thanks. Is there any documentation on these files, and what
our options are if something like this happens?
>> Will I be able to avoid this problem by splitting the load data into
>> multiple files?
> Yes if you'll run CHECKPOINT command between COPY-s. You could also
> move logs to another FS. Vadim
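Vadim's suggestion above (run CHECKPOINT between COPYs so completed WAL segments in pg_xlog can be recycled) could be scripted client-side. A minimal sketch, assuming `psql` is on the PATH, the input is a tab-delimited dump suitable for COPY FROM STDIN, and all names (`load_in_chunks`, database, table) are illustrative, not anything from this thread:

```python
import subprocess

def chunk_lines(path, rows_per_chunk):
    """Yield successive lists of at most rows_per_chunk lines from path."""
    chunk = []
    with open(path) as f:
        for line in f:
            chunk.append(line)
            if len(chunk) == rows_per_chunk:
                yield chunk
                chunk = []
    if chunk:
        yield chunk

def load_in_chunks(dbname, table, path, rows_per_chunk=100000):
    """COPY each chunk into table, issuing CHECKPOINT between chunks
    so the server can recycle finished WAL segments instead of
    accumulating them in pg_xlog for one huge COPY."""
    for chunk in chunk_lines(path, rows_per_chunk):
        subprocess.run(["psql", "-d", dbname, "-c",
                        "COPY %s FROM STDIN;" % table],
                       input="".join(chunk), text=True, check=True)
        subprocess.run(["psql", "-d", dbname, "-c", "CHECKPOINT;"],
                       check=True)
```

The chunk size is a trade-off: smaller chunks bound pg_xlog growth more tightly but pay the CHECKPOINT cost more often.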
I have the logs in /home/pgsqldata and created another
location in /home2/pgsqldata for the database, but still
managed to fill it up. It's a *big* file.
Other RDBMS products I use, DB2 and Sybase, have options
in their import/load/bcp utilities to commit every n
records, with n selectable by the user. I think having
a feature like this in COPY would greatly facilitate
data migrations (which is what I'm doing, and the reason
for such a big file). What do you think?
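The commit-every-n-records behavior described above (DB2 LOAD's and Sybase bcp's "commit count") can be approximated on the client today. A hypothetical sketch against a generic DB-API-style connection; `conn`, `load_with_batched_commits`, and the placeholder syntax are assumptions for illustration, not part of COPY or any specific driver:

```python
def load_with_batched_commits(conn, table, rows, batch_size=1000):
    """INSERT rows into table, committing every batch_size rows,
    in the spirit of a bcp/LOAD 'commit count'. Returns the total
    number of rows loaded."""
    cur = conn.cursor()
    total = 0
    pending = 0
    for row in rows:
        placeholders = ", ".join(["%s"] * len(row))
        cur.execute("INSERT INTO %s VALUES (%s)" % (table, placeholders),
                    row)
        total += 1
        pending += 1
        if pending == batch_size:
            conn.commit()   # bound the work lost (and redone) on failure
            pending = 0
    if pending:
        conn.commit()       # flush the final partial batch
    return total
```

Row-at-a-time INSERTs are of course far slower than COPY; the point of the sketch is only the commit-interval control being asked for.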
Thanks,
Gordon.
--
It doesn't get any easier, you just go faster.
-- Greg LeMond