From: "karsten vennemann" <karsten(at)terragis(dot)net>
To: <pgsql-general(at)postgresql(dot)org>
Subject: Re: dump of 700 GB database
Date: 2010-02-17 23:11:44
Message-ID: E27976924EA445D3A387FEC2C72D7D8C@snuggie
Lists: pgsql-general
> Note that cluster on a randomly ordered large table can be
> prohibitively slow, and it might be better to schedule a
> short downtime to do the following (pseudo code):
>
>   alter table tablename rename to old_tablename;
>   create table tablename like old_tablename;
>   insert into tablename
>     select * from old_tablename
>     order by clustered_col1, clustered_col2;
That sounds like a great idea if it saves time.
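Something like this, I suppose (just a sketch; the table and column names
come from the pseudo code above, and the indexes and FK references would
be rebuilt afterwards):

  begin;
  alter table tablename rename to old_tablename;
  create table tablename (like old_tablename including defaults);
  insert into tablename
    select * from old_tablename
    order by clustered_col1, clustered_col2;
  commit;
  -- then recreate indexes and FK references on tablename,
  -- and drop old_tablename once the data checks out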
>> (creating and moving over FK references as needed.)
>> shared_buffers=160MB, effective_cache_size=1GB,
>> maintenance_work_mem=500MB, wal_buffers=16MB,
>> checkpoint_segments=100
> What's work_mem set to?
work_mem = 32MB
> What ubuntu? 64 or 32 bit?
It's 32-bit. I don't know whether a 4 GB file isn't too small for a dump of
a database that was originally 350 GB - nor why pg_restore fails...
> Have you got either a file
> system or a set of pg tools limited to 4Gig file size?
Not sure what the problem is on my server - I'm still trying to figure out
what makes pg_restore fail...
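If it does turn out to be a 4 GB file-size limit, one workaround (a sketch
along the lines of the example in the pg_dump docs; "mydb" and the chunk
size are placeholders) is to split a plain-format dump and feed it back
through psql:

  # dump in plain SQL format, split into 1 GB pieces to stay under the limit
  pg_dump mydb | split -b 1G - mydb.dump.part_

  # restore by concatenating the pieces into psql
  cat mydb.dump.part_* | psql mydb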