From: "Greg Sabino Mullane" <greg(at)turnstep(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Massive table (500M rows) update nightmare
Date: 2010-01-07 17:56:16
Message-ID: e1855b4d57e4dd0a33302860acb4af5c@biglumber.com
Lists: pgsql-performance
> If it is possible to lock this audit table exclusively (may be during
> off peak hours) I would look into
> - create new_audit_table as select col1, col2, col3 ... col9,
> 'new_col_value' from old_audit_table;
> - create all indexes
> - drop old_audit_table
> - rename new_audit_table to old_audit_table
This is a good approach, but you don't necessarily have to lock the
table exclusively. Allowing reads but blocking writes would be enough,
or you could install a trigger to keep track of which rows were
updated. Then the process becomes (a rough SQL sketch follows the list):
1. Create trigger on the old table to store changed pks
2. Create new_audit_table as select col1, col2, col3 ... col9,
'new_col_value' from old_audit_table;
3. Create all indexes on the new table
4. Stop your app from writing to the old table
5. COPY over the rows that have changed
6. Rename the old table to something else (for safety)
7. Rename the new table to the real name
8. Drop the old table when all is good
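For illustration, the steps above might look something like the SQL
below. This is only a sketch: the single bigint primary key "id", the
column names (col1 .. col3 standing in for your col1 .. col9), and the
index are all hypothetical, and it assumes only inserts and updates
need tracking; if rows can be deleted as well, the trigger needs a
DELETE branch too.

-- Step 1: capture the pks of rows that change during the rebuild.
CREATE TABLE audit_changed_pks (id bigint);

CREATE OR REPLACE FUNCTION track_audit_changes() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit_changed_pks VALUES (NEW.id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_changes
    AFTER INSERT OR UPDATE ON old_audit_table
    FOR EACH ROW EXECUTE PROCEDURE track_audit_changes();

-- Steps 2 and 3: build and index the replacement table.
CREATE TABLE new_audit_table AS
    SELECT id, col1, col2, col3, 'new_col_value'::text AS new_col
    FROM old_audit_table;
CREATE INDEX new_audit_table_id_idx ON new_audit_table (id);

-- Step 4: stop the application from writing to old_audit_table here.

-- Step 5: re-apply whatever changed while the copy was running.
DELETE FROM new_audit_table
    WHERE id IN (SELECT id FROM audit_changed_pks);
INSERT INTO new_audit_table
    SELECT id, col1, col2, col3, 'new_col_value'::text
    FROM old_audit_table
    WHERE id IN (SELECT id FROM audit_changed_pks);

-- Steps 6 and 7: swap the tables inside one transaction.
BEGIN;
ALTER TABLE old_audit_table RENAME TO old_audit_table_backup;
ALTER TABLE new_audit_table RENAME TO old_audit_table;
COMMIT;

-- Step 8: only after everything checks out.
-- DROP TABLE old_audit_table_backup;

The delete-then-insert in step 5 covers both rows that were updated
and rows that were inserted after the initial copy, and since IN with
a subselect ignores duplicates, repeated trigger entries for the same
pk are harmless.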
--
Greg Sabino Mullane greg(at)turnstep(dot)com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201001071253
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8