From: Cory Tucker <cory(dot)tucker(at)gmail(dot)com>
To: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Most effective settings for deleting lots of data?
Date: 2015-08-26 20:25:02
Message-ID: CAG_=8kBUJsbe1AsUhYgknSSieWE94jZd-9wSpidmq0N_FPw+SA@mail.gmail.com
Lists: pgsql-general
Hi, I am using postgres 9.3 and am preparing to migrate to 9.4. As part of
the migration, I was hoping to also delete a bunch of data that is no
longer needed (100M+ rows across several tables).
I can fairly trivially delete the data by doing a simple statement like
this:
DELETE FROM account WHERE id = 1234;
All the foreign keys are set up with "ON DELETE CASCADE", so this will
effectively drop the entire graph, which is what I want. My desire is to
do this as quickly as possible. The first thing I have done is to disable
autovacuuming on all of the largest tables so that the entire system
doesn't lock up, like so:
ALTER TABLE my_data SET (autovacuum_enabled = false,
toast.autovacuum_enabled = false);
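Once the deletes are finished, I plan to turn autovacuum back on; if I
understand the docs correctly, a RESET restores the defaults:
ALTER TABLE my_data RESET (autovacuum_enabled, toast.autovacuum_enabled);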
I was wondering what other settings might best be applied? For
instance, I was thinking that "work_mem" could be bumped up to some
ridiculously large value if needed. I have the benefit of having
exclusive access to the database at the time this operation needs to
happen, so most options are on the table.
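For example, something along these lines at the session level (the
values here are just guesses on my part, not tested numbers):
SET work_mem = '1GB';
SET maintenance_work_mem = '2GB';  -- in case it helps index rebuilds later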
What settings would you recommend? Also, it just occurred to me that I
should drop or disable all indexes (especially since they will be
recreated later) so that they are not updated in the process.
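Something like this is what I had in mind, where the index name is just
a placeholder for my real ones:
DROP INDEX IF EXISTS account_email_idx;  -- repeat for each large index
-- ... run the big DELETEs here ...
CREATE INDEX account_email_idx ON account (email);  -- rebuild afterwards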
thanks in advance,
--Cory