| From: | "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com> |
|---|---|
| To: | "Alan Hodgson" <ahodgson(at)simkin(dot)ca> |
| Cc: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: "Healing" a table after massive updates |
| Date: | 2008-09-11 18:55:16 |
| Message-ID: | dcc563d10809111155r27b3b3a8w9e86368ba6020b32@mail.gmail.com |
| Lists: | pgsql-general |
On Thu, Sep 11, 2008 at 11:15 AM, Alan Hodgson <ahodgson(at)simkin(dot)ca> wrote:
> On Thursday 11 September 2008, "Gauthier, Dave" <dave(dot)gauthier(at)intel(dot)com>
> wrote:
>> I have a job that loads a large table, but then has to "update" about
>> half the records for various reasons. My perception of what happens on
>> update for a particular record is...
>>
>> - a new record will be inserted with the updated value(s).
>>
>> - The old record is marked as being obsolete.
>>
>
> What you might consider doing is loading the data into a temp table,
> updating it there, then copying that data into the final destination.
> Depending on the indexes involved, you might even find this to be faster.
Especially if you can drop the indexes on the real table first, then
recreate them after reimporting the data into it.
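
The approach above might look roughly like this. This is a minimal
sketch, not from the original thread; the table, column, index, and
file names ("big_table", "some_col", "needs_fix", "/path/to/data.csv")
are all hypothetical placeholders:

```sql
BEGIN;

-- Stage the raw load in a temp table, where updates are cheap
-- (no indexes, and dead tuples vanish with the session).
CREATE TEMP TABLE staging (LIKE big_table INCLUDING DEFAULTS);
COPY staging FROM '/path/to/data.csv' WITH CSV;

-- Apply the "update about half the records" step here, so the real
-- table never accumulates the dead row versions.
UPDATE staging SET some_col = 'fixed' WHERE needs_fix;

-- Optionally drop the destination's indexes first, as suggested above.
DROP INDEX IF EXISTS big_table_some_col_idx;

INSERT INTO big_table SELECT * FROM staging;

-- Recreate the indexes after the bulk insert.
CREATE INDEX big_table_some_col_idx ON big_table (some_col);

COMMIT;
```

Because the updates happen in the temp table, the final table receives
each row exactly once and needs no VACUUM afterward to reclaim space.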
| From | Date | Subject | |
|---|---|---|---|
| Next Message | Scott Marlowe | 2008-09-11 18:58:42 | Re: index on id and created_at |
| Previous Message | Alan Hodgson | 2008-09-11 17:15:28 | Re: "Healing" a table after massive updates |