From: Scott Carey <scott(at)richrelevance(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Cc: Robert Schnabel <schnabelr(at)missouri(dot)edu>, "david(at)lang(dot)hm" <david(at)lang(dot)hm>, pgsql-performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: How to best use 32 15k.7 300GB drives?
Date: 2011-01-28 17:50:47
Message-ID: C9684183.1E40B%scott@richrelevance.com
Lists: pgsql-performance
On 1/28/11 9:28 AM, "Stephen Frost" <sfrost(at)snowman(dot)net> wrote:
>* Scott Marlowe (scott(dot)marlowe(at)gmail(dot)com) wrote:
>> There's nothing wrong with whole table updates as part of an import
>> process, you just have to know to "clean up" after you're done, and
>> regular vacuum can't fix this issue, only vacuum full or reindex or
>> cluster.
>
>Just to share my experiences- I've found that creating a new table and
>inserting into it is actually faster than doing full-table updates, if
>that's an option for you.
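The rewrite-and-swap approach Stephen describes could be sketched roughly like this (illustrative only; `big_table`, `pk`, `col`, `other_col`, and `expensive_expr` are made-up names, and any indexes, constraints, and grants would need to be recreated on the new table before the swap):

```sql
-- Instead of rewriting every row in place, which leaves a dead
-- tuple behind for each updated row:
--   UPDATE big_table SET col = expensive_expr(col);

BEGIN;

-- Build a fresh, densely packed copy with the new values.
CREATE TABLE big_table_new AS
    SELECT pk, expensive_expr(col) AS col, other_col
    FROM big_table;

-- (Recreate indexes and constraints on big_table_new here.)

-- Atomically swap the tables.
ALTER TABLE big_table RENAME TO big_table_old;
ALTER TABLE big_table_new RENAME TO big_table;

COMMIT;

DROP TABLE big_table_old;
```

The insert path writes each row once with no old version to vacuum away afterward, which is why it can beat a full-table UPDATE followed by VACUUM FULL or CLUSTER.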
I wonder if postgres could automatically optimize that: if it expected to
update more than X% of a table and HOT was not going to help, it could
create a new table file for XIDs at or above the one making the change,
leave the old file for older XIDs, and then let regular VACUUM discard
the old file once no transaction could still see it.
>
> Thanks,
>
> Stephen