From: fork <forkandwait(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Tuning massive UPDATES and GROUP BY's?
Date: 2011-03-10 18:04:39
Message-ID: loom.20110310T185007-149@post.gmane.org
Lists: pgsql-performance
Merlin Moncure <mmoncure <at> gmail.com> writes:
> > I am loath to create a new table from a select, since the indexes
> > themselves take a really long time to build.
>
> You are aware that updating the field for the entire table, especially
> if there is an index on it (or any field being updated), will cause
> all your indexes to be rewritten anyway? When you update a record, it
> gets a new position in the table, and a new index entry pointing at
> that position. Insert/select to a temp table, then truncate and
> insert/select back, is usually going to be faster and will save you
> the reindex/cluster. OTOH, if you have foreign keys it can be a
> headache.
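
(For concreteness, the pattern described above would look something like
this -- the table, columns, and transformation are invented for
illustration, not taken from the thread:

    BEGIN;
    -- build the transformed copy once; the temp table has no indexes
    -- to maintain, so the select runs at sequential-write speed
    CREATE TEMP TABLE scratch AS
        SELECT id, upper(name) AS name, amount
        FROM big_table;
    TRUNCATE big_table;   -- discards all the old row versions at once
    INSERT INTO big_table SELECT * FROM scratch;
    DROP TABLE scratch;
    COMMIT;

TRUNCATE is transactional in PostgreSQL, so doing the whole swap inside
one transaction means nobody ever reads an empty table.)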
Hmph. I guess I will have to find a way to automate it, since there will be a
lot of times I want to do this.
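
(One way to automate it is a plpgsql wrapper -- the function name and
the idea of passing the transformed select list in as text are my own
sketch, not something from this thread, and the string concatenation
assumes trusted input:

    CREATE FUNCTION rewrite_table(tbl regclass, select_list text)
    RETURNS void AS $$
    BEGIN
        -- EXECUTE builds each statement fresh, so plpgsql does not
        -- cache plans against the short-lived temp table
        EXECUTE 'CREATE TEMP TABLE rewrite_scratch AS SELECT '
                || select_list || ' FROM ' || tbl;
        EXECUTE 'TRUNCATE ' || tbl;
        EXECUTE 'INSERT INTO ' || tbl
                || ' SELECT * FROM rewrite_scratch';
        EXECUTE 'DROP TABLE rewrite_scratch';
    END;
    $$ LANGUAGE plpgsql;

    -- usage: SELECT rewrite_table('big_table', 'id, upper(name), amount');

Run inside a single transaction it behaves like the hand-written version;
as noted above, foreign keys referencing the table will make the TRUNCATE
complain.)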
> > As the title suggests, I will also be doing GROUP BY's on the data, and
> > would love to speed these up, mostly just for my own impatience...
>
> Need to see the queries here to tell whether you can make them go faster.
I guess I was hoping for a blog entry on general guidelines for a DB that is
used only for batch analysis rather than transaction processing -- things
like "put all your temp tables on a different disk". I will post specifics
later.
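
(In the meantime, the usual knobs for a batch-only box are stock
postgresql.conf parameters -- the values below are guesses that would need
testing against the real workload, not recommendations from this thread:

    # per-sort/per-hash memory; big values help GROUP BY hash instead
    # of sort, but it is allocated per operation, so mind concurrency
    work_mem = 256MB
    # memory for index builds and other maintenance
    maintenance_work_mem = 1GB
    # fewer, smoother checkpoints during bulk loads
    checkpoint_segments = 64
    checkpoint_completion_target = 0.9
    # fine to lose the last moments of a batch you can simply re-run
    synchronous_commit = off

For the GROUP BYs specifically, EXPLAIN ANALYZE shows whether the plan
sorts or hashes; more work_mem is what flips a Sort + GroupAggregate into
a HashAggregate when the grouped set fits in memory.)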