From: | "Jeremy Haile" <jhaile(at)fastmail(dot)fm> |
---|---|
To: | "Jim C(dot) Nasby" <jim(at)nasby(dot)net>, pgsql-performance(at)postgresql(dot)org |
Subject: | Re: High inserts, bulk deletes - autovacuum vs scheduled |
Date: | 2007-01-10 21:48:42 |
Message-ID: | 1168465722.17889.1168630713@webmail.messagingengine.com |
Lists: pgsql-performance
> BTW, that's the default values for analyze... the defaults for vacuum
> are 2x that.
Yeah - I was actually more concerned about tables needing to be analyzed
often enough than about vacuuming too often, which is why I used analyze
as the example. Since my app inserts constantly throughout the day and
queries for "recent" data, I want to make sure the query planner
realizes that there are lots of rows with new timestamps on them. In
other words, if I run a query like "select * from mytable where
timestamp > '9:00am'", I don't want it to have been a day since the
table was last analyzed, because then the planner would think there are
zero rows newer than 9:00am today.
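For example, something along these lines is what I'm worried about
(just a sketch - the table and column are from my loose example above,
and the last_analyze/last_autoanalyze columns assume the per-table
stats views available in recent versions):

  -- The planner's estimate for "recent" rows depends on fresh stats:
  EXPLAIN SELECT * FROM mytable
   WHERE "timestamp" > current_date + time '09:00';

  -- When was the table last analyzed, manually or by autovacuum?
  SELECT relname, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
   WHERE relname = 'mytable';

  -- Refresh the statistics by hand if they're stale:
  ANALYZE mytable;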
> What's more important
> is to make sure critical tables (such as queue tables) are getting
> vacuumed frequently so that they stay small.
Is the best way to do that usually to lower the scale factors? When
configuring a specific table, is it ever a good approach to lower the
scale factor to zero and just set the threshold to a fixed number of
rows?
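For a specific table I'm picturing something along these lines (purely
a sketch - "queue_table" is a made-up name, and the exact mechanism for
per-table autovacuum settings depends on the PostgreSQL version; this
shows the table storage-parameter form):

  ALTER TABLE queue_table SET (
      autovacuum_vacuum_scale_factor  = 0.0,   -- ignore table size entirely
      autovacuum_vacuum_threshold     = 1000,  -- vacuum after ~1000 dead rows
      autovacuum_analyze_scale_factor = 0.0,
      autovacuum_analyze_threshold    = 1000   -- analyze after ~1000 changed rows
  );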
Thanks,
Jeremy Haile