From: Dave Vitek <dvitek(at)grammatech(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: strategies for dealing with frequently updated tables
Date: 2012-02-28 00:55:39
Message-ID: 4F4C260B.9060301@grammatech.com
Lists: pgsql-general
Hi all,
I have a relation where a tuple typically undergoes a lifecycle
something like:
1) Created
2) Updated maybe thousands of times (no updates to indexed columns though)
3) Rarely or never modified again
The following query takes about 100 minutes (3 seconds per tuple):
SELECT count(id) from T
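Presumably nearly all of that time goes to sequentially scanning the
table's ~300,000 pages rather than to the 1799 live rows themselves;
something like this should confirm it (the BUFFERS option is available
from 9.0):

EXPLAIN (ANALYZE, BUFFERS) SELECT count(id) FROM T;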
Step (2) causes a lot of autovacuum/autoanalyze activity (which is OK).
HOT seems to be used for about 90% of the updates, but enough updates
miss it to cause problems. I'm running PostgreSQL 9.0.3 on a 32-bit
Windows XP machine with 3 GB of RAM, with postgresql.conf at its default
settings.
Table info:
  n_live_tup          1799
  n_dead_tup          191
  pg_relation_size    2343 MB
  index size          10 MB
  toast size          552 kB
  toast index size    16 kB
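For reference, numbers like these can be pulled from pg_stat_user_tables
and the pg_*_size functions; a query along these lines should reproduce
them (pg_indexes_size is available from 9.0; the table name T is assumed
to be unquoted and thus lowercased in the catalog):

SELECT n_live_tup,
       n_dead_tup,
       pg_size_pretty(pg_relation_size(relid)) AS table_size,
       pg_size_pretty(pg_indexes_size(relid))  AS index_size
FROM pg_stat_user_tables
WHERE relname = 't';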
This reports under 10 kB for most tuples:
psql -A -c "select * from T where id=123" | wc -c
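A server-side alternative that skips the text-formatting overhead:
pg_column_size on a whole row reports roughly the stored size of that
row version.

SELECT pg_column_size(T.*) FROM T WHERE id = 123;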
Autovacuum and autoanalyze both ran yesterday with default settings, and
only one or two new tuples have been added since then. The database is
fairly old (probably created with pg_restore around the time 9.0.3 came
out).
Here is the output from VACUUM VERBOSE:
INFO: vacuuming "public.T"
INFO: scanned index "idx1" to remove 249 row versions
DETAIL: CPU 0.00s/0.00u sec elapsed 2.00 sec.
INFO: scanned index "idx2" to remove 249 row versions
DETAIL: CPU 0.00s/0.00u sec elapsed 2.12 sec.
INFO: scanned index "idx3" to remove 249 row versions
DETAIL: CPU 0.00s/0.00u sec elapsed 2.90 sec.
INFO: scanned index "idx4" to remove 249 row versions
DETAIL: CPU 0.00s/0.00u sec elapsed 2.32 sec.
INFO: scanned index "idx5" to remove 249 row versions
DETAIL: CPU 0.00s/0.00u sec elapsed 2.42 sec.
INFO: "T": removed 249 row versions in 249 pages
DETAIL: CPU 0.00s/0.00u sec elapsed 0.06 sec.
INFO: index "idx1" now contains 1976 row versions in 252 pages
DETAIL: 249 index row versions were removed.
210 index pages have been deleted, 210 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO: index "idx2" now contains 1976 row versions in 258 pages
DETAIL: 249 index row versions were removed.
209 index pages have been deleted, 209 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.07 sec.
INFO: index "idx3" now contains 1976 row versions in 259 pages
DETAIL: 249 index row versions were removed.
217 index pages have been deleted, 217 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.06 sec.
INFO: index "idx4" now contains 1976 row versions in 250 pages
DETAIL: 249 index row versions were removed.
206 index pages have been deleted, 206 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.07 sec.
INFO: index "idx5" now contains 1976 row versions in 267 pages
DETAIL: 249 index row versions were removed.
217 index pages have been deleted, 217 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.06 sec.
INFO: "T": found 0 removable, 1444 nonremovable row versions in 1522
out of 299964 pages
DETAIL: 30 dead row versions cannot be removed yet.
There were 10035 unused item pointers.
0 pages are entirely empty.
CPU 0.01s/0.00u sec elapsed 17.24 sec.
INFO: vacuuming "pg_toast.pg_toast_17132"
INFO: index "pg_toast_17132_index" now contains 279 row versions in 2 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO: "pg_toast_17132": found 0 removable, 279 nonremovable row
versions in 69 out of 69 pages
DETAIL: 0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.56 sec.
VACUUM
I imagine CLUSTERing the table would make things happier, but I'm hoping
for a permanent solution that avoids periodic downtime.
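For concreteness, the one-off fix I'm imagining is something like the
following (idx1 is just an example index; as of 9.0, VACUUM FULL also
rewrites the table rather than compacting in place). Both would reclaim
the bloat, but both hold an exclusive lock for the duration:

CLUSTER T USING idx1;  -- rewrites the table in index order
-- or, if physical ordering doesn't matter:
VACUUM FULL T;         -- rewrites without reordering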
One thought was to partition so that rows still in stage (2) live in a
separate table from the more stable rows (rough sketch below). I imagine
the number of rows in stage (2) rarely exceeds 10 at any time. Can I
still get into performance trouble with a table that small after enough
updates? Anyone have other ideas?
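To make the partitioning idea concrete, here's roughly what I have in
mind (all names hypothetical; on 9.0 the view could instead be an
inheritance parent):

-- Hot rows, updated thousands of times; stays tiny, vacuums cheaply:
CREATE TABLE t_hot    (LIKE T INCLUDING ALL);
-- Settled rows, rarely or never modified again:
CREATE TABLE t_stable (LIKE T INCLUDING ALL);
-- Read side sees both:
CREATE VIEW t_all AS
    SELECT * FROM t_hot
    UNION ALL
    SELECT * FROM t_stable;
-- Once a row stops changing, migrate it:
BEGIN;
INSERT INTO t_stable SELECT * FROM t_hot WHERE id = 123;
DELETE FROM t_hot WHERE id = 123;
COMMIT;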
- Dave