From: "Y Sidhu" <ysidhu(at)gmail(dot)com>
To: "Jim C. Nasby" <decibel(at)decibel(dot)org>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: pg_stats how-to?
Date: 2007-05-18 23:26:05
Message-ID: b09064f30705181626u46fd68fu6a98be475734c20f@mail.gmail.com
Lists: pgsql-performance
On 5/15/07, Jim C. Nasby <decibel(at)decibel(dot)org> wrote:
>
> On Mon, May 14, 2007 at 08:20:49PM -0400, Tom Lane wrote:
> > "Y Sidhu" <ysidhu(at)gmail(dot)com> writes:
> > > it may be table fragmentation. What kind of tables? We have 2 of them
> > > which experience lots of adds and deletes only. No updates. So a typical
> > > day experiences record adds a few dozen times on the order of 2.5 million.
> > > And deletes once daily. Each of these tables has about 3 btree indexes.
> >
> > With an arrangement like that you should vacuum once daily, shortly
> > after the deletes --- there's really no point in doing it on any other
> > schedule. Note "shortly" not "immediately" --- you want to be sure that
> > any transactions old enough to see the deleted rows have ended.
>
> Also, think about ways you might avoid the deletes altogether. Could you
> do a truncate instead? Could you use partitioning? If you are using
> deletes then look at CLUSTERing the table some time after the deletes
> (but be aware that prior to 8.3 CLUSTER doesn't fully obey MVCC).
>
> To answer your original question, a way to take a look at how bloated
> your tables are would be to ANALYZE, divide reltuples by relpages from
> pg_class (gives how many rows per page you have) and compare that to 8k
> / average row size. The average row size for table rows would be the sum
> of avg_width from pg_stats for the table + 24 bytes overhead. For
> indexes, it would be the sum of avg_width for all fields in the index
> plus some overhead (8 bytes, I think).
>
> An even simpler alternative would be to install contrib/pgstattuple and
> use the pgstattuple function, though IIRC that does read the entire
> relation from disk.
> --
> Jim Nasby decibel(at)decibel(dot)org
> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
>
Here are my results:
a. SELECT sum(reltuples)/sum(relpages) as rows_per_page FROM pg_class;
I get 66
b. SELECT (8000/(sum(avg_width)+24)) as table_stat FROM pg_stats;
I get 1
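Both of those queries aggregate over every table in the cluster at once, which mixes system catalogs and user tables together. A per-table version of Jim's calculation, as a sketch only (`my_table` is a placeholder name; the 24-byte per-row overhead and 8192-byte page size are the figures from his description), might look like:

```sql
-- Rows per page actually achieved for one table (run ANALYZE on it first):
SELECT relname, reltuples / relpages AS rows_per_page
FROM pg_class
WHERE relname = 'my_table' AND relpages > 0;

-- Rough best-case rows per page: 8192-byte page / (avg row width + 24 bytes overhead)
SELECT 8192 / (sum(avg_width) + 24) AS est_rows_per_page
FROM pg_stats
WHERE tablename = 'my_table';
```

Comparing the two numbers for a single table should give a better idea of how bloated that particular table is than a cluster-wide average.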
Yudhvir