From: Himanshu Baweja <himanshubaweja(at)yahoo(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Richard Huxton <dev(at)archonet(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Stats not getting updated....
Date: 2005-06-02 16:07:12
Message-ID: 20050602160712.68456.qmail@web51009.mail.yahoo.com
Lists: pgsql-general
Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> backends only ship stats to the collector at transaction commit.
> Or maybe it's at the end of processing a client command. It's certainly
> not continuous.
Yup, that I already know. But is there any way to make it ship the stats more frequently? Right now I see updates only about 4 times in 30 minutes, which makes the stats useless for my purpose.

Or is there another way to measure table usage? What I am trying to do is read each table's heap_blks_read from time to time, so I can see how much I/O each table generates and when. I am sampling every 2 minutes.

Once I have identified which tables are being used and when, we can move them to different partitions for better performance.

Is there any way to get at the table usage?
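For reference, the delta computation I am doing on the samples looks roughly like this (a minimal Python sketch; the dicts stand in for rows that would actually come from `SELECT relname, heap_blks_read FROM pg_statio_user_tables`):

```python
# Sketch of the 2-minute sampling idea: diff successive snapshots of
# heap_blks_read per table to get blocks read in each interval.
# Snapshots are plain dicts here so the delta logic is self-contained;
# in the real script each dict is built from pg_statio_user_tables.

def io_deltas(prev, curr):
    """Blocks read per table between two snapshots.

    Tables that appear only in the newer snapshot count from zero.
    """
    return {table: curr[table] - prev.get(table, 0) for table in curr}

# Hypothetical table names and counter values, for illustration only.
prev = {"orders": 1000, "users": 400}
curr = {"orders": 1250, "users": 400, "logs": 90}

print(io_deltas(prev, curr))  # {'orders': 250, 'users': 0, 'logs': 90}
```

Since the heap_blks_read counters only grow, a spike in the delta for a table pinpoints when that table's I/O happens.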
Thx,
Himanshu