From: Dennis Bjorklund <db(at)zigo(dot)dhs(dot)org>
To: Kishore B <kishorebh(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Need help in setting optimal configuration for a huge database.
Date: 2005-10-23 10:04:07
Message-ID: Pine.LNX.4.44.0510231200490.11189-100000@zigo.dhs.org
Lists: pgsql-admin pgsql-performance
On Sun, 23 Oct 2005, Kishore B wrote:
> We need to insert into the bigger table almost every second, throughout
> its lifetime. In addition, we receive at least 200,000 records a day at
> a fixed time.
>
> We are facing a critical situation because of the performance of the
> database. Even a basic query like select count(*) from bigger_table is
> taking about 4 minutes to return.
A count(*) like that always scans the full table, but 4 minutes still
sounds like a lot. How often do you vacuum? Could the table be full of
dead row versions because it is not vacuumed often enough?
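If so, a manual vacuum is the first thing to try. A minimal sketch, using the table name from your query (VERBOSE is optional, but its output shows how many dead row versions each table holds, which tells you whether bloat is really the problem):

```sql
-- Reclaim space from dead row versions and refresh the planner's
-- statistics in one pass. VERBOSE reports, per table, the number of
-- removable and nonremovable row versions it found.
VACUUM VERBOSE ANALYZE bigger_table;
```

With a workload that inserts every second plus bulk loads, you likely want this to run on a regular schedule rather than only when performance degrades.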
A query like this can help find bloat:
SELECT oid::regclass, reltuples, relpages FROM pg_class ORDER BY 3 DESC;
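If you do not need an exact count, the planner's estimate from the same catalog avoids the full scan entirely; it is approximate and only as fresh as the last VACUUM or ANALYZE:

```sql
-- reltuples is the planner's row-count estimate, maintained by
-- VACUUM and ANALYZE; reading it is instant regardless of table size.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'bigger_table';
```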
I assume you do updates and deletes as well, and not just inserts?
--
/Dennis Björklund