From: Ivan Voras <ivoras(at)geri(dot)cc(dot)fer(dot)hr>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: what is the maximum number of rows in a table in postgresql 8.1
Date: 2008-03-25 13:16:43
Message-ID: fsatvr$poa$1@ger.gmane.org
Lists: pgsql-performance
sathiya psql wrote:
> EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;
>                         QUERY PLAN
> ----------------------------------------------------------------------
>  Aggregate  (cost=90760.80..90760.80 rows=1 width=0) (actual time=6069.373..6069.374 rows=1 loops=1)
>    ->  Seq Scan on call_log_in_ram  (cost=0.00..89121.24 rows=3279119 width=0) (actual time=0.012..4322.345 rows=3279119 loops=1)
>  Total runtime: 6069.553 ms
> (3 rows)
COUNT(*) will never be fast in PostgreSQL out of the box: it always
performs a full scan of the table. You can either build your own
infrastructure (triggers, a statistics table, etc.) or settle for an
approximate result like this:
CREATE OR REPLACE FUNCTION fcount(varchar) RETURNS bigint AS $$
    -- reltuples is the planner's row estimate, updated by ANALYZE/VACUUM
    SELECT reltuples::bigint FROM pg_class WHERE relname = $1;
$$ LANGUAGE sql;
Use the above function as:

SELECT fcount('table_name');
 fcount
--------
   7412
(1 row)
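If an exact count is required, the trigger-based approach mentioned above
could be sketched roughly like this (the statistics table, function, and
trigger names below are illustrative, not from the original post):

CREATE TABLE call_log_rowcount (n bigint NOT NULL);
INSERT INTO call_log_rowcount
    SELECT count(*) FROM call_log_in_ram;

CREATE OR REPLACE FUNCTION call_log_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE call_log_rowcount SET n = n + 1;
        RETURN NEW;
    ELSE  -- DELETE
        UPDATE call_log_rowcount SET n = n - 1;
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER call_log_count
    AFTER INSERT OR DELETE ON call_log_in_ram
    FOR EACH ROW EXECUTE PROCEDURE call_log_count_trig();

Then "SELECT n FROM call_log_rowcount" returns the count instantly. The
trade-off is write concurrency: every inserting or deleting transaction
must update the same single row, so heavy concurrent writers will
serialize on it.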