From: dracula007@atlas.cz
To: pgsql-sql@postgresql.org
Subject: Re: **SPAM** Faster count(*)?
Date: 2005-08-09 23:29:58
Message-ID: 196456155.20050810012958@karneval.cz
Lists: pgsql-sql
I believe running count(*) means a full table scan, and there's no way
to do it without one. But what about some "intermediate" table holding
the necessary counts?

That means creating a table with the values (counts) you need, and on
every insert/delete/update incrementing or decrementing the appropriate
values. That way you won't need the count(*) query anymore, and the
performance should be much better.
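
Something along these lines (just a rough sketch, untested -- the
"boxes" table and all the names here are made up, adapt them to your
schema). One counter row per tracked table, seeded once with a real
count(*) and then kept current by an AFTER trigger:

    CREATE TABLE row_counts (
        table_name text PRIMARY KEY,
        n_rows     bigint NOT NULL
    );

    -- seed the counter with one last full scan
    INSERT INTO row_counts VALUES ('boxes', (SELECT count(*) FROM boxes));

    CREATE OR REPLACE FUNCTION maintain_row_count() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE row_counts SET n_rows = n_rows + 1
             WHERE table_name = TG_RELNAME;
        ELSIF TG_OP = 'DELETE' THEN
            UPDATE row_counts SET n_rows = n_rows - 1
             WHERE table_name = TG_RELNAME;
        END IF;
        RETURN NULL;  -- return value is ignored for AFTER triggers
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER boxes_row_count
        AFTER INSERT OR DELETE ON boxes
        FOR EACH ROW EXECUTE PROCEDURE maintain_row_count();

After that, getting the count is a single-row lookup instead of a scan:

    SELECT n_rows FROM row_counts WHERE table_name = 'boxes';

(A plain row count only changes on INSERT and DELETE; UPDATEs matter
only if you break the counts down by some column. Also note all writers
update the same counter row, so they will serialize on it under heavy
concurrent load.)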
t.v.
> Hello.
> I understand from various web searches and so on that PostgreSQL's MVCC
> mechanism makes it very hard to use indices or table metadata to optimise
> count(*). Is there a better way to guess the "approximate size" of a table?
> I'm trying to write a trigger that fires on insert and performs some
> maintenance (collapsing overlapping boxes into a single large box,
> specifically) as the table grows. My initial attempt involved count(*) and,
> as the number of pages in the table grew, that trigger bogged down the
> database.
> Any thoughts?