From: Tino Wildenhain <tino(at)wildenhain(dot)de>
To: "Craig A(dot) James" <cjames(at)modgraph-usa(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance of count(*)
Date: 2007-03-22 15:31:39
Message-ID: 4602A15B.7000908@wildenhain.de
Lists: pgsql-performance
Craig A. James wrote:
...
> In our case (for a variety of reasons, but this one is critical), we
> actually can't use Postgres indexing at all -- we wrote an entirely
> separate indexing system for our data, one that has the following
> properties:
>
> 1. It can give out "pages" of information (i.e. "rows 50-60") without
> rescanning the skipped pages the way "limit/offset" would.
> 2. It can give accurate estimates of the total rows that will be returned.
> 3. It can accurately estimate the time it will take.
>
That's certainly not entirely correct. There is no need to store and
maintain this information alongside Postgres when you can store and
maintain it directly in Postgres as well. Since you have an outside
application, I think I can safely assume you do fewer updates than
reads, so the extra bookkeeping actually pays off. So why not store
this information in separate "index" and "statistic" tables? You would
only have to join with your real data for retrieval. On top of that,
Postgres has a very flexible and extensible index system. This would
mean you save on database round trips and duplicated information
storage (and the sync problems you certainly get from it).
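
As a minimal sketch of what I mean (table and column names are made up
for illustration, not from your schema): a precomputed rank column lets
you fetch "rows 50-60" with an index range scan instead of
limit/offset, and a maintained count gives the total without a full
count(*):

    -- side tables, kept in sync by triggers or by the application
    CREATE TABLE doc_index (
        doc_id integer PRIMARY KEY REFERENCES documents(id),
        rank   integer NOT NULL   -- precomputed ordering position
    );
    CREATE INDEX doc_index_rank_idx ON doc_index (rank);

    CREATE TABLE doc_statistic (
        query_key text PRIMARY KEY,
        row_count bigint NOT NULL  -- maintained total, for estimates
    );

    -- "rows 50-60" without scanning the skipped rows:
    SELECT d.*
      FROM doc_index i
      JOIN documents d ON d.id = i.doc_id
     WHERE i.rank BETWEEN 50 AND 60
     ORDER BY i.rank;

    -- accurate total without a full count(*):
    SELECT row_count FROM doc_statistic
     WHERE query_key = 'all_documents';

Whether maintaining the rank column on writes is cheap enough depends
on your update rate, hence the read-mostly assumption above.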
Regards
Tino