From: "Dann Corbit" <DCorbit(at)connx(dot)com>
To: "Wes" <wespvp(at)syntegra(dot)com>, "Postgres general mailing list" <pgsql-general(at)postgresql(dot)org>
Subject: Re: [HACKERS] Much Ado About COUNT(*)
Date: 2005-01-14 23:11:32
Message-ID: D425483C2C5C9F49B5B7A41F894415470557DF@postal.corporate.connx.com
Lists: pgsql-general
A cardinality estimate function might be nice:

SELECT cardinality_estimate(table_name);

If the result is off by 25%, it's no big deal. I imagine it would also be useful for the PostgreSQL query planner.
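For what it's worth, something close to this can already be read out of the planner's own statistics: ANALYZE stores a row-count estimate in pg_class.reltuples. A minimal sketch, assuming the table has been ANALYZEd (or vacuumed) reasonably recently; the table name 'my_big_table' is a placeholder:

```sql
-- Rough row-count estimate from the planner's statistics.
-- reltuples is maintained by ANALYZE/VACUUM, so it can be stale,
-- but reading it is essentially instant regardless of table size.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'my_big_table';
```

The estimate is only as fresh as the last ANALYZE, so on a heavily updated table it could drift well past the 25% tolerance mentioned above until statistics are regathered.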
-----Original Message-----
From: pgsql-general-owner(at)postgresql(dot)org
[mailto:pgsql-general-owner(at)postgresql(dot)org] On Behalf Of Wes
Sent: Friday, January 14, 2005 2:59 PM
To: Postgres general mailing list
Subject: Re: [GENERAL] [HACKERS] Much Ado About COUNT(*)
On 1/14/05 12:47 PM, "Frank D. Engel, Jr." <fde101(at)fjrhome(dot)net> wrote:
> It's probably too messy to be worthwhile this
> way, though. More trouble than it would be worth.
It would be rather useful if there were a way to get a reasonably
accurate count (better than ANALYZE provides) in a very short period.
When you've got a relatively wide table that has hundreds of millions
to over a billion rows, and you need to report on how many rows are in
the table, that can take a long time.
Wes
---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings