From: Joe Conway <mail(at)joeconway(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: John Naylor <john(dot)naylor(at)enterprisedb(dot)com>, Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [RFC] speed up count(*)
Date: 2021-10-21 20:29:09
Message-ID: 4dc52950-06af-9a7c-1d48-e28909f815ec@joeconway.com
Lists: pgsql-hackers
On 10/21/21 4:23 PM, Robert Haas wrote:
> On Thu, Oct 21, 2021 at 4:19 PM Joe Conway <mail(at)joeconway(dot)com> wrote:
>> That is a grossly overstated position. When I have looked, it is often
>> not that terribly far off. And for many use cases that I have heard of
>> at least, quite adequate.
>
> I don't think it's grossly overstated. If you need an approximation it
> may be good enough, but I don't think it will often be exactly correct
> - probably only if the table is small and rarely modified.
meh -- the people who expect this to be impossibly fast don't typically
need or expect it to be exactly correct, and there is no way to make it
"exactly correct" in someone's snapshot without doing all the work.

That is why I didn't suggest making it the default. If you flip the
switch, you would get a very fast approximation. If you care about
accuracy, you accept it has to be slow.
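(For context, the kind of very fast approximation being discussed is already available today by reading the planner's statistics rather than scanning the table -- a sketch, where 'my_table' is a placeholder name:

```sql
-- Approximate row count from the planner's statistics instead of a
-- full scan. Accuracy depends on how recently the table was last
-- VACUUMed or ANALYZEd; note that reltuples can be -1 for a table
-- that has never been vacuumed or analyzed.
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'my_table';
```

This returns in constant time regardless of table size, which is exactly the trade-off above: fast and approximate, versus exact and proportional to the work of a real count.)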
Joe
--
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development