From: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
To: mark(at)mark(dot)mielke(dot)cc
Cc: Richard Huxton <dev(at)archonet(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Improving count(*)
Date: 2005-11-22 23:11:01
Message-ID: 200511222311.jAMNB1Y00922@candle.pha.pa.us
Lists: pgsql-hackers
mark(at)mark(dot)mielke(dot)cc wrote:
> On Fri, Nov 18, 2005 at 03:46:42PM +0000, Richard Huxton wrote:
> > Simon Riggs wrote:
> > >One of the major complaints is always "Select count(*) is slow".
> > Although there seem to have been plenty of ideas on this they all seem
> > to just provide a solution for the "whole table" case. It might be that
> > the solution provides other benefits, but for this one case it does seem
> > like a lot of work.
>
> Or, it wasn't explained properly as to how the WHERE clause would
> function.
>
> The solution involving an index that has visibility information should
> work fine with a WHERE clause. Only index rows that match the clause
> are counted.
>
> A solution enhancing the above mentioned indexes, to maintain a count
> for whole index blocks, would allow whole index blocks that satisfy
> the WHERE clause to be counted, assuming the whole index block is
> visible in the current transaction.
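To restate that idea in rough C, with every structure and function name invented for illustration (none of this is existing PostgreSQL code):

#include <stdbool.h>

/*
 * Hypothetical per-index-block summary: an entry count plus a flag
 * saying every entry on the block is visible to all transactions.
 */
typedef struct IndexBlockSummary
{
    int  nentries;      /* index entries on this block */
    bool all_visible;   /* every entry visible to everyone? */
} IndexBlockSummary;

/*
 * Count entries over the blocks an index range scan selected for the
 * WHERE clause: a whole block is counted at once when its flag is set;
 * otherwise fall back to per-entry visibility checks.
 */
static long
count_matching_entries(const IndexBlockSummary *blocks, int nblocks,
                       long (*count_visible) (int blkno))
{
    long total = 0;

    for (int blkno = 0; blkno < nblocks; blkno++)
    {
        if (blocks[blkno].all_visible)
            total += blocks[blkno].nentries;
        else
            total += count_visible(blkno);
    }
    return total;
}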
I think it would be very difficult to maintain a per-index-page
visibility bit because I can't think of a normal database operation that
would let us update it. Setting it requires an index scan that visits
every heap page to check the visibility of the tuples, and we almost
never do a full-index scan because it is so much slower than a heap
scan. It would be easy to keep a heap-visible bit up to date (because
we do full-heap scans occasionally), but then an index scan would have
to load the heap page to find the bit. (The bit isn't in the index; it
is on the heap page.)
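For example, even a trivial check like the sketch below (invented names and layout, not existing code) only works once the 8k heap page has been read in:

#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical layout: the heap-visible bit lives in the heap page
 * header, so an index scan must fetch the whole 8k heap page just to
 * test it.  All names here are invented for illustration.
 */
#define PD_HEAP_VISIBLE 0x0004          /* assumed page-header flag bit */

typedef struct HeapPageHeaderSketch
{
    uint16_t pd_flags;                  /* per-page status bits */
    /* ... the rest of the 8k page follows ... */
} HeapPageHeaderSketch;

/* Only callable after the heap page has been loaded into a buffer. */
static bool
heap_page_all_visible(const HeapPageHeaderSketch *page)
{
    return (page->pd_flags & PD_HEAP_VISIBLE) != 0;
}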
Jan has been talking about having a bitmap to track pages that need
vacuuming, and I am wondering if the same system could be used to track
the heap-visible bits. Putting one bit on every 8k disk page means we
have to load the 8k page just to read the bit, while a centralized
bitmap would pack the bits for 64k heap pages into a single 8k page.
That one page would cover 512MB of a table. It seems vacuum could use
the same bitmap values.
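As a sanity check on that arithmetic, a minimal sketch of the addressing (constants and names invented; not existing code):

#include <stdint.h>

/*
 * Hypothetical centralized-bitmap addressing, assuming one visibility
 * bit per 8k heap page, packed into ordinary 8k bitmap pages:
 *
 *   8192 bytes * 8 bits   = 65536 bits per bitmap page
 *   65536 heap pages * 8k = 512MB of table per bitmap page
 */
#define BLCKSZ        8192
#define BITS_PER_PAGE (BLCKSZ * 8)      /* 65536 */

/* Which bitmap page holds the bit for a given heap block number? */
static inline uint32_t
bitmap_page_for(uint32_t heap_blkno)
{
    return heap_blkno / BITS_PER_PAGE;
}

/* Byte offset of that bit within the bitmap page... */
static inline uint32_t
bitmap_byte_for(uint32_t heap_blkno)
{
    return (heap_blkno % BITS_PER_PAGE) / 8;
}

/* ...and the mask to test or set it. */
static inline uint8_t
bitmap_mask_for(uint32_t heap_blkno)
{
    return (uint8_t) (1 << (heap_blkno % 8));
}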
Assuming we track 100 tables regularly, that would require 800k of shared
memory. We would have to page the bitmaps in and out as needed, but we
could throw them away on a crash and rebuild them as part of normal
operation.
FSM has not been a concurrency bottleneck, so I am thinking this would
not be either.
I suppose it would require a new filesystem file for each table.
--
Bruce Momjian | http://candle.pha.pa.us
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073