From: | "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com> |
---|---|
To: | Jean-Luc Lachance <jllachan(at)nsd(dot)ca> |
Cc: | Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, "Charles H(dot) Woloszynski" <chw(at)clearmetrix(dot)com>, Richard Huxton <dev(at)archonet(dot)com>, <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: Win2K Questions |
Date: | 2002-11-08 22:28:40 |
Message-ID: | Pine.LNX.4.33.0211081524350.10943-100000@css120.ihs.com |
Lists: | pgsql-general |
Only helps the case where you're getting a total count though, and
requires that there be a "count" variable for each table for each
transaction in progress, since each can have a different count. But it
doesn't help at all for
select count(*) from table where id >10000;
which is also pretty common. I think the real problem is that this is one
of those things that is quite hard to optimize in an MVCC database.
This solution may be best implemented in userland, by having a separate
table that stores the counts of the tables you're interested in, and uses
the MVCC system to provide different counts to each transaction.
But the performance of updating that secondary table may be worse than
just running a count(*).
I doubt the black (gray??? :-) magic needed to do this will be put into
the backend of postgresql any time soon. But the userland solution is
something that could be quite useful.
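A minimal sketch of that userland approach: a counts table kept in step by
triggers on the tracked table. All names here (items, row_counts) are made
up for illustration, and the demo uses SQLite through Python so it is
self-contained; a real PostgreSQL version would use plpgsql trigger
functions instead, and would inherit MVCC visibility on the counts table
for free.

```python
# Sketch of the "userland" count table: triggers keep row_counts in step
# with inserts/deletes so reading the count never scans the big table.
# Table and trigger names are invented for this demo.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY);

-- one row per tracked table, holding its current row count
CREATE TABLE row_counts (tablename TEXT PRIMARY KEY, n INTEGER);
INSERT INTO row_counts VALUES ('items', 0);

-- bump the count on every insert, decrement on every delete
CREATE TRIGGER items_ins AFTER INSERT ON items
BEGIN
    UPDATE row_counts SET n = n + 1 WHERE tablename = 'items';
END;
CREATE TRIGGER items_del AFTER DELETE ON items
BEGIN
    UPDATE row_counts SET n = n - 1 WHERE tablename = 'items';
END;
""")

cur.executemany("INSERT INTO items (id) VALUES (?)",
                [(i,) for i in range(100)])
cur.execute("DELETE FROM items WHERE id < 10")

# cheap "count(*)" lookup, no scan of items
cur.execute("SELECT n FROM row_counts WHERE tablename = 'items'")
print(cur.fetchone()[0])  # prints 90
```

Note this demonstrates exactly the cost Scott warns about: every
insert/delete now also updates the single row_counts row, which under
heavy concurrent write load serializes writers on that row.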
On Fri, 8 Nov 2002, Jean-Luc Lachance wrote:
> Scott,
>
> You answered the question yourself. The operative keyword here is
> *before* the transaction started.
> You store the global count before the transaction. While in a
> transaction, you track the number of inserted and deleted records. When
> *all* parallel transactions are committed, you update the global count
> with the net total of inserted and deleted records. If a connection starts
> a new transaction before the other transactions have been
> committed, you take the global count plus the adjustment from the previous
> transaction.
>
> JLL
>
> "scott.marlowe" wrote:
> >
> > but how do you handle the case where two people have two different
> > connections, and one starts a serializable transaction and adds n rows to
> > the table. For that transaction, there are x+n rows in the table, while
> > for the transaction started before his, there are only x rows. Which is
> > the "right" answer?
> >
> > On Fri, 8 Nov 2002, Jean-Luc Lachance wrote:
> >
> > > Here is a suggestion.
> > >
> > > When a count(*) is computed (for all records) store that value and
> > > unvalidate it if there is a later insert or delete on the table. Next
> > > improvement would be to maintain a count per active transaction.
> > >
> > > Bruce Momjian wrote:
> > > >
> > > > Charles H. Woloszynski wrote:
> > > > >
> > > > >
> > > > > Richard Huxton wrote:
> > > > >
> > > > > >Depends on usage patterns and how you build your application. There are a
> > > > > >couple of oddities with workarounds: count() and max() aren't very optimised
> > > > > >for example.
> > > > > >
> > > > > You can 'fix' the max() SNAFU with a query of the form
> > > > > "select field from tbl order by field desc limit 1"
> > > > >
> > > > > I call it a SNAFU since I hate to have to change queries from
> > > > > something obvious to a more obscure format just to work around
> > > > > an optimizer issue.
> > > > >
> > > > > Not sure if there is an equivalent query to make count() work
> > > > > faster
> > > >
> > > > The problem with optimizing COUNT() is that different backends have
> > > > different tuple views, meaning the count from one backend could be
> > > > different than from another backend. I can't see how to optimize that.
> > > > Does Oracle do it? Maybe by looking at their redo segments. We don't
> > > > have those because redo is stored in the main table.
> > > >
> > > > --
> > > > Bruce Momjian | http://candle.pha.pa.us
> > > > pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 359-1001
> > > > + If your life is a hard drive, | 13 Roberts Road
> > > > + Christ can be your backup. | Newtown Square, Pennsylvania 19073
> > > >
> > > > ---------------------------(end of broadcast)---------------------------
> > > > TIP 6: Have you searched our list archives?
> > > >
> > > > http://archives.postgresql.org
> > >
> > > ---------------------------(end of broadcast)---------------------------
> > > TIP 3: if posting/reading through Usenet, please send an appropriate
> > > subscribe-nomail command to majordomo(at)postgresql(dot)org so that your
> > > message can get through to the mailing list cleanly
> > >
> >
>