From: | Rod Taylor <rbt(at)zort(dot)ca> |
---|---|
To: | PostgreSQL general list <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: [HACKERS] performance issues |
Date: | 2002-08-02 18:08:02 |
Message-ID: | 1028311682.10895.27.camel@jester |
Lists: | pgsql-general pgsql-hackers |
On Fri, 2002-08-02 at 11:39, Andrew Sullivan wrote:
> On Fri, Aug 02, 2002 at 03:48:39PM +0400, Yaroslav Dmitriev wrote:
> >
> > So I am still interested in PostgreSQL's ability to deal with
> > multimillon records tables.
>
> [x-posted and Reply-To: to -general; this isn't a development
> problem.]
>
> We have tables with multimillion records, and they are fast. But not
> fast to count(). The MVCC design of PostgreSQL will give you very
> few concurrency problems, but you pay for that in the response time
> of certain kinds of aggregates, which cannot use an index.
Of course, as suggested, this is easily overcome by keeping your own
counter.
begin;
insert into bigtable values (...);
update counttable set count = count + 1;
commit;
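
If you'd rather not touch every insert path by hand, a trigger can do
the bookkeeping for you. A rough sketch (assuming a single-row
counttable; the table and function names here are just illustrative,
and deletes would need a matching decrement trigger):

    create table counttable (count bigint not null);
    insert into counttable values (0);

    create function bump_count() returns trigger as $$
    begin
        -- one counter row; every insert on bigtable bumps it
        update counttable set count = count + 1;
        return new;
    end;
    $$ language plpgsql;

    create trigger bigtable_count
        after insert on bigtable
        for each row execute procedure bump_count();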
Now you get all the fun concurrency issues -- but fetching the count
will be quick. Which happens more often: the counts or the
inserts? :)
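
If that single counter row turns into a hot spot (every inserter
queues up to update the same row), one way around it -- a sketch of a
common workaround, not something from this thread -- is to append a
+1 delta row per insert and sum the deltas on read, compacting them
back down now and then:

    insert into counttable values (1);    -- per insert, no row contention
    select sum(count) from counttable;    -- fast while the table stays small

    -- periodic compaction back to one row
    -- (data-modifying CTEs need PostgreSQL 9.1 or later):
    with old as (delete from counttable returning count)
    insert into counttable select coalesce(sum(count), 0) from old;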