From: Richard Huxton <dev(at)archonet(dot)com>
To: jao(at)geophile(dot)com
Cc: Scott Marlowe <smarlowe(at)qwest(dot)net>, pgsql-general(at)postgresql(dot)org
Subject: Re: Postgresql vs. aggregates
Date: 2004-06-10 07:03:25
Message-ID: 40C807BD.6080905@archonet.com
Lists: pgsql-general
jao(at)geophile(dot)com wrote:
> But that raises an interesting idea. Suppose that instead of one
> summary row, I had, let's say, 1000. When my application creates
> an object, I choose one summary row at random (or round-robin) and update
> it. So now, instead of one row with many versions, I have 1000 with 1000x
> fewer versions each. When I want object counts and sizes, I'd sum up across
> the 1000 summary rows. Would that allow me to maintain performance
> for summary updates with less frequent vacuuming?
Perhaps the simplest approach might be to define the summary table as
containing a SERIAL and your count.
Every time you add another object, insert (nextval(...), 1).
Every 10s, summarise the table (i.e. replace 10 rows each scored 1 with
one row scored 10).
Use sum() over the much smaller table to find your total.
Vacuum regularly.
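
A rough sketch of those steps in SQL, assuming a summary table called
obj_counts (the table and column names are illustrative) and PostgreSQL's
default naming for the SERIAL's sequence; the brief table lock during
consolidation is just one simple way to avoid losing concurrent inserts:

  -- One row per increment; the SERIAL default supplies nextval() for id.
  CREATE TABLE obj_counts (
      id    SERIAL PRIMARY KEY,
      score BIGINT NOT NULL
  );

  -- On every object creation:
  INSERT INTO obj_counts (score) VALUES (1);

  -- Every 10s or so, fold the accumulated rows into a single row.
  BEGIN;
  LOCK TABLE obj_counts IN EXCLUSIVE MODE;  -- blocks writers, readers still allowed
  INSERT INTO obj_counts (score)
      SELECT coalesce(sum(score), 0) FROM obj_counts;
  DELETE FROM obj_counts WHERE id < currval('obj_counts_id_seq');
  COMMIT;

  -- The total at any time:
  SELECT sum(score) FROM obj_counts;

  -- And reclaim the dead rows left behind by the consolidation:
  VACUUM obj_counts;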
--
Richard Huxton
Archonet Ltd