From: Holger Marzen <holger(at)marzen(dot)de>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: ShmemAlloc errors
Date: 2003-10-20 10:00:12
Message-ID: Pine.LNX.4.58.0310201158210.19386@bluebell.marzen.de
Lists: pgsql-general
On Sun, 19 Oct 2003, Greg Stark wrote:
>
> Holger Marzen <holger(at)marzen(dot)de> writes:
>
> > I use PostgreSQL for counting network traffic, I use a sample every five
> > minutes. Because my queries became too slow I simply added another table
> > that holds the data per day. Every day, yesterday's data get added,
> > inserted into the "day"-table and deleted from the 5-minutes-table. I
> > don't need the 5 minutes accuracy for all of the data.
>
> The original poster said he needed the 5 minute data.
Yes, but for how long? Really for months? The compression scheme above
can be varied: he could, for example, keep the 5-minute data for a week
or a month and use the daily data for billing.
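Concretely, the nightly rollup could look something like this. This is
only a sketch; the table and column names (traffic_5min(ts, host, bytes)
and traffic_daily(day, host, bytes)) are illustrative, not the actual
schema:

  -- Aggregate yesterday's 5-minute samples into one row per host.
  -- Hypothetical tables: traffic_5min(ts, host, bytes),
  -- traffic_daily(day, host, bytes).
  INSERT INTO traffic_daily (day, host, bytes)
  SELECT current_date - 1, host, sum(bytes)
    FROM traffic_5min
   WHERE ts >= current_date - 1
     AND ts <  current_date
   GROUP BY host;

Run from cron shortly after midnight, this keeps the daily table one
day behind the raw table, which is fine for billing.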
> However, perhaps a combination could be a good compromise. We used to keep raw
> one-record-per-hit data in a table and queried that for statistics. Later we
> aggregated the data once per hour but kept the raw data as well. The reports
> used the aggregate data for speed but the raw data was still available for
> debugging or auditing.
Yes, exactly.
> This was very handy when the database became too large, we started purging the
> raw data after 30 days but the reports were all still fine as we could keep
> the aggregate data indefinitely.
Yup.
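For illustration, the purge itself is then just a cheap DELETE against
the raw table (same hypothetical table name as in the sketch above; the
30-day window is the retention policy Greg describes):

  -- Drop raw 5-minute samples older than 30 days; the daily
  -- aggregates in traffic_daily are kept indefinitely.
  DELETE FROM traffic_5min
   WHERE ts < current_date - 30;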
--
PGP/GPG Key-ID:
http://blackhole.pca.dfn.de:11371/pks/lookup?op=get&search=0xB5A1AFE1