From: dave crane <lists(at)slipt(dot)net>
To: PostgreSQL General <pgsql-general(at)postgresql(dot)org>
Subject: number of tables limited over time (not simultaneous)?
Date: 2007-02-21 03:02:36
Message-ID: 45DBB64C.1030908@slipt.net
Lists: pgsql-general
We've settled on a method for gathering raw statistics from widely
scattered data centers: creating one sequence per event, per minute.
Each process (some LAPP, some shell, some Python, some Perl, etc.) can
call a shell script, which calls ssh->psql to execute nextval() against
that event's sequence. Periodically (every 2-10 minutes, depending on
other factors) another process picks up the value and inserts it into a
permanent home.
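For concreteness, a minimal sketch of what such a wrapper could look
like; the host, database, and sequence naming scheme below are made-up
placeholders, and it assumes the minute's sequence already exists:

    #!/bin/sh
    # bump.sh -- bump the per-event, per-minute counter on the stats box.
    # Host, db, and sequence naming scheme are illustrative only.
    EVENT="$1"
    MINUTE=$(date -u +%Y%m%d%H%M)
    # Assumes seq_<event>_<minute> was created ahead of time.
    ssh stats@pghost "psql -d rawstats -qtAc \"SELECT nextval('seq_${EVENT}_${MINUTE}')\""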
We're only talking 7-10k calls per minute, but moving to this from a
query that does an update has saved a *huge* amount of overhead.
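The pickup step, the part that replaced the per-event UPDATE, could
look roughly like this (table and sequence names are again made-up
placeholders):

    -- Collector sketch: fold one minute's count into the permanent
    -- table, then drop the throwaway sequence.
    -- Note: if nextval() was never called, last_value still reads as
    -- the start value; check the sequence's is_called flag first.
    INSERT INTO event_counts (event, minute_bucket, hits)
        SELECT 'page_view', '200702210300', last_value
        FROM seq_page_view_200702210300;
    DROP SEQUENCE seq_page_view_200702210300;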
If I needed to, a periodic dump and restore would only take a minute.
This data is highly transient. Having to do that more frequently than
biweekly or so would be annoying, though.
Aside from security concerns, did we miss something? Should I be
worried we're going through ~60,000 sequences per day?
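(A standard catalog query shows how many of these throwaway sequences
are live at any given moment, since each one is a pg_class entry:

    SELECT count(*) FROM pg_class WHERE relkind = 'S';

That catalog churn is the part I'm least sure about.)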
TIA,
dave