From: Mark Cotner <mcotner(at)yahoo(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: sustained update load of 1-2k/sec
Date: 2005-08-19 08:24:04
Message-ID: 20050819082404.53751.qmail@web32915.mail.mud.yahoo.com
Lists: pgsql-performance
Hi all,
I bet you get tired of the same ole questions over and
over.
I'm currently working on an application that will poll
thousands of cable modems per minute and I would like
to use PostgreSQL to maintain state between polls of
each device. This requires a very heavy volume of
in-place updates on a reasonably large table (100k-500k
rows, ~7 columns, mostly integer/bigint). Each row
will be refreshed every 15 minutes, or at least that's
how fast I can poll via SNMP. I hope I can tune the
DB to keep up.
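For concreteness, the refresh pattern above can be sketched as one batched, single-transaction update per poll cycle. This is a minimal stand-in using Python's stdlib sqlite3 (not PostgreSQL); the table and column names are hypothetical, invented for illustration:

```python
import sqlite3
import time

# Stand-in for the PostgreSQL state table; schema/names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE modem_state (
        modem_id   INTEGER PRIMARY KEY,
        rx_power   INTEGER,
        tx_power   INTEGER,
        snr        INTEGER,
        octets_in  INTEGER,
        octets_out INTEGER,
        polled_at  INTEGER
    )
""")
conn.executemany(
    "INSERT INTO modem_state VALUES (?, 0, 0, 0, 0, 0, 0)",
    [(i,) for i in range(1000)],
)

def apply_poll_batch(conn, rows):
    """Refresh state for one batch of polled modems in a single
    transaction, so a poll cycle costs one commit, not one per row."""
    with conn:  # one transaction per batch
        conn.executemany(
            """UPDATE modem_state
               SET rx_power = ?, tx_power = ?, snr = ?,
                   octets_in = ?, octets_out = ?, polled_at = ?
               WHERE modem_id = ?""",
            rows,
        )

# Simulated poll results: (rx, tx, snr, in, out, ts, modem_id)
now = int(time.time())
batch = [(10, 45, 30, 12345, 6789, now, i) for i in range(1000)]
apply_poll_batch(conn, batch)

updated = conn.execute(
    "SELECT COUNT(*) FROM modem_state WHERE polled_at = ?", (now,)
).fetchone()[0]
print(updated)
```

Batching the per-device updates this way amortizes commit overhead, which matters most at the 1-2k updates/sec target.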
The app is threaded and will likely have well over 100
concurrent db connections. Temp tables for storage
aren't a preferred option, since this is designed as a
shared-nothing approach and I will likely have
several polling processes.
Here are some of my assumptions so far:

- a huge WAL
- vacuum hourly, if not more often
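The "huge WAL" assumption might look something like the following. This is a hedged sketch of a postgresql.conf fragment using 8.0-era parameter names; the values are illustrative starting points, not tuned recommendations:

```
# postgresql.conf sketch for a heavy update workload (8.0-era names;
# values are assumptions for illustration, not tuned recommendations)
checkpoint_segments = 64      # "huge WAL": spread checkpoints out
checkpoint_timeout  = 900     # seconds between forced checkpoints
wal_buffers         = 64      # WAL buffer pages
shared_buffers      = 20000   # ~160MB of shared buffer cache
commit_delay        = 10000   # microseconds; lets nearby commits group
```

With constant in-place updates, spacing checkpoints out and vacuuming aggressively are the usual levers, since dead row versions accumulate quickly.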
I'm getting 1700tx/sec from MySQL and I would REALLY
prefer to use PG. I don't need to match the number,
just get close.
Is there a global temp table option? In-memory tables
would be very beneficial in this case. I could just
flush them to disk occasionally with an insert into blah
select from memory table.
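The staging-then-flush idea above can be sketched as follows; again a stdlib sqlite3 stand-in (an in-memory DB attached alongside a second DB standing in for the durable table), with all names hypothetical:

```python
import sqlite3

# In-memory staging table plus an attached DB standing in for the
# durable table the flush targets; all names are hypothetical.
mem = sqlite3.connect(":memory:")
mem.execute("ATTACH DATABASE ':memory:' AS disk")  # use a file path in practice
mem.execute("CREATE TABLE staging (modem_id INTEGER, snr INTEGER)")
mem.execute("CREATE TABLE disk.modem_state (modem_id INTEGER, snr INTEGER)")

# Pollers write cheap inserts into the staging table...
mem.executemany("INSERT INTO staging VALUES (?, ?)",
                [(i, 30 + i % 5) for i in range(100)])

# ...and a periodic flush moves the whole batch in one transaction,
# mirroring the "insert into ... select from memory table" idea.
with mem:
    mem.execute("INSERT INTO disk.modem_state SELECT * FROM staging")
    mem.execute("DELETE FROM staging")

flushed = mem.execute("SELECT COUNT(*) FROM disk.modem_state").fetchone()[0]
remaining = mem.execute("SELECT COUNT(*) FROM staging").fetchone()[0]
print(flushed, remaining)
```

Note that PostgreSQL temp tables are per-session rather than global, which is why the shared-nothing, multi-process design makes them awkward here.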
Any help or creative alternatives would be greatly
appreciated. :)
'njoy,
Mark
--
Writing software requires an intelligent person,
creating functional art requires an artist.
-- Unknown