From: Alex Avriette <a_avriette(at)acs(dot)org>
To: "'pgsql-general(at)postgresql(dot)org'" <pgsql-general(at)postgresql(dot)org>
Subject: Suitability of postgres for very high transaction volume
Date: 2001-12-10 14:01:59
Message-ID: 32BAF2A2B169D411A081009027464529025DB3E7@ATD-NT5
Lists: pgsql-general
I'm intending to use postgres as a new backend for a server I am running.
The throughput is roughly 8 GB per day over 10,000 concurrent connections. At
the moment, the software in question is using complex hashes and b-trees. My
feeling was that the people who wrote postgres are more familiar with complex
data storage than I am, and that it would be faster to offload the indexing of
files and whatnot to postgres. So its function would be as a pseudo-filesystem
with searching capabilities and also as a userdb/authentication db. I'm using
Perl's POE, so there could conceivably be
several dozen to even a hundred or more concurrent queries. The amount of
data exchange in these queries would be very small. But over the course of a
day, it will add up to quite a bit. The server in question has a gig of ram
and sits on a T1.
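
To make this concrete, here is a rough sketch of the kind of access pattern I
have in mind. The table names, columns, and connection details below are just
placeholders, and plain DBI calls like these block, so under POE they would
have to be farmed out to child processes or a non-blocking wrapper; but each
query only moves a handful of bytes:

#!/usr/bin/perl -w
use strict;
use DBI;

# Hypothetical schema: a "files" table standing in for the pseudo-filesystem
# and a "users" table for authentication. All names here are placeholders.
my $dbh = DBI->connect('dbi:Pg:dbname=appdb', 'appuser', 'secret',
                       { RaiseError => 1, AutoCommit => 1 });

# Prepared once, executed per request; each execution exchanges very little data.
my $lookup = $dbh->prepare(
    'SELECT path, size, checksum FROM files WHERE path = ?'
);
my $auth = $dbh->prepare(
    'SELECT 1 FROM users WHERE username = ? AND password = ?'
);

# Look up the stored metadata for one "file".
sub fetch_file_meta {
    my ($path) = @_;
    $lookup->execute($path);
    return $lookup->fetchrow_hashref;    # undef if nothing matched
}

# Check a username/password pair against the user table.
sub authenticate {
    my ($user, $pass) = @_;
    $auth->execute($user, $pass);
    return defined $auth->fetchrow_arrayref;
}

With indexes on files(path) and users(username), each of these should be a
single index probe, which is the sort of thing I'm hoping postgres handles
better than my hand-rolled hashes and b-trees.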
At the moment, I use postgres for storing phenomenal amounts of data
(terabyte scale), but the transaction load is very small in comparison. (The
server I am migrating gets something like 6-9 million hits/day.)
Has anyone attempted to use postgres in this fashion? Are there steps I
should take here?
Thanks,
alex
--
alex j. avriette
perl hacker.
a_avriette(at)acs(dot)org
$dbh -> do('unhose');