From: Nikola Milutinovic <Nikola(dot)Milutinovic(at)ev(dot)co(dot)yu>
To: "'PGSQL-Novice'" <pgsql-novice(at)postgresql(dot)org>, PgSQL JDBC <pgsql-jdbc(at)postgresql(dot)org>
Subject: Using PgSQL in high volume and throughput problem
Date: 2005-05-12 05:34:42
Message-ID: 4282EAF2.3000305@ev.co.yu
Lists: pgsql-jdbc pgsql-novice
Hi all.
This might be OT, especially since I do not have the actual figures for
volume and throughput and can only give a "bystander" impression.
I might get on board a project that deals with a high-volume,
high-throughput data crunching task. It will involve data pattern
recognition and similar work. The project is meant to run in Java and
use Berkeley DB, and probably Apache Lucene.
Now, here is the juicy part. Due to the high volume and throughput,
the data itself is stored in ordinary files, while Berkeley DB is used
only for indexes into that data!
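To make the pattern concrete, here is a minimal sketch of the "data in flat files, database only for the index" idea, using a plain in-memory `Map` as a stand-in for the index (in the project described, the index would live in Berkeley DB, or a PostgreSQL table reached over JDBC, keyed the same way). The class name `OffsetIndex` and its methods are my own illustration, not anything from the project:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.HashMap;
import java.util.Map;

// Records are appended to an ordinary data file; the index only maps a
// key to the record's byte offset and length, so lookups are one seek.
public class OffsetIndex {
    private final Path dataFile;
    private final Map<String, long[]> index = new HashMap<>(); // key -> {offset, length}

    public OffsetIndex(Path dataFile) {
        this.dataFile = dataFile;
    }

    // Append a record to the data file and remember where it landed.
    public void put(String key, String record) throws IOException {
        byte[] bytes = record.getBytes(StandardCharsets.UTF_8);
        long offset = Files.exists(dataFile) ? Files.size(dataFile) : 0L;
        Files.write(dataFile, bytes,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        index.put(key, new long[]{offset, bytes.length});
    }

    // Fetch a record by seeking straight to its recorded offset.
    public String get(String key) throws IOException {
        long[] loc = index.get(key);
        if (loc == null) return null;
        try (FileChannel ch = FileChannel.open(dataFile)) {
            ByteBuffer buf = ByteBuffer.allocate((int) loc[1]);
            ch.read(buf, loc[0]);
            return new String(buf.array(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("records", ".dat");
        OffsetIndex idx = new OffsetIndex(tmp);
        idx.put("a", "first record");
        idx.put("b", "second record");
        System.out.println(idx.get("b")); // prints "second record"
    }
}
```

The appeal of this layout is that the database never stores or ships the bulk data; it only answers "where is record X", and the file system does the heavy I/O.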
Like I've said, I don't have the figures, but I was told that this was
the only way to make it work; everything else failed to perform. My
question: in your opinion, can PgSQL perform in such a scenario? Using
JDBC, of course.
I do realize that PgSQL offers a lot of good features, but here speed is
of the essence. The previous project stripped the Java code to the bare
bones, as far as data structures go, just to make it faster.
Nix.