From: Curt Sampson <cjs(at)cynic(dot)net>
To: Lincoln Yeoh <lyeoh(at)pop(dot)jaring(dot)my>
Cc: pgsql General List <pgsql-general(at)postgresql(dot)org>
Subject: Re: One particular large database application
Date: 2002-04-23 11:04:23
Message-ID: Pine.NEB.4.43.0204232002080.445-100000@angelic.cynic.net
Lists: pgsql-general
On Tue, 23 Apr 2002, Lincoln Yeoh wrote:
> Not sure how that would be implemented for postgresql. It seems simple to
> support _many_ read only queries at a time using many pcs. But how would
> one speed up a few large parallel queries that way?
In my case I'm dealing with data spread across a known range of
dates. So I partition it into separate tables (with identical schema
definitions) based on the date (e.g., a table for January, a table
for February, and so on).
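
For concreteness, a minimal sketch of that layout; the table and column
names here are made up for illustration, not taken from any real schema:

    -- One table per month, all with the identical schema.
    CREATE TABLE events_2002_01 (
        event_time  timestamp NOT NULL,
        payload     text
    );
    CREATE TABLE events_2002_02 (
        event_time  timestamp NOT NULL,
        payload     text
    );
    -- ...and so on, one table per month in the date range.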
Then when a query comes in, I just parcel it out to the appropriate
machines and merge the results.
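
The parcelling out to machines happens in my application code, but at the
SQL level the merge step is nothing more exotic than a UNION ALL over the
per-month pieces. A toy example, again with made-up names, for a query
spanning mid-January to mid-February:

    -- Each SELECT runs against the machine holding that month's table;
    -- the results are then concatenated (no re-sorting needed unless the
    -- caller wants an overall ORDER BY).
    SELECT event_time, payload
      FROM events_2002_01
     WHERE event_time >= '2002-01-15'
    UNION ALL
    SELECT event_time, payload
      FROM events_2002_02
     WHERE event_time <  '2002-02-15';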
cjs
--
Curt Sampson <cjs(at)cynic(dot)net> +81 90 7737 2974 http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light. --XTC