From: Steve Atkins <steve(at)blighty(dot)com>
To: pgsql general <pgsql-general(at)postgresql(dot)org>
Subject: Parallel sequential scans
Date: 2006-03-24 06:10:31
Message-ID: 1A21C004-7E7B-40B3-8E42-49F8FABCAC88@blighty.com
Lists: pgsql-general
I'm doing some reporting-type work with PG, with the vast
majority of queries hitting upwards of 25% of the table, so
they're being executed as seq scans.
It's a fairly large set of data, so each pass takes quite a
while and is IO-bound, and I'm looking at doing dozens of
passes.
It would be really nice to be able to do all the work with a
single pass over the table, executing all the queries in
parallel in that pass. They're pretty simple queries, mostly,
just some aggregates and a simple where clause.
There are some fairly obvious ways to merge multiple
queries to do that at the SQL level - converting each query
into a function and passing each row from a select * to
each of the functions would be one of the less ugly options.
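For what it's worth, a rough sketch of the CASE-expression flavour of
that SQL-level merge is below; the "events" table and its columns are
made up purely for illustration. Since aggregates skip NULLs, folding
each query's WHERE clause into a CASE expression lets a single seq
scan feed all of the aggregates at once:

    -- Three separate passes:
    --   SELECT count(*)    FROM events WHERE region = 'us';
    --   SELECT sum(amount) FROM events WHERE region = 'eu';
    --   SELECT avg(amount) FROM events WHERE amount > 100;
    -- Merged into one scan: each CASE returns NULL for non-matching
    -- rows, and the aggregates simply ignore those.
    SELECT count(CASE WHEN region = 'us' THEN 1      END) AS us_rows,
           sum(  CASE WHEN region = 'eu' THEN amount END) AS eu_amount,
           avg(  CASE WHEN amount > 100  THEN amount END) AS avg_big_amount
    FROM events;

The function-per-query variant buys more flexibility for complicated
queries, at the cost of a function call per row per query.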
Or I could fire off all the queries simultaneously and hope
they stay in close-enough lockstep through a single pass
through the table to be able to share most of the IO.
Is there a commonly used trick to doing this that I should
know about?
Cheers,
Steve