From: Craig James <craig_james(at)emolecules(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: multiple table scan performance
Date: 2011-03-29 23:31:26
Message-ID: 4D926BCE.8000001@emolecules.com
Lists: pgsql-performance
On 3/29/11 3:16 PM, Samuel Gendler wrote:
> I've got some functionality that must scan a relatively large table. Even worse, the total workload is actually three similar but different queries, each of which requires a table scan. They all produce a result set with the same structure, and all get inserted into a temp table. Is there any performance benefit to revamping the workload so that it issues a single:
>
> insert into (...) select ... UNION select ... UNION select
>
> as opposed to three separate "insert into (...) select ..." statements?
>
> I could figure it out empirically, but the queries are really slow on my dev laptop and I don't have access to the staging system at the moment. Also, it requires revamping a fair bit of code, so I figured it never hurts to ask. I don't have a sense of whether Postgres is able to parallelize multiple subqueries via a single scan.
You don't indicate how complex your queries are. If it's just a single table and the conditions are relatively simple, could you do something like this?
insert into (...) select ... where (...) OR (...) OR (...)
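For example, with a hypothetical table and conditions (the actual schema isn't shown in the thread), the three separate inserts versus the single combined scan might look like:

    -- three separate statements, each scanning big_table once
    insert into results_tmp (id, category, score)
        select id, category, score from big_table where category = 'a';
    insert into results_tmp (id, category, score)
        select id, category, score from big_table where category = 'b';
    insert into results_tmp (id, category, score)
        select id, category, score from big_table where category = 'c';

    -- one statement, one scan, conditions merged with OR
    insert into results_tmp (id, category, score)
        select id, category, score
        from big_table
        where category = 'a' or category = 'b' or category = 'c';

Note that the OR form inserts a row only once even if it matches more than one condition, whereas the separate inserts (or a UNION ALL) would add it once per matching condition; whether that matters depends on the original queries.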
Craig