From: Samuel Gendler <sgendler(at)ideasculptor(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: multiple table scan performance
Date: 2011-03-29 22:16:19
Message-ID: AANLkTim2zSEznvRPJNqrDTfTmgRNCBWZs1KpdjTGEOL2@mail.gmail.com
Lists: pgsql-performance
I've got some functionality that necessarily must scan a relatively large
table. Even worse, the total workload is actually 3 similar, but different,
queries, each of which requires a table scan. They all produce result sets
with the same structure, and all get inserted into a temp table. Is there
any performance benefit to revamping the workload so that it issues a
single:

insert into (...) select ... UNION select ... UNION select ...

as opposed to 3 separate "insert into (...) select ..." statements?
I could figure it out empirically, but the queries are really slow on my dev
laptop and I don't have access to the staging system at the moment. Also,
it requires revamping a fair bit of code, so I figured it never hurts to
ask. I don't have a sense of whether postgres is able to satisfy multiple
subqueries via a single scan.
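For what it's worth, the combined form I have in mind looks roughly like
this (table and column names are invented purely for illustration):

```sql
-- Hypothetical schema: big_table and scan_results are made-up names.
-- The three branches stand in for the 3 similar-but-different queries.
INSERT INTO scan_results (id, category, total)
SELECT id, category, sum(amount)
  FROM big_table
 WHERE status = 'a'
 GROUP BY id, category
UNION ALL
SELECT id, category, sum(amount)
  FROM big_table
 WHERE status = 'b'
 GROUP BY id, category
UNION ALL
SELECT id, category, sum(amount)
  FROM big_table
 WHERE status = 'c'
 GROUP BY id, category;
```

One detail I'd note either way: plain UNION deduplicates the combined
result, which adds an extra sort or hash step, so UNION ALL is the form to
compare against the 3 separate inserts if duplicates across the three
result sets are impossible or acceptable.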
Next message: Claudio Freire, 2011-03-29 22:28:07, "Re: multiple table scan performance"
Previous message: Jesper Krogh, 2011-03-29 19:47:21, "Re: Intel SSDs that may not suck"