From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Eliot Gable <egable+pgsql-performance(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Replacing Cursors with Temporary Tables
Date: 2010-04-22 01:21:28
Message-ID: g2l603c8f071004211821kdc725261pb51e4487892e8a4b@mail.gmail.com
Lists: pgsql-performance

I think it's really tough to say how this is going to perform. I'd
recommend constructing a couple of simplified test cases and
benchmarking the heck out of it. One of the problems with temporary
tables is that every time you create a temporary table, it creates a
(temporary) record in pg_class; that can get to be a problem if you do
it a lot. Another is that for non-trivial queries you may need to do
a manual ANALYZE on the table to get good stats for the rest of the
query to perform well. But on the flip side, as you say, nesting and
unnesting of arrays and function calls are not free either. I am
going to hazard a SWAG that the array implementation is faster UNLESS
the lack of good stats on the contents of the arrays is hosing the
performance somewhere down the road. But that is really just a total
shot in the dark.
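
If you do go the temp table route inside a function, the ANALYZE would
sit right after the table is populated. Something along these lines --
untested sketch, and the table and column names here are completely
made up:

CREATE OR REPLACE FUNCTION fetch_order_summary(p_customer integer)
RETURNS TABLE (order_id integer, total numeric) AS $$
BEGIN
    -- Each call creates a (temporary) pg_class entry for tmp_orders;
    -- ON COMMIT DROP cleans it up at end of transaction.
    CREATE TEMPORARY TABLE tmp_orders ON COMMIT DROP AS
        SELECT o.order_id, sum(oi.price) AS total
          FROM orders o
          JOIN order_items oi USING (order_id)
         WHERE o.customer_id = p_customer
         GROUP BY o.order_id;

    -- Without this the planner has no stats on tmp_orders and may
    -- pick a bad plan for anything non-trivial that reads it.
    ANALYZE tmp_orders;

    RETURN QUERY SELECT t.order_id, t.total FROM tmp_orders t;
END;
$$ LANGUAGE plpgsql;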
Another possible implementation might be to have a couple of permanent
tables where you store the results. Give each such table a "batch id"
column, and return the batch id from your stored procedure. This
would probably avoid a lot of the overhead associated with temp tables
while retaining many of the benefits.
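
Roughly like this -- again untested, with invented names, just to show
the shape of it:

CREATE SEQUENCE result_batch_seq;

CREATE TABLE batch_results (
    batch_id bigint  NOT NULL,
    order_id integer NOT NULL,
    total    numeric NOT NULL
);

CREATE OR REPLACE FUNCTION run_order_summary(p_customer integer)
RETURNS bigint AS $$
DECLARE
    v_batch bigint := nextval('result_batch_seq');
BEGIN
    -- Write this run's rows into the permanent table, tagged with
    -- the batch id; no catalog churn, and the table keeps its stats.
    INSERT INTO batch_results (batch_id, order_id, total)
    SELECT v_batch, o.order_id, sum(oi.price)
      FROM orders o
      JOIN order_items oi USING (order_id)
     WHERE o.customer_id = p_customer
     GROUP BY o.order_id;

    RETURN v_batch;
END;
$$ LANGUAGE plpgsql;

The caller then reads its results with
SELECT * FROM batch_results WHERE batch_id = <returned id>
and old batches can be deleted whenever convenient.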
...Robert