From: | "Peter Darley" <pdarley(at)kinesis-cem(dot)com> |
---|---|
To: | "Pgsql-General" <pgsql-general(at)postgresql(dot)org> |
Subject: | Temp Tables |
Date: | 2002-03-07 17:00:21 |
Message-ID: | NNEAICKPNOGDBHNCEDCPAEKDCFAA.pdarley@kinesis-cem.com |
Lists: pgsql-general
Friends,
I'm hoping someone can give me some advice on a problem I'm having.
I have a system that produces 3060 reports each night. Each report uses a
temp table to limit the data set to the appropriate records, then joins that
temp table to other tables to produce its output. I'm a bit concerned that
this gets slower as the process goes on and the database gets more inserts
into those tables, etc. It looks like there are several ways I could handle
this (each option is sketched in rough SQL after the list below), and I'm
hoping someone can tell me which one is faster, or give me some advice for
testing which one is faster without actually running all the options, which
could easily take 50 hours, and I can't spare the db server for that long.
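For reference, each report currently does something like this (the table and
column names here are made up just to show the pattern):

    -- build a temp table holding only the records this report needs
    CREATE TEMP TABLE report_set AS
      SELECT record_id
        FROM records
       WHERE client_id = 42;   -- whatever condition limits the set for this report

    -- join the temp table back to the real tables to produce the output
    SELECT d.*
      FROM report_set s
      JOIN details d ON d.record_id = s.record_id;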
1) I could reuse the same temp table and delete from it at the beginning of
every report.
2) I could drop the temp table and rebuild it at the beginning of every
report.
3) I could skip dropping the temp table and create a new one for every report,
leaving cleanup to PostgreSQL when I disconnect from the db.
4) I could skip dropping the temp table and, for every report, disconnect from
the db (forcing cleanup of it), then reconnect and recreate the table.
5) I could create a real (non-temp) table and reuse it for every report.
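In rough SQL the options would look something like this (again, the names are
just placeholders):

    -- 1) keep one temp table for the whole session and clear it per report
    DELETE FROM report_set;

    -- 2) drop and recreate the temp table at the start of each report
    DROP TABLE report_set;
    CREATE TEMP TABLE report_set (record_id integer);

    -- 3) create a fresh temp table per report (e.g. report_set_47) and never
    --    drop it; PostgreSQL cleans them all up when the session ends
    CREATE TEMP TABLE report_set_47 (record_id integer);

    -- 4) same as 3), except the client disconnects and reconnects between
    --    reports, so the cleanup happens after every report

    -- 5) use an ordinary (non-temp) table and reuse it for every report
    CREATE TABLE report_set (record_id integer);
    DELETE FROM report_set;   -- between reports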
If anyone has any insight into the overhead of using temp tables that they
can pass on to me, I would be very grateful.
Thanks,
Peter Darley