From: Markus Schaber <schabi(at)logix-tt(dot)com>
To: david drummard <vijayspam(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: help required in design of database
Date: 2006-02-13 11:49:15
Message-ID: 43F0723B.3040505@logix-tt.com
Lists: pgsql-performance
Hi, David,
david drummard wrote:
> 1) create a new table every time a new feed file comes in. Create table
> with indexes. Use the copy command to dump the data into the table.
It's faster to follow this order, as sketched below:
- Create the table
- COPY the data into the table
- Create the indices
- ANALYZE the table.
and probably CLUSTER the table on the most-used index, between index
creation and ANALYZE. Depending on your data, you might also want to
increase the statistics target on some columns before the ANALYZE.
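A minimal sketch of that sequence, assuming a hypothetical feed table
with an id and a payload column, a feed file at /tmp/feed.dat, and a
made-up statistics target (all names here are illustrative, not from
the original post):

  CREATE TABLE feed_new (
      id      integer,
      payload text
  );

  -- bulk load while no indices are in the way
  COPY feed_new FROM '/tmp/feed.dat';

  -- build the index only after the data is loaded
  CREATE INDEX feed_new_id_idx ON feed_new (id);

  -- physically order the rows by the most-used index
  CLUSTER feed_new_id_idx ON feed_new;

  -- optionally raise the statistics target on a skewed column
  ALTER TABLE feed_new ALTER COLUMN id SET STATISTICS 200;

  ANALYZE feed_new;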
> 2) rename the current table to some old table name and rename the new
> table to current table name so that applications can access them directly.
You can also use a view, and then use CREATE OR REPLACE VIEW to switch
between the tables.
But two table renames inside a single transaction should work just as
well and shorten the outage time: with both renames encapsulated in
one transaction, no external application sees the intermediate state
from within its own transaction.
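Both variants as a sketch, continuing the hypothetical names from
above and assuming the applications query an object named "feed"
(either a view or a table, depending on which option you pick):

  -- option 1: repoint a view at the freshly loaded table
  CREATE OR REPLACE VIEW feed AS SELECT * FROM feed_new;

  -- option 2: swap the tables via two renames in one transaction
  BEGIN;
  ALTER TABLE feed RENAME TO feed_old;
  ALTER TABLE feed_new RENAME TO feed;
  COMMIT;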
HTH,
Markus
--
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf. | Software Development GIS
Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org