From: Yonatan Ben-Nes <da(at)canaan(dot)co(dot)il>
To: Robert Treat <xzilla(at)users(dot)sourceforge(dot)net>, Dawid Kuroczko <qnex42(at)gmail(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>, jnasby(at)pervasive(dot)com, bruno(at)wolff(dot)to, smarlowe(at)g2switchworks(dot)com
Subject: Re: How many insert + update should one transaction handle?
Date: 2005-09-26 18:41:03
Message-ID: 433840BF.9070906@canaan.co.il
Lists: pgsql-general
Robert Treat wrote:
> On Fri, 2005-09-23 at 14:53, Dawid Kuroczko wrote:
>
>>On 9/23/05, Yonatan Ben-Nes <da(at)canaan(dot)co(dot)il> wrote:
>>
>>Hi all,
>>
>>Every few days I need to DELETE all of the content of few tables and
>>INSERT new data in them.
>>The amount of new data is about 5 million rows, and each row gets about 3
>>queries (INSERT + UPDATE).
>>
>
> <snip>
>
>>Or you could even try (haven't tested it):
>>BEGIN;
>>SELECT * INTO new_table FROM temp_table;  -- SELECT INTO creates new_table
>>DROP TABLE table;
>>ALTER TABLE new_table RENAME TO table;
>>COMMIT; -- leaving you with a fresh table of 5 million new tuples
>>...with the risk of losing all the changes made to the old table after BEGIN;
>>
>
>
> yeah, i was thinking
>
> create table newtable ( ... );
> -- load data into newtable
>
> begin;
> drop table oldtable;
> alter table newtable rename to oldtable;
> commit;
>
> this separates the data loading piece from the piece where you promote
> the data to live data, and then the time you have to hold the
> transaction open covers only the drop and rename, which will be quite
> fast.
>
> the only potential issue would be making sure you don't have FK/View
> type dependencies, but it doesn't sound like that applies here.
>
>
> Robert Treat
Sorry everyone for not responding... I just didn't know that the
discussion continued :)
Anyway I saw the idea:
BEGIN;
SELECT * INTO new_table FROM temp_table;
DROP TABLE table;
ALTER TABLE new_table RENAME TO table;
COMMIT;
If I understood correctly, "table" is the final table, "temp_table" is
the table that receives all the processing (ending up with 10 million
dead tuples and 5 million active ones), and "new_table" is the
recipient of all the active tuples from "temp_table".
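One thing I take from Robert's FK/View comment is that I should check
for dependencies on the live table before the swap. As an untested
sketch with made-up names: if some other table declares

CREATE TABLE other_table (
    ref_id INT REFERENCES live_table (id)  -- hypothetical FK into the live table
);

then DROP TABLE live_table fails with a dependency error (or takes the
constraint with it under CASCADE), and the constraint does not reappear
on the renamed new_table.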
It looks quite promising to me, but I did a little check and saw that
between the DROP TABLE command and the COMMIT I hold a lock on the table
(obvious, but problematic for a 24/7 site), so I'm wondering how much
time such a transaction will take from the DROP command onward.
If users won't be able to access the table for only some extremely small
amount of time (less than a second, obviously), then even though I don't
like it much, it is better than running a VACUUM FULL, which would slow
my whole server for a considerable amount of time...
So does anyone know how much time such a process takes? (I tried to
EXPLAIN ANALYZE it with no success :)).
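(I guess EXPLAIN ANALYZE only accepts queries, not DDL like DROP TABLE,
which would explain the failure. Maybe I could time the whole swap with
psql's \timing instead, assuming my psql is new enough to have it; an
untested sketch with placeholder names:

\timing
BEGIN;
DROP TABLE live_table;                       -- takes the exclusive lock
ALTER TABLE new_table RENAME TO live_table;  -- swap in the fresh data
COMMIT;

psql would then print the elapsed time of each statement.)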
Thanks a lot everyone!
Ben-Nes Yonatan