From: "Vilson farias" <vilson(dot)farias(at)digitro(dot)com(dot)br>
To: <pgsql-admin(at)postgresql(dot)org>
Subject: Deleting large amount of data.
Date: 2002-08-26 19:26:14
Message-ID: 00ee01c24d36$7246a3c0$98a0a8c0@dgtac
Lists: pgsql-admin
Greetings,
I'm having some performance problems related to a large number of tuples in a table, so I've decided to write some shell scripts, to be run from crontab, that remove old data every night.
Now, my questions are :
If I just execute a DELETE FROM <table> WHERE call_start < '2002-08-21' AND call_start > '2002-08-20', and that interval contains 100,000 tuples, does DELETE start and commit a separate transaction for every single row deleted?
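For reference, in PostgreSQL a single SQL statement always runs inside one implicit transaction: either all matching rows are deleted and committed together, or none are. A sketch of the statement above (the table name "calls" is hypothetical, since the original omits it):

```sql
-- One statement = one implicit transaction.
-- All 100,000 rows are deleted atomically; there is no per-row commit.
DELETE FROM calls
WHERE call_start > '2002-08-20'
  AND call_start < '2002-08-21';
```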
And if I put a BEGIN at the beginning and a COMMIT at the end, will the deleted tuples overload the transaction log?
Finally, what's the best thing to do?
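One hedged sketch of a nightly cleanup, assuming a hypothetical table "calls" and a retention window of five days: wrapping the single DELETE in an explicit BEGIN/COMMIT changes nothing, since it is already one transaction, and the important follow-up is a VACUUM, because DELETE only marks tuples dead and does not reclaim their space.

```sql
-- Nightly cleanup, e.g. invoked from crontab via psql.
-- Table name and retention period are assumptions, not from the original post.
BEGIN;
DELETE FROM calls WHERE call_start < CURRENT_DATE - 5;
COMMIT;

-- Reclaim the dead tuples left behind by DELETE and refresh planner stats.
VACUUM ANALYZE calls;
```

If one big DELETE proves too heavy on the transaction log, the same script can delete one day at a time in separate transactions, keeping each transaction small.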
Best regards.
--------------------------------------------------------------------------------
José Vilson de Mello de Farias
Software Engineer
Dígitro Tecnologia Ltda - www.digitro.com.br - Brazil
APC - Customer Oriented Applications
E-mail: vilson(dot)farias(at)digitro(dot)com(dot)br
Tel.: +55 48 281 7158
ICQ 11866179
--------------------------------------------------------------------------------
 | From | Date | Subject
---|---|---|---
Next Message | Stephan Szabo | 2002-08-26 19:49:27 | Re: Deleting large amount of data.
Previous Message | Laurette Cisneros | 2002-08-26 18:19:27 | Monitoring postgresql