From: | "Nikolaus Dilger" <nikolaus(at)dilger(dot)cc> |
---|---|
To: | raymond(dot)chui(at)noaa(dot)gov |
Cc: | pgsql-admin(at)postgresql(dot)org, pgadmin-support(at)postgresql(dot)org, pgsql-general(at)postgresql(dot)org |
Subject: | Re: [ADMIN] Fast Deletion For Large Tables |
Date: | 2002-10-06 03:17:38 |
Message-ID: | 20021005201740.20309.h016.c001.wm@mail.dilger.cc.criticalpath.net |
Lists: pgadmin-support pgsql-admin pgsql-general
Raymond,
Partitioned tables would solve your issue, since you
could just truncate a partition to delete the unneeded
data. Unfortunately they are not available in
PostgreSQL, but maybe in a future release. Nor can you
use a UNION ALL in a view to fake partitions.
So partitioning your data would require several changes
to your application. But depending on your situation it
may be worthwhile. For example, if you always want to
keep one week's worth of data and purge a day at a
time, you could have seven tables instead of one:
table1_mon, table1_tue, table1_wed, and so on. You
could then select from all of them with a UNION ALL, as
sketched below.
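A rough sketch of that layout ("column_time" is taken
from your SQL below; the float column name "value" is
just a placeholder):

CREATE TABLE table1_mon (column_time timestamp, value float);
-- ... and likewise table1_tue through table1_sun ...

-- The application inserts into the table for the
-- current weekday and reads across all seven:
SELECT column_time, value FROM table1_mon
UNION ALL SELECT column_time, value FROM table1_tue
UNION ALL SELECT column_time, value FROM table1_wed
UNION ALL SELECT column_time, value FROM table1_thu
UNION ALL SELECT column_time, value FROM table1_fri
UNION ALL SELECT column_time, value FROM table1_sat
UNION ALL SELECT column_time, value FROM table1_sun;

-- Purging a day is then a near-instant TRUNCATE
-- instead of a slow DELETE:
TRUNCATE TABLE table1_mon;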
There is no rowid in PostgreSQL. Adding an extra column
populated from a sequence would help, as long as you
index it. But just indexing your date column may be
easier. Either way you pay with slower inserts, plus
the time and effort to build the index.
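A minimal sketch of the index route (the index name is
my own; the table and column names come from your SQL
below):

CREATE INDEX table_1_column_time_idx ON table_1 (column_time);

-- With the index, the purge no longer has to scan
-- every row in the table:
DELETE FROM table_1
WHERE column_time < ('now'::timestamp - '6 days'::interval);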
Regards,
Nikolaus Dilger
On Wed, 02 October 2002, Raymond Chui wrote:
>
>
> I have some tables with huge data.
> The tables have a timestamp column and a float column.
> I am trying to keep up to 6 days of data values.
> What I do is execute the SQL below from crontab (UNIX
> to schedule commands).
>
> BEGIN;
> DELETE FROM table_1 WHERE column_time <
>     ('now'::timestamp - '6 days'::interval);
> .....
> DELETE FROM table_n WHERE column_time <
>     ('now'::timestamp - '6 days'::interval);
> COMMIT;
>
>
> Everything is running fine, except it takes a long
> time to finish, because some tables store from 50,000
> to 100,000 rows, and some deletions need to remove up
> to 45,000 rows.
>
> So I am thinking of just deleting the rows by their
> row number or row ID, like
>
> DELETE FROM a_table WHERE row_id < 45000;
>
> I know there is row_id in Oracle.
> Is there row_id for a table in Postgres?
> Thank Q!