From: Joe Conway <mail(at)joeconway(dot)com>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: David Garamond <lists(at)zara(dot)6(dot)isreserved(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: how many record versions
Date: 2004-05-24 18:15:07
Message-ID: 40B23BAB.8090007@joeconway.com
Lists: pgsql-general
Greg Stark wrote:
> Well this was actually under Oracle, but I can extrapolate to Postgres given
> my experience.
>
> The ideal tool for the job is a feature that Postgres has discussed but hasn't
> implemented yet, "partitioned tables". Under Oracle with partitioned tables we
> were able to drop entire partitions virtually instantaneously. It also made
> copying the data out to near-line backups much more efficient than index
> scanning.
I think you can get a similar effect by using inherited tables. Create
one "master" table, and then inherit individual "partition" tables from
that. Then you can easily create or drop a "partition", while still
being able to query the "master" and see all the rows.
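For example, a minimal sketch of the inheritance approach (the table and
column names here are just for illustration, not from your schema):

    -- "master" table that queries will target; names are illustrative
    CREATE TABLE log_master (
        logtime  timestamptz NOT NULL,
        message  text
    );

    -- one "partition" per month, inheriting the master's columns
    CREATE TABLE log_2004_05 () INHERITS (log_master);

    -- load rows directly into the partition
    INSERT INTO log_2004_05 (logtime, message)
        VALUES (now(), 'example row');

    -- querying the master sees rows in the master and all children;
    -- SELECT ... FROM ONLY log_master skips the children
    SELECT count(*) FROM log_master;

    -- dropping an old partition discards that slice of data at once,
    -- without touching the other tables
    DROP TABLE log_2004_05;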
HTH,
Joe