From: "Bodanapu, Sravan" <Sravan(dot)Bodanapu(at)NextelPartners(dot)com>
To: 'Curt Sampson' <cjs(at)cynic(dot)net>
Cc: "PGSQL General (E-mail)" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Table Partitioning in Postgres:
Date: 2003-02-17 20:42:04
Message-ID: D9C90B51B105D511A3FB00508BFD70E2046DB416@mnmtkex1.nextelpartners.com
Lists: pgsql-general
Thanks Curt!!! The data was actually taken out of an Oracle database and then
dumped into a Postgres database using bulk copy. Most of the tables were very
large (around 20-30 million rows and 200-300 columns each). In Oracle, these
tables were partitioned into chunks to get maximum performance.
1. When a table is created in Postgres, its datafile is always created under
the /pgdata/base/16975 or 16976 directory. What do 16975 and 16976 mean? Is
there a way to have the datafiles (for tables/data/indexes) generated in
different directories instead of one? If yes, how?
2. Is there a way to limit a datafile's size (say, 3GB)? Ingres has this
concept of spanning a table's data across different files.
3. Could you suggest some tips for setting up a big database to achieve
maximum performance?
Thanks and Regards,
- Sravan.
-----Original Message-----
From: Curt Sampson [mailto:cjs(at)cynic(dot)net]
Sent: Thursday, February 13, 2003 7:25 AM
To: Bodanapu, Sravan
Cc: PGSQL General (E-mail)
Subject: Re: [GENERAL] Table Partitioning in Postgres:
On Tue, 11 Feb 2003, Bodanapu, Sravan wrote:
> We are trying to migrate a database from Oracle to Postgres which is about
> 150Gig.
> How do you set up and maintain big tables having around 20-30 million rows?
> Is there a way to set up table partitioning? How can I improve the Postgres
> database performance for such a big database?
I've set up tables with 500 million or more rows just as I would with
any other table. There is no table partitioning per se in postgres, but
you can always modify your application to use separate tables (which I
have also done for some large ones).
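For example, the "separate tables" approach can be sketched roughly like this
(the table and column names here are purely illustrative, not from any real
schema): one physical table per time range, plus a UNION ALL view so queries
still see a single logical table.

```sql
-- Hypothetical example: one physical table per month. The application
-- inserts each row into the table matching its date range.
CREATE TABLE calls_2003_01 (call_id integer, placed_at timestamp, duration integer);
CREATE TABLE calls_2003_02 (call_id integer, placed_at timestamp, duration integer);

-- A UNION ALL view presents the pieces as one logical table for reads.
CREATE VIEW calls AS
    SELECT * FROM calls_2003_01
    UNION ALL
    SELECT * FROM calls_2003_02;
```

With this layout the application routes inserts to the right physical table
itself, and retiring an old partition is just a DROP TABLE plus recreating the
view, rather than a huge DELETE.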
As for performance, that is so application-dependent that you probably want
to hire a consultant to help you out if you don't have time to spend studying
it yourself.
At the very least, for anything big like this, you'd want to spend
a week or two playing around with your database and application on
postgres before you even think about whether you want to convert or not.
cjs
--
Curt Sampson <cjs(at)cynic(dot)net> +81 90 7737 2974 http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light. --XTC