Re: Splitting one big table into smaller ones

From: Ed Loehr <eloehr(at)austin(dot)rr(dot)com>
To: Guillaume Perréal <perreal(at)lyon(dot)cemagref(dot)fr>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Splitting one big table into smaller ones
Date: 2000-04-07 16:52:09
Message-ID: 38EE1239.BFAEA600@austin.rr.com
Lists: pgsql-general

Guillaume Perréal wrote:
>
> I've got a big table of measures like that:
>
> CREATE TABLE measures (
> stationCode varchar(8),
> when datetime,
> value float,
> quality char,
> PRIMARY KEY (stationCode, when)
> );
>
> Each station (identified by stationCode) could have up to 10**6 measures, so I
> want to split it into smaller tables to increase performance:
>
> CREATE TABLE measures<1st stationCode> (
> when datetime,
> value float,
> quality char,
> PRIMARY KEY (when)
> );
> CREATE TABLE measures<2nd stationCode> (
> ... and so on.
>
> Is there a way to handle that using SQL and PL/pgSQL?

Pretty straightforward to do that with perl/DBI, or even bash & friends, but I recall that PL/pgSQL has major problems with CREATE TABLE and other DDL statements.
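
For the splitting itself, plain SQL will do most of the work with SELECT ... INTO, one statement per station. A minimal sketch, assuming a station code 'ABC' (note that "when" is a reserved word, so it has to be double-quoted):

SELECT "when", value, quality
INTO TABLE measuresABC
FROM measures
WHERE stationCode = 'ABC';

-- SELECT INTO copies no constraints, so rebuild the key by hand:
CREATE UNIQUE INDEX measuresABC_pkey ON measuresABC ("when");

You would then drive that pair of statements over your list of station codes from perl/DBI or a shell loop, since PL/pgSQL can't substitute table names dynamically.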

BTW, I suspect your easiest significant performance improvement would come from adding an integer primary key for stationCode rather than a varchar key, e.g.,

CREATE TABLE stations (
  id serial,        -- or just integer
  code varchar(8),  -- the old stationCode
  ...
);

CREATE TABLE measures (
  stationId integer,
  "when" datetime,  -- "when" is reserved, so it must be quoted
  value float,
  quality char,
  PRIMARY KEY (stationId, "when")
);
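
If you already have data under the old layout, populating the new tables takes just two statements. A sketch, assuming the old table has first been renamed to measures_old:

INSERT INTO stations (code)
SELECT DISTINCT stationCode FROM measures_old;

INSERT INTO measures (stationId, "when", value, quality)
SELECT s.id, m."when", m.value, m.quality
FROM measures_old m, stations s
WHERE m.stationCode = s.code;

The serial column fills in stations.id automatically, and the join maps each old stationCode onto its new integer id.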

If you try this, I'd be curious to hear the results. Your split-table approach is probably faster, though by how much I don't know; it just looks like a bit of a headache to manage, that's all.

Regards,
Ed Loehr
