From: Steve Atkins <steve(at)blighty(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: inheritance vs performance
Date: 2004-02-13 17:12:27
Message-ID: 20040213171227.GA20149@gp.word-to-the-wise.com
Lists: pgsql-general
On Fri, Feb 13, 2004 at 01:51:24PM +0000, Richard Huxton wrote:
> On Friday 13 February 2004 10:59, Karsten Hilbert wrote:
> > > Well, thousands of tables is probably "too much", but a hundred tables or
> > > two in a database shouldn't cause problems. Don't see why you'd want them
> > > though.
> >
> > If that's your general advice (a hundred or more tables in a
> > database not making sense) I should like to learn why. Is that
> > a sure sign of overdesign ? Excess normalization ? Bad
> > separation of duty ? I am asking since our schema is at
> > about 200 relations and growing.
>
> The original mail mentioned many "C tables" all with the same columns.
> Obviously you need as many different tables as required to model your data,
> but many tables all with identical schema?
Poor man's tablespaces. It's a trick I've had to resort to a few times
when, on a write-heavy steady-state system, there just isn't enough I/O
bandwidth to delete and vacuum old data, or not enough to maintain an
index. Segregate the incoming data by, say, day, and put one day's worth
of data into each 'C' table. At the end of each day, index that day's table.
If you're maintaining six months of data, drop the oldest (180th) table.
If most of the queries on the data are constrained by date it's
reasonably efficient to search too. And if you have rare queries which
aren't constrained by date you can just apply them to the parent table
- not terribly efficient, but quite workable.
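A minimal sketch of the scheme, assuming a parent table named c_parent
with hypothetical column names (none of these identifiers are from the
original setup, they're purely illustrative):

```sql
-- Parent table: holds no rows itself; only touched by the rare
-- date-unconstrained queries.
CREATE TABLE c_parent (
    logged_at  timestamp NOT NULL,
    payload    text
);

-- One child table per day inherits the parent's columns; all of
-- 2004-02-13's inserts go here, unindexed for cheap writes.
CREATE TABLE c_20040213 () INHERITS (c_parent);

-- At the end of the day the table is static, so index it once.
CREATE INDEX c_20040213_logged_at_idx ON c_20040213 (logged_at);

-- Date-constrained queries hit the one relevant child directly...
SELECT count(*) FROM c_20040213
 WHERE logged_at >= '2004-02-13' AND logged_at < '2004-02-14';

-- ...while rare unconstrained queries scan all children via the parent.
SELECT count(*) FROM c_parent WHERE payload LIKE '%error%';

-- Expiring old data is a cheap DROP instead of DELETE plus VACUUM.
DROP TABLE c_20030817;  -- the oldest table, ~180 days back
```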
Hideous hack, but it works.
Cheers,
Steve