From: John R Pierce <pierce(at)hogranch(dot)com>
To: Roger Tannous <roger(dot)tannous(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Optimal database table optimization method
Date: 2010-03-04 20:32:53
Message-ID: 4B9018F5.8090400@hogranch.com
Lists: pgsql-general
Roger Tannous wrote:
>
> Hello,
>
> I have a database table that is growing too big (a few hundred million
> rows) and needs to be optimized, but before I get into partitioning
> it, I thought I'd ask for suggestions.
>
> Here is the usage:
>
> 0. The table contains about 10 columns of about 20 bytes each.
>
> 1. INSERT statements are performed at a rate of hundreds per second.
>
> 2. SELECT statements are performed based on column 'a' (where
>    a = 'xxxx') a few times per hour.
>
> 3. DELETE statements are performed based on a DATE column (delete
>    rows whose date is older than 1 year), usually once per day.
>
> The key requirement is to speed up INSERT and SELECT statements, and
> to be able to keep 1 year of history data without locking the whole
> table down while deleting.
>
> I would guess that I need two indexes, one on column 'a' and the
> other on the date field. Or is it possible to optimize for both?
>
> Will there be a necessary trade-off between SELECT speed and DELETE
> speed?
>
> Is partitioning the only solution? What are good strategies for
> partitioning such a table?
>
> I'm using a PostgreSQL 8.4 database.
>
We partition similar tables by date, typically by month. That way you
can simply drop an old month's table after updating the triggers to
route new data into the new month's table.
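As a rough sketch of what that looks like on 8.4 (which has no declarative partitioning, so you use table inheritance plus an insert trigger) -- the table and column names here are made up for illustration, not taken from your schema:

```sql
-- Parent table: queries go against this; it holds no rows itself.
CREATE TABLE events (
    a  varchar(20) NOT NULL,
    ts date        NOT NULL
);

-- One child table per month, with a CHECK constraint so constraint
-- exclusion lets the planner skip irrelevant months.
CREATE TABLE events_2010_03 (
    CHECK (ts >= DATE '2010-03-01' AND ts < DATE '2010-04-01')
) INHERITS (events);

-- Index column 'a' on each child to serve the hourly SELECTs.
CREATE INDEX events_2010_03_a_idx ON events_2010_03 (a);

-- Trigger function that routes new rows into the current month's
-- child table; extend the IF chain (or regenerate the function) each
-- time a new month's partition is created.
CREATE OR REPLACE FUNCTION events_insert_trigger() RETURNS trigger AS $$
BEGIN
    IF NEW.ts >= DATE '2010-03-01' AND NEW.ts < DATE '2010-04-01' THEN
        INSERT INTO events_2010_03 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for date %', NEW.ts;
    END IF;
    RETURN NULL;  -- row is already stored in the child, not the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_insert_before
    BEFORE INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE events_insert_trigger();

-- Retiring a year-old month is then a near-instant metadata change,
-- with no long-running DELETE and no table bloat to vacuum:
DROP TABLE events_2009_03;
```

The DROP TABLE is the point: instead of a daily DELETE that scans and locks, you detach a whole month at once.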