From: Roger Tannous <roger(dot)tannous(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Optimal database table optimization method
Date: 2010-03-04 19:32:46
Message-ID: f45202fa1003041132o52badd9x560dc843cb999f9@mail.gmail.com
Lists: pgsql-general
Hello,
I have a database table that is growing too big (a few hundred million rows) and needs to be optimized, but before I get into partitioning it, I thought I'd ask for suggestions.
Here is the usage (a rough sketch of the table follows the list):

0. The table contains about 10 columns of roughly 20 bytes each.
1. INSERTs are performed at a rate of hundreds per second.
2. SELECTs are performed based on column 'a' (where a = 'xxxx') a few times per hour.
3. DELETEs are performed based on a DATE column (delete rows where the date is older than 1 year), usually once per day.
4. The key requirement is to speed up INSERT and SELECT statements, and to be able to keep one year of history without locking the whole table while deleting.
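To make the shape concrete, the table looks roughly like this (the names below are placeholders, not the real schema):

    CREATE TABLE events (
        a        varchar(20),   -- the column the SELECTs filter on
        created  date,          -- the DATE column the daily DELETE filters on
        -- ... about 8 more columns of roughly 20 bytes each ...
        payload1 varchar(20),
        payload2 varchar(20)
    );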
I would guess that I need two indexes: one on column 'a' and another on the date column. Or is it possible to optimize for both?
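I assume that would be something like this (using the placeholder names from the sketch above):

    CREATE INDEX events_a_idx    ON events (a);
    CREATE INDEX events_date_idx ON events (created);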
Will there necessarily be a trade-off between SELECT speed and DELETE speed?
Is partitioning the only solution? What are good strategies for partitioning such a table?
I'm using a PostgreSQL 8.4 database.
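If I do partition, on 8.4 I assume it would have to be inheritance-based, roughly like this (again with the placeholder names from above):

    -- One child table per month, with a CHECK constraint so that
    -- constraint_exclusion can skip partitions that cannot match:
    CREATE TABLE events_2010_03 (
        CHECK (created >= DATE '2010-03-01' AND created < DATE '2010-04-01')
    ) INHERITS (events);
    -- (INSERTs would also need a trigger or rule to route rows into the right child.)

    -- Old data would then be removed by dropping a whole child table
    -- instead of running a big DELETE:
    DROP TABLE events_2009_02;

Is that roughly the right approach?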
Best Regards,
Roger Tannous.