From: David Busby <busby(at)edoceo(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Large Table Performance
Date: 2005-10-22 00:10:13
Message-ID: 43598365.1060004@edoceo.com
Lists: pgsql-general
List,
I've got a problem: I need a table that will grow by an average of 230,000 records per day. There are only 20 columns in the table, mostly char and integer. It has foreign keys in two places to another table, for import/export transaction IDs, and I have a serial primary key and an index on a date column for when I need to search (every search is done inside a date range). I thought it would be OK, but after a few weeks of operation there are more than five million records in there, and some queries take more than five minutes to complete. I'm sad about that. How can I make this faster? I could munge the dates into integers if that's faster; I'm OK with that. What can I tweak in the configuration file to speed things up? What about some drastic schema change that more experience would have shown me?

I cannot show the full schema, but it's like this:
-- My import/export data information table
ie_data (id serial primary key, date date, [12 other columns here])

-- The transaction table itself
big_transaction_table (id serial primary key, import_id int w/FK, export_id int w/FK, date date, [20 other necessary transaction detail columns])
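Spelled out as rough DDL, it would be something like this (column names beyond the ones I described above are placeholders):

    CREATE TABLE ie_data (
        id   serial PRIMARY KEY,
        date date
        -- [12 other columns here]
    );

    CREATE TABLE big_transaction_table (
        id                 serial PRIMARY KEY,
        import_id          integer REFERENCES ie_data (id),
        export_id          integer REFERENCES ie_data (id),
        date               date,
        transaction_status integer
        -- [other transaction detail columns]
    );

    -- the date index I mentioned
    CREATE INDEX big_transaction_table_date_idx
        ON big_transaction_table (date);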
So when I run
select x,y,z from big_transaction_table where date>='10/2/2005' and date<='10/4/2005' and transaction_status in (1,2,3)
order by date;
it takes five+ minutes.
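If plan output would help, I can post it; I'd capture it with something like:

    EXPLAIN ANALYZE
    SELECT x, y, z
      FROM big_transaction_table
     WHERE date >= '10/2/2005'
       AND date <= '10/4/2005'
       AND transaction_status IN (1, 2, 3)
     ORDER BY date;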
TIA for any suggestions.
/djb