From: | "Gurupartap Davis" <partap(at)yahoo(dot)com> |
---|---
To: | <pgsql-general(at)postgresql(dot)org> |
Subject: | pgsql and large tables |
Date: | 2001-11-13 20:49:50 |
Message-ID: | 096901c16c84$bd1dbc50$0f00a8c0@marlows |
Lists: | pgsql-general |
I've got a large database, currently about 40 million rows in the biggest table. I'm trying to set it up with PostgreSQL 7.1.3, but I've got some questions...
I initially created the table with a primary key (5 fields: char(4), varchar(32), date, int, int) and a couple of additional indexes: one that just reorders the primary key columns, and one on the date. Then I imported 40 million rows and tried some queries, but discovered that none of my queries were using the indexes, which made them take forever.
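[Editor's note: a sketch of the schema described above. The column and index names are hypothetical; only the column types and the index structure come from the message.]

```sql
-- Hypothetical table matching the described 5-column primary key
CREATE TABLE ticks (
    symbol    char(4),
    exchange  varchar(32),
    trade_day date,
    seq       int,
    batch     int,
    PRIMARY KEY (symbol, exchange, trade_day, seq, batch)
);

-- Secondary index that reorders the primary-key columns
CREATE INDEX ticks_reorder_idx
    ON ticks (trade_day, symbol, exchange, seq, batch);

-- Index for sorting/filtering on the date alone
CREATE INDEX ticks_date_idx ON ticks (trade_day);
```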
So I read somewhere in the archives that you need to VACUUM a table regularly for indexes to work properly. I tried that, but aborted it after about 5 hours. I can't use pgsql if I have to take the db down for more than 10-15 minutes a day.
Then I read somewhere else that you should drop your indexes before VACUUMing and re-create them afterwards. I tried that, and VACUUM finished in about 10 minutes. Kewl... but now I've been trying to recreate my primary key for the last 18 hours...not so good.
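[Editor's note: the drop/VACUUM/recreate cycle described would look roughly like this, using the hypothetical names from the sketch above. Dropping and recreating the primary key's underlying unique index directly avoids ALTER TABLE constraint syntax that may not exist in 7.1; rebuilding that unique index over 40 million rows is the step that ran for 18 hours.]

```sql
-- Drop all indexes, including the one backing the primary key
DROP INDEX ticks_reorder_idx;
DROP INDEX ticks_date_idx;
DROP INDEX ticks_pkey;

-- VACUUM is fast with no indexes to maintain; ANALYZE also refreshes
-- the planner statistics that index-based plans depend on
VACUUM ANALYZE ticks;

-- Recreate the indexes afterwards
CREATE UNIQUE INDEX ticks_pkey
    ON ticks (symbol, exchange, trade_day, seq, batch);
CREATE INDEX ticks_reorder_idx
    ON ticks (trade_day, symbol, exchange, seq, batch);
CREATE INDEX ticks_date_idx ON ticks (trade_day);
```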
Should I have dropped all indexes *except* the primary key? Or would VACUUM still take forever that way? Should I make an artificial primary key with a serial type to simplify things? Anyone have any hints at all for me?
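[Editor's note: the serial-surrogate-key idea asked about above might look like this, again with hypothetical names. Note that if uniqueness on the original 5 columns must still be enforced, the wide unique index remains and still has to be rebuilt, so the surrogate key simplifies the schema more than it speeds up maintenance.]

```sql
-- Surrogate key: a serial column becomes the primary key, and the
-- original 5-column key becomes an ordinary unique index
CREATE TABLE ticks (
    id        serial PRIMARY KEY,
    symbol    char(4),
    exchange  varchar(32),
    trade_day date,
    seq       int,
    batch     int
);

CREATE UNIQUE INDEX ticks_natural_key
    ON ticks (symbol, exchange, trade_day, seq, batch);
```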
thanks,
Partap Davis