From: Marek Lewczuk <newsy(at)lewczuk(dot)com>
To: Shashi Gireddy <shashi(at)cs(dot)ua(dot)edu>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: too slow
Date: 2005-02-09 17:01:37
Message-ID: 420A41F1.5070007@lewczuk.com
Lists: pgsql-admin
Shashi Gireddy wrote:
> I recently migrated from MySQL. The database size in MySQL was 1.4 GB (it is a static database). It generated a dump file (.sql) of size 8 GB, and it took 2 days to import the whole thing into Postgres. After all that, the response from Postgres is a disaster: it took 40 seconds to run "select count(logrecno) from sf10001;", which returned 197569, and it took forever to display the table. How can I optimize the database so that I can expect faster access to the data?
>
> Each table has 70 columns x 197569 rows (static data), and I have 40 such tables, everything static.
>
> System configuration: P4 2.8 GHz, 512 MB RAM; OS: XP; Postgres version: 8.0
First of all, you should run VACUUM FULL ANALYZE on all tables
(http://www.postgresql.org/docs/8.0/interactive/sql-vacuum.html) - this
should solve the problem. You should also reconsider your table
structure, because PostgreSQL needs different indexes than MySQL.
A few months ago I had the same problem, but after vacuuming and
creating proper indexes everything worked like a charm. Believe me,
you can achieve the same speed - it is only a matter of a good db
structure and environment settings
(http://www.postgresql.org/docs/8.0/interactive/runtime.html).
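The advice above can be sketched in SQL. The table and column names (`sf10001`, `logrecno`) come from the original report; the index shown is only an illustration - which columns to index depends on the queries you actually run:

```sql
-- Reclaim dead space and refresh planner statistics for every table
-- (run as the database owner; VACUUM FULL locks each table while it runs).
VACUUM FULL ANALYZE;

-- Illustrative only: index a column you filter or join on.
-- Adjust the table/column to match your own queries.
CREATE INDEX idx_sf10001_logrecno ON sf10001 (logrecno);

-- Re-check the slow query; count() still has to scan rows, but
-- EXPLAIN ANALYZE shows where the time is actually going.
EXPLAIN ANALYZE SELECT count(logrecno) FROM sf10001;
```

For the environment settings, the runtime configuration page linked above covers parameters such as shared_buffers, which by default is far too small for a dedicated machine.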
Regards,
ML