From: Christopher Petrilli <petrilli(at)gmail(dot)com>
To: Shashi Gireddy <gireddy(at)gmail(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: too slow
Date: 2005-02-08 18:08:55
Message-ID: 59d991c405020810082a3683f7@mail.gmail.com
Lists: pgsql-admin
On Tue, 8 Feb 2005 11:51:22 -0600, Shashi Gireddy <gireddy(at)gmail(dot)com> wrote:
> I recently migrated from MySQL. The database size in MySQL was 1.4GB
> (it is a static database). It generated a dump file (.sql) of size
> 8GB, and it took 2 days to import the whole thing into Postgres. After
> all that, the response from Postgres is a disaster. It took 40 seconds
> to run `select count(logrecno) from sf10001;`, which returned 197569.
> And it took forever to display the table. How can I optimize the
> database so that I can expect faster access to the data?
>
> Each table has 70 columns x 197569 rows (static data). I have 40
> such tables, everything static.
>
> System configuration: P4 2.8GHz, 512MB RAM; OS: XP; Postgres version: 8.0
A couple of things...
1. Did you run VACUUM ANALYZE on the database after importing?
2. Did you use COPY or INSERT to get data into the database? There's
an almost 100x difference.
3. What does the schema look like?
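To make points 1 and 2 concrete, here is a minimal sketch; the table name is taken from your example, but the data file path and delimiter are hypothetical:

```sql
-- Bulk loading with COPY instead of per-row INSERTs is dramatically
-- faster; the file path and tab delimiter here are assumptions:
COPY sf10001 FROM '/tmp/sf10001.dat' WITH DELIMITER '\t';

-- After a bulk import, reclaim dead row versions and refresh the
-- planner's statistics so it can choose sensible plans:
VACUUM ANALYZE sf10001;
```

Without the ANALYZE step the planner has no statistics for the freshly loaded tables and will often pick very poor plans.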
count(foo) is an expensive operation on PostgreSQL (and Oracle),
because of the multi-version system, so it's something you
should avoid if you can. Other than that, I've found
PostgreSQL to be much faster than MySQL under load, and when
multiple people are involved, especially if there's a writer anywhere.
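If an approximate count is good enough for your static data, one common workaround is to read the planner's estimate from the system catalogs instead of scanning the whole table; this is a sketch, with the table name taken from your example:

```sql
-- reltuples is the planner's row-count estimate, refreshed by
-- ANALYZE. It is approximate, but it returns instantly regardless
-- of table size, unlike count(*), which must scan every row under MVCC:
SELECT reltuples FROM pg_class WHERE relname = 'sf10001';
```

Since your tables never change after loading, the estimate will stay exact once you've run ANALYZE.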
Chris
--
| Christopher Petrilli
| petrilli(at)gmail(dot)com