Re: PostgreSQL Indexing versus MySQL

From: Adam Ruth <aruth(at)intercation(dot)com>
To: "Chris Fossenier" <chris(at)engenuit(dot)com>
Cc: <pgsql-general(at)postgresql(dot)org>
Subject: Re: PostgreSQL Indexing versus MySQL
Date: 2004-02-15 19:32:28
Message-ID: B0904D2C-5FED-11D8-A252-000A959D1424@intercation.com
Lists: pgsql-general

I had this same problem not long ago, and it caused me to go with
PostgreSQL over MySQL on a project. The problem you're describing does
not exist in PostgreSQL: creating a new index on a table does not
require recreating the other indexes. MySQL also has the same problem
if you add or drop columns; all indexes need to be recreated. Again,
not a problem in PostgreSQL. You'll find index build times similar to
Oracle and MS SQL (I've never directly compared them, but they feel
about the same).
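For example, here's a rough sketch (the table and column names are made
up purely for illustration): each CREATE INDEX below builds only the new
index and leaves the existing ones, including the primary key's, alone.

    -- Hypothetical table standing in for your 125 million row data set.
    CREATE TABLE people (
        id          integer PRIMARY KEY,
        last_name   text,
        postal_code text
    );

    -- Each statement builds just its own index; nothing else is rebuilt.
    CREATE INDEX people_last_name_idx ON people (last_name);
    CREATE INDEX people_postal_code_idx ON people (postal_code);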

Regards,

Adam Ruth

On Feb 11, 2004, at 1:58 PM, Chris Fossenier wrote:

> I currently have a client with a database that must hold 125 million
> records and, all tallied, about 250 fields.
>  
> The database has been normalized and indexed appropriately.
>  
> If any of you have worked with MySQL, you will have discovered that
> indexing is very limited. You can only have one index file per table.
> The indexing process actually creates a full copy of the original
> table, and once you get above two indexes with 125 million records, it
> is extremely slow.
>  
> Should I even bother trying PostgreSQL to resolve this issue?
>  
> We can generate the same indexes in MS SQL and Oracle in a fraction of
> the time it takes MySQL.
>  
> Thanks
>  
> Chris.
