From: "Shashank Tripathi" <shashank(dot)tripathi(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: More grist for the PostgreSQL vs MySQL mill
Date: 2007-01-22 02:30:40
Message-ID: 7cab9c1b0701211830x54a37981t77a0c5ef5e91334b@mail.gmail.com
Lists: pgsql-general
> select something from othertable;
> select * from table where table_id in (?, ?, ?, ?, ?, ?, ?, ...)
This is what MySQL's CEO Mårten Mickos said in a Slashdot interview. If
we can issue the two queries above from, say, a PHP application, with
each executing in 0.004 seconds, then an optimized subquery needs to
beat the 0.008-second mark to be a viable alternative.
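A minimal sketch of the two-query pattern being described, using Python's stdlib sqlite3 purely for illustration (the thread assumes a PHP application against MySQL or PostgreSQL); the table and column names follow the quoted queries, and the sample data is hypothetical:

```python
import sqlite3

# Toy schema matching the quoted queries; contents are made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE othertable (something INTEGER);
    CREATE TABLE main_table (table_id INTEGER PRIMARY KEY, payload TEXT);
    INSERT INTO othertable VALUES (1), (3);
    INSERT INTO main_table VALUES (1, 'a'), (2, 'b'), (3, 'c');
""")

# Query 1: pull the key values back into the application.
ids = [row[0] for row in conn.execute("SELECT something FROM othertable")]

# Query 2: feed them back as bound parameters in an IN (...) list,
# instead of letting the database evaluate a subquery.
placeholders = ", ".join("?" for _ in ids)
rows = conn.execute(
    f"SELECT * FROM main_table WHERE table_id IN ({placeholders})", ids
).fetchall()
print(rows)
```

The trade-off is exactly the one debated here: two round trips and some application-side plumbing in exchange for two simple indexed lookups the database cannot mis-plan.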
This works for most people who are not looking for elegance of database
design. I am assuming the MyISAM engine with indexes on the WHERE
columns. Granted, this is neither future-proof nor foolproof, but in
high-traffic environments the drop in elegance is worth it.
The problem is that once the number of rows exceeds 30 million, MySQL's
performance degrades substantially. For most people this is not an
issue. PG is solid with huge databases, but in my experience even the
most optimized subselect on PG will not return a value in 0.008
seconds on 10 million rows -- I'd appreciate hearing other experiences.
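For reference, the single-statement forms the two-query trick is competing against are the IN subselect and its hand-written join equivalent; whether the planner makes these fast is the point of contention. A hedged sketch (same hypothetical schema, sqlite3 for illustration only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE othertable (something INTEGER);
    CREATE TABLE main_table (table_id INTEGER PRIMARY KEY, payload TEXT);
    INSERT INTO othertable VALUES (1), (3);
    INSERT INTO main_table VALUES (1, 'a'), (2, 'b'), (3, 'c');
""")

# Subselect form: one round trip; the optimizer must flatten the IN (...).
sub = conn.execute("""
    SELECT * FROM main_table
    WHERE table_id IN (SELECT something FROM othertable)
""").fetchall()

# Join form: the rewrite a good planner should reach on its own.
join = conn.execute("""
    SELECT m.* FROM main_table m
    JOIN othertable o ON o.something = m.table_id
""").fetchall()

assert sub == join  # identical result set either way
```

Timing either form against the two-query pattern on a realistic row count is the experiment the paragraph above invites.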
Shanx