From: PFC <lists(at)boutiquenumerique(dot)com>
To: "Mauro Bertoli" <bertolima(at)yahoo(dot)it>, "Richard Huxton" <dev(at)archonet(dot)com>
Cc: pgsql-sql(at)postgresql(dot)org
Subject: Re: Postgres performance
Date: 2005-03-05 22:53:03
Message-ID: opsm6sipi2th1vuj@musicbox
Lists: pgsql-sql
> No, I didn't have foreign keys in the older version; in
> the new one I do... however, I manage relations from
> app code (PHP)...
Really?
In my experience this is a sure way to get inconsistencies slowly
creeping into your database, and you also get a load of funky concurrency
issues.
> doesn't MySQL allow using 'foreign
> keys' in a reliable and fast way, then?
It does, IF you use the InnoDB engine... which is slower than postgres...
and there are a lot of gotchas.
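For example, the relation you are currently enforcing from PHP can be
declared once in the schema, and then the database enforces it for every
client. A minimal sketch with made-up table names, using the PHP pgsql
functions (in MySQL, as far as I know, the same REFERENCES clause is
silently ignored unless both tables use ENGINE=InnoDB):

    <?php
    // Hypothetical schema: "customers" and "orders" are made-up tables.
    $conn = pg_connect("host=localhost dbname=mydb user=www password=secret");

    pg_query($conn, "CREATE TABLE customers (
        id   serial PRIMARY KEY,
        name text NOT NULL
    )");

    // The REFERENCES clause makes it impossible to insert an order that
    // points to a non-existent customer, whatever the PHP code does.
    pg_query($conn, "CREATE TABLE orders (
        id          serial PRIMARY KEY,
        customer_id integer NOT NULL REFERENCES customers(id)
    )");
    ?>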
>> Not for every query, for every CONNECTION.
>> You are using persistent connections, aren't you?
> I'm using PHP and every user (can be from 1 to
> 100 users) must connect to the database... do you know
> how I can use persistent connections? I think it's
> impossible... am I wrong?
Well, first, I measure a connection establishment time of about 20 ms in
mysql and 60 ms in postgres. This hardly matters to me, though, because I
use persistent connections: it would be crazy to spend 20 ms connecting
just to run a 0.5 ms query.
Now, in PHP, you can use mysql_pconnect instead of mysql_connect to get a
persistent connection. mod_php keeps a pool of connections. The same thing
probably applies to postgres, but as I don't use it with PHP (only with
Python) I can't say for sure. Look in the docs for "persistent connections".
This way, each Apache server process keeps a persistent connection open,
and re-uses it for every page. You save the connection establishment time
and load.
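As a rough sketch (I don't use postgres from PHP myself, so check the
docs): PHP's pgsql extension has pg_pconnect(), which plays the same role
as mysql_pconnect(). The connection string and table below are made up.

    <?php
    // pg_pconnect() re-uses an already-open connection of this Apache /
    // mod_php process when one with the same connection string exists,
    // instead of establishing a new connection on every page.
    $conn = pg_pconnect("host=localhost dbname=mydb user=www password=secret");
    if (!$conn) {
        die("connection failed");
    }

    // Queries then run exactly as they would on a pg_connect() handle.
    $result = pg_query($conn, "SELECT id, name FROM customers LIMIT 10");
    while ($row = pg_fetch_assoc($result)) {
        echo $row['id'] . " " . $row['name'] . "\n";
    }
    ?>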
>>> - why is connection time slower (compared to mySQL)?
Because MySQL spawns a thread whereas Postgres forks a whole process.
>> This is of no importance as everyone uses
>> persistent connections anyway.
> See last answer...
I hope my explanations are useful.
>> And MySQL requires ANALYZE too (read the docs); OPTIMIZE TABLE, which
>> looks like VACUUM to me; and sometimes REPAIR TABLE...
> Ok... they are conceptually implemented in the same
> way...
Well, not really.
For instance, when you make joins, postgres will look at the ANALYZE stats
and say "Hm, this value seems rare, I'll use an index scan to get these
few values" or "This column has few distinct values, I'd better load them
all into a hash before joining to this big table instead of making a lot
of index scans"... and it can get a lot more complicated than that.
MySQL thinks "I see indexed column, I don't know what a hash join is,
thus I use index."
Both try to estimate the size of result sets to choose plans; postgres
generally does it well, while mysql sometimes does something that happens
to work. Most of the time it makes no difference.
Using the MySQL ANALYZE does seem to speed up some of my queries, though I
don't think it keeps stats as detailed as postgres does.
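You can watch postgres make these decisions yourself: run ANALYZE to
refresh the statistics, then EXPLAIN to see which plan the optimizer
picks without executing the query. A quick sketch, re-using the made-up
tables from above:

    <?php
    $conn = pg_pconnect("host=localhost dbname=mydb user=www password=secret");

    // Refresh the planner statistics for the tables involved in the join.
    pg_query($conn, "ANALYZE customers");
    pg_query($conn, "ANALYZE orders");

    // EXPLAIN returns the chosen plan (index scan, hash join, seq scan...)
    // one line per row; it does not run the query itself.
    $result = pg_query($conn,
        "EXPLAIN SELECT *
         FROM orders o JOIN customers c ON c.id = o.customer_id
         WHERE c.name = 'Mauro'");
    while ($row = pg_fetch_row($result)) {
        echo $row[0] . "\n";
    }
    ?>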
Point is, if the query gets complex, forget MySQL...