From: Richard Huxton <dev(at)archonet(dot)com>
To: Alexander Elgert <alexander_elgert(at)adiva(dot)de>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: postgres slower on nested queries
Date: 2007-03-07 12:01:17
Message-ID: 45EEA98D.2070805@archonet.com
Lists: pgsql-general
Alexander Elgert wrote:
>
> This results in a structure where I can iterate over all keys in the
> 2-dim array.
> You can see I iterate first over the databases and then over tables AND
> columns!
> --- mysql: ~1s (Database X)
> --- postgres: ~1s (Database Y)
> ;)
>
> In contrast: =======================================================
>
> foreach database {
>     foreach table {
>         foreach column {
>             do something ...
>         }
>     }
> }
> --- mysql: ~1s (Database X)
> --- postgres: ~80s (Database Y)
> ;(
>
>>> The second approach is much faster; this must be because there is no
>>> nesting. ;(
>>
>> What nesting? Are you trying to do sub-queries of some sort?
> I did a loop over all tables and THEN called a query for each table to
> get the columns (from the same table).
> Yes, there are definitely more queries the DBMS has to manage.
> (It is bad style, but it is intuitive. Maybe the overhead of a single
> query is more time-consuming than in mysql.)
I think I see what you're doing now. As Tom says, the information_schema
has overheads, but I must say I'm surprised at it taking 80 seconds.
I can see how you might find it more intuitive. I think the other way
around myself - grab it all, then process it.
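The "grab it all, then process it" idea can be sketched like this: issue one query against information_schema.columns for every (table, column) pair, then group the flat result client-side, so the caller keeps the intuitive nested loops but pays for only one round trip instead of one query per table. This is just an illustration, not the poster's actual code - the psycopg2 fetch shown in the comments is an assumption about the setup, and the sample rows stand in for a real query result.

```python
from itertools import groupby
from operator import itemgetter

# In a live setup these rows would come from a single query, e.g. with
# psycopg2 (an assumed driver, not confirmed by the thread):
#   cur.execute("""SELECT table_name, column_name
#                  FROM information_schema.columns
#                  WHERE table_schema = 'public'
#                  ORDER BY table_name, ordinal_position""")
#   rows = cur.fetchall()
# Hypothetical sample rows stand in for that fetch result here.
rows = [
    ("accounts", "id"), ("accounts", "owner"),
    ("orders", "id"), ("orders", "account_id"), ("orders", "total"),
]

# Group the flat (table, column) pairs into {table: [columns]}.
# groupby relies on the rows being sorted by table name, which the
# ORDER BY above guarantees.
columns_by_table = {
    table: [col for _, col in cols]
    for table, cols in groupby(rows, key=itemgetter(0))
}

# The caller can still write the intuitive nested loops, but against
# the in-memory structure rather than the database.
for table, columns in columns_by_table.items():
    for column in columns:
        print(table, column)
```

The same pattern extends to the outer database loop: since a PostgreSQL connection is per-database, you still need one connection per database, but within each database a single information_schema query replaces the per-table queries.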
--
Richard Huxton
Archonet Ltd
Next Message: Markus Schiltknecht | 2007-03-07 12:23:09 | Re: real multi-master replication?
Previous Message: Martijn van Oosterhout | 2007-03-07 11:56:30 | Re: postgres slower on nested queries