From: Karel Zak <zakkr(at)zf(dot)jcu(dot)cz>
To: Hannu Krosing <hannu(at)tm(dot)ee>
Cc: Jean-Paul ARGUDO <jean-paul(dot)argudo(at)idealx(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Oracle vs PostgreSQL in real life
Date: 2002-03-01 09:28:46
Message-ID: 20020301102845.A17401@zf.jcu.cz
Lists: pgsql-hackers
On Thu, Feb 28, 2002 at 06:21:34PM +0200, Hannu Krosing wrote:
> On Thu, 2002-02-28 at 15:58, Karel Zak wrote:
> > On Thu, Feb 28, 2002 at 02:39:47PM +0100, Jean-Paul ARGUDO wrote:
> >
> > > 2) if prepared statements and stored execution plans exist, why can't those be used
> > > from any client interface or simple SQL?
> >
> > "Execute an already parsed query plan" exists in the SPI layer only.
> > PostgreSQL has no SQL interface for this -- except my experimental
> > patch for 7.0 (I sometimes think about porting it to the latest PostgreSQL
> > release, but I haven't had the motivation to do it...)
>
> I did some testing
>
> 5000*20 runs of update on non-existing key
>
> (send query+parse+optimise+update 0 rows)
>
> [hannu(at)taru abench]$ time ./abench.py 2>/dev/null
>
> real 0m38.992s
> user 0m6.590s
> sys 0m1.860s
>
> 5000*20 runs of update on random existing key
>
> (send query+parse+optimise+update 1 row)
>
> [hannu(at)taru abench]$ time ./abench.py 2>/dev/null
>
> real 1m48.380s
> user 0m17.330s
> sys 0m2.940s
>
>
> the backend wallclock time for first is 39.0 - 6.6 = 32.4
> the backend wallclock time for second is 108.4 - 17.3 = 91.1
>
> so roughly 1/3 of time is spent on
>
> communication+parse+optimize+locate
>
> and 2/3 on actually updating the tuples
>
> if we could save half of parse/optimise time by saving query plans, then
> the backend performance would go up from 1097 to 100000/(91.1-16.2)=1335
> updates/sec.
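The arithmetic in the quoted benchmark can be sketched in Python (the figures are taken from the numbers above; subtracting client CPU time from wall-clock time as an estimate of backend time, and halving the parse/optimize cost, are both Hannu's assumptions):

```python
QUERIES = 5000 * 20  # 100000 updates per run

# Wall-clock minus client CPU time, as a rough estimate of backend time.
backend_miss = 39.0 - 6.6    # non-existing key: comm + parse + optimize, 0 rows updated
backend_hit = 108.4 - 17.3   # existing key: the same, plus the actual update

baseline = QUERIES / backend_hit              # ~1097 updates/sec today
saved = backend_miss / 2                      # assume half the comm/parse/optimize time is saved
projected = QUERIES / (backend_hit - saved)   # ~1335 updates/sec with cached plans

print(f"{baseline:.0f} -> {projected:.0f} updates/sec")
```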
It depends on the proportion between time-in-parser and time-in-executor. A
query plan cache is interesting for you only if your query spends a lot of
time in the parser and optimizer. Because PostgreSQL has dynamic functions
and operators, the time spent in the parser can be very significant for some
queries.

We have a good notion of total query times now (for example from benchmark
tests), but we have no real statistics about the path of a query through the
backend. How much time does a query spend in the parser, how much in the
optimizer or the executor? (Maybe profiling could be used, but I'm not sure
about it.) All my suggestions for memory management were based on the output
of debug messages that I wrote into mmgr, and Tom was surprised, for example,
by how often realloc was used. What I want to say is: we need more and more
data from the code, otherwise we can't optimize it well ;-)

suggestion: "TODO: solid path-of-query time profiling for developers" :-)
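The kind of per-phase accounting suggested above can be illustrated with a small sketch (this is illustrative Python, not PostgreSQL backend code; the phase names and the stand-in parse/optimize/execute bodies are made up for the example):

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# Accumulated wall-clock time per query-processing phase.
phase_totals = defaultdict(float)

@contextmanager
def phase(name):
    """Time a block of code and charge it to the named phase."""
    start = time.perf_counter()
    try:
        yield
    finally:
        phase_totals[name] += time.perf_counter() - start

def run_query(q):
    with phase("parse"):
        tokens = q.split()       # stand-in for the real parser
    with phase("optimize"):
        plan = sorted(tokens)    # stand-in for the planner/optimizer
    with phase("execute"):
        return len(plan)         # stand-in for the executor

for _ in range(1000):
    run_query("update t set x = x + 1 where key = 42")

for name, secs in phase_totals.items():
    print(f"{name}: {secs:.4f}s")
```

A real version of this inside the backend would tell developers exactly how the path-of-query time splits between parser, optimizer, and executor.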
Karel
--
Karel Zak <zakkr(at)zf(dot)jcu(dot)cz>
http://home.zf.jcu.cz/~zakkr/
C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz