From: Gavin Sherry <swm(at)linuxworld(dot)com(dot)au>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: pgstats_initstats() cost
Date: 2003-08-12 02:38:39
Message-ID: Pine.LNX.4.21.0308121233390.31574-100000@linuxworld.com.au
Lists: pgsql-hackers
On Mon, 11 Aug 2003, Andrew Dunstan wrote:
>
> ----- Original Message -----
> From: "Gavin Sherry" <swm(at)linuxworld(dot)com(dot)au>
> > I am still researching ways of increasing performance of yacc parsers --
> > there is a very small amount of information on the Web concerning this --
>
> I know some people who will tell you that the best way of improving
> performance in this area is not to use yacc (or bison) parsers ...
Yes. Cost of maintenance vs. performance cost...
>
> OTOH we need to understand exactly what you were profiling - if it is 1
> dynamic sql statement per insert then it might not be too close to the real
> world - a high volume program is likely to require 1 parse per many many
> executions, isn't it?
I wasn't interested in measuring the performance of yacc -- I know
it is bad. It was a basic test which wasn't even meant to be real
world. It just seemed interesting that the numbers were three times slower
than on the other databases I ran it against. Here is the script which
generates the SQL:
echo "create table abc(t text);"
echo "begin;"
c=0
while [ $c -lt 100000 ]
do
    echo "insert into abc values('thread1');"
    c=$[$c+1]
done
echo "commit;"
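
As an aside, a variant of the generator along Andrew's "1 parse per many
executions" line would emit PREPARE/EXECUTE instead of raw INSERTs, so the
statement goes through the parser once and is only executed thereafter. This
is just an illustrative sketch (the statement name "ins" is arbitrary, and it
uses the portable $((...)) arithmetic rather than the bash-only $[...]):

```shell
#!/bin/sh
# Parse the INSERT once with PREPARE, then EXECUTE it repeatedly;
# each EXECUTE skips the raw-parse step the benchmark above pays per row.
echo "create table abc(t text);"
echo "prepare ins(text) as insert into abc values(\$1);"
echo "begin;"
c=0
while [ $c -lt 100000 ]
do
    echo "execute ins('thread1');"
    c=$((c+1))
done
echo "commit;"
```

Comparing psql timings of the two generated files would isolate roughly how
much of the difference is parse overhead rather than executor cost.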
Thanks,
Gavin