From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Error handling in plperl and pltcl
Date: 2004-11-20 21:09:41
Message-ID: 216.1100984981@sss.pgh.pa.us
Lists: pgsql-hackers
I wrote:
> I get about 6900 vs 12800 msec, so for a simple pre-planned query
> it's not quite a 50% overhead.
However, that was yesterday ;-). I did some profiling and found some
easy-to-knock-off hotspots. Today I'm measuring about 25% overhead
for a simple SELECT, which I think is entirely acceptable considering
the cleanliness of definition that we're buying.
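As a side note, the quoted numbers line up with the "not quite 50%" reading if overhead is taken as the exception block's share of the slower run's total time; that definition is my assumption, not something stated in the thread. A quick check:

```python
# Reconciling the quoted timings with the "not quite a 50% overhead" remark.
# Assumption (mine): overhead = extra time as a fraction of the slower run.
plain_ms = 6900     # foo(): loop body without an exception block
guarded_ms = 12800  # foos(): same loop wrapped in begin/exception/end

overhead = (guarded_ms - plain_ms) / guarded_ms
print(f"{overhead:.0%}")  # prints "46%", i.e. not quite 50%
```

Measured the other common way, relative to the faster run, the same figures give (12800 - 6900) / 6900 ≈ 86%, so the fraction-of-total reading appears to be the intended one.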
I changed my test cases to be
create or replace function foo(int,int) returns int as '
declare x int;
begin
  for i in 1 .. $1 loop
    select into x unique1 from tenk1 where unique2 = $2;
  end loop;
  return x;
end' language plpgsql;
create or replace function foos(int,int) returns int as '
declare x int;
begin
  for i in 1 .. $1 loop
    begin
      select into x unique1 from tenk1 where unique2 = $2;
    exception
      when others then null;
    end;
  end loop;
  return x;
end' language plpgsql;
so as to minimize the extraneous overhead --- I think this is a harder
test (gives a higher number) than what I was doing yesterday.
regards, tom lane