From: Jasen Betts <jasen(at)xnet(dot)co(dot)nz>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Trigger Performance
Date: 2011-01-16 03:46:43
Message-ID: igtpn3$hjv$1@reversiblemaps.ath.cx
Lists: pgsql-general
On 2011-01-15, Randall Smith <randall(at)tnr(dot)cc> wrote:
> Hi,
>
> I've created a trigger that checks the uniqueness of two columns in a
> table. Traditionally, one would use a unique constraint, but in my
> case, the size of the unique index would be too large and some
> performance loss is acceptable. However, the trigger performance seems
> to be far below what's indicated by an explain analyze of the query used
> in the trigger.
>
> The unique fields consist of an indexed int8 (volume_id) and a text
> field (name). The average ratio of volume_id to name is 1 to 10,000.
> The query I'm using to check uniqueness in the trigger is:
>
> ...
> IF (SELECT EXISTS (SELECT 1 FROM t1 WHERE
> volume_id = NEW.volume_id AND name = NEW.name)) THEN
> RAISE EXCEPTION '% already exists on volume', NEW.name;
> END IF;
> ...
In PL/pgSQL the condition of an IF is already evaluated via an
implicit SELECT, so the outer SELECT EXISTS (...) wrapper is redundant:
<http://www.postgresql.org/docs/8.4/interactive/plpgsql-expressions.html>
IF EXISTS (SELECT 1 FROM t1 WHERE
volume_id = NEW.volume_id AND name = NEW.name) THEN
RAISE EXCEPTION '% already exists on volume', NEW.name;
END IF;
I don't know whether that will help performance, though.
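For reference, here's a minimal sketch of how that check might sit in
a complete trigger (table and column names are from your post; the
function and trigger names are made up):

    CREATE OR REPLACE FUNCTION t1_check_unique() RETURNS trigger AS $$
    BEGIN
        -- reject the row if the (volume_id, name) pair already exists
        IF EXISTS (SELECT 1 FROM t1
                   WHERE volume_id = NEW.volume_id
                     AND name = NEW.name) THEN
            RAISE EXCEPTION '% already exists on volume', NEW.name;
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER t1_check_unique
        BEFORE INSERT ON t1
        FOR EACH ROW EXECUTE PROCEDURE t1_check_unique();

Bear in mind the check is racy: two concurrent inserts can both pass
the EXISTS test before either commits, which a real unique index would
prevent.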
> Result (cost=8.35..8.36 rows=1 width=0) (actual time=0.019..0.020
> rows=1 loops=1)
> InitPlan 1 (returns $0)
> -> Index Scan using volume_id_idx on t1 (cost=0.00..8.35 rows=1
> width=0) (actual time=0.016..0.016 rows=0 loops=1)
> Index Cond: (volume_id = 300)
> Filter: (name = 'whodat'::text)
> Total runtime: 0.053 ms
> (6 rows)
I got 0.4 ms the first time I tried this.

That actual runtime is helped by cache locality; reconnect and try it
again and you'll see a worse figure (I got a factor-of-three
difference).

At 53 µs it probably didn't hit the hard disk; when that starts
happening things will get much worse.
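To see whether a given lookup was served from shared buffers or had to
read pages in, EXPLAIN takes a BUFFERS option (assuming you're on 9.0
or later; the constants below are from your plan):

    -- "shared hit" = page was already in PostgreSQL's buffer cache,
    -- "read" = it had to be fetched from the kernel or disk
    EXPLAIN (ANALYZE, BUFFERS)
        SELECT 1 FROM t1
        WHERE volume_id = 300 AND name = 'whodat';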
> 0.053 ms/record / 1,000 ms/sec * 10,000 records = .53 seconds
huh?
> According to that stat, this lookup should be adding about 0.5 seconds
> to 10,000 records,
Why? What are you doing to 10,000 records?
> far from the actual 38 seconds it is adding. I've
> tried to change up the query in the trigger to see if I could get
> different results with not much luck. Any idea what might be taking up
> the extra time or what I can do to troubleshoot?
Try it without that check (on sample data) and see how much faster it
goes (just comment out that part of the trigger).

When I tested that here I could not see a definite difference
(timings were all over the place, some slower, some faster).
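An alternative to editing the function: disable the trigger for the
test run (trigger name as in the sketch above, so hypothetical):

    -- time the bulk insert with the trigger off, then restore it
    ALTER TABLE t1 DISABLE TRIGGER t1_check_unique;
    -- ... run the 10,000-row insert and time it ...
    ALTER TABLE t1 ENABLE TRIGGER t1_check_unique;

That gives you the insert timing with and without the check without
touching the function definition.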
How much disk (in bytes, and in dollars) are you hoping to save by not
using the index?
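For comparison, the index you're avoiding would be something like this
(index name made up):

    CREATE UNIQUE INDEX t1_volume_id_name_key ON t1 (volume_id, name);

    -- once built, its on-disk size in bytes:
    SELECT pg_relation_size('t1_volume_id_name_key');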
--
⚂⚃ 100% natural