From: | "Jim C(dot) Nasby" <jim(at)nasby(dot)net> |
---|---|
To: Matthew Schumacher <matt(dot)s(at)aptalaska(dot)net>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Problems with inconsistent query performance.
Date: 2006-09-27 21:42:22
Message-ID: 20060927214221.GA19827@nasby.net
Lists: pgsql-performance
On Wed, Sep 27, 2006 at 01:33:09PM -0800, Matthew Schumacher wrote:
> Jim,
>
> Thanks for the help. I went and looked at that example and I don't see
> how it's different than the "INSERT into radutmp_tab" I'm already doing.
> Both raise an exception, the only difference is that I'm not doing
> anything with it. Perhaps you are talking about the "IF (NOT FOUND)" I
> put after the "UPDATE radutmp_tab". Should this be an EXCEPTION
> instead? Also I don't know how this could cause a race condition. As
> far as I understand each proc is run in its own transaction, and the
> code in the proc is run serially. Can you explain more why this could
> cause a race?
It can cause a race if another process could be performing those same
inserts or updates at the same time.
I know the UPDATE case can certainly cause a race: two connections run
the UPDATE, both hit NOT FOUND, and both try to INSERT, but only one of
them will get to commit.
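For reference, here's the retry loop from that docs example (Example
36-1), on the docs' toy table rather than your schema:

CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS void AS $$
BEGIN
    LOOP
        -- first, try to update the existing row
        UPDATE db SET b = data WHERE a = key;
        IF FOUND THEN
            RETURN;
        END IF;
        -- not there, so try to insert; if someone else inserts the
        -- same key concurrently we'll get a unique_violation
        BEGIN
            INSERT INTO db (a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            NULL;  -- loop back and retry the UPDATE
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

The loop is what makes it safe: whichever statement loses the race just
gets retried instead of erroring out.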
I think that the UNIQUE_VIOLATION case should be safe, since a second
inserter should block if there's another insert that's waiting to
commit.
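To see why, picture it as two sessions on a toy table (names made up
for illustration):

CREATE TABLE t (id int PRIMARY KEY);

-- session A:
BEGIN;
INSERT INTO t VALUES (1);

-- session B blocks here, waiting on A's uncommitted index entry:
INSERT INTO t VALUES (1);

-- if A commits, B raises unique_violation (your handler swallows it);
-- if A rolls back, B's INSERT goes through normally.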
DELETEs are something else to think about for both cases.
If you're certain that only one process will be performing DML on those
tables at any given time, then what you have is safe. But if that's the
case, I'm thinking you should be able to group things into chunks, which
should be more efficient.
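Something like this, as a rough sketch (accounting_tab is a made-up
destination table; the lock keeps other writers out so the INSERT and
DELETE see the same set of rows):

BEGIN;
LOCK TABLE tmpaccounting_tab IN EXCLUSIVE MODE;  -- readers still OK
INSERT INTO accounting_tab SELECT * FROM tmpaccounting_tab;
DELETE FROM tmpaccounting_tab;
COMMIT;

That turns many single-row operations into two statements per batch.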
> Thanks,
> schu
>
>
>
> Jim C. Nasby wrote:
> > Periodically taking longer is probably a case of some other process in
> > the database holding a lock you need, or otherwise bogging the system
> > down, especially if you're always running acctmessage from the same
> > connection (because the query plans shouldn't be changing then). I'd
> > suggest looking at what else is happening at the same time.
> >
> > Also, it's more efficient to operate on chunks of data rather than one
> > row at a time whenever possible. If you have to log each row
> > individually, consider simply logging them into a table, and then
> > periodically pulling data out of that table to do additional processing
> > on it.
> >
> > BTW, your detection of duplicates/row existence has a race condition.
> > Take a look at example 36-1 at
> > http://www.postgresql.org/docs/8.1/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
> > for a better way to handle it.
>
> >> ==========================================================================
> >> CREATE FUNCTION acctmessage( <lots of accounting columns> ) RETURNS void AS $$
> >> BEGIN
> >> INSERT into tmpaccounting_tab ( ... ) values ( ... );
> >>
> >> IF _acctType = 'start' THEN
> >> BEGIN
> >> INSERT into radutmp_tab ( ... ) values ( ... );
> >> EXCEPTION WHEN UNIQUE_VIOLATION THEN
> >> NULL;
> >> END;
> >> ELSIF _acctType = 'stop' THEN
> >> UPDATE radutmp_tab SET ... WHERE sessionId = _sessionId AND userName =
> >> _userName;
> >> IF (NOT FOUND) THEN
> >> INSERT into radutmp_tab ( ... ) values ( ... );
> >> END IF;
> >>
> >> END IF;
> >> END;
> >> $$
> >> LANGUAGE plpgsql;
> >> ==========================================================================
>
--
Jim Nasby jim(at)nasby(dot)net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)