From: | "Jim C(dot) Nasby" <jim(at)nasby(dot)net> |
---|---|
To: | Matthew Schumacher <matt(dot)s(at)aptalaska(dot)net> |
Cc: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: Problems with inconsistant query performance. |
Date: | 2006-09-27 20:31:55 |
Message-ID: | 20060927203155.GT19827@nasby.net |
Lists: pgsql-performance
The periodic slowdowns are probably caused by some other process in
the database holding a lock you need, or otherwise bogging the system
down, especially if you're always running acctmessage from the same
connection (because the query plans shouldn't be changing then). I'd
suggest looking at what else is happening at the same time.
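If you want to catch it in the act, something like this (a sketch
against the 8.1 system views; it assumes stats_command_string is on so
current_query is populated) will show which lock requests are waiting
and what the blocked backends are running:

  -- list ungranted lock requests and what each waiting backend is running
  SELECT l.pid, l.relation::regclass AS relation, l.mode, a.current_query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid
  WHERE NOT l.granted;

Run that while acctmessage is lagging and you'll see who is stuck on what.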
Also, it's more efficient to operate on chunks of data rather than one
row at a time whenever possible. If you have to log each row
individually, consider simply logging them into a table, and then
periodically pulling data out of that table to do additional processing
on it.
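For instance, the nightly flush can move everything in one transaction
with a pair of set-oriented statements (a sketch: I'm assuming
tmpaccounting_tab and accounting_tab have matching column lists, and
that the job runs while no new rows are arriving; otherwise restrict
both statements with the same WHERE clause):

  BEGIN;
  -- move the accumulated rows into the permanent table...
  INSERT INTO accounting_tab SELECT * FROM tmpaccounting_tab;
  -- ...and clear the staging table in the same transaction
  DELETE FROM tmpaccounting_tab;
  COMMIT;

That touches each row once in bulk instead of paying per-row overhead.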
BTW, your detection of duplicates/row existence has a race condition.
Take a look at example 36-1 at
http://www.postgresql.org/docs/8.1/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
for a better way to handle it.
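The gist of that example is a retry loop around UPDATE-then-INSERT, so
a concurrent insert just sends you around the loop again instead of
leaving you with a lost update or a spurious error. From memory it
looks roughly like this (merge_db and the db table are the
documentation's placeholder names):

  CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

  CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS void AS $$
  BEGIN
      LOOP
          -- first try to update an existing row
          UPDATE db SET b = data WHERE a = key;
          IF found THEN
              RETURN;
          END IF;
          -- no row yet, so try to insert; if someone else inserts the
          -- same key first, we get unique_violation and retry the UPDATE
          BEGIN
              INSERT INTO db (a, b) VALUES (key, data);
              RETURN;
          EXCEPTION WHEN unique_violation THEN
              NULL;  -- do nothing; loop back and try the UPDATE again
          END;
      END LOOP;
  END;
  $$ LANGUAGE plpgsql;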
On Wed, Sep 27, 2006 at 10:37:22AM -0800, Matthew Schumacher wrote:
> List,
>
> I posted a little about this a while back to the general list, but never
> really got anywhere with it, so I'll try again, this time with a little
> more detail and hopefully someone can send me in the right direction.
>
> Here is the problem, I have a procedure that is called 100k times a day.
> Most of the time it's screaming fast, but other times it takes a few
> seconds. When it does lag my machine can get behind which causes other
> problems, so I'm trying to figure out why there is such a huge delta in
> performance with this proc.
>
> The proc is pretty large (due to the number of vars) so I will summarize
> it here:
>
> ==========================================================================
> CREATE FUNCTION acctmessage( <lots of accounting columns> ) RETURNS void AS $$
> BEGIN
> INSERT into tmpaccounting_tab ( ... ) values ( ... );
>
> IF _acctType = 'start' THEN
> BEGIN
> INSERT into radutmp_tab ( ... ) values ( ... );
> EXCEPTION WHEN UNIQUE_VIOLATION THEN
> NULL;
> END;
> ELSIF _acctType = 'stop' THEN
> UPDATE radutmp_tab SET ... WHERE sessionId = _sessionId AND userName = _userName;
> IF (NOT FOUND) THEN
> INSERT into radutmp_tab ( ... ) values ( ... );
> END IF;
>
> END IF;
> END;
> $$
> LANGUAGE plpgsql;
> ==========================================================================
>
> So in a nutshell, if I get an accounting record put it in the
> tmpaccounting_tab and then insert or update the radutmp_tab based on
> what kind of record it is. If for some reason the message is a start
> message and a duplicate, drop it, and if the message is a stop message
> and we don't have the start then insert it.
>
> The tmpaccounting_tab table doesn't have any indexes and gets flushed to
> the accounting_tab table nightly so it should have very good insert
> performance as the table is kept small (compared to accounting_tab) and
> doesn't have index overhead. The radutmp_tab is also kept small as
> completed sessions are flushed to another table nightly, but I do keep
> an index on sessionId and userName so the update isn't slow.
>
> Now that you have the layout, the problem: I log whenever a query takes
> more than 250ms and have logged this query:
>
> duration: 3549.229 ms statement: select acctMessage( 'stop',
> 'username', 'address', 'time', 'session', 'port', 'address', 'bytes',
> 'bytes', 0, 0, 1, 'reason', '', '', '', 'proto', 'service', 'info')
>
> But when I go do an explain analyze it is very fast:
>
> QUERY PLAN
> ------------------------------------------------------------------------------------
> Result (cost=0.00..0.03 rows=1 width=0) (actual time=6.812..6.813 rows=1 loops=1)
> Total runtime: 6.888 ms
>
> So the question is why, on a relatively simple proc, am I getting a query
> performance delta between 3549ms and 7ms?
>
> Here are some values from my postgres.conf to look at:
>
> shared_buffers = 60000 # min 16 or max_connections*2, 8KB each
> temp_buffers = 5000 # min 100, 8KB each
> #max_prepared_transactions = 5 # can be 0 or more
> # note: increasing max_prepared_transactions costs ~600 bytes of shared memory
> # per transaction slot, plus lock space (see max_locks_per_transaction).
> work_mem = 131072 # min 64, size in KB
> maintenance_work_mem = 262144 # min 1024, size in KB
> max_stack_depth = 2048 # min 100, size in KB
> effective_cache_size = 65536 # typically 8KB each
>
>
> Thanks for any help you can give,
> schu
>
>
--
Jim Nasby jim(at)nasby(dot)net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)