stored procedure performance

From: Rick Gigger <rick(at)alpinenetworking(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: stored procedure performance
Date: 2005-12-14 02:38:56
Message-ID: E8715A31-EC27-43DA-AE4C-B9C0E8C43290@alpinenetworking.com
Lists: pgsql-general

I have a table that I populate with a stored procedure. When the
stored procedure runs, it deletes the table's contents and rebuilds the
whole thing from scratch. Initially it performs terribly, but after I
play around with it for a while (I will describe this in a moment) it
runs very, very fast.
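For concreteness, a delete-and-rebuild function of the kind I mean
might look like this (the table, column, and function names here are
made up for illustration; my real procedure is more involved):

```sql
-- Hypothetical sketch: wipe a summary table and repopulate it from a
-- detail table, all inside one pl/pgsql function.
CREATE OR REPLACE FUNCTION rebuild_summary() RETURNS void AS $$
BEGIN
    DELETE FROM summary;
    INSERT INTO summary (id, total)
        SELECT id, sum(amount)
        FROM detail
        GROUP BY id;
END;
$$ LANGUAGE plpgsql;
```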

If I understand correctly, when the stored procedure is first run it
creates query plans for each of the queries in the stored procedure
and caches them. I don't know in what order loading the dump does
everything, but I'm guessing that when the stored proc's plans are
created, the stats (or lack of stats) present at that time cause it to
create bad query plans.

So I have two questions:

1) How can I verify this? Is there any way to do an explain on
something running in a stored procedure? (I am using pl/pgsql)
2) What can I do about it? What actually fixes it is to run it for a
while, make sure there is a bunch of data there (even if it is
deleted and not visible to anything), and run vacuum analyze. Then
recreate the stored procedure, then run it again. At that point it
goes very, very fast. So my only option now is to create the
database from the dump and then fix the stored proc manually. I
guess I could run a script to do this. Is there a way that I can
delay the creation of query plans until the database has some actual
data available, so that it can create good plans?
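On question 1, one way I could imagine approximating an explain of
the function's queries: since pl/pgsql caches plans much like
prepared statements do, I could PREPARE the equivalent query by hand
(with the function's variables turned into parameters) and EXPLAIN
the prepared statement. The query below is hypothetical, matching the
sketch above:

```sql
-- Prepare the query the way the function would see it: pl/pgsql
-- variables become parameters ($1, $2, ...), which the planner must
-- plan without knowing their values.
PREPARE p (integer) AS
    SELECT id, sum(amount) FROM detail WHERE id = $1 GROUP BY id;

-- Show the generic plan the prepared statement will use.
EXPLAIN EXECUTE p (42);

DEALLOCATE p;
```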
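On question 2, one workaround I have seen suggested is to run the
problem statements through EXECUTE (dynamic SQL) inside the function,
since EXECUTE plans the statement fresh on every call and so sees the
statistics current at run time, at the cost of re-planning each time.
A sketch, again with hypothetical names:

```sql
CREATE OR REPLACE FUNCTION rebuild_summary() RETURNS void AS $$
BEGIN
    DELETE FROM summary;
    -- EXECUTE forces the statement to be planned at run time rather
    -- than reusing a plan cached when the table was empty.
    EXECUTE 'INSERT INTO summary (id, total)
             SELECT id, sum(amount) FROM detail GROUP BY id';
END;
$$ LANGUAGE plpgsql;
```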
