From: | "Steinar H(dot) Gunderson" <sgunderson(at)bigfoot(dot)com> |
---|---|
To: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: Materializing a sequential scan |
Date: | 2005-10-23 10:23:33 |
Message-ID: | 20051023102333.GA9356@samfundet.no |
Lists: pgsql-performance
On Thu, Oct 20, 2005 at 12:37:25AM -0400, Tom Lane wrote:
> That mdb_gruppekobling_transitiv_tillukning function looks awfully
> grotty ... how many rows does it return, and how long does it take to
> run by itself? How often does its temp table get vacuumed? A quick
> band-aid might be to use TRUNCATE instead of DELETE FROM to clean the
> table ... but if I were you I'd try to rewrite the function entirely.
I've verified that it does indeed take 20ms more per run without a VACUUM,
but that shouldn't really matter -- and I guess it will go away once somebody
teaches plpgsql not to cache OIDs for CREATE TEMPORARY TABLE. :-)
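
For reference, here is a minimal sketch of the two approaches being discussed:
Tom's TRUNCATE band-aid versus re-creating the temp table (which trips over the
plpgsql OID caching and forces the EXECUTE workaround). The function, table and
column names are made up; the real mdb_gruppekobling_transitiv_tillukning body
isn't shown in this thread, and this uses 8.0-style dollar quoting.

-- Hypothetical sketch only; names and columns are invented for illustration.
CREATE OR REPLACE FUNCTION tillukning_sketch() RETURNS void AS $$
BEGIN
    -- The band-aid: TRUNCATE reclaims the space immediately, so the temp
    -- table doesn't bloat between VACUUMs the way repeated DELETE FROM does.
    TRUNCATE TABLE tt_tillukning;

    -- ... repopulate tt_tillukning here ...

    -- If the function instead drops and re-creates the temp table on every
    -- call, plpgsql caches the table's OID from the first call and later
    -- calls fail; the usual workaround is to go through EXECUTE so the
    -- statement is re-planned each time:
    -- EXECUTE 'CREATE TEMPORARY TABLE tt_tillukning (gruppe int, medlem int)';
END;
$$ LANGUAGE plpgsql;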
In any case, I still can't understand why it picks the plan it does; what's
up with the materialized seqscan, and where do the four million rows come
from? 7.4 estimates ~52000 disk page fetches for the same query, so surely
there must be a better plan than four million :-)
/* Steinar */
--
Homepage: http://www.sesse.net/