From: Greg Stark <gsstark(at)mit(dot)edu>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: performance of IN (subquery)
Date: 2004-08-28 08:00:34
Message-ID: 87vff3lmjh.fsf@stark.xeocode.com
Lists: pgsql-general
Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:
> > Except that the first thing the job does is delete all the old records. This
> > is inside a transaction. So an estimate based on the heap size would be off by
> > a factor of two by the time the job is done.
>
> Could you use TRUNCATE? I dunno if locking the table is okay for you.
> It is transaction safe though.
Well, if necessary I could, but if I can do it without downtime all the
better. In any case I think I'll be ok with a factor of 2 misestimation. I was
just giving an example use case for you to chew on when analyzing this new
proposal.
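For concreteness, here's a minimal sketch of the transactional TRUNCATE
pattern Tom suggests (the table and column names are made up):

    BEGIN;
    -- TRUNCATE takes an ACCESS EXCLUSIVE lock, so other sessions block
    -- on the table until COMMIT -- hence the locking caveat above.
    TRUNCATE TABLE job_records;
    -- Reload the fresh batch. If anything goes wrong, ROLLBACK restores
    -- the old contents: TRUNCATE is transaction safe.
    INSERT INTO job_records (id, payload)
        SELECT g, 'row ' || g FROM generate_series(1, 1000) AS g;
    COMMIT;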
I'm not sure where I stand on the idea. I like that table size estimates
would always be fairly reasonable even without statistics. But I also have a
really strong desire for plan stability.
> [ shrug ] You can get a sudden degradation with fixed plans, too. All
> it takes is an addition of a lot of rows in some table that had been
> small.
Well, presumably I should be aware if my data distribution is changing
drastically. That's under my control. At least the performance change will be
proportionate to the distribution change.
With plans changing on the fly, I could have a query that degrades 1% for
every row added and then suddenly becomes 10x slower when I add the 17th extra
row. Of course, in such a case the system isn't perfectly tuned, or there's an
optimizer issue that should be found and fixed. But I would rather find out
about it without having my application fail.
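To illustrate the kind of cliff I mean, here's a hypothetical case (all names
and numbers invented) where the very same query flips plans once the
statistics cross the planner's cost crossover point:

    CREATE TABLE t (k int, data text);
    CREATE INDEX t_k_idx ON t (k);
    INSERT INTO t SELECT 0, 'filler' FROM generate_series(1, 100000);
    INSERT INTO t SELECT 1, 'target' FROM generate_series(1, 10);
    ANALYZE t;
    -- k = 1 looks rare, so this is likely an index scan:
    EXPLAIN SELECT * FROM t WHERE k = 1;

    -- Grow the k = 1 population. After the next ANALYZE the estimated
    -- selectivity jumps, and the same query can abruptly switch to a
    -- sequential scan:
    INSERT INTO t SELECT 1, 'target' FROM generate_series(1, 50000);
    ANALYZE t;
    EXPLAIN SELECT * FROM t WHERE k = 1;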
--
greg