From: Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp>
To: tgl(at)sss(dot)pgh(dot)pa(dot)us
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: optimizer cost calculation problem
Date: 2003-04-01 00:13:45
Message-ID: 20030401.091345.85415532.t-ishii@sra.co.jp
Lists: pgsql-hackers
> Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp> writes:
> > Kenji Sugita has identified a problem with cost_sort() in costsize.c.
> > In the following code fragment, sortmembytes is defined as long. So
> > double nruns = nbytes / (sortmembytes * 2);
> > may cause an integer overflow if sortmembytes exceeds 2^30, which in
> > turn makes the optimizer produce a wrong query plan (this actually
> > happened in a large PostgreSQL installation that has tons of memory).
>
> I find it really really hard to believe that it's wise to run with
> sort_mem exceeding 2 gig ;-). Does that installation have so much
> RAM that it can afford to run multiple many-Gb sorts concurrently?
The process is assigned 1 gig of sort_mem to speed up a batch job by
using a backend-process-only sort_mem setting; postgresql.conf is not
modified for ordinary users.
BTW, it is not 2 gig but 1 gig that triggers the overflow (remember
that we do sortmembytes * 2).
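
To make the failure mode concrete, here is a minimal standalone
sketch (not the actual costsize.c code) of the expression in
question. The values of sort_mem and nbytes are made up, and the
overflow only shows up where long is 32 bits; evaluating the
multiplication in double arithmetic is one possible way around it:

#include <stdio.h>

int
main(void)
{
	/* hypothetical values: sort_mem = 1 gig (in KB), ~10 GB of data to sort */
	int		sort_mem = 1048576;
	long	sortmembytes = sort_mem * 1024L;	/* 2^30, still fits in a 32-bit long */
	double	nbytes = 10.0 * 1024 * 1024 * 1024;

	/*
	 * The problematic expression: sortmembytes * 2 is evaluated in long
	 * arithmetic, so with a 32-bit long 2^31 overflows (typically wrapping
	 * negative) and nruns, hence the sort cost, goes negative.
	 */
	double	nruns_buggy = nbytes / (sortmembytes * 2);

	/* forcing the multiplication into double arithmetic avoids the overflow */
	double	nruns_fixed = nbytes / (2.0 * sortmembytes);

	printf("buggy nruns = %g\n", nruns_buggy);
	printf("fixed nruns = %g\n", nruns_fixed);
	return 0;
}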
> This is far from being the only place that multiplies SortMem by 1024.
> My inclination is that a safer fix is to alter guc.c's entry for
> SortMem to establish a maximum value of INT_MAX/1024 for the variable.
>
> Probably some of the other GUC variables like shared_buffers ought to
> have overflow-related maxima established, too.
>
> regards, tom lane
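
For what it's worth, here is a small sketch of the bound being
proposed. This is not the real guc.c entry for SortMem; the names
below are invented just to illustrate why a maximum of INT_MAX / 1024
(in KB) guarantees that SortMem * 1024 stays within int range:

#include <limits.h>
#include <stdio.h>

/* cap sort_mem (in KB) so that sort_mem * 1024 can never exceed INT_MAX */
static int
clamp_sort_mem(int sort_mem_kb)
{
	const int max_sort_mem = INT_MAX / 1024;	/* about 2 gig, expressed in KB */

	return (sort_mem_kb > max_sort_mem) ? max_sort_mem : sort_mem_kb;
}

int
main(void)
{
	int		requested = 4 * 1024 * 1024;	/* user asks for 4 gig of sort_mem */
	int		effective = clamp_sort_mem(requested);

	printf("requested = %d KB, effective = %d KB\n", requested, effective);
	printf("effective * 1024 = %ld bytes\n", (long) effective * 1024L);
	return 0;
}

In guc.c terms this would just mean giving the SortMem variable a
maximum of INT_MAX / 1024 instead of INT_MAX, as suggested above.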