Re: Performance tuning on RedHat Enterprise Linux 3

From: Lincoln Yeoh <lyeoh(at)pop(dot)jaring(dot)my>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Neil Conway <neilc(at)samurai(dot)com>
Cc: Alvaro Herrera <alvherre(at)dcc(dot)uchile(dot)cl>, Paul Tillotson <pntil(at)shentel(dot)net>, David Esposito <pgsql-general(at)esposito(dot)newnetco(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Performance tuning on RedHat Enterprise Linux 3
Date: 2004-12-07 22:00:28
Message-ID: 5.2.1.1.1.20041207201144.02e14fa0@localhost
Lists: pgsql-general


But isn't the problem the planner screwing up, rather than the sort_mem setting?

There was my case where the 7.4 planner estimated 1500 distinct rows when
there were actually 1391110. On 7.3.4 the query used about 4.4MB, whereas on
7.4 it definitely used more than 400MB before I had to kill postgresql - I
didn't wait for it to use more. That's a lot more than 200%. Maybe 3x
sort_mem is too low a limit, but at least by default keep it below server
RAM/number of backends or something like that.
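
For the record, this is roughly how the mismatch shows up; the table and
column names below are just placeholders, not my actual schema:

EXPLAIN ANALYZE
SELECT some_col, count(*)
FROM some_table
GROUP BY some_col;

-- On 7.4 the HashAggregate line shows the planner's group estimate next to
-- the actual count, e.g. rows=1500 estimated vs rows=1391110 actual.
-- The stored statistic the estimate comes from can be checked with:
SELECT n_distinct
FROM pg_stats
WHERE tablename = 'some_table' AND attname = 'some_col';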

Even if the planner has improved a lot, if cases like that still occur from
time to time it'll be a lot better for stability/availability if there's a
limit.
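
In the meantime the only per-query workarounds I know of are along these
lines (the values are illustrative, not recommendations):

-- fall back to GroupAggregate instead of HashAggregate for the problem query
SET enable_hashagg = off;

-- sort_mem is in kB, per sort/hash step, per backend, so something like
-- RAM / number of backends is the ceiling to keep in mind when raising it
SET sort_mem = 4096;

-- or try to get a better distinct-rows estimate in the first place
ALTER TABLE some_table ALTER COLUMN some_col SET STATISTICS 100;
ANALYZE some_table;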

I doubt I still have the same data to test on 8.0.

Link.

At 12:35 AM 12/7/2004 -0500, Tom Lane wrote:
>Neil Conway <neilc(at)samurai(dot)com> writes:
> > As a quick hack, what about throwing away the constructed hash table and
> > switching to hashing for sorting if we exceed sort_mem by a significant
> > factor? (say, 200%) We might also want to print a warning message to the
> > logs.
>
>If I thought that a 200% error in memory usage were cause for a Chinese
>fire drill, then I'd say "yeah, let's do that". The problem is that the
>place where performance actually goes into the toilet is normally an
>order of magnitude or two above the nominal sort_mem setting (for
>obvious reasons: admins can't afford to push the envelope on sort_mem
>because of the various unpredictable multiples that may apply). So
>switching to a hugely more expensive implementation as soon as we exceed
>some arbitrary limit is likely to be a net loss not a win.
>
>If you can think of a spill methodology that has a gentle degradation
>curve, then I'm all for that. But I doubt there are any quick-hack
>improvements to be had here.
>
> regards, tom lane

