From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Neil Conway <neilc(at)samurai(dot)com>
Cc: Alvaro Herrera <alvherre(at)dcc(dot)uchile(dot)cl>, Paul Tillotson <pntil(at)shentel(dot)net>, David Esposito <pgsql-general(at)esposito(dot)newnetco(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Performance tuning on RedHat Enterprise Linux 3
Date: 2004-12-07 05:35:19
Message-ID: 10681.1102397719@sss.pgh.pa.us
Lists: pgsql-general
Neil Conway <neilc(at)samurai(dot)com> writes:
> As a quick hack, what about throwing away the constructed hash table and
> switching from hashing to sorting if we exceed sort_mem by a significant
> factor? (say, 200%) We might also want to print a warning message to the
> logs.
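A minimal standalone C sketch of that quick hack (not actual PostgreSQL code; every name below is invented for illustration, and "200%" is read here as twice sort_mem): watch the hash table's memory footprint while grouping, and once it crosses the threshold, emit a warning, throw the table away, and fall back to a sort-based strategy.

/*
 * Standalone sketch of the quoted "quick hack" (not PostgreSQL code;
 * AggState, check_hash_memory, etc. are invented names).  Track the
 * memory held by the in-memory hash table while grouping; once it
 * passes the chosen multiple of sort_mem, warn, discard the table,
 * and fall back to a sort-based strategy.
 */
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct AggState
{
    size_t  sort_mem_bytes;     /* configured sort_mem, in bytes */
    size_t  hash_mem_used;      /* memory currently held by the hash table */
    bool    use_hashing;        /* still on the hash strategy? */
} AggState;

/* Called after each batch of new groups is added to the hash table. */
static void
check_hash_memory(AggState *state, double overrun_factor)
{
    if (state->use_hashing &&
        state->hash_mem_used > (size_t) (state->sort_mem_bytes * overrun_factor))
    {
        fprintf(stderr,
                "WARNING: hashed aggregation passed %.0f%% of sort_mem, "
                "switching to sort-based aggregation\n",
                overrun_factor * 100.0);
        /* throw the hash table away and restart with sorting (not shown) */
        state->hash_mem_used = 0;
        state->use_hashing = false;
    }
}

int
main(void)
{
    AggState state = { 1024 * 1024, 0, true };      /* 1 MB sort_mem */

    /* simulate the hash table growing by 64 kB per batch of new groups */
    for (int i = 0; i < 100 && state.use_hashing; i++)
    {
        state.hash_mem_used += 64 * 1024;
        check_hash_memory(&state, 2.0);             /* switch at 200% of sort_mem */
    }

    printf("final strategy: %s\n", state.use_hashing ? "hash" : "sort");
    return 0;
}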
If I thought that a 200% error in memory usage were cause for a Chinese
fire drill, then I'd say "yeah, let's do that". The problem is that the
place where performance actually goes into the toilet is normally an
order of magnitude or two above the nominal sort_mem setting (for
obvious reasons: admins can't afford to push the envelope on sort_mem
because of the various unpredictable multiples that may apply). So
switching to a hugely more expensive implementation as soon as we exceed
some arbitrary limit is likely to be a net loss not a win.
If you can think of a spill methodology that has a gentle degradation
curve, then I'm all for that. But I doubt there are any quick-hack
improvements to be had here.
regards, tom lane
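As one illustration of what a gentler degradation curve could look like (a sketch only, not anything proposed in the message above): hybrid-hash-join-style partition spilling keeps the groups that already fit resident and absorbing input, and only appends overflow tuples to a handful of spill partitions that are aggregated in later passes, so the extra work grows roughly with the amount of overflow rather than jumping at the memory limit. A toy standalone C version, with all names invented, follows.

/*
 * Toy, standalone illustration (not PostgreSQL code; all names invented)
 * of a spill strategy with a gentler curve: hybrid-hash-join-style
 * partitioning.  Groups that fit stay resident and keep absorbing input;
 * only tuples for overflow groups are appended to a few spill partitions
 * (plain temp files here) and aggregated in later passes.
 */
#include <stdio.h>
#include <stdlib.h>

#define MAX_GROUPS_IN_MEM 4     /* stand-in for the sort_mem budget */
#define NUM_PARTITIONS    2

typedef struct { int key; long count; } Group;

static void
aggregate(const int *keys, size_t nkeys, int depth)
{
    Group   groups[MAX_GROUPS_IN_MEM];
    int     ngroups = 0;
    FILE   *part[NUM_PARTITIONS] = {NULL};
    char    name[NUM_PARTITIONS][64];

    for (size_t i = 0; i < nkeys; i++)
    {
        int key = keys[i];
        int g;

        for (g = 0; g < ngroups; g++)           /* look up the group in memory */
            if (groups[g].key == key)
                break;

        if (g < ngroups)
            groups[g].count++;                  /* absorbed in memory */
        else if (ngroups < MAX_GROUPS_IN_MEM)
        {
            groups[ngroups].key = key;          /* still room: start a new group */
            groups[ngroups].count = 1;
            ngroups++;
        }
        else
        {
            /*
             * Table full: spill the tuple.  Use different bits of the
             * "hash" at each pass so the recursion terminates.
             */
            int p = (abs(key) >> depth) % NUM_PARTITIONS;

            if (part[p] == NULL)
            {
                snprintf(name[p], sizeof(name[p]), "spill_d%d_p%d.tmp", depth, p);
                part[p] = fopen(name[p], "wb+");
            }
            fwrite(&key, sizeof(key), 1, part[p]);
        }
    }

    for (int g = 0; g < ngroups; g++)
        printf("group %d: count %ld\n", groups[g].key, groups[g].count);

    /* later pass(es): aggregate each spill partition on its own */
    for (int p = 0; p < NUM_PARTITIONS; p++)
    {
        long    nspilled;
        int    *spilled;
        size_t  nread;

        if (part[p] == NULL)
            continue;
        nspilled = ftell(part[p]) / (long) sizeof(int);
        spilled = malloc((size_t) nspilled * sizeof(int));
        rewind(part[p]);
        nread = fread(spilled, sizeof(int), (size_t) nspilled, part[p]);
        fclose(part[p]);
        remove(name[p]);
        aggregate(spilled, nread, depth + 1);
        free(spilled);
    }
}

int
main(void)
{
    /* more distinct keys than fit in the in-memory table */
    int keys[] = {1, 2, 3, 4, 5, 6, 1, 2, 5, 6, 7, 8, 7, 1};

    aggregate(keys, sizeof(keys) / sizeof(keys[0]), 0);
    return 0;
}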