From: | "Luke Lonergan" <llonergan(at)greenplum(dot)com> |
---|---|
To: | "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Martijn van Oosterhout" <kleptog(at)svana(dot)org> |
Cc: | "Simon Riggs" <simon(at)2ndquadrant(dot)com>, "Qingqing Zhou" <zhouqq(at)cs(dot)toronto(dot)edu>, pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: Automatically setting work_mem |
Date: | 2006-03-21 23:00:08 |
Message-ID: | C045C578.1FB52%llonergan@greenplum.com |
Lists: pgsql-hackers pgsql-patches
Tom,
On 3/21/06 2:47 PM, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> I'm fairly unconvinced about Simon's underlying premise --- that we
> can't make good use of work_mem in sorting after the run building phase
> --- anyway. If we cut back our memory usage then we'll be forcing a
> significantly more-random access pattern to the temp file(s) during
> merging, because we won't be able to pre-read as much at a time.
I thought we let the OS do that ;-)
Seriously, I've suggested an experiment to evaluate the effectiveness of
internal buffering with ridiculously low amounts of RAM (work_mem) compared
to bypassing it entirely and preferring the buffer cache and OS I/O cache.
I suspect the work_mem caching of merge results, while algorithmically
appropriate, may not work effectively with the tiny amount of RAM allocated
to it, and could be better left to the OS because of its liberal use of
read-ahead and disk caching.
The experiment should take but a minute to validate or disprove the hypothesis.
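For concreteness, here is a rough sketch of one way such a comparison could be
set up (plain C, not taken from the PostgreSQL sources; the run file names,
the int64-key tuple format, and the constants are all made up for
illustration).  It merges NRUNS pre-sorted run files while pre-reading
BUF_TUPLES keys per run: BUF_TUPLES of 1 approximates "no internal buffering,
lean on the OS page cache and read-ahead", while a large value approximates
the work_mem-style pre-read.

/*
 * Rough experiment sketch, not PostgreSQL code: merge NRUNS pre-sorted run
 * files of int64 keys, pre-reading BUF_TUPLES keys per run at a time.
 * File names, tuple format, and constants are hypothetical.
 */
#include <stdio.h>
#include <stdint.h>

#define NRUNS      8        /* number of sorted run files                 */
#define BUF_TUPLES 1        /* 1 = trust the OS cache; also try e.g. 4096 */

typedef struct
{
    FILE    *fp;
    int64_t  buf[BUF_TUPLES];
    size_t   nbuf;          /* keys currently buffered       */
    size_t   pos;           /* next buffered key to hand out */
} RunReader;

/* Refill a run's buffer; returns 0 once the run is exhausted. */
static int
refill(RunReader *r)
{
    r->nbuf = fread(r->buf, sizeof(int64_t), BUF_TUPLES, r->fp);
    r->pos = 0;
    return r->nbuf > 0;
}

/* Fetch the next key from a run, or return 0 if the run is finished. */
static int
next_key(RunReader *r, int64_t *key)
{
    if (r->pos >= r->nbuf && !refill(r))
        return 0;
    *key = r->buf[r->pos++];
    return 1;
}

int
main(void)
{
    RunReader runs[NRUNS];
    int64_t   heads[NRUNS];
    int       live[NRUNS];
    long long merged = 0;

    for (int i = 0; i < NRUNS; i++)
    {
        char name[64];

        snprintf(name, sizeof(name), "run%d.dat", i);  /* hypothetical runs */
        runs[i] = (RunReader) { .fp = fopen(name, "rb") };
        if (runs[i].fp == NULL)
        {
            perror(name);
            return 1;
        }
        live[i] = next_key(&runs[i], &heads[i]);
    }

    /* Naive K-way merge: linear scan for the smallest head each time. */
    for (;;)
    {
        int min = -1;

        for (int i = 0; i < NRUNS; i++)
            if (live[i] && (min < 0 || heads[i] < heads[min]))
                min = i;
        if (min < 0)
            break;
        merged++;                       /* "emit" heads[min] here */
        live[min] = next_key(&runs[min], &heads[min]);
    }

    printf("merged %lld keys\n", merged);
    return 0;
}

Build it once with BUF_TUPLES 1 and once with a large value, point both at the
same set of run files, and compare wall-clock time and read throughput: if the
read-ahead-only version is not measurably slower, that would support the
hypothesis.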
- Luke