From: Tomas Vondra <tv(at)fuzzy(dot)cz>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: PATCH: hashjoin - gracefully increasing NTUP_PER_BUCKET instead of batching
Date: 2014-12-12 16:50:56
Message-ID: 548B1CF0.9010808@fuzzy.cz
Lists: pgsql-hackers
On 12.12.2014 14:19, Robert Haas wrote:
> On Thu, Dec 11, 2014 at 5:46 PM, Tomas Vondra <tv(at)fuzzy(dot)cz> wrote:
>
>> Regarding the "sufficiently small" - considering today's hardware, we're
>> probably talking about gigabytes. On machines with significant memory
>> pressure (forcing the temporary files to disk), it might be much lower,
>> of course. Of course, it also depends on kernel settings (e.g.
>> dirty_bytes/dirty_background_bytes).
>
> Well, this is sort of one of the problems with work_mem. When we
> switch to a tape sort, or a tape-based materialize, we're probably far
> from out of memory. But trying to set work_mem to the amount of
> memory we have can easily result in a memory overrun if a load spike
> causes lots of people to do it all at the same time. So we have to
> set work_mem conservatively, but then the costing doesn't really come
> out right. We could add some more costing parameters to try to model
> this, but it's not obvious how to get it right.
Ummm, I don't think that's what I proposed. What I had in mind was a
flag meaning "the batches are likely to stay in the page cache". When
that's likely, batching is probably faster than tolerating an
increased load factor.
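
To make that concrete, here's a minimal sketch of the kind of decision
I have in mind. All of the names (batches_likely_cached,
effective_page_cache_size, and so on) are hypothetical, nothing here is
existing PostgreSQL code, and the actual estimate would presumably be
derived from kernel settings like dirty_bytes/dirty_background_bytes:

#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical: would the batch temp files likely stay in the OS page
 * cache?  The batch files together hold roughly the whole inner
 * relation, so compare that against our page-cache estimate.
 */
static bool
batches_likely_cached(uint64_t inner_rel_bytes,
                      uint64_t effective_page_cache_size)
{
    return inner_rel_bytes < effective_page_cache_size;
}

/*
 * Hypothetical: pick the load factor for the hash table.  If batching
 * is cheap (temp files cached), keep the compile-time default.  If the
 * batches would be forced to disk, accept a denser table instead of
 * adding another batch.
 */
static int
choose_ntup_per_bucket(uint64_t inner_rel_bytes,
                       uint64_t effective_page_cache_size,
                       int default_ntup)   /* compile-time NTUP_PER_BUCKET */
{
    if (batches_likely_cached(inner_rel_bytes, effective_page_cache_size))
        return default_ntup;       /* batching is cheap: keep buckets sparse */
    else
        return 2 * default_ntup;   /* spill is expensive: denser buckets win */
}

The point is only the shape of the heuristic, not the numbers: the flag
turns "how expensive is another batch" into a binary signal the costing
can use.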
Tomas