From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: b8flowerfire <b8flowerfire(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: why postgresql define NTUP_PER_BUCKET as 10, not other numbers smaller
Date: 2014-06-10 13:53:17
Message-ID: CA+TgmobvSZ92m3SVZwidNXAAgjVp=FAOJhFRdJt_e=T_DQ_JjQ@mail.gmail.com
Lists: pgsql-hackers
On Mon, Jun 9, 2014 at 11:09 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Keep in mind that that standard advice is meant for all-in-memory cases,
> not for cases where the alternative to running with longer hash chains
> is dumping tuples out to disk and reading them back.
Sure, but that doesn't help someone who sets work_mem to some very
large value precisely to ensure that the hash join will be done in
memory. They still don't get the benefit of a smaller NTUP_PER_BUCKET
setting.
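To spell the arithmetic out: the bucket count is sized off the estimated tuple count divided by NTUP_PER_BUCKET, so the expected chain length stays pinned at about NTUP_PER_BUCKET no matter how much work_mem you give the join. The sketch below is just that proportionality written out with a made-up 10M-tuple estimate; it is not the real ExecChooseHashTableSize() logic.

```c
/*
 * Simplified model of hash-join bucket sizing -- not the actual
 * ExecChooseHashTableSize() code, just the proportionality that matters
 * here: buckets scale with ntuples / NTUP_PER_BUCKET, so the expected
 * chain length is ~NTUP_PER_BUCKET regardless of work_mem.
 */
#include <math.h>
#include <stdio.h>

#define NTUP_PER_BUCKET 10		/* the value under discussion */

int
main(void)
{
	double		ntuples = 10 * 1000 * 1000.0;	/* invented inner-side estimate */
	int			nbuckets = (int) ceil(ntuples / NTUP_PER_BUCKET);

	printf("buckets: %d, expected tuples per bucket: %.1f\n",
		   nbuckets, ntuples / nbuckets);
	return 0;
}
```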
> I'm quite prepared to believe that we should change NTUP_PER_BUCKET ...
> but appealing to standard advice isn't a good basis for arguing that.
> Actual performance measurements (in both batched and unbatched cases)
> would be a suitable basis for proposing a change.
Well, it's all in what scenario you test, right? If you test the case
where something overflows work_mem as a result of the increased size
of the bucket array, it's always going to suck. And if you test the
case where that doesn't happen, it's likely to win. I think Stephen
Frost has already done quite a bit of testing in this area, on
previous threads. But there's no one-size-fits-all solution.
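For a concrete (made-up) number: the buckets are head-of-chain pointers, and that array has to fit in the same memory budget as the hashed tuples, so shrinking NTUP_PER_BUCKET inflates it proportionally. A rough sketch, again using an invented 10M-tuple inner side:

```c
/*
 * Back-of-the-envelope size of the bucket array alone.  This is the extra
 * memory a smaller NTUP_PER_BUCKET asks for, and it competes with the
 * tuples themselves for the same budget.  The 10M-tuple figure is made up
 * for illustration.
 */
#include <stdio.h>

static void
report(double ntuples, int ntup_per_bucket)
{
	long		nbuckets = (long) (ntuples / ntup_per_bucket);
	double		array_mb = nbuckets * sizeof(void *) / (1024.0 * 1024.0);

	printf("NTUP_PER_BUCKET=%-2d -> %ld buckets, ~%.0f MB of bucket array\n",
		   ntup_per_bucket, nbuckets, array_mb);
}

int
main(void)
{
	report(10 * 1000 * 1000.0, 10);		/* ~1M buckets,  ~8 MB on 64-bit */
	report(10 * 1000 * 1000.0, 1);		/* ~10M buckets, ~76 MB on 64-bit */
	return 0;
}
```

So at NTUP_PER_BUCKET = 1 you're paying roughly ten times the bucket-array overhead, and if that pushes the total past work_mem the join starts batching, which is exactly the case that's always going to suck.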
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company