| From: | Tom Lane <tgl@sss.pgh.pa.us> |
|---|---|
| To: | Jeff Janes <jeff.janes@gmail.com> |
| Cc: | pgsql-hackers <pgsql-hackers@postgresql.org> |
| Subject: | Re: execGrouping.c limit on work_mem |
| Date: | 2017-05-28 17:49:08 |
| Message-ID: | 21217.1495993748@sss.pgh.pa.us |
| Lists: | pgsql-hackers |
Jeff Janes <jeff.janes@gmail.com> writes:
> In BuildTupleHashTable
> /* Limit initial table size request to not more than work_mem */
> nbuckets = Min(nbuckets, (long) ((work_mem * 1024L) / entrysize));
> Is this a good idea? If the caller of this code has no respect for
> work_mem, they are still going to blow it out of the water. Now we will
> just do a bunch of hash-table splitting in the process. That is only going
> to add to the pain.
It looks perfectly reasonable to me. The point, I think, is that the caller
doesn't have to be very careful about calculating its initial request
size.
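
For concreteness, here is a minimal standalone sketch of that clamping
arithmetic (the work_mem, entrysize, and nbuckets values below are made
up for illustration; in the real code they come from the GUC setting
and the caller's estimate):

#include <stdio.h>

#define Min(x, y) ((x) < (y) ? (x) : (y))

int
main(void)
{
	long		work_mem = 4096;		/* kB, like the GUC's unit */
	long		entrysize = 64;			/* hypothetical bytes per hash entry */
	long		nbuckets = 10000000;	/* caller's (over)estimate */

	/*
	 * Clamp the initial request so the first allocation alone cannot
	 * exceed work_mem; the hash table can still grow later if needed.
	 */
	nbuckets = Min(nbuckets, (long) ((work_mem * 1024L) / entrysize));

	printf("initial nbuckets = %ld\n", nbuckets);	/* prints 65536 */
	return 0;
}

With a 4 MB work_mem and 64-byte entries, even a ten-million-bucket
request is trimmed to 65536 buckets up front, so an over-optimistic
caller costs only some later table splits rather than a huge initial
allocation.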
regards, tom lane