Re: Do work_mem and shared buffers have 1g or 2g limit on 64 bit linux?

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Do work_mem and shared buffers have 1g or 2g limit on 64 bit linux?
Date: 2015-06-15 14:57:53
Message-ID: 557EE7F1.9000901@2ndquadrant.com
Lists: pgsql-performance

On 06/15/15 05:44, Kaijiang Chen wrote:
> I've checked the source codes in postgresql 9.2.4. In function
> static bool
> grow_memtuples(Tuplesortstate *state)
>
> the codes:
> /*
> * On a 64-bit machine, allowedMem could be high enough to get us into
> * trouble with MaxAllocSize, too.
> */
> if ((Size) (state->memtupsize * 2) >= MaxAllocSize / sizeof(SortTuple))
> return false;
>
> Note that MaxAllocSize == 1GB - 1
> that means, at least for sorting, it uses at most 1GB work_mem! And
> setting larger work_mem has no use at all...

That's not true. This check only limits the size of the 'memtuples' array,
which stores just a pointer to each tuple plus a small amount of
additional data. The tuples themselves are not counted against
MaxAllocSize directly. A SortTuple structure is ~24 bytes, so the array
can track roughly 33M tuples, while the tuples themselves may take a lot
more space.

regards

--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
