| From: | Robert Schnabel <schnabelr(at)missouri(dot)edu> |
|---|---|
| To: | Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> |
| Cc: | pgsql-performance <pgsql-performance(at)postgresql(dot)org> |
| Subject: | Re: Allow sorts to use more available memory |
| Date: | 2011-09-12 22:09:18 |
| Message-ID: | 4E6E830E.30701@missouri.edu |
| Lists: | pgsql-performance |
On 9/12/2011 3:58 PM, Scott Marlowe wrote:
> On Mon, Sep 12, 2011 at 11:33 AM, Robert Schnabel
> <schnabelr(at)missouri(dot)edu> wrote:
>> The recent "data warehouse" thread made me think about how I use work_mem
>> for some of my big queries. So I tried SET work_mem = '4GB' for a session
>> and got
>>
>> ERROR: 4194304 is outside the valid range for parameter "work_mem" (64 ..
>> 2097151)
> Ubuntu 10.10, pgsql 8.4.8:
>
> smarlowe=# set work_mem='1000GB';
> SET

OK, so is this a limitation related to the Windows implementation?

And getting back to the to-do list entry and reading the related posts,
it appears that even if you could set work_mem that high, a sort would
only use 2GB anyway. I guess that was the second part of my question.
Is that true?
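
For reference, the range that error complains about can be read straight out
of pg_settings; here's a minimal check (nothing assumed beyond a psql session
on the server itself; values are in kB, so the max_val of 2097151 I see here
is the ~2GB ceiling):

```sql
-- Show the configured value and the allowed range for work_mem.
-- Units are kB: on my Windows build min_val = 64, max_val = 2097151 (~2 GB).
SELECT name, setting, unit, min_val, max_val
FROM pg_settings
WHERE name = 'work_mem';
```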