From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Claudio Freire <klaussfreire(at)gmail(dot)com>
Cc: Igor Chudov <ichudov(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Postgres for a "data warehouse", 5-10 TB
Date: 2011-09-12 02:29:26
Message-ID: 20110912022926.GP12765@tamriel.snowman.net
Lists: pgsql-performance
* Claudio Freire (klaussfreire(at)gmail(dot)com) wrote:
> I don't think you'd want that. Remember, work_mem is the amount of
> memory *per sort*.
> Queries can request several times that much memory, once per sort they
> need to perform.
>
> You can set it really high, but not 60% of your RAM - that wouldn't be wise.
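The per-sort point above is visible in a plan: each Sort or Hash node may use up to work_mem on its own, so a single statement can claim several multiples of the setting. A minimal illustration (table and column names are made up for the example):

```sql
-- Hypothetical tables "a" and "b". If the planner picks a hash join plus a
-- final sort, the hash node and the sort node can EACH use up to work_mem,
-- so this one query may consume roughly 2 x work_mem.
EXPLAIN (ANALYZE, BUFFERS)
SELECT a.id, a.x
FROM a
JOIN b USING (id)   -- Hash node: up to work_mem for the hash table
ORDER BY a.x;       -- Sort node: up to another work_mem
```

With several concurrent sessions running such queries, total memory use scales with (nodes per query) x (concurrent queries), which is why a global 60%-of-RAM setting is dangerous.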
Oh, I dunno. It also feeds into the planner's cost estimates, so sometimes you
have to bump it up, especially when PG thinks the number of rows returned from
something will be a lot more than it really will be; in that case the executor
never actually allocates anywhere near the limit. :)
/me has certain queries where it's been set to 100GB... ;)
I agree that it shouldn't be the default, however. That's asking for
trouble. Do it for the specific queries that need it.
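One way to scope it to just the queries that need it is SET LOCAL inside a transaction, so the higher limit disappears at commit or rollback (the query itself is only illustrative):

```sql
BEGIN;
-- Raise work_mem for this transaction only; it reverts automatically
-- at COMMIT/ROLLBACK, leaving the server-wide default untouched.
SET LOCAL work_mem = '1GB';

SELECT customer_id, sum(amount)
FROM orders            -- hypothetical table for the example
GROUP BY customer_id
ORDER BY 2 DESC;
COMMIT;
```

A plain `SET work_mem = ...` would also work but persists for the rest of the session, which is easier to forget about than the transaction-scoped form.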
Thanks,
Stephen