From: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, Flavio Henrique Araque Gurgel <flavio(at)4linux(dot)com(dot)br>
Subject: Re: work_mem in high transaction rate database
Date: 2009-03-04 09:16:07
Message-ID: 200903041016.09765.dfontaine@hi-media.com
Lists: pgsql-performance
Hi,
On Wednesday 04 March 2009 02:37:42 Scott Marlowe wrote:
> If some oddball query really needs a lot of work_mem,
> and benchmarks show something larger work_mem helps, consider raising
> the work_mem setting for that one query to something under 1G (way
> under 1G) That makes it noticeably faster. Don't allocate more than a
> test shows you helps.
In my experience, the easiest way to integrate this into an existing
application is:
BEGIN;
SET LOCAL work_mem TO '650MB';
SELECT ...; -- the query requiring such a large setting
COMMIT;
Right after the COMMIT, the globally configured work_mem (or whatever the
session had previously set, in fact) will be back in effect; you won't have
to reset it yourself.
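You can verify this for yourself in a psql session; SHOW reports the
setting currently in effect (the value after COMMIT depends on your own
configuration, of course):

BEGIN;
SET LOCAL work_mem TO '650MB';
SHOW work_mem;   -- reports 650MB inside the transaction
COMMIT;
SHOW work_mem;   -- back to the previous value

Note that SET LOCAL reverts at the end of the transaction either way, so a
ROLLBACK (or an error aborting the transaction) also discards the setting.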
Regards,
--
dim