From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Devrim GÜNDÜZ <devrim(at)gunduz(dot)org>
Cc: Peter Eisentraut <peter_e(at)gmx(dot)net>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Frederik Ramm <frederik(at)remote(dot)org>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: using a lot of maintenance_work_mem
Date: 2011-02-20 14:32:02
Message-ID: 201102201432.p1KEW2s24368@momjian.us
Lists: pgsql-hackers
Devrim GÜNDÜZ wrote:
> On Wed, 2011-02-16 at 23:24 +0200, Peter Eisentraut wrote:
> >
> > > But before expending time on that, I'd want to see some evidence
> > > that it's actually helpful for production situations. I'm a bit
> > > dubious that you're going to gain much here.
> >
> > If you want to build an index on a 500GB table and you have 1TB RAM,
> > then being able to use >>1GB maintenance_work_mem can only be good,
> > no?
>
> That would also probably speed up Slony (or similar) replication engines
> in the initial replication phase. I know that I had to wait a long time
> while creating big indexes on a machine which had enough RAM.
Well, I figure it will be hard to allow larger maximums, but can we make
the GUC variable maximums more realistic?  Right now the maximum is
MAX_KILOBYTES (INT_MAX).
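For illustration (not part of the original mail), the use case being
discussed is a session-level override before a large index build; the
table name, index name, and value below are hypothetical, and whether
the SET is accepted depends on the GUC's compiled-in maximum:

```sql
-- Hypothetical sketch: raise maintenance_work_mem for this session only
-- so the sort during index creation gets more memory, then build the
-- index and restore the default.
SET maintenance_work_mem = '16GB';

CREATE INDEX idx_big_events_created_at ON big_events (created_at);

RESET maintenance_work_mem;
```

SET only affects the current session, so other backends keep the
server-wide default while the index build runs.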
--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ It's impossible for everything to be true. +