From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Adi Alurkar <adi(at)sf(dot)net>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Dump/Restore performance improvement
Date: 2004-09-05 17:07:43
Message-ID: 29496.1094404063@sss.pgh.pa.us
Lists: pgsql-performance
Adi Alurkar <adi(at)sf(dot)net> writes:
> 1) Add a new config parameter, e.g. work_maintenance_max_mem; this will be
> the max memory postgresql *can* claim if need be.
> 2) During the dump phase of the DB, postgresql estimates the
> "maintenance_work_mem" that would be required to create the index in
> memory (if possible) and adds a
> SET maintenance_work_mem="the calculated value" (IF this value is less
> than work_maintenance_max_mem.)
This seems fairly pointless to me. How is this different from just
setting maintenance_work_mem as large as you can stand before importing
the dump?
Making any decisions at dump time seems wrong to me in the first place;
pg_dump should not be expected to know what conditions the restore will
be run under. I'm not sure that's what you're proposing, but I don't
see what the point is in practice. It's already the case that
maintenance_work_mem is treated as the maximum memory you can use,
rather than what you will use even if you don't need it all.
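[Archive editor's note: the practical upshot of the above is simply to raise maintenance_work_mem in the session doing the restore. A minimal sketch, assuming a plain-SQL dump; the database name, file name, and the 1GB figure are illustrative, not from the thread:]

```shell
# Pass a per-session setting through libpq's PGOPTIONS so only the restore
# session gets the larger value; the server default is left untouched.
# "mydb" and "mydb.dump.sql" are hypothetical names.
PGOPTIONS="-c maintenance_work_mem=1GB" psql -d mydb -f mydb.dump.sql
```

[Equivalently, a `SET maintenance_work_mem = '1GB';` at the top of the restore session has the same effect.]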
regards, tom lane