From: Kasia Tuszynska <ktuszynska(at)esri(dot)com>
To: "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>
Subject: setting pg memory consumption on windows
Date: 2009-09-14 20:15:40
Message-ID: 232B5217AD58584C87019E8933556D11014695D992@redmx2.esri.com
Lists: pgsql-admin
Hello Postgres Admins,
I am having a bit of trouble setting Postgres memory parameters on a Windows machine. I worked out the parameters based on the Postgres documentation as well as the "whack a mole" slide deck, but the postgres process will not start. The event log keeps indicating the following reason for the postgres process failure: FATAL: could not create shared memory segment
I cannot find any documentation specific to Windows...
Environment:
Windows Server 2008, 64-bit
20 GB of RAM (I am assuming 2 GB for the OS and 3 GB for other applications running on the machine, which leaves 15 GB of memory I could give to Postgres)
I would like to give Postgres as much memory as possible, as this cluster is hosting very large data; by large I mean that it took over 20 hours to load the data, and Postgres complained about running out of max_fsm_pages and checkpoint_segments when trying to create a primary key.
I found that to figure out the value for the max_fsm_pages parameter I should run vacuumdb -v, which came back with a value of 599,584 as opposed to the default of 204,800.
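For reference, this is roughly what I ran; the database name and user below are placeholders, and the output wording is paraphrased from memory rather than copied exactly:

    vacuumdb -v -U postgres mydb
    ...
    INFO:  free space map contains ... pages in ... relations
    DETAIL:  A total of ... page slots are in use (including overhead).
    599584 page slots are required to track all free space.
    Current limits are:  204800 page slots, ...

The "page slots are required" figure near the end of the verbose output is what I used as the basis for max_fsm_pages.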
I set the parameters with the following values, which failed (a postgresql.conf sketch of these settings follows the list):
effective_cache_size - 8 GB - rec: 50%-66% of total RAM
shared_buffers - 4 GB - rec: 25%-33% of total RAM
work_mem - 128 MB - rec: from the whack a mole presentation
maintenance_work_mem - 256 MB - rec: from the whack a mole presentation
checkpoint_segments - 256 MB - rec: from the whack a mole presentation
max_fsm_pages - 600,000 - rec: from running the vacuumdb command, which I rounded up
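In postgresql.conf terms, this is roughly the failing configuration (a sketch only; I am assuming a server version that accepts memory units such as MB/GB, and since checkpoint_segments actually takes a count of 16 MB WAL segments rather than a size, I show the count equivalent to 256 MB):

    # memory-related settings that fail with "could not create shared memory segment"
    shared_buffers = 4GB              # shared memory, allocated at server start
    effective_cache_size = 8GB        # planner hint only; no memory is actually allocated
    work_mem = 128MB                  # per sort/hash operation
    maintenance_work_mem = 256MB      # used by VACUUM, CREATE INDEX, etc.
    checkpoint_segments = 16          # 16 segments x 16 MB of WAL = 256 MB
    max_fsm_pages = 600000            # rounded up from the vacuumdb -v output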
After that failed, I tried setting shared_buffers and effective_cache_size to 2 GB each, remembering that on a 32-bit Windows system a process can only use a 2 GB address space. That was a shot in the dark, though: this is a 64-bit machine, so the 32-bit limitation should not apply, and it also assumes that each of these parameters is used by a different background process, which may be erroneous.
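One way to check that assumption is to look at what the server itself reports about its build; this is only a sketch, and I am not certain every Windows build spells out the architecture, but the compiler/platform portion of the version string usually gives it away:

    SELECT version();
    -- e.g. "PostgreSQL 8.x.y, compiled by Visual C++ build ..." on Windows builds;
    -- the compiler/platform part indicates whether the server binary itself is 32-bit or 64-bit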
Does anyone have any recommendations or corrections?
Thank you,
Kasia