From: Jennifer Trey <jennifer(dot)trey(at)gmail(dot)com>
To: "Massa, Harald Armin" <chef(at)ghum(dot)de>, Bill Moran <wmoran(at)potentialtech(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Now I am back, next thing. Final PGS tuning.
Date: 2009-04-08 16:05:26
Message-ID: 863606ec0904080905l1e1aceebm88530689713297bd@mail.gmail.com
Lists: pgsql-general
max_connections = 150 # A compromise :)
effective_cache_size = 2048MB # Old value 439MB --> Even older: 128MB
# Is this too high?
maintenance_work_mem = 96MB # Old 16MB. Would 64MB be better? Updates,
# and therefore re-indexing of tuples, happen quite frequently.
work_mem = 3MB
# Old was 1MB!? That is too low.
# Scott, you mentioned an example with 1 GB. I guess this is the working
# memory per user query for sorts, joins and so on. I will be doing those
# things quite often.
# After all, if I understand the concept correctly, it will only use it
# if it needs to; otherwise performance would take a hit.
# Scott, you say that I might need to change this later on when I have
# several gigs of data. But will it hurt if I don't?
# I think 4-8MB should be enough and relatively safe to start with. I am
# scared of going higher, but 1MB is low. (See the sketch after these
# settings for checking whether a given work_mem is enough.)
shared_buffers = 1024MB # Kept it
random_page_cost = 3 # I have pretty fast disks.
wal_buffers = 1024kB # (PostgreSQL spells the unit kB)
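
As a side note on work_mem: here is a minimal sketch of how one can check whether a given value is enough for a particular sort. EXPLAIN ANALYZE reports whether the sort stayed in memory or spilled to disk. The table big_table, its column and the row count are made up purely for illustration:

-- Hypothetical test table with enough rows to make the sort interesting.
CREATE TABLE big_table (id serial PRIMARY KEY, val integer);
INSERT INTO big_table (val)
SELECT (random() * 1000000)::integer FROM generate_series(1, 500000);
ANALYZE big_table;

-- With a small work_mem, the sort spills to disk:
SET work_mem = '1MB';
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY val;
-- look for: Sort Method: external merge  Disk: ...kB

-- With more memory, the same sort stays in RAM:
SET work_mem = '32MB';
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY val;
-- look for: Sort Method: quicksort  Memory: ...kB

SET work_mem only affects the current session, so this is safe to try without touching postgresql.conf.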
Scott, you mentioned:

> You can also use the pg_stat_all_indexes table to look at index scans
> vs. tuples being read, this can sometimes hint at index 'bloat'. I
> would also recommend pg_stattuple which has a pg_statindex function
> for looking at index fragmentation.
Where can I see these stats? Is there any graphical tool?
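
For reference, a minimal sketch of pulling those numbers from psql. The pg_stat_all_indexes view is always there; pgstatindex() comes from the pgstattuple contrib module, which has to be installed first, and my_index below is just a placeholder name:

-- Index scans vs. tuples read, from the statistics views:
SELECT schemaname, relname, indexrelname,
       idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_all_indexes
WHERE schemaname NOT IN ('pg_catalog', 'pg_toast')
ORDER BY idx_scan DESC;

-- Index fragmentation, via the pgstattuple contrib module
-- (must be installed first); 'my_index' is a placeholder:
SELECT * FROM pgstatindex('public.my_index');
-- avg_leaf_density and leaf_fragmentation hint at index bloat.

I believe pgAdmin's Statistics tab also displays the collector counters per object, if a graphical view is preferred.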
Thanks all / Jennifer