From: Tomas Vondra <tv(at)fuzzy(dot)cz>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance
Date: 2011-04-12 21:09:54
Message-ID: 4DA4BFA2.5060601@fuzzy.cz
Lists: pgsql-performance
On 12.4.2011 20:28, Ogden wrote:
>
> On Apr 12, 2011, at 1:16 PM, Tomas Vondra wrote:
>
>> On 12.4.2011 19:23, Ogden wrote:
>>>
>>> On Apr 12, 2011, at 12:18 PM, Andreas Kretschmer wrote:
>>>
>>>> Ogden <lists(at)darkstatic(dot)com> wrote:
>>>>
>>>>> I have been wrestling with the configuration of the dedicated PostgreSQL 9.0.3
>>>>> server at work and, granted, there's more activity on the production server, but
>>>>> the same queries take twice as long on the beefier server as on my Mac at home.
>>>>> I have pasted what I have changed in postgresql.conf - I am wondering if
>>>>> anyone can help me change things around to be more efficient.
>>>>>
>>>>> Dedicated PostgreSQL 9.0.3 Server with 16GB Ram
>>>>>
>>>>> Heavy write and read (for reporting and calculations) server.
>>>>>
>>>>> max_connections = 350
>>>>> shared_buffers = 4096MB
>>>>> work_mem = 32MB
>>>>> maintenance_work_mem = 512MB
>>>>
>>>> That's okay.
>>>>
>>>>
>>>>>
>>>>>
>>>>> seq_page_cost = 0.02 # measured on an arbitrary scale
>>>>> random_page_cost = 0.03
>>>>
>>>> Do you have super, Super, SUPER fast disks? I think these settings
>>>> (seq_page_cost and random_page_cost) are completely wrong.
>>>>
>>>
>>> No, I don't have super fast disks, just 15K SCSI disks in RAID. I
>>> find that by raising them to:
>>>
>>> seq_page_cost = 1.0
>>> random_page_cost = 3.0
>>> cpu_tuple_cost = 0.3
>>> #cpu_index_tuple_cost = 0.005 # same scale as above - 0.005
>>> #cpu_operator_cost = 0.0025 # same scale as above
>>> effective_cache_size = 8192MB
>>>
>>> some queries run much faster. Is this better?
>>
>> I guess it is. What really matters with those cost variables is the
>> relative scale - the original values
>>
>> seq_page_cost = 0.02
>> random_page_cost = 0.03
>> cpu_tuple_cost = 0.02
>>
>> suggest that random reads are almost as expensive as sequential
>> reads (which is usually not true - random reads are significantly
>> more expensive), and that processing each row is about as expensive as
>> reading a page from disk (again, reading data from disk is much more
>> expensive than processing it).
>>
>> So yes, the current values are much more likely to give good results.
>>
>> You've mentioned those values were recommended on this list - can you
>> point out the actual discussion?
>>
>>
>
> Thank you for your reply.
>
> http://archives.postgresql.org/pgsql-performance/2010-09/msg00169.php is how I first played with those values...
>
OK, what JD said there generally makes sense, although those values are
a bit extreme - in most cases it's recommended to leave seq_page_cost=1
and decrease random_page_cost (to 2; the default value is 4). That
usually pushes the planner towards index scans.
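A simple way to see the difference (using a hypothetical table "orders" -
substitute one of your own) is to flip the value in a session and compare
the plans:

  SET random_page_cost = 4;  -- the default
  EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

  SET random_page_cost = 2;  -- cheaper random I/O
  EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

If the second plan switches to an index scan and actually runs faster,
the lower value matches your hardware better.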
I'm not saying those small values (0.02 etc.) are bad, but I guess the
effect is about the same, except that they change the relative impact of
the other cost variables (cpu_tuple_cost etc.) if those are left at
their defaults.
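To illustrate - assuming the cpu_* costs were left at the defaults, then
with

  seq_page_cost = 0.02, cpu_tuple_cost = 0.01

reading a page costs the same as processing just 2 rows, while with

  seq_page_cost = 1.0, cpu_tuple_cost = 0.01

a page read costs as much as processing 100 rows. Both sets of values may
look sane on their own, but the planner can pick very different plans
with them.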
I see there is 16GB of RAM but shared_buffers is just 4GB. So there's
nothing else running and the rest of the RAM is used for the page cache?
I've noticed the previous discussion mentions there are 8GB of RAM and
the DB size is 7GB (so it might fit into memory). Is this still the case?
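You can check the current size easily from psql:

  SELECT pg_size_pretty(pg_database_size(current_database()));

If the database still fits into RAM, telling the planner that most reads
will be served from the cache (e.g. via a high effective_cache_size) is
reasonable.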
regards
Tomas