From: Ogden <lists(at)darkstatic(dot)com>
To: Tomas Vondra <tv(at)fuzzy(dot)cz>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance
Date: 2011-04-12 18:28:14
Message-ID: 8F22D592-23C1-4A3C-94A5-48363332ADD3@darkstatic.com
Lists: pgsql-performance
On Apr 12, 2011, at 1:16 PM, Tomas Vondra wrote:
> On 12.4.2011 19:23, Ogden wrote:
>>
>> On Apr 12, 2011, at 12:18 PM, Andreas Kretschmer wrote:
>>
>>> Ogden <lists(at)darkstatic(dot)com> wrote:
>>>
>>>> I have been wrestling with the configuration of the dedicated Postges 9.0.3
>>>> server at work and granted, there's more activity on the production server, but
>>>> the same queries take twice as long on the beefier server than my mac at home.
>>>> I have pasted what I have changed in postgresql.conf - I am wondering if
>>>> there's any way one can help me change things around to be more efficient.
>>>>
>>>> Dedicated PostgreSQL 9.0.3 Server with 16GB Ram
>>>>
>>>> Heavy write and read (for reporting and calculations) server.
>>>>
>>>> max_connections = 350
>>>> shared_buffers = 4096MB
>>>> work_mem = 32MB
>>>> maintenance_work_mem = 512MB
>>>
>>> That's okay.
>>>
>>>
>>>>
>>>>
>>>> seq_page_cost = 0.02 # measured on an arbitrary scale
>>>> random_page_cost = 0.03
>>>
>>> Do you have super, Super, SUPER fast disks? I think these (seq_page_cost
>>> and random_page_cost) are completely wrong.
>>>
>>
>> No, I don't have super fast disks, just 15K SCSI over RAID. I
>> find that by raising them to:
>>
>> seq_page_cost = 1.0
>> random_page_cost = 3.0
>> cpu_tuple_cost = 0.3
>> #cpu_index_tuple_cost = 0.005 # same scale as above - 0.005
>> #cpu_operator_cost = 0.0025 # same scale as above
>> effective_cache_size = 8192MB
>>
>> some queries run much faster. Is this better?
>
> I guess it is. What really matters with those cost variables is the
> relative scale - the original values
>
> seq_page_cost = 0.02
> random_page_cost = 0.03
> cpu_tuple_cost = 0.02
>
> suggest that random reads are almost as expensive as sequential
> reads (which is usually not true - random reads are significantly
> more expensive), and that processing each row is about as expensive as
> reading a page from disk (again, reading data from disk is much more
> expensive than processing it).
>
> So yes, the current values are much more likely to give good results.
>
> You've mentioned those values were recommended on this list - can you
> point out the actual discussion?
>
>
Thank you for your reply.
http://archives.postgresql.org/pgsql-performance/2010-09/msg00169.php is how I first played with those values...
Ogden
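
For reference, planner cost settings like these can be tried out per session with SET and checked with EXPLAIN ANALYZE before being committed to postgresql.conf. A minimal sketch follows; the report_data table and the query are hypothetical, used only for illustration:

-- Inspect the current planner cost settings (PostgreSQL defaults are
-- seq_page_cost = 1.0, random_page_cost = 4.0, cpu_tuple_cost = 0.01).
SHOW seq_page_cost;
SHOW random_page_cost;
SHOW cpu_tuple_cost;

-- Override them for this session only; postgresql.conf is not touched.
SET seq_page_cost = 1.0;
SET random_page_cost = 3.0;
SET effective_cache_size = '8192MB';

-- Compare the plan and timing against the previous settings.
EXPLAIN ANALYZE
SELECT customer_id, sum(amount)
FROM report_data
WHERE created_at >= now() - interval '30 days'
GROUP BY customer_id;

-- Revert to the server-wide settings for this session.
RESET seq_page_cost;
RESET random_page_cost;
RESET effective_cache_size;

Once a combination holds up across the slow queries, the same values can be put into postgresql.conf and applied with a reload, since these cost settings do not require a server restart.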