Re: Any way to speed this up?

From: "Joel Fradkin" <jfradkin(at)wazagua(dot)com>
To: "'Tom Lane'" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "'PostgreSQL Perform'" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Any way to speed this up?
Date: 2005-04-07 16:33:46
Message-ID: 006701c53b8f$9274fdf0$797ba8c0@jfradkin
Lists: pgsql-performance

shared_buffers = 8000          # min 16, at least max_connections*2, 8KB each
work_mem = 8192                #1024  # min 64, size in KB
max_fsm_pages = 30000          # min max_fsm_relations*16, 6 bytes each
effective_cache_size = 40000   #1000  # typically 8KB each
random_page_cost = 1.2         #4     # units are one sequential page fetch cost

These are the items I changed.
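(For anyone retracing this: the values the server is actually running with can be confirmed per-session with SHOW, which takes the same names as the postgresql.conf entries. A minimal sketch:)

```sql
-- Confirm the effective values of the settings changed above.
SHOW shared_buffers;
SHOW work_mem;
SHOW effective_cache_size;
SHOW random_page_cost;
```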
On the development box I turned random_page_cost down to 0.2 because I figured it would all be faster using an index, since all my data is, at a minimum, being selected by clientnum.
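(One way to sanity-check a random_page_cost value like that without editing postgresql.conf is to set it for a single session and compare plans. This is only a sketch; "caseview" is a hypothetical view name, though clientnum is the column from the query in question:)

```sql
-- Try the lower random_page_cost for this session only
-- ("caseview" is a placeholder for the actual view).
SET random_page_cost = 1.2;
EXPLAIN ANALYZE SELECT * FROM caseview WHERE clientnum = 'XYZ';

-- Revert to the server default and compare the plan and timings
-- before making the change permanent in postgresql.conf.
RESET random_page_cost;
EXPLAIN ANALYZE SELECT * FROM caseview WHERE clientnum = 'XYZ';
```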

But the EXPLAIN ANALYZE I sent in is from the settings above on a Windows box.
If I run the analyze (from pgAdmin) on a Windows box while connecting to a Linux box, will the times be accurate, or do I have to run the analyze on the Linux box itself?

I am a little unclear on why I would need an index on associate by location, as I thought the joins would be using the indexes on location and jobtitle.
I did not say WHERE locationid = x in my query on the view.
I have so much to learn about SQL.
Joel
