From: Tomas Vondra <tv(at)fuzzy(dot)cz>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: hash aggregation
Date: 2012-10-12 21:28:18
Message-ID: 50788B72.9090502@fuzzy.cz
Lists: pgsql-performance
On 11.10.2012 17:15, Korisk wrote:
> "IOS scan" ?
> Index Only Scan
>
> What's your seq_page_cost and random_page_cost?
>
> hashes=# SELECT name, setting, reset_val FROM pg_settings WHERE setting <> reset_val;
> name | setting | reset_val
> -------------------------+----------------+-----------
> archive_command | (disabled) |
> enable_bitmapscan | off | on
> enable_indexscan | off | on
> enable_seqscan | off | on
> log_file_mode | 0600 | 384
> random_page_cost | 0.1 | 4
> seq_page_cost | 0.1 | 1
> transaction_isolation | read committed | default
> unix_socket_permissions | 0777 | 511
> (9 rows)
>
> Postgresql 9.2.1 was configured and built with default settings.
>
> Thank you.
Hi,
so how much RAM does the system have? If you're using the default
shared_buffers size (32MB IIRC), that's the first thing you should bump
up. It's usually recommended to set it to ~25% of RAM, but not more than
~10GB. Also set work_mem and maintenance_work_mem according to the amount
of RAM you have.
Then set effective_cache_size to 75% of RAM (this is just a hint to the
planner, it won't really allocate memory).
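For instance, on a hypothetical 8GB machine (the RAM size here is just an
assumption - scale the numbers to your box), postgresql.conf might look
roughly like this:

shared_buffers = 2GB            # ~25% of RAM, capped around ~10GB
work_mem = 32MB                 # per sort/hash node, per backend - keep it modest
maintenance_work_mem = 512MB    # used by VACUUM, CREATE INDEX etc.
effective_cache_size = 6GB      # ~75% of RAM, planner hint only

(shared_buffers needs a restart to take effect)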
Restart the database and try the queries again. Don't run them with
EXPLAIN ANALYZE because that adds overhead that may easily make some of
the queries much slower.
It's great to see the estimates and actual row counts, but for timing
queries it's a poor choice (even the TIMING OFF added in 9.2 is not
exactly overhead-free). Maybe this is what made the seqscan look much
slower?
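If you do want the row counts without the per-node timing overhead, 9.2 can
collect the EXPLAIN ANALYZE counters with the timing instrumentation
disabled:

EXPLAIN (ANALYZE, TIMING OFF) SELECT ... ;

(it still executes the query, so it's not entirely free either)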
I usually run them from psql like this
\o /dev/null
\timing on
SELECT ...
which gives me more reliable timing results (especially when executed
right on the server).
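A full session for comparing the two plans might look like this (the "words"
table and the GROUP BY query are just placeholders for whatever you're
benchmarking):

\o /dev/null
\timing on
SET enable_seqscan = off;   -- force the index-only scan variant
SELECT name, count(*) FROM words GROUP BY name;
SET enable_seqscan = on;    -- let the planner pick the seqscan + hashagg plan
SELECT name, count(*) FROM words GROUP BY name;
\o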
Only if all this tuning fails is it time to fine-tune the knobs, i.e. the
cost variables. Please don't change seq_page_cost - always keep it at 1.0
and adjust only the other values.
For example, if everything fits into RAM you may change random_page_cost
to 1.5 or lower (though I'd never recommend setting it below
seq_page_cost), and then you may start tuning the cpu_* costs.
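As a sketch, such an experiment is best done per-session first and written
to postgresql.conf only once it actually helps:

SET random_page_cost = 1.5;   -- data assumed fully cached; seq_page_cost stays at 1.0
-- re-run the query / EXPLAIN here and compare plans and timings
RESET random_page_cost;       -- back to the server default if it didn't help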
But please, this is the last thing you should do - tune the server
properly first. There's even a very nice wiki page about tuning
PostgreSQL servers:
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
regards
Tomas