From: Patrik Kudo <kudo(at)pingpong(dot)net>
To: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: index usage (and foreign keys/triggers)
Date: 2003-02-27 10:45:00
Message-ID: 3E5DEC2C.4070801@pingpong.net
Lists: pgsql-general
scott.marlowe wrote:
>
>
> Postgresql is being smart, just not smart enough.
>
> Imagine that one of your queries was to delete 99.9% of all the tuples.
> Would an index scan help then? Of course not, since you're going to visit
> nearly every row in the table anyway.
>
> The planner uses several settings to estimate the cost of sequentially
> scanning a table versus accessing it through an index, and it doesn't
> always get it right.
>
> Take a look at random_page_cost. It defaults to 4, which means that
> postgresql will base its index-versus-seq-scan decisions on the assumption
> that fetching a random individual page costs 4 times as much as fetching
> a page as part of a sequential scan that happens to include it.
>
> On most modern machines the difference in cost is very low, what with disk
> caching and all. This is especially true for smaller tables that can fit
> in memory. Once a table's in buffer memory, along with its index, random
> page cost will be about 1. I.e. a seq scan in memory or an index op are
> both quite fast. In fact, it is possible that at this point, a random
> page access for certain percentages of your table will cost you LESS than
> 1 in practice because linear access in memory yields little if any gain
> over random access. The only overhead is reading the index blocks.
>
> So, try tuning your random page cost down to somewhere between 1.0 and 2.0
> for best performance on these kinds of things. Our database at work runs
> on a machine with 1.5 gig ram, of which 800 megs is cache, and postgresql
> has 256 meg shared buffer. It generally only hits the drives on about 5% of
> reads, so random_page_cost for us is set to 1.2 and works well.
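(For reference, the tuning described above can be tried per-session before touching postgresql.conf; the table and index in this sketch are hypothetical examples, not from the thread:)

```sql
-- Lower random_page_cost for this session only; SET is session-local,
-- so other connections keep the server default.
SET random_page_cost = 1.5;

-- Compare the plan against the default: with random_page_cost = 4 the
-- planner may prefer a sequential scan, while a lower value makes the
-- index scan look cheaper. "orders" is a hypothetical table.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- Revert to the server default for this session.
RESET random_page_cost;
```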
Thanks for the good explanation! A setting lower than 2 still seems a bit
scary to me, though. Our databases are quite large due to many large
objects, in some cases around 4GB, so all the data couldn't possibly be
cached all the time. The most frequently accessed tables, however, are
fewer and smaller and would probably fit easily into the 1-2GB of RAM
(that's the span we usually have on the servers).
Any top-of-mind suggestions or reflections on tuning strategies? ;)
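(One way to ground a strategy like that is to check how large the hot tables and indexes actually are relative to RAM. A sketch against the pg_class catalog; it assumes the default 8kB page size, and relpages is only as fresh as the last VACUUM/ANALYZE:)

```sql
-- Largest tables ('r') and indexes ('i') by approximate on-disk size.
-- relpages counts 8kB pages on a default build and is updated by
-- VACUUM/ANALYZE, so run those first for accurate numbers.
SELECT relname,
       relkind,
       relpages * 8 / 1024 AS approx_mb
FROM pg_class
WHERE relkind IN ('r', 'i')
ORDER BY relpages DESC
LIMIT 10;
```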
> Welcome to the wonderful world of performance tuning...
It's a jungle, but I love it! =)
Regards,
Patrik Kudo