| From: | "Kevin Grittner" <kgrittn(at)mail(dot)com> |
|---|---|
| To: | ktm(at)rice(dot)edu, "Böckler Andreas" <andy(at)boeckler(dot)org> |
| Cc: | "Jeff Janes" <jeff(dot)janes(at)gmail(dot)com>,pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: Query-Planer from 6seconds TO DAYS |
| Date: | 2012-10-26 15:30:05 |
| Message-ID: | 20121026153006.306910@gmx.com |
| Lists: | pgsql-performance |
ktm(at)rice(dot)edu wrote:
> You have seq_page_cost = 1, which is better than or equal to
> the random_page_cost in all of your examples. It sounds like you
> need a seq_page_cost of 5, 10, 20 or more.
The goal should be to set the cost factors so that they model actual
costs for your workload in your environment. In what cases have you
seen a sequential scan of a large number of adjacent pages from
disk take longer than randomly reading the same number of pages from
disk? (I would love to see the bonnie++ numbers for that, if you
have them.)
-Kevin
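Kevin's point about modeling actual I/O costs can be illustrated with PostgreSQL's planner cost settings. A minimal sketch of how one might inspect and trial-adjust them (the values and the `mydb` database name are hypothetical examples, not recommendations; the right values depend on measured disk behavior):

```sql
-- Inspect the current planner cost settings
-- (defaults: seq_page_cost = 1.0, random_page_cost = 4.0).
SHOW seq_page_cost;
SHOW random_page_cost;

-- Trial a different ratio for the current session only, so a bad
-- guess does not affect other connections. For example, if random
-- reads are measurably cheaper (e.g. a mostly cached working set):
SET random_page_cost = 2.0;

-- Re-run EXPLAIN ANALYZE on the problem query here and compare the
-- chosen plan and runtime against the previous settings before
-- persisting anything.

-- Once validated, a setting can be persisted per database:
-- ALTER DATABASE mydb SET random_page_cost = 2.0;
```

The session-level `SET` followed by `EXPLAIN ANALYZE` is the usual low-risk way to test whether a cost-factor change actually moves the planner toward the faster plan.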