Re: query taking much longer since Postgres 8.4 upgrade

From: Tomas Vondra <tv(at)fuzzy(dot)cz>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: query taking much longer since Postgres 8.4 upgrade
Date: 2011-03-16 22:16:44
Message-ID: 4D8136CC.7010100@fuzzy.cz
Lists: pgsql-general

OK, so the cost constants are equal in both versions (the only
differences are due to changes in the default values).

Just out of curiosity, have you tried to throw a bit more work_mem at
the query? Try something like 8MB or 16MB - just do this

db=> SET work_mem = '8MB';

and then run the query (the change is valid in that session only; the
other sessions will still use 1MB).

Most of the sorts were performed on disk instead of in memory, and more
memory might result in a better plan.
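
You can check whether the extra memory actually helped by running
EXPLAIN ANALYZE again and looking at the "Sort Method" lines - just a
sketch, the query below is only a placeholder for your real one:

db=> EXPLAIN ANALYZE SELECT ... ;
...
        Sort Method:  quicksort  Memory: ...kB
...

If it still says "external merge  Disk: ...kB", the sort did not fit
into work_mem and you may need to go higher.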

regards
Tomas

On 16.3.2011 22:40, Davenport, Julie wrote:
> Tomas,
> Here are the settings on the 8.0 side:
>
> srn_mst=# show cpu_index_tuple_cost;
> cpu_index_tuple_cost
> ----------------------
> 0.001
> (1 row)
>
> srn_mst=# show cpu_operator_cost;
> cpu_operator_cost
> -------------------
> 0.0025
> (1 row)
>
> srn_mst=# show cpu_tuple_cost;
> cpu_tuple_cost
> ----------------
> 0.01
> (1 row)
>
> srn_mst=# show random_page_cost;
> random_page_cost
> ------------------
> 4
> (1 row)
>
> srn_mst=# show seq_page_cost;
> ERROR: unrecognized configuration parameter "seq_page_cost"
>
> srn_mst=# show work_mem;
> work_mem
> ----------
> 1024
> (1 row)
>
> Here are the settings on the 8.4 side:
>
> srn_mst=# show cpu_index_tuple_cost;
> cpu_index_tuple_cost
> ----------------------
> 0.005
> (1 row)
>
> srn_mst=# show cpu_operator_cost;
> cpu_operator_cost
> -------------------
> 0.0025
> (1 row)
>
> srn_mst=# show cpu_tuple_cost;
> cpu_tuple_cost
> ----------------
> 0.01
> (1 row)
>
> srn_mst=# show random_page_cost;
> random_page_cost
> ------------------
> 4
> (1 row)
>
> srn_mst=# show seq_page_cost;
> seq_page_cost
> ---------------
> 1
> (1 row)
>
> srn_mst=# show work_mem;
> work_mem
> ----------
> 1MB
> (1 row)
>
> Thanks,
> Julie
>
>
>
> -----Original Message-----
> From: Tomas Vondra [mailto:tv(at)fuzzy(dot)cz]
> Sent: Wednesday, March 16, 2011 4:23 PM
> To: pgsql-general(at)postgresql(dot)org
> Cc: Davenport, Julie
> Subject: Re: [GENERAL] query taking much longer since Postgres 8.4 upgrade
>
> On 16.3.2011 21:38, Davenport, Julie wrote:
>> OK, I did the explain analyze on both sides (using a file for output instead) and used the tool you suggested.
>>
>> 8.0 - http://explain.depesz.com/s/Wam
>> 8.4 - http://explain.depesz.com/s/asJ
>
> Great, that's exactly what I asked for. I'll repost that to the mailing
> list so that the others can check it too.
>
>> When I run the queries I get 59,881 rows on the 8.0 side and 59,880 on the 8.4 side, which is what I expect because the 8.4 side was updated a couple of hours later and some minor changes make sense.
>
> Hm, obviously both versions got the row estimates wrong, but the 8.4
> difference (200x) is much bigger than the 8.0 one (10x). This might be one
> of the reasons why a different plan is chosen.
>
> Anyway, both versions underestimate the course_control subquery, as they
> believe there will be only 1 row, but in reality there are 2882 of them :-(
>
>
>> After your first email I did a vacuum full analyze on the 8.4 side on each of the tables in the schema that the views cover, then ran the query again, and it took even longer - up from 397,857 ms to 412,862 ms. Another query that was taking 597248 ms before the vacuum/analyze took 617526 ms after. I don't understand why, but this is generally the experience we've had with vacuum/analyze on these particular tables. We do large numbers of deletes and inserts to them every day, so I would think they would benefit from it.
>
> OK, so the tables were in quite good shape - not bloated etc. The
> slight increase is negligible, I guess; the vacuum probably removed the
> data from shared buffers or something like that.
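>
> (If you ever want to double-check that, here is one way - just a rough
> sketch, not something you have to do - looking at the dead tuple counters
> and the last vacuum/analyze times on the 8.4 box:
>
> SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_analyze
>   FROM pg_stat_user_tables
>  ORDER BY n_dead_tup DESC;
>
> A lot of dead tuples relative to live ones would suggest bloat.)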
>
>> I did do a vacuum full analyze on instr_as_stutemp before the explain analyze, as you suggested.
>
> OK, now the row estimate is correct:
>
> Seq Scan on instr_as_stutemp (cost=0.00..1.04 rows=4 width=9) (actual
> time=0.052..0.098 rows=4 loops=1)
>
>> I will consider the indexes and do some benchmark testing (I have considered the 'drop-load-reindex' trick in the past). I'm sure increasing maintenance_work_mem will help.
>
> OK. But the question of why the plan changed this way still remains
> unanswered (or maybe it is answered and I just don't see it).
>
> One thing I've noticed is this difference in estimates:
>
> 8.0:
> ====================================================================
> Subquery Scan course_control (cost=9462700.13..9492043.53 rows=1
> width=32) (actual time=43368.204..45795.239 rows=2882 loops=1)
> * Filter: (((to_char(course_begin_date, 'YYYYMMDD'::text) =
> '20100412'::text) OR (to_char(course_begin_date, 'YYYYMMDD'::text) =
> ...
> '20110307'::text)) AND ((course_delivery)::text ~~ 'O%'::text) AND
> (course_cross_section IS NULL))
>
> 8.4:
> ====================================================================
> Subquery Scan course_control (cost=18710.12..548966.51 rows=1 width=32)
> (actual time=1632.403..4438.949 rows=2882 loops=1)
> * Filter: ((course_control.course_cross_section IS NULL) AND
> ((course_control.course_delivery)::text ~~ 'O%'::text) AND
> (to_char(course_control.course_begin_date, 'YYYYMMDD'::text) = ANY
> ('{20100412,20100510,...,20110110,20110207,20110307}'::text[])))
>
> I.e. both versions use a seqscan, both estimate the same number of rows
> (incorrectly), yet the estimated cost is very different (9492043 vs.
> 548966).
>
> Maybe the cost estimation really changed between 8.0 and 8.4, but just
> to be sure - what are the cost values? I mean, what is set for these
> config values:
>
> cpu_index_tuple_cost
> cpu_operator_cost
> cpu_tuple_cost
> random_page_cost
> seq_page_cost
> work_mem
>
> (use 'show' to see the actual values).
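>
> (Or, if it's easier, you can grab them all at once - just a convenience,
> the individual 'show' commands work fine too:
>
> SELECT name, setting FROM pg_settings
>  WHERE name IN ('cpu_index_tuple_cost', 'cpu_operator_cost',
>                 'cpu_tuple_cost', 'random_page_cost',
>                 'seq_page_cost', 'work_mem');
> )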
>
> regards
> Tomas
