From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Krzysztof Plocharz <plocharz(at)9livesdata(dot)com>
Cc: Pgsql Performance <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: Planning performance problem (67626.278ms)
Date: 2019-04-08 15:07:04
Message-ID: CAFj8pRCUnUCZWP9=g5bkZWp71Ffn5hpDzF973qRMSatizJm-jw@mail.gmail.com
Lists: pgsql-performance
On Mon, Apr 8, 2019 at 16:55 Krzysztof Plocharz <plocharz(at)9livesdata(dot)com>
wrote:
>
>
> On 2019/04/08 16:42, Justin Pryzby wrote:
> > On Mon, Apr 08, 2019 at 04:33:34PM +0200, Pavel Stehule wrote:
> >> On Mon, Apr 8, 2019 at 16:11 Krzysztof Plocharz
> >> <plocharz(at)9livesdata(dot)com> wrote:
> >>
> >>> We have a very strange query planning problem. Long story short, it
> >>> takes 67626.278ms just to plan. Query execution takes 12ms.
> >>>
> >>> Query has 7 joins and 2 subselects.
> >>> It looks like the issue is not deterministic; sometimes it takes only a
> >>> few ms to plan the query.
> >>>
> >>> One of the tables has 550,485,942 live tuples and 743,504,012 dead
> >>> tuples. Running ANALYZE on that table solves the problem only
> >>> temporarily.
> >>>
> >>> Question is how can we debug what is going on?
> >>
> >> please check your indexes for bloat. The planner gets min and max values
> >> from the indexes, and this operation is slow on bloated indexes.
>
> Yes, we thought about this; there are over 700,000,000 dead tuples. But
> as you said, it should not result in 67-second planning...
>
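
As a side note, you can measure the index bloat itself with the pgstattuple
extension - a minimal sketch, with some_index standing in for one of the
indexes on the big table:

    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    -- a low avg_leaf_density and many deleted_pages suggest a bloated index
    SELECT avg_leaf_density, leaf_fragmentation, deleted_pages
      FROM pgstatindex('some_index');
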
> >
> > I think that's from get_actual_variable_range(), right ?
> >
> > If it's due to bloating, I think the first step would be to 1) vacuum right
> > now; and 2) set a more aggressive autovacuum, like ALTER TABLE t SET
> > (autovacuum_vacuum_scale_factor = 0.005).
> >
>
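
For the record, that advice amounts to something like this (a sketch, with
t standing for the affected table, as in Justin's snippet):

    -- reclaim the dead tuples right away
    VACUUM (VERBOSE, ANALYZE) t;

    -- trigger autovacuum at ~0.5% dead tuples instead of the default 20%
    ALTER TABLE t SET (autovacuum_vacuum_scale_factor = 0.005);
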
> We did pg_repack and it did help, but is it possible for
> get_actual_variable_range to take over 60 seconds?
> Is there any other workaround for this except for pg_repack/vacuum?
>
> Any way to actually debug this?
>
You can use perf to get a profile:
https://wiki.postgresql.org/wiki/Profiling_with_perf
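
To limit the profile to the backend that does the slow planning, you can take
its PID from SQL first and attach perf to just that process (a sketch; see
the wiki page above for details):

    -- run this in the session that will plan the slow query
    SELECT pg_backend_pid();

    -- then, from a shell, while the query is being planned:
    --   perf record -p <pid>
    --   perf report
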
> > What version is the postgres server?
> >
> > Justin
> >
>
> On 2019/04/08 16:33, Pavel Stehule wrote:
> >
> > On Mon, Apr 8, 2019 at 16:11 Krzysztof Plocharz
> > <plocharz(at)9livesdata(dot)com> wrote:
> >
> > Hi
> >
> > We have a very strange query planning problem. Long story short, it
> > takes 67626.278ms just to plan. Query execution takes 12ms.
> >
> > Query has 7 joins and 2 subselects.
> > It looks like the issue is not deterministic; sometimes it takes only a
> > few ms to plan the query.
> >
> > One of the tables has 550,485,942 live tuples and 743,504,012 dead
> > tuples. Running ANALYZE on that table solves the problem only
> > temporarily.
> >
> > Question is how can we debug what is going on?
> >
> >
> > please check your indexes for bloat. The planner gets min and max values
> > from the indexes, and this operation is slow on bloated indexes.
> >
> Yes, we thought about this; there are over 700,000,000 dead tuples. But
> as you said, it should not result in 67-second planning...
>
> > but 67 sec is really slow - it can be some other problem - is it a
> > real computer or virtual?
> >
> Real, with pretty good specs: NVMe drives, six-core AMD Opteron, 64GB of
> RAM. During testing the system was mostly idle.
>
>
> >
> > Best Regards,
> > Krzysztof Płocharz