From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Tim Kane <tim(dot)kane(at)gmail(dot)com>
Cc: PostgreSQL mailing lists <pgsql-general(at)postgresql(dot)org>
Subject: Re: Avoiding out of date statistics / planner
Date: 2020-02-12 15:15:59
Message-ID: 23751.1581520559@sss.pgh.pa.us
Lists: pgsql-general
Tim Kane <tim(dot)kane(at)gmail(dot)com> writes:
> Every now and again, I will encounter an unexplained long-running query.
> It’s a head scratcher moment, because this query that is still running for
> 20 minutes (not blocking) can be run independently in about 500ms
Without some kind of context (like, have you been doing something to
the table(s) involved that would drastically change their statistics)
it's hard to comment on this. It's not obvious from the info
provided that this is a bad-plan issue rather than something else.
> On the application side, we can explicitly issue a VACUUM ANALYZE after
> each bulk operation - and often that is precisely what happens..
> But - I am keenly aware that this cannot be performed within a transaction.
Plain ANALYZE can be, and that's all you need if the problem is to
update stats.
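To illustrate the distinction Tom is drawing: PostgreSQL rejects VACUUM inside a transaction block, but ANALYZE alone is permitted, so stats can be refreshed atomically with the bulk change. A minimal sketch (the table name and data source are hypothetical):

```sql
BEGIN;
-- hypothetical bulk load into a table named big_table
COPY big_table FROM '/tmp/data.csv' (FORMAT csv);
-- ANALYZE is allowed inside a transaction; VACUUM ANALYZE would error here
ANALYZE big_table;
COMMIT;
```

Note the new statistics only become visible to other sessions once the transaction commits.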
regards, tom lane