From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Corey Taylor <corey(dot)taylor(dot)fl(at)gmail(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: postgres 9.6: insert into select finishes only in pgadmin not psql
Date: 2019-09-23 13:57:17
Message-ID: 26981.1569247037@sss.pgh.pa.us
Lists: pgsql-general

Corey Taylor <corey(dot)taylor(dot)fl(at)gmail(dot)com> writes:
> I found after testing other situations, that the psql command would always
> finish as expected after canceling the first query that ran too long. I
> was able to reproduce this scenario with psql and pgadmin4 with various
> combinations.
Well, that's just weird.

It's well known that the second run of a query can be much faster due
to having fully-populated caches to draw on, but you seem to have a
case that goes beyond that. Maybe check whether the first run is
waiting on a lock?
It'd be useful to look in pg_stat_activity and/or top(1) while the
initial query is running, to see if it seems to be eating CPU or
is blocked on some condition. (I forget how thorough the
wait_event coverage is in 9.6, but it does at least have those
columns.)
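[For anyone reading along in the archives: a minimal way to do that check, assuming a second psql session is available, is to query pg_stat_activity while the slow statement is running. The wait_event_type/wait_event columns mentioned above are present from 9.6 onward.]

```sql
-- Run in a second session while the slow INSERT ... SELECT is active.
-- wait_event_type/wait_event are NULL when the backend is running on
-- CPU; a non-null type such as 'Lock' indicates it is blocked.
SELECT pid, state, wait_event_type, wait_event,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE state <> 'idle';
```

If a 'Lock' wait shows up, pg_locks (filtered on granted = false) identifies which lock the backend is queued behind.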
Can you create a self-contained test case that acts like this?
			regards, tom lane