From: Albe Laurenz <laurenz(dot)albe(at)wien(dot)gv(dot)at>
To: "desmodemone *EXTERN*" <desmodemone(at)gmail(dot)com>
Cc: "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: keep alive and running query
Date: 2013-12-16 13:39:27
Message-ID: A737B7A37273E048B164557ADEF4A58B17C8402A@ntex2010i.host.magwien.gv.at
Lists: pgsql-admin
desmodemone wrote:
> By the way, as I showed, the problem arises in DWH or similar environments,
> where long-running queries are common (with sorts, group-bys, or hashes, so
> it takes time before the first row is returned). In those cases, if the
> client dies, the backend will keep running for a long time before it tries
> to write to the socket.
>
> So it is possible to end up with heavy load on a database server from many
> dead connections whose backends are still working as described.
>
> I think this is not a minor problem, no?
No, it isn't.
But if that happens on a routine basis, I would argue that the client
program is at fault. It should cancel the query and close the session
when the user interrupts or closes it.
Admittedly, it cannot do that if the client machine crashes, the network
connection dies, or the user kills the program with SIGKILL, but I'd say
that should not happen on a regular basis.
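For the crash cases, the server-side TCP keepalive settings can at least
get the kernel to notice a dead peer sooner. A sketch with illustrative
values (not recommendations); note that the backend still only sees the
failure the next time it touches the socket:

```ini
# postgresql.conf -- illustrative values only
tcp_keepalives_idle = 60        # seconds of idle before the first probe
tcp_keepalives_interval = 10    # seconds between unanswered probes
tcp_keepalives_count = 5        # unanswered probes before the kernel gives up
```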
I don't know how easy it would be to add code for the server to check
the state of the network sockets regularly.
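To illustrate the detection idea only (plain sockets, not PostgreSQL
internals): an orderly close by the peer can be spotted without consuming
data, using a zero-timeout select() plus recv() with MSG_PEEK. A minimal
sketch, with a local socket pair standing in for the client/backend link:

```python
import select
import socket

def peer_closed(sock):
    """Detect an orderly close by the peer without consuming data.

    A zero-timeout select() tells us whether the socket is readable;
    recv(MSG_PEEK) then returns b"" only if the peer has shut down.
    """
    readable, _, _ = select.select([sock], [], [], 0)
    if not readable:
        return False  # nothing pending: connection presumed alive
    try:
        return sock.recv(1, socket.MSG_PEEK) == b""
    except OSError:
        return True   # connection reset or similar: treat as dead

# Demo: a socket pair stands in for the client/backend connection.
backend, client = socket.socketpair()
print(peer_closed(backend))  # False: client still connected
client.close()
print(peer_closed(backend))  # True: the orderly close is visible at once
```

A hard crash of the client host (as opposed to an orderly close) would not
be visible this way until TCP keepalive or a later write fails, which is
exactly the situation described above.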
Yours,
Laurenz Albe