From: Kelvin Lau <kelvin12(at)hku(dot)hk>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Connection timeout issues and JDBC
Date: 2021-08-23 07:34:53
Message-ID: 20d676e7-a2d4-da48-891e-0a1dd4e9048d@hku.hk
Lists: pgsql-general
Hello psql community,
I have been using Python for CRUD operations on the database. I have run
into problems with long-running queries (both SELECT and COPY, since the
data is fairly large): the connection gets dropped around the 2~3 hour
mark and I have no idea what is wrong. I also have no information about
how my workstation is connected to the server.
But I managed to work around the issue by passing a few extra parameters
to psycopg2:
import psycopg2

conn = psycopg2.connect(host="someserver.hk",
                        port=12345,
                        dbname="ohdsi",
                        user="admin",
                        password="admin1",
                        options="-c search_path=" + schema,
                        # it seems the lines below are needed to keep the connection alive
                        connect_timeout=10,
                        keepalives=1,
                        keepalives_idle=5,
                        keepalives_interval=2,
                        keepalives_count=5)
It looks like those keepalives* parameters keep the connection alive, so
the long queries can run day and night.
The problem now is that I have to use R and JDBC for a lot of the code,
because many of the analyses are written in R. The same issue, with long
queries being dropped around the 2~3 hour mark, has shown up again in
R/JDBC. How can I work around that?
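For context, the R side connects through RJDBC roughly like this (a sketch
only; the driver jar path, schema name and connection details below are
placeholders, not the real values):

library(RJDBC)

# placeholders: adjust the driver jar path and connection details as needed
drv <- JDBC(driverClass = "org.postgresql.Driver",
            classPath   = "/path/to/postgresql.jar")

conn <- dbConnect(drv,
                  "jdbc:postgresql://someserver.hk:12345/ohdsi?currentSchema=myschema",
                  user     = "admin",
                  password = "admin1")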
I have tried adding tcpKeepAlive=true to the connection URL but it seems
to have mixed results. Do I also have to set tcp_keepalives_interval or
tcp_keepalives_count? What are some recommended values for these parameters?
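To show what I mean, I am considering setting those server-side keepalive
parameters for the session right after connecting, roughly like this (again
only a sketch; the values are guesses, not recommendations):

# ask the server's kernel to keep probing the client socket while a long
# query runs; values below are guesses, conn is the RJDBC connection above
dbSendUpdate(conn, "SET tcp_keepalives_idle = 60")      # seconds of idle before the first probe
dbSendUpdate(conn, "SET tcp_keepalives_interval = 10")  # seconds between probes
dbSendUpdate(conn, "SET tcp_keepalives_count = 5")      # unanswered probes before dropping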
Are there any other possible solutions?
Thanks