From: ivan babrou <ibobrik(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Millisecond-precision connect_timeout for libpq
Date: 2013-07-08 05:44:32
Message-ID: CANWdNRDaiNo7vgfQFkrMdC4kvmzSdyoBG=Qaj0S4gm_sZeZa2g@mail.gmail.com
Lists: pgsql-hackers
On 5 July 2013 23:47, ivan babrou <ibobrik(at)gmail(dot)com> wrote:
> On 5 July 2013 23:26, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> ivan babrou <ibobrik(at)gmail(dot)com> writes:
>>> If you can figure out that PostgreSQL is overloaded, then you can
>>> decide what to do faster. In our app we have a very strict connect
>>> time limit for MySQL, Redis and other services, but PostgreSQL has
>>> a minimum of 2 seconds. When average request processing time is
>>> under 100 ms, sub-second timeouts matter.
>>
>> If you are issuing a fresh connection for each sub-100ms query, you're
>> doing it wrong anyway ...
>>
>> regards, tom lane
>
> In PHP you cannot persist a connection between requests without
> worrying about transaction state. We don't use PostgreSQL for every
> sub-100ms query because it can block the whole request for 2 seconds.
> Usually it takes 1.5 ms to connect, btw.
>
> Can you tell me why having the ability to specify a more accurate
> connect timeout is a bad idea?
>
> --
> Regards, Ian Babrou
> http://bobrik.name http://twitter.com/ibobrik skype:i.babrou
Nobody has answered my question yet.
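To make the request concrete: what we want is the ordinary non-blocking
connect + select() loop (the same pattern libpq's connect_timeout drives
internally), just with the deadline expressed in milliseconds instead of
whole seconds. A minimal Python sketch; connect_with_ms_timeout is a
hypothetical helper name, not part of any library:

```python
import errno
import os
import select
import socket
import time

def connect_with_ms_timeout(host, port, timeout_ms):
    """Open a TCP connection, failing after timeout_ms milliseconds.

    Same non-blocking connect + select() loop that a seconds-granularity
    connect_timeout uses, only with a millisecond budget.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setblocking(False)
    deadline = time.monotonic() + timeout_ms / 1000.0

    # connect_ex() returns EINPROGRESS/EWOULDBLOCK while the TCP
    # handshake is still in flight on a non-blocking socket.
    rc = sock.connect_ex((host, port))
    if rc not in (0, errno.EINPROGRESS, errno.EWOULDBLOCK):
        sock.close()
        raise OSError(rc, os.strerror(rc))

    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            sock.close()
            raise TimeoutError("connect timed out after %d ms" % timeout_ms)
        # A connecting socket becomes writable once the handshake
        # finishes (successfully or not).
        _, writable, _ = select.select([], [sock], [], remaining)
        if writable:
            err = sock.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
            if err:
                sock.close()
                raise OSError(err, os.strerror(err))
            return sock
```

Nothing here requires second granularity; select() takes a float
timeout, so the only thing standing between libpq users and
millisecond deadlines is the parsing and arithmetic around the loop.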
--
Regards, Ian Babrou
http://bobrik.name http://twitter.com/ibobrik skype:i.babrou