| From: | Craig Ringer <craig(at)postnewspapers(dot)com(dot)au> |
|---|---|
| To: | firerox(at)centrum(dot)cz |
| Cc: | pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: slow pg_connect() |
| Date: | 2008-03-24 07:58:16 |
| Message-ID: | 47E75F18.9060206@postnewspapers.com.au |
| Lists: | pgsql-performance |
firerox(at)centrum(dot)cz wrote:
> It takes more than 0.05s :(
>
> This function alone caps the server at about 20 requests per second.
>
If you need that sort of frequent database access, you might want to
look into:
- Doing more work in each connection and reducing the number of
connections required;
- Using multiple connections in parallel;
- Pooling connections so you don't need to create a new one for every job (see the sketch after this list);
- Using a more efficient database connector and/or language;
- Dispatching requests to a persistent database access provider that's always connected.
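To make the pooling idea concrete, here's a minimal single-threaded sketch. The SimplePool class, the pool size of 5, and the "dbname=testdb" DSN are all invented for illustration; a production pool (pgpool, or your driver's own pooling) also handles locking, broken connections, and sizing:

import psycopg

class SimplePool:
    """Keep a fixed set of open connections and hand them out on demand."""
    def __init__(self, dsn, size):
        self._idle = [psycopg.connect(dsn) for _ in range(size)]

    def get(self):
        # Assumes an idle connection exists; a real pool would block or grow.
        return self._idle.pop()

    def put(self, conn):
        # Return the connection for reuse instead of closing it.
        self._idle.append(conn)

pool = SimplePool("dbname=testdb", 5)

conn = pool.get()
try:
    cur = conn.cursor()
    cur.execute("SELECT 1")
finally:
    pool.put(conn)

Each request then pays only for its queries, not for a fresh connection.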
However, your connections are indeed taking a long time. I wrote a
trivial test using psycopg for Python and found that the following script:
#!/usr/bin/env python
import psycopg
conn = psycopg.connect("dbname=testdb")
generally took about 0.035 seconds (35 ms) to run on my workstation -
including OS process creation, Python interpreter startup, database
interface loading, connection, disconnection, and process termination.
A quick timing test shows that the connection/disconnection can be
performed 100 times in 1.2 seconds:
import psycopg
import timeit
print timeit.Timer('conn = psycopg.connect("dbname=craig")',
                   'import psycopg').timeit(number=100)
... and this is still with an interpreted language: 1.2 seconds for 100
iterations is about 12 ms per connect/disconnect cycle. I wouldn't be too
surprised if the C/C++ APIs could do considerably better, though I don't
currently feel the desire to write a test for that.
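For contrast, a quick sketch along the same lines that times 100 trivial queries over a single already-open connection; the "dbname=testdb" name and the SELECT 1 query are just placeholders. This isolates per-query cost from per-connection cost:

import psycopg
import timeit

conn = psycopg.connect("dbname=testdb")
cur = conn.cursor()

# 100 round trips on one open connection; compare with the
# connect/disconnect timing above.
print timeit.Timer("cur.execute('SELECT 1')",
                   "from __main__ import cur").timeit(number=100)

If the per-query figure comes out well under 12 ms, the difference is connection setup overhead you can avoid by pooling or reusing connections.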
--
Craig Ringer