From: Ericson Smith <eric(at)did-it(dot)com>
To: Alex Madon <alex(dot)madon(at)bestlinuxjobs(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: postgresql + apache under heavy load
Date: 2004-01-21 17:00:02
Message-ID: 400EB012.2090707@did-it.com
Lists: pgsql-general
Could the problem be that PHP is not using connections efficiently?
Apache KeepAlive with PHP is a double-edged sword, with you holding
the blade :-)
If I am not mistaken, what happens is that a connection is kept alive
because Apache expects more requests to come in from the client that
made the initial connection. So 10 concurrent connections are fine,
but with 100 concurrent connections they are not released quickly
enough: the system ends up waiting for KeepAlive connections to time
out before Apache lets new requests in. We had this exact problem in
an environment with millions of impressions per day going to the
database. Because of the nature of our business, we were able to
disable KeepAlive and the load immediately dropped (concurrent
connections on the PostgreSQL database also dropped sharply). We also
turned off PHP persistent connections to the database.
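For reference, this is roughly the httpd.conf change involved (the
timeout value below is illustrative, not what we ran in production):

    # httpd.conf -- turn KeepAlive off entirely, so each request
    # frees its Apache child (and its DB connection) right away:
    KeepAlive Off

    # Alternatively, if your clients really do reuse connections,
    # keeping it on with a short timeout recycles slots quickly:
    # KeepAlive On
    # KeepAliveTimeout 2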
The drawback is that connections are built up and torn down all the
time, which with PostgreSQL is somewhat expensive. But that's a
fraction of the expense of having KeepAlive on.
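On the PHP side, since you mentioned DBX: leaving out the optional
persistence flag gives you a plain connection that goes away at the
end of the request. A minimal sketch (host, database, and credentials
are placeholders):

    <?php
    // Non-persistent connection: we omit the optional 6th argument
    // (DBX_PERSISTENT), so the connection is torn down by dbx_close()
    // or when the script ends.
    $link = dbx_connect(DBX_PGSQL, 'localhost', 'mydb', 'myuser', 'mypass');
    $result = dbx_query($link, 'SELECT 1');
    dbx_close($link);
    ?>

With the plain pg_* functions the same distinction is pg_connect()
(non-persistent) versus pg_pconnect() (persistent).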
Warmest regards,
Ericson Smith
Tracking Specialist/DBA
+-----------------------+--------------------------------------+
| http://www.did-it.com | "Crush my enemies, see them driven |
| eric(at)did-it(dot)com | before me, and hear the lamentations |
| 516-255-0500 | of their women." - Conan |
+-----------------------+--------------------------------------+
Alex Madon wrote:
> Hello,
> I am testing a web application (using the DBX PHP functions to query
> a PostgreSQL backend).
> I have 375MB of RAM on my test home box.
> I ran ab (apache benchmark) to test the behaviour of the application
> under heavy load.
> When I increase the number of requests, all my memory fills up, and
> the Linux server starts swapping and ends up frozen.
>
> ab -n 100 -c 10 http://localsite/testscript
> behaves OK.
>
> If I increase to
> ab -n 1000 -c 100 http://localsite/testscript
> I get this memory problem.
>
> If I eliminate the connection to PostgreSQL's UNIX socket, the
> script behaves well even under very high load (and of course with
> much less time spent per request).
>
> I tried to change some parameters in postgresql.conf
> max_connections = 32
> to max_connections = 8
>
> and
>
> shared_buffers = 64
> to shared_buffers = 16
>
> without success.
>
> I tried running pmap on the httpd and postmaster process IDs, but it
> didn't help much.
>
> Does anybody have any ideas to help debug/understand/solve this
> issue? Any feedback is appreciated.
> It would not be a problem for me if the box were just very slow under
> heavy load (DoS-like), but I really dislike having my box out of
> service after such a DoS attack.
> I am looking for a way to limit the memory used by postgres.
>
> Thanks
> Alex
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 9: the planner will ignore your desire to choose an index scan if your
> joining column's datatypes do not match
>
Attachment: eric.vcf (text/x-vcard, 315 bytes)