From: Ioana Danes <ioanasoftware(at)yahoo(dot)ca>
To: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Cc: PostgreSQL General <pgsql-general(at)postgresql(dot)org>
Subject: Re: Running out of memory on vacuum
Date: 2013-05-16 13:27:02
Message-ID: 1368710822.25379.YahooMailNeo@web164603.mail.gq1.yahoo.com
Lists: pgsql-general
Hello Scott,
I will look into using pgbouncer at some point. For now I will try to increase the memory. From practice I see that 16 GB is not enough unless I lower max_connections to 200. I have another production server with 16 GB and it is stable as long as fewer than 200 connections are open; once it crosses that limit it hits the same problem at vacuum. One problem, I think, is that the connection pool we are using (dbcp) is configured with its minimum idle connections set to a huge number, so it keeps all 300 connections to the database open. I will change that to 25-40, along the lines of the sketch below.
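A minimal sketch of the kind of change I mean, assuming Apache Commons DBCP 1.x (org.apache.commons.dbcp); the host, database and credentials are placeholders and the numbers are only illustrative, not our actual settings:

import org.apache.commons.dbcp.BasicDataSource;

public class PoolConfig {
    public static BasicDataSource createDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        ds.setUrl("jdbc:postgresql://dbhost:5432/appdb"); // placeholder host/db
        ds.setUsername("appuser");                        // placeholder credentials
        ds.setPassword("secret");

        ds.setInitialSize(10);   // open only a handful of connections up front
        ds.setMinIdle(25);       // keep about 25 idle connections instead of 300
        ds.setMaxIdle(40);       // idle connections above 40 are closed when returned
        ds.setMaxActive(200);    // hard cap, kept below the server's max_connections

        // Enable the idle-object evictor so surplus idle connections really get closed.
        ds.setMinEvictableIdleTimeMillis(5 * 60 * 1000);  // idle for 5 minutes -> evict
        ds.setTimeBetweenEvictionRunsMillis(60 * 1000);   // run the evictor every minute
        return ds;
    }
}

With something like this the pool should shrink back toward minIdle after a busy period instead of pinning ~300 idle backends on the database server.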
So I have another question, if you don't mind. How much memory is used by <IDLE> connections? I thought that if the system is quiet then very little should be used by idle connections, but apparently I am wrong. Do you have any documentation you can point me to so I can educate myself on this subject?
I had the system running in a test environment for the whole night, and in the morning I stopped all the Apache clients so the db is quiet but the connections are still open. There are 297 idle connections.
free shows:
             total       used       free     shared    buffers     cached
Mem:      16793380   13999196    2794184          0     256108    7656180
-/+ buffers/cache:    6086908   10706472
Swap:      4194300          0    4194300
I stopped the application server so all the connections were closed.
free shows:
             total       used       free     shared    buffers     cached
Mem:      16793380    8408604    8384776          0     256372    7558216
-/+ buffers/cache:     594016   16199364
Swap:      4194300          0    4194300
From this I see roughly 5.5 GB tied up by the 297 idle connections (used minus buffers/cache drops from about 6.1 GB to 0.6 GB once they are closed), which is around 18 MB per connection. Is this normal behavior?
I live in Ottawa, so I hope I will see you at PGCon.
Thanks a lot for your reply,
Ioana
----- Original Message -----
From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Ioana Danes <ioanasoftware(at)yahoo(dot)ca>
Cc: Igor Neyman <ineyman(at)perceptron(dot)com>; PostgreSQL General <pgsql-general(at)postgresql(dot)org>
Sent: Tuesday, May 14, 2013 6:16:38 PM
Subject: Re: [GENERAL] Running out of memory on vacuum
Meant to add: I'd definitely be looking at using pgbouncer if you can
to pool locally. Makes a huge difference in how the machine behaves
should things go badly (i.e. it starts to slow down and connections
want to pile up)
On Tue, May 14, 2013 at 4:15 PM, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> wrote:
> On Tue, May 14, 2013 at 11:25 AM, Ioana Danes <ioanasoftware(at)yahoo(dot)ca> wrote:
>> I agree and I will do.
>> Now let me ask you this. How much memory would be decent to put on a server with 2000 users creating transactions every 4-10 seconds (2 to 20 inserts) at peak times? I know more should be considered when taking such a decision, but I would like to know your point of view at first sight...
>
> 2000 users running a transaction every 4 seconds each is 2000/4 tps or
> 500 tps. 500 tps is no big deal for most servers with a decent RAID
> array and battery backed controller or running on a single SSD. Memory
> wise if you need to have a connection open and just waiting for the
> next transaction, you'll need ~6MB free per connection for the basic
> backend, plus extra memory for sorts etc. Let's say 10MB. Double that
> for a fudge factor. Times 2000. That's 4GB just to hold all that state
> in memory. After that you want maint work mem, shared buffers and then
> add all that up and double it so the OS can do a lot of caching. So,
> I'd say look at going to at least 16G. Again, I'd fudge factor that to
> 32G just to be sure.
>
> I have built servers that held open ~1000 connections, most idle but
> persistent, on 8 core 32G machines with 16 drives on a battery-backed
> RAID controller, and they were plenty fast in that situation. 32G
> is pretty darned cheap, assuming your server can hold that much
> memory. If it can hold more great, if it's not too much look at 64G
> and more. How big is your data store? The more of it you can fit in
> kernel cache the better. If you're dealing with a 10G database great,
> if it's 500GB then try to get as much memory as possible up to 512GB
> or so into that machine.
>
> On Tue, May 14, 2013 at 3:32 PM, John R Pierce wrote:
>
>> how many 100s of CPU cores do you have to execute those 1000+ concurrent transactions?
>
> I think you're misreading the OP's post. 2000 clients running a
> transaction every 4 seconds == 500 tps. With an SSD my laptop could do
> that with 16G RAM probably.
--
To understand recursion, one must first understand recursion.
--
Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general