From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Ioana Danes <ioanasoftware(at)yahoo(dot)ca>
Cc: Igor Neyman <ineyman(at)perceptron(dot)com>, PostgreSQL General <pgsql-general(at)postgresql(dot)org>
Subject: Re: Running out of memory on vacuum
Date: 2013-05-14 22:16:38
Message-ID: CAOR=d=1s6gDG6OjnQ=-MpW4A4DAR89RsvsuPNyW+JTwGWtq1=w@mail.gmail.com
Lists: pgsql-general

Meant to add: I'd definitely look at using pgbouncer if you can, to pool
connections locally. It makes a huge difference in how the machine
behaves when things go badly (i.e., load starts to climb and connections
pile up).
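
Something like this in pgbouncer.ini is the usual shape (a minimal
sketch; the database name, paths, port, and pool size below are
placeholders, not a recommendation):

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 2000
    default_pool_size = 50

With transaction pooling, 2000 client connections share ~50 real
backends, so a pile-up hits the pooler instead of spawning 2000
postgres processes.
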
On Tue, May 14, 2013 at 4:15 PM, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> wrote:
> On Tue, May 14, 2013 at 11:25 AM, Ioana Danes <ioanasoftware(at)yahoo(dot)ca> wrote:
>> I agree, and I will do that.
>> Now let me ask you this: how much memory would you put on a server with 2000 users creating transactions every 4-10 seconds (2 to 20 inserts each) at peak times? I know more should be considered when making such a decision, but I would like your point of view at first sight...
>
> 2000 users running a transaction every 4 seconds each is 2000/4 tps,
> or 500 tps. 500 tps is no big deal for most servers with a decent RAID
> array and battery-backed controller, or running on a single SSD.
> Memory-wise, if you need to keep a connection open and just waiting
> for the next transaction, you'll want ~6MB free per connection for the
> basic backend, plus extra memory for sorts etc. Let's say 10MB,
> doubled to 20MB for a fudge factor. Times 2000, that's 40GB of
> worst-case headroom, though idle backends share most of their pages,
> so in practice it's more like 4GB actually held in memory for all that
> state. After that you want maintenance_work_mem and shared_buffers;
> add all that up and double it so the OS can do a lot of caching. So
> I'd say look at going to at least 16GB. Again, I'd fudge-factor that
> up to 32GB just to be sure.
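>
> To put numbers on that, a starting point for postgresql.conf on a 32GB
> box might look like this (a sketch only; every value here is an
> assumption to adjust against the real workload):
>
>     shared_buffers = 8GB            # ~25% of RAM is a common starting point
>     maintenance_work_mem = 1GB      # bounds vacuum's memory appetite
>     work_mem = 4MB                  # per sort/hash per backend; keep small with 2000 connections
>     effective_cache_size = 24GB     # a hint about OS cache size, not an allocation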
>
> I have built servers that held ~1000 connections open, most idle but
> persistent, on 8-core, 32GB machines with 16 drives on a RAID
> controller with a battery-backed cache, and they were plenty fast in
> that situation. 32GB is pretty darned cheap, assuming your server can
> hold that much memory. If it can hold more, great; if it's not too
> expensive, look at 64GB or more. How big is your data store? The more
> of it you can fit in kernel cache the better. If you're dealing with a
> 10GB database, great; if it's 500GB, then try to get as much memory as
> possible, up to 512GB or so, into that machine.
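>
> If you're not sure how big the data store is, it's quick to check from
> psql (assuming a user that can see all the databases):
>
>     SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
>     FROM pg_database
>     ORDER BY pg_database_size(datname) DESC;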
>
> On Tue, May 14, 2013 at 3:32 PM, John R Pierce wrote:
>
>> how many 100s of CPU cores do you have to execute those 1000+ concurrent transactions?
>
> I think you're misreading the OP's post. 2000 clients running a
> transaction every 4 seconds each == 500 tps. With an SSD and 16GB of
> RAM, my laptop could probably do that.
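>
> A rough way to sanity-check that is pgbench against a scratch database
> (the database name and numbers here are hypothetical, not a benchmark
> recipe):
>
>     createdb pgbench_test
>     pgbench -i -s 100 pgbench_test         # init at scale 100, ~1.5GB of tables
>     pgbench -c 50 -j 4 -T 60 pgbench_test  # 50 clients, 4 threads, 60 seconds
>
> If the reported tps comes back well above 500 on hardware like that,
> the workload isn't CPU-bound the way the question implies.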
--
To understand recursion, one must first understand recursion.