Re: two memory-consuming postgres processes

From: Alexy Khrabrov <deliverable(at)gmail(dot)com>
To: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: two memory-consuming postgres processes
Date: 2008-05-02 19:38:37
Message-ID: 27A4D379-2553-496D-A96C-102F2A5E62C2@gmail.com
Lists: pgsql-performance


On May 2, 2008, at 12:30 PM, Scott Marlowe wrote:

> On Fri, May 2, 2008 at 1:24 PM, Alexy Khrabrov
> <deliverable(at)gmail(dot)com> wrote:
>> Greetings -- I have an UPDATE query updating a 100 million row table,
>> and allocate enough memory via shared_buffers=1500MB. However, I see
>> two processes in top, the UPDATE process eating about 850 MB and the
>> writer process eating about 750 MB. The box starts paging. Why is
>> the writer taking almost as much space as the UPDATE, and how can I
>> shrink it?
>
> Shared_buffers is NOT the main memory pool for all operations in
> pgsql, it is simply the buffer pool used to hold data being operated
> on.
>
> Things like sorts etc. use other memory and can exhaust your machine.
> However, I'd like to see the output of vmstat 1 or top while this is
> happening.
>
> How much memory does this machine have?

It's a 2GB RAM MacBook. Here's the top for postgres

Processes: 117 total, 2 running, 6 stuck, 109 sleeping... 459 threads    12:34:27
Load Avg: 0.27, 0.24, 0.32  CPU usage: 8.41% user, 11.06% sys, 80.53% idle
SharedLibs: num = 15, resident = 40M code, 2172K data, 3172K linkedit.
MemRegions: num = 20719, resident = 265M + 12M private, 1054M shared.
PhysMem: 354M wired, 1117M active, 551M inactive, 2022M used, 19M free.
VM: 26G + 373M  1176145(160) pageins, 1446482(2) pageouts

  PID COMMAND   %CPU  TIME     #TH  #PRTS #MREGS  RPRVT  RSHRD  RSIZE  VSIZE
51775 postgres  6.8%  2:40.16    1      9     39  1504K   896M  859M+  1562M
51767 postgres  0.0%  0:39.74    1      8     28   752K   896M   752M  1560M

the first is the UPDATE, the second is the writer.

The query is very simple:

netflix=> create index movs_mid_idx on movs(mid);
CREATE INDEX
netflix=> update ratings set offset1=avg-rating from movs where mid=movie_id;

where the table ratings has about 100 million rows and movs has about 20,000.
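For reference, one common way to bound the working set of a bulk update like this is to run it in key-range batches rather than one giant statement. This is only a sketch, not something tested on this data; it reuses the column names from the query above and assumes avg lives in movs and rating in ratings:

```sql
-- Hypothetical batching sketch: update ratings one movie_id slice
-- at a time, so each transaction touches a bounded set of rows.
UPDATE ratings
SET    offset1 = movs.avg - ratings.rating
FROM   movs
WHERE  movs.mid = ratings.movie_id
  AND  ratings.movie_id BETWEEN 1 AND 1000;  -- next batch: 1001..2000, etc.
```

Each batch commits separately, so dirty buffers and dead row versions accumulate per slice instead of across the whole 100M-row table.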

I randomly increased values in postgresql.conf to

shared_buffers = 1500MB
max_fsm_pages = 2000000
max_fsm_relations = 10000

Should I set the background writer parameters somehow to decrease the
RAM consumed by the writer?
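For what it's worth, these are the background-writer knobs in postgresql.conf (8.3-era names; the values shown are the stock defaults, included here only as a sketch). Note they pace how aggressively dirty buffers are written out, not how much memory the writer process maps, since its resident size is largely the shared_buffers pool itself:

```
# postgresql.conf sketch -- background writer settings (PostgreSQL 8.3)
bgwriter_delay = 200ms           # sleep between activity rounds
bgwriter_lru_maxpages = 100      # max buffers written per round
bgwriter_lru_multiplier = 2.0    # scales writes to recent buffer demand
```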

Cheers,
Alexy
