Re: Tuning Postgres for single user manipulating large amounts of data

From: tv(at)fuzzy(dot)cz
To: "Reid Thompson" <Reid(dot)Thompson(at)ateb(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Tuning Postgres for single user manipulating large amounts of data
Date: 2010-12-10 13:35:31
Message-ID: 2e6229cad881cc9f69461fd998add072.squirrel@sq.gransy.com
Lists: pgsql-general

> Hi (sorry for the double posting, I thought I'd used the wrong email
> address but both have been posted anyway). As far as the db is concerned
> I'm just reading data and then writing it to a Lucene search index (which
> is outside of the database), but my laptop is just a test machine; I want
> to run the same code on production. Why would work_mem not be safe at 64MB
> if I have 2GB of memory - what do I have to be careful of?
>
> Paul

Well, the problem with work_mem is that it's set 'per operation', so a
single query may actually use several work_mem allocations at the same
time. And it's not just sorting - a query containing a hash join, a hash
aggregate and a sort may consume up to 3x work_mem.
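
For illustration, a hypothetical query along these lines (the table and
column names are made up) could hold three work_mem allocations at once:

    -- join done as a hash join           -> 1x work_mem
    -- GROUP BY done as a hash aggregate  -> 1x work_mem
    -- ORDER BY done as a sort            -> 1x work_mem
    SELECT o.customer_id, count(*) AS n
      FROM orders o
      JOIN customers c ON c.id = o.customer_id
     GROUP BY o.customer_id
     ORDER BY n DESC;

EXPLAIN will show you which of these nodes a particular plan actually uses.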

And if you have a lot of concurrent users running such queries, you can
easily run out of memory - in that case the feared OOM killer steps in
and kills one of the processes (usually the postmaster, which means the
whole database goes belly up). I'm not sure how the OOM killer behaves
on MacOS.
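
Just as a rough back-of-the-envelope example (the session count is made
up): with work_mem = 64MB, 20 sessions each running a query that needs
three allocations works out to

    20 sessions * 3 * 64MB = ~3.8GB

which is well over the 2GB in your laptop, even before shared_buffers
and the OS get their share.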

But since, as you said, there will be a single user running queries on
the database, you can set work_mem pretty high. It depends on the
queries, though - a complicated query may still consume a lot of memory.
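
If you'd rather not raise it globally in postgresql.conf, one option
(just a sketch, the 256MB value is only an example) is to raise it for
the session or transaction doing the big read:

    -- for the current session only
    SET work_mem = '256MB';

    -- or only for one transaction
    BEGIN;
    SET LOCAL work_mem = '256MB';
    -- ... run the big query that feeds the Lucene index here ...
    COMMIT;

That way the conservative default still applies to anything else that
connects to the database.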

Tomas
