Re: Out of memory

From: John R Pierce <pierce(at)hogranch(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Out of memory
Date: 2011-04-05 19:27:27
Message-ID: 4D9B6D1F.9050406@hogranch.com
Lists: pgsql-general

On 04/05/11 2:50 AM, Jeremy Palmer wrote:
> I've been having repeated trouble trying to get a PostgreSQL app to play nicely on Ubuntu. I recently posted a message to this list about an out of memory error and resolved it by reducing the work_mem setting. However, I'm now getting further out of memory errors during the same stage of the plpgsql function mentioned before.
>
> The function itself is run as part of a larger transaction which does the following:
>
> 1/ Maintains 104 tables (15 of them PostGIS tables) by loading and applying incremental table changes. A typical incremental load will maintain about 10,000 rows.
>
> 2/ When each one of these tables is updated, an after trigger is fired that maintains an associated table revision table.
>
> 3/ After all of the tables are maintained, a plpgsql function is called to build/maintain a set of de-normalised tables, totalling about 20GB. Each of these tables is compared against its previous revision to determine the row changes. It's in this function that the out of memory exception occurs.

a few random questions...

Does that all really have to be a single transaction?

Do you really need to use triggers for your revision tracking, or could
you rely on your daily update cycle to set the revision information manually?
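
For instance, the daily batch could stamp one revision row per table at
the end of the load, instead of firing a trigger on every row change.
A minimal sketch (the table and column names here are made up, not yours):

CREATE TABLE table_revision (
    table_name  text,
    revision_id bigint,
    applied_at  timestamptz DEFAULT now()
);

-- at the end of the daily batch, record the revision once per table:
INSERT INTO table_revision (table_name, revision_id)
VALUES ('parcels', 12345), ('owners', 12345);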

Is it really necessary to generate massive denormalized tables, rather
than using views to join the data?
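
As a sketch, assuming hypothetical tables parcels(parcel_id, geom) and
owners(parcel_id, owner_name), a view computes the joined result at query
time rather than materialising 20GB of derived rows that then have to be
diffed:

CREATE VIEW parcel_summary AS
SELECT p.parcel_id, p.geom, o.owner_name
FROM parcels p
JOIN owners o ON o.parcel_id = p.parcel_id;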

> shared_buffers = 512MB
> maintenance_work_mem = 512MB
> temp_buffers = 256MB
> work_mem = 1MB
> wal_buffers = 16MB
> effective_cache_size = 4094MB
>
> The size of the database is 350GB. The typical number of users connected to the database is 1 or 2. This database is used for loading external data, managing revision table information, and generating and outputting de-normalised datasets, so it does not have a high number of transactions running. Typically there is one large transaction per day.

with only 1-2 connections, you certainly could increase work_mem.
Alternatively, that single giant transaction could set a larger work_mem
for itself, which would apply only to it. Personally, given your 8GB
system and what you've described, I think I'd set the tuning parameters
something like...

shared_buffers = 1GB
maintenance_work_mem = 128MB
temp_buffers = 64MB
work_mem = 16MB
wal_buffers = 16MB
effective_cache_size = 4094MB

adjust effective_cache_size to somewhat less than the 'cached' value
shown in `free -m` after your system has been running for a while.
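
and if only that one big transaction needs the extra work_mem, you can
scope the setting to it with SET LOCAL instead of raising the global
value (256MB here is just an example figure):

BEGIN;
SET LOCAL work_mem = '256MB';  -- reverts automatically at COMMIT/ROLLBACK
-- ... run the large maintenance functions here ...
COMMIT;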
