Re: Thousands of schemas and ANALYZE goes out of memory

From: "Hugo <Nabble>" <hugo(dot)tech(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Thousands of schemas and ANALYZE goes out of memory
Date: 2012-10-02 17:38:38
Message-ID: 1349199518994-5726351.post@n5.nabble.com
Lists: pgsql-general

> Why 32 bits? Is that what your hardware is?

The business started in 2005 and we have been using 32 bits since then. We
have several machines, each with a remote replica database (WAL shipping)
configured, and changing this to 64 bits is going to be a lot of work, let
alone the downtime of each server (pg_dump + pg_restore). But we will
probably do this in the future after we finish some priorities.

> That might be the problem. I think with 32 bits, you only have 2GB of
> address space available to any given process, and you just allowed
> shared_buffers to grab all of it.

The address space for 32 bits is 4 GB. We just tried to reach a balance in
the configuration, and it seems to be working (except for the ANALYZE
command when the number of schemas/tables is huge).

Some questions I have:

1) Is there any reason to run the ANALYZE command in a single transaction?
2) Is there any difference running the ANALYZE in the whole database or
running it per schema, table by table?
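
For reference, the table-by-table variant of question 2 can be scripted so
that each ANALYZE gets its own connection (and therefore its own
transaction), which should keep per-process memory bounded. A minimal
sketch, assuming the schema-qualified table names have already been pulled
from pg_class/pg_namespace (the table names and database name below are
hypothetical):

```python
import shlex
import subprocess


def analyze_commands(tables, dbname="mydb"):
    """Build one psql invocation per table, so each ANALYZE runs in a
    separate connection and transaction instead of one big transaction
    over the whole database."""
    cmds = []
    for t in tables:
        sql = "ANALYZE {};".format(t)
        cmds.append(["psql", "-d", dbname, "-c", sql])
    return cmds


if __name__ == "__main__":
    # Hypothetical table list; in practice it could come from e.g.:
    #   SELECT n.nspname || '.' || c.relname
    #   FROM pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace
    #   WHERE c.relkind = 'r';
    for cmd in analyze_commands(["schema1.users", "schema2.orders"]):
        print(" ".join(shlex.quote(p) for p in cmd))
        # subprocess.run(cmd, check=True)  # uncomment to actually run
```

Each psql -c call is its own session, so no single backend has to hold
statistics for thousands of tables at once.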

Thanks for all the help,
Hugo

--
View this message in context: http://postgresql.1045698.n5.nabble.com/Thousands-of-schemas-and-ANALYZE-goes-out-of-memory-tp5726198p5726351.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
