From: Vick Khera <vivek(at)khera(dot)org>
To: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Out of memory condition
Date: 2014-12-11 18:42:09
Message-ID: CALd+dcc5ZCoGPo5JyMy5ys=N0kN5m12xGaz0kzktRo2okZp9Kw@mail.gmail.com
Lists: pgsql-general
On Thu, Dec 11, 2014 at 10:30 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> needed to hold relcache entries for all 23000 tables :-(. If so there
> may not be any easy way around it, except perhaps replicating subsets
> of the tables. Unless you can boost the memory available to the backend
>
I'd suggest this. Break your replication up into something like 50 sets of
500 tables each, then add them to replication one at a time, merging each into
the main set. Something like this:
create & replicate set 1.
create & replicate set 2.
merge 2 into 1.
create & replicate set 3.
merge 3 into 1.
repeat until done. this can be scripted.
Given that you got about 50% of the way through before it failed, even 4 sets
of ~6000 tables each might work out.
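This message doesn't name the replication system, but the create/merge-set
workflow above reads like Slony-I, so here is a rough sketch (not from the
original thread) of a script that generates slonik files for each batch. The
node IDs, set numbering, starting table id, and the tables.txt input are all
placeholders to adjust for your cluster, and in practice you would confirm each
subscription is fully active before merging rather than relying on a single
wait:

#!/usr/bin/env python3
# Sketch only: emit one slonik script per batch of tables, each script
# creating a temporary set, subscribing it, and merging it into set 1.
# Assumes a Slony-I cluster with origin node 1 and subscriber node 2, and
# that set 1 already exists and is subscribed (i.e. the first batch was
# done by hand); tables.txt lists the tables not yet in set 1.

BATCH_SIZE = 500           # tables per temporary set
ORIGIN, SUBSCRIBER = 1, 2  # placeholder node IDs

def read_tables(path="tables.txt"):
    """One fully qualified table name per line, e.g. public.my_table."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def slonik_for_batch(set_id, tables, first_table_id):
    """Build slonik commands: create a temp set, add the batch, subscribe
    it, wait for the copy, then merge it into set 1."""
    lines = [f"create set (id = {set_id}, origin = {ORIGIN}, comment = 'batch {set_id}');"]
    for i, tbl in enumerate(tables, start=first_table_id):
        lines.append(
            f"set add table (set id = {set_id}, origin = {ORIGIN}, "
            f"id = {i}, fully qualified name = '{tbl}');"
        )
    lines.append(
        f"subscribe set (id = {set_id}, provider = {ORIGIN}, "
        f"receiver = {SUBSCRIBER}, forward = yes);"
    )
    # The merge is only allowed once the subscription is fully active;
    # this single wait is a simplification.
    lines.append(f"wait for event (origin = {ORIGIN}, confirmed = all, wait on = {ORIGIN});")
    lines.append(f"merge set (id = 1, add id = {set_id}, origin = {ORIGIN});")
    return "\n".join(lines) + "\n"

def main():
    tables = read_tables()
    next_table_id = 1000  # placeholder start; must not collide with ids already in set 1
    for n, start in enumerate(range(0, len(tables), BATCH_SIZE), start=2):
        batch = tables[start:start + BATCH_SIZE]
        with open(f"batch_{n:03d}.slonik", "w") as out:
            out.write(slonik_for_batch(n, batch, next_table_id))
        next_table_id += len(batch)

if __name__ == "__main__":
    main()

You'd then run the generated batch_*.slonik files in order, checking that each
subscription finished cleanly before moving on to the next.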