From: Carlos Henrique Reimer <carlos(dot)reimer(at)opendb(dot)com(dot)br>
To: Vick Khera <vivek(at)khera(dot)org>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Out of memory condition
Date: 2014-12-11 19:05:00
Message-ID: CAJnnue1GeaWX1XuBmP=+bTKgjhNxFFLNfZBOdzG3jMiHHosJag@mail.gmail.com
Lists: pgsql-general
That was exactly what the process was doing, and the out of memory error
happened while one of the merges into set 1 was being executed.
On Thu, Dec 11, 2014 at 4:42 PM, Vick Khera <vivek(at)khera(dot)org> wrote:
>
> On Thu, Dec 11, 2014 at 10:30 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
>> needed to hold relcache entries for all 23000 tables :-(. If so there
>> may not be any easy way around it, except perhaps replicating subsets
>> of the tables. Unless you can boost the memory available to the backend
>>
>
> I'd suggest this. Break up your replication into something like 50 sets of
> 500 tables each, then add one at a time to replication, merging it into the
> main set. Something like this:
>
> create & replicate set 1.
> create & replicate set 2.
> merge 2 into 1.
> create & replicate set 3.
> merge 3 into 1.
>
> Repeat until done. This can be scripted.
>
> Given you got about 50% done before it failed, maybe even 4 sets of 6000
> tables each may work out.
>
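The create/subscribe/merge loop Vick describes can indeed be scripted. A minimal sketch using Slony-I's slonik command language is below; the set IDs, node IDs, batch numbers, and comments are illustrative assumptions, and the real script would add each batch's tables with SET ADD TABLE before subscribing:

```shell
#!/bin/sh
# emit_slonik SET_ID — print the slonik commands that create set SET_ID,
# subscribe it on the replica, and merge it into the main set (set 1).
# Node 1 is assumed to be the origin and node 2 the subscriber.
emit_slonik() {
  set_id=$1
  cat <<EOF
create set (id = $set_id, origin = 1, comment = 'batch $set_id');
-- SET ADD TABLE commands for this batch of tables go here --
subscribe set (id = $set_id, provider = 1, receiver = 2);
merge set (id = 1, add id = $set_id, origin = 1);
EOF
}

# Dry run: print the commands for batches 2..4.
# For a real run, pipe emit_slonik's output into slonik together with the
# usual cluster name and admin conninfo preamble.
for s in 2 3 4; do
  emit_slonik "$s"
done
```

This only prints the commands; keeping the generation separate from execution makes it easy to review each batch before feeding it to slonik.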
--
Reimer
47-3347-1724 47-9183-0547 msn: carlos(dot)reimer(at)opendb(dot)com(dot)br
Next message: Scott Marlowe, 2014-12-11 19:19:50, Re: Out of memory condition
Previous message: Vick Khera, 2014-12-11 18:42:09, Re: Out of memory condition