From: Ted Toth <txtoth(at)gmail(dot)com>
To: Rob Sargent <robjsargent(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: large numbers of inserts out of memory strategy
Date: 2017-11-28 17:50:18
Message-ID: CAFPpqQGxt7oij-HF1S23uEQHZ=fAngsbe28TvXJQkijKMwwZhw@mail.gmail.com
Lists: pgsql-general
On Tue, Nov 28, 2017 at 11:19 AM, Rob Sargent <robjsargent(at)gmail(dot)com> wrote:
>
>> On Nov 28, 2017, at 10:17 AM, Ted Toth <txtoth(at)gmail(dot)com> wrote:
>>
>> I'm writing a migration utility to move data from non-rdbms data
>> source to a postgres db. Currently I'm generating SQL INSERT
>> statements involving 6 related tables for each 'thing'. With 100k or
>> more 'things' to migrate I'm generating a lot of statements and when I
>> try to import using psql postgres fails with 'out of memory' when
>> running on a Linux VM with 4G of memory. If I break into smaller
>> chunks of, say, ~50K statements then the import succeeds. I can change my
>> migration utility to generate multiple files each with a limited
>> number of INSERTs to get around this issue but maybe there's
>> another/better way?
>>
>> Ted
>>
> what tools / languages are you using?
I'm using python to read binary source files and create the text files
containing the SQL. Then I'm running psql -f <file containing SQL>.
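
The chunking workaround described above (splitting the generated statements into files of ~50K each, which was the size that imported successfully) could be sketched roughly like this. This is a hypothetical illustration, not code from the thread; the function and file-name prefix are made-up placeholders, and each file is wrapped in a single transaction so a failed chunk rolls back cleanly:

```python
CHUNK_SIZE = 50_000  # statements per file, matching the ~50K that worked

def write_chunks(statements, prefix="migration", chunk_size=CHUNK_SIZE):
    """Split an iterable of SQL statements into numbered .sql files,
    chunk_size statements per file, each wrapped in BEGIN/COMMIT."""
    paths = []
    buf = []

    def flush():
        path = f"{prefix}_{len(paths):04d}.sql"
        with open(path, "w") as f:
            # One transaction per file: a failure aborts only that chunk.
            f.write("BEGIN;\n" + "\n".join(buf) + "\nCOMMIT;\n")
        paths.append(path)

    for stmt in statements:
        buf.append(stmt)
        if len(buf) >= chunk_size:
            flush()
            buf = []
    if buf:  # write any remaining partial chunk
        flush()
    return paths
```

Each resulting file can then be fed to `psql -f` in turn, keeping any single run well under the memory ceiling.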