From: Olivier Gautherot <ogautherot(at)gautherot(dot)net>
To: Sreejith P <sreejith(at)lifetrenz(dot)com>
Cc: Rohit Rajput <rht(dot)rajput(at)yahoo(dot)com>, "pgsql-admin(at)lists(dot)postgresql(dot)org" <pgsql-admin(at)lists(dot)postgresql(dot)org>
Subject: Re: Insert 1 million data
Date: 2020-12-29 12:33:27
Message-ID: CAJ7S9TX0smQF6_hNf3EbcPGGeKYQpHwdUmyXsnFo7hhASiq3CA@mail.gmail.com
Lists: pgsql-admin
Hi Sreejith,
On Tue, Dec 29, 2020 at 10:56 AM Sreejith P <sreejith(at)lifetrenz(dot)com> wrote:
> Thanks Rohit.
>
> After upgrading the volume, we are getting the following error, almost the
> same as the previous one.
>
> We have increased the backup volume and run the job again. When I reach
> 900 thousand records, I get an almost identical error again.
>
> - Do I need to turn off autovacuum?
> - Shall I increase maintenance_work_mem?
>
If you're tight on space, my recommendation would be to run the inserts in
small batches (say 10,000 at a time). Don't turn off autovacuum, ever :-)
That being said, if you're suffering this way while creating your database,
my inclination would be to move it, along with its logs, to a disk with more
space. Your server has no headroom to scale and you'll hit more dramatic
failures very quickly.
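The batching advice above can be sketched in Python. This is a minimal illustration only: the `events(id, payload)` table, its column list, and the psycopg2-style connection object are assumptions for the example, not details from this thread. The point is that each batch is committed separately, so no single transaction (and its WAL footprint) grows without bound.

```python
# Sketch: insert rows in chunks of 10,000, committing after each chunk.
# The connection and target table are hypothetical placeholders.

def batches(rows, size=10_000):
    """Yield successive chunks of at most `size` rows."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

def insert_in_batches(conn, rows, size=10_000):
    # Assumed table: events(id, payload) -- adjust to your schema.
    with conn.cursor() as cur:
        for chunk in batches(rows, size):
            cur.executemany(
                "INSERT INTO events (id, payload) VALUES (%s, %s)",
                chunk,
            )
            conn.commit()  # commit per batch to cap transaction size
```

With 25,000 rows and a batch size of 10,000, `batches()` yields chunks of 10,000, 10,000, and 5,000 rows, so the largest transaction stays bounded regardless of the total row count.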
My two cents' worth...
--
Olivier Gautherot