From: Dave Cramer <davecramer(at)postgres(dot)rocks>
To: Niels Jespersen <NJN(at)dst(dot)dk>
Cc: prachi surangalikar <surangalikarprachi100(at)gmail(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Insertion time is very high for inserting data in postgres
Date: 2021-02-10 12:12:14
Message-ID: CADK3HHL7gL+gZj-1Y28O8GRM_v1EPhq3RkrbU01-5aPtPdFLug@mail.gmail.com
Lists: pgsql-general
On Wed, 10 Feb 2021 at 06:11, Niels Jespersen <NJN(at)dst(dot)dk> wrote:
> >From: prachi surangalikar <surangalikarprachi100(at)gmail(dot)com>
> >
> >Hello Team,
> >
> >Greetings!
> >
> >We are using Postgres 12.2.1 to fetch per-minute data for about 25
> >machines, run in parallel via a single thread in Python.
> >
> >But suddenly the insertion time has increased dramatically, to about
> >30 seconds for one machine.
> >
> >We are in serious trouble, as the data fetching is becoming slow.
>
Before anyone can help you, you will have to provide much more information:
the schema, the data you are inserting, the size of the machine, the
configuration settings, etc.

Dave
> >If anyone could help us solve this problem, it would be of great help
> >to us.
>
> Get your data into an in-memory text structure (e.g. Python's io.StringIO)
> and then use copy:
> https://www.psycopg.org/docs/usage.html#using-copy-to-and-copy-from
>
> This is THE way to do high-performance inserts with Postgres.
>
> Regards, Niels Jespersen
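Niels's suggestion above can be sketched as follows. The table name, column names, and connection string are illustrative assumptions, not details from the thread; the row-serialization helper is shown separately so it can run without a database.

```python
import io


def rows_to_copy_buffer(rows):
    """Serialize (machine_id, ts, value) rows into a tab-separated
    in-memory text buffer that psycopg2's copy_from() can read."""
    buf = io.StringIO()
    for machine_id, ts, value in rows:
        buf.write(f"{machine_id}\t{ts}\t{value}\n")
    buf.seek(0)  # rewind so COPY reads from the start
    return buf


# Against a live database (dbname, table, and columns are placeholders):
#
# import psycopg2
# conn = psycopg2.connect("dbname=metrics")
# with conn, conn.cursor() as cur:
#     cur.copy_from(rows_to_copy_buffer(rows), "machine_readings",
#                   columns=("machine_id", "ts", "value"))
```

A single COPY of a minute's worth of rows replaces many round-trip INSERT statements, which is typically where per-row insert latency comes from.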