From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Godfrin, Philippe E" <Philippe(dot)Godfrin(at)nov(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Inserts and bad performance
Date: 2021-11-24 19:31:53
Message-ID: 185058.1637782313@sss.pgh.pa.us
Lists: pgsql-general
"Godfrin, Philippe E" <Philippe(dot)Godfrin(at)nov(dot)com> writes:
> I am inserting a large number of rows, 5, 10, 15 million. The python code commits every 5000 inserts. The table has partitioned children.
> At first, when there were a low number of rows inserted, the inserts would run at a good clip - 30-50K inserts per second. Now, after inserting oh say 1.5 billion rows, the insert rate has dropped to around 5000 inserts per second. I dropped the unique index, rebuilt the other indexes, and no change. The instance is 16 vCPU and 64GB RAM.
Can you drop the indexes and not rebuild them till after the bulk load is
done? Once the indexes exceed available RAM, insert performance is going
to fall off a cliff, except maybe for indexes that are receiving purely
sequential inserts (so that only the right end of the index gets touched).
Also see
https://www.postgresql.org/docs/current/populate.html
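[Editor's note: the drop-then-rebuild approach suggested above can be sketched as follows. The table and index names (`events`, `events_uniq_idx`, `events_ts_idx`) are hypothetical stand-ins, since the original thread does not name the poster's schema.]

```sql
-- Drop the secondary indexes before the bulk load
-- (names here are hypothetical placeholders).
DROP INDEX IF EXISTS events_uniq_idx;
DROP INDEX IF EXISTS events_ts_idx;

-- ... run the bulk inserts here, ideally via COPY rather than
-- row-by-row INSERT, per the populate.html guidance ...

-- Rebuild the indexes once the load is done: one sequential
-- build is far cheaper than billions of incremental B-tree
-- insertions scattered across an index that no longer fits in RAM.
CREATE UNIQUE INDEX events_uniq_idx ON events (id);
CREATE INDEX events_ts_idx ON events (ts);
```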
regards, tom lane