From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: "Tsunakawa, Takayuki" <tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com>
Cc: Srinivas Karthik V <skarthikv(dot)iitb(at)gmail(dot)com>, Don Seiler <don(at)seiler(dot)us>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
Subject: Re: Bulk Insert into PostgreSQL
Date: 2018-07-02 01:39:08
Message-ID: CAH2-Wz=-V=-pO9u4jEtgbSH+y8zpKrzigmpxzh3PMhjnudo3Mg@mail.gmail.com
Lists: pgsql-hackers
On Sun, Jul 1, 2018 at 5:19 PM, Tsunakawa, Takayuki
<tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com> wrote:
> 400 GB / 15 hours = 7.6 MB/s
>
> That looks too slow. I experienced similar slowness. While one of our users was INSERTing (not COPYing) a billion records, they reported that INSERTs slowed down by roughly 10x after about 500 million records had been inserted. Periodic pstack runs on Linux showed that the backend was busy in btree operations. I didn't pursue the cause because of other work, but there may be something here worth improving.
What kind of data was indexed? Was it a bigserial primary key, or
something else?
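
(For illustration only: the kind of schema and load pattern under discussion might look roughly like the sketch below. The table and column names are hypothetical, not taken from the report.)

    -- Hypothetical reproduction of the reported workload: a table whose
    -- bigserial primary key carries an implicit btree index.
    CREATE TABLE bulk_target (
        id      bigserial PRIMARY KEY,  -- btree index maintained on every insert
        payload text
    );

    -- Row-at-a-time INSERTs pay per-row index maintenance and round-trip cost:
    INSERT INTO bulk_target (payload) VALUES ('example row');

    -- COPY loads the same data in bulk and is usually much faster:
    COPY bulk_target (payload) FROM '/path/to/data.csv' WITH (FORMAT csv);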
--
Peter Geoghegan