From: | Asmir Mustafic <goetas(at)lignano(dot)it> |
---|---|
To: | pgsql-performance(at)postgresql(dot)org |
Subject: | 700K Inserts in transaction |
Date: | 2013-02-14 10:28:37 |
Message-ID: | 511CBC55.9080804@lignano.it |
Lists: | pgsql-performance |
Hi everybody!
I'm new to the mailing list, and I have a little question.
The tables are:
postalcodes (place_id, code), PK(place_id, code), 600K rows
places (id, name), PK(id), INDEX(name), 3M rows
I have to insert another 600K rows into the postalcodes table, in a single
transaction, omitting duplicates.
The insert query is a prepared statement like this:
INSERT INTO postalcodes (place_id, code)
SELECT places.id, :code
FROM places
LEFT JOIN postalcodes ON (postalcodes.place_id = places.id AND
postalcodes.code = :code)
WHERE places.name = :name AND postalcodes.place_id IS NULL
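The anti-join pattern in the query above (LEFT JOIN plus an IS NULL filter to skip existing rows) can be sketched end-to-end. This is a minimal illustration using Python's sqlite3 as a stand-in for PostgreSQL; the table and column names follow the schema above, but the sample place names and postal codes are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE places (id INTEGER PRIMARY KEY, name TEXT);
    CREATE INDEX idx_places_name ON places (name);
    CREATE TABLE postalcodes (place_id INTEGER, code TEXT,
                              PRIMARY KEY (place_id, code));
    -- illustrative sample data
    INSERT INTO places (id, name) VALUES (1, 'Lignano'), (2, 'Udine');
    INSERT INTO postalcodes (place_id, code) VALUES (1, '33054');
""")

# Same shape as the prepared statement above: the LEFT JOIN finds an
# existing (place_id, code) pair, and the IS NULL test keeps only the
# rows that do NOT already exist, so duplicates are silently skipped.
insert_sql = """
    INSERT INTO postalcodes (place_id, code)
    SELECT places.id, :code
    FROM places
    LEFT JOIN postalcodes
           ON postalcodes.place_id = places.id
          AND postalcodes.code = :code
    WHERE places.name = :name
      AND postalcodes.place_id IS NULL
"""

conn.execute(insert_sql, {"code": "33054", "name": "Lignano"})  # duplicate, skipped
conn.execute(insert_sql, {"code": "33100", "name": "Udine"})    # new, inserted
rows = conn.execute(
    "SELECT place_id, code FROM postalcodes ORDER BY place_id"
).fetchall()
print(rows)  # [(1, '33054'), (2, '33100')]
```

The second execute adds a row while the first is a no-op, which is exactly the "omitting duplicates" behaviour the original statement relies on.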
Inserting rows works well (3000 queries per second), but when I reach
30K executed statements, the insert rate slows down to 500-1000
queries per second.
Doing a commit every 20K inserts, the insert rate remains at 3000 queries
per second.
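The commit-every-20K workaround described above can be sketched as a loop that flushes the transaction after a fixed batch size. Again this is only an illustration with sqlite3 standing in for PostgreSQL (sqlite's INSERT OR IGNORE stands in for the duplicate check); BATCH_SIZE and the generated rows are assumptions, not the original data.

```python
import sqlite3

BATCH_SIZE = 20_000  # commit every 20K inserts, as in the workaround above

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE postalcodes (place_id INTEGER, code TEXT,
                              PRIMARY KEY (place_id, code))
""")

# Synthetic (place_id, code) pairs; 50K rows for the demonstration.
rows = ((i, f"{33000 + i % 100:05d}") for i in range(50_000))

pending = 0
for row in rows:
    conn.execute(
        "INSERT OR IGNORE INTO postalcodes (place_id, code) VALUES (?, ?)",
        row,
    )
    pending += 1
    if pending >= BATCH_SIZE:
        conn.commit()  # flush the batch so no transaction grows unbounded
        pending = 0
conn.commit()          # commit the final partial batch

count = conn.execute("SELECT count(*) FROM postalcodes").fetchone()[0]
print(count)  # 50000
```

Each transaction stays bounded at BATCH_SIZE rows, which is the property the reported steady 3000 queries/second depends on.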
Is there a limit on the number of inserts in a transaction?