From: Vitalii Tymchyshyn <tivv00(at)gmail(dot)com>
To: alan <alan(dot)miller3(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: insert
Date: 2011-08-01 08:52:17
Message-ID: 4E366941.9070507@gmail.com
Lists: pgsql-performance
Hello.
Please note that in a multitasking environment you may have problems with
your code. Two connections may both check whether "a" is available, both
get an empty SELECT result, and both try to insert. One will succeed and
the other will fail, provided you have a unique constraint on the category
name (and you'd better have one).
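A minimal sketch of the race, assuming a hypothetical categories(id, name)
lookup table with UNIQUE (name):

    -- Sessions 1 and 2 both run the check first:
    SELECT id FROM categories WHERE name = 'a';   -- both get zero rows
    -- ...so both attempt the insert:
    INSERT INTO categories (name) VALUES ('a');
    -- The first INSERT to commit wins; the second fails with a
    -- unique_violation error on the UNIQUE (name) constraint.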
Please note that SELECT FOR UPDATE won't help you much, since it is a new
record you are looking for, and the SELECT doesn't return (or lock) it. I
use "LOCK TABLE <tableName> IN SHARE ROW EXCLUSIVE MODE" in this case.
But then, if you have multiple lookup dictionaries, you need to enforce a
strict locking order or you will get deadlocks. For my part, I created a
special application-side class to retrieve such values: if I can't find a
value in the main connection with a simple select, I open a new
connection, take the table lock, and check whether the value is there. If
it is not, I add the value and commit. This may produce orphaned
dictionary entries (if the dictionary entry is committed but the main
transaction is rolled back), but that is usually acceptable for
dictionaries. At the same time I don't introduce hard locks into the main
transaction and don't have to worry about deadlocks. A sketch of that
flow is below.
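Under the same assumptions, the statements each connection issues would
be roughly:

    -- Main connection: plain lookup, takes no table lock.
    SELECT id FROM categories WHERE name = 'a';

    -- If not found, a second, dedicated connection does the insert:
    BEGIN;
    LOCK TABLE categories IN SHARE ROW EXCLUSIVE MODE;
    SELECT id FROM categories WHERE name = 'a';   -- re-check under the lock
    INSERT INTO categories (name) VALUES ('a');   -- only if still missing
    COMMIT;   -- committed independently of the main transaction, so the
              -- entry survives even if that transaction rolls back
              -- (the orphan case mentioned above)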
Best regards, Vitalii Tymchyshyn