From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Rémi Cura <remi(dot)cura(at)gmail(dot)com>, Merlin Moncure <mmoncure(at)gmail(dot)com>
Cc: John R Pierce <pierce(at)hogranch(dot)com>, PostgreSQL General <pgsql-general(at)postgresql(dot)org>
Subject: Re: pg_advisory_lock problem
Date: 2014-08-11 20:48:44
Message-ID: 1407790124.16787.YahooMailNeo@web122302.mail.ne1.yahoo.com
Lists: pgsql-general

Rémi Cura <remi(dot)cura(at)gmail(dot)com> wrote:
> as you (both?) suggested, it works using an advisory lock taken at
> the beginning of the transaction and released at the end.  This way
> there is no upsert at all if the element is locked?  (I used a
> general advisory lock, but in the same way as a transactional lock.)
This is too vague to comment on.
> The issue in this case is simple: I have to use about 100k
> advisory locks, which is a big memory requirement for my hardware
> :-(
... and that doesn't seem to make any sense. Either you are not
understanding advisory locks or you are doing something very, very
unusual.
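
For comparison, here is roughly what a transaction-scoped advisory
lock pattern normally looks like -- a minimal sketch in Python,
assuming psycopg2; the table name "my_table" and the key 42 are
made-up placeholders, not anything from this thread:

import psycopg2

conn = psycopg2.connect("dbname=test")
with conn:  # commits on success, rolls back on error
    with conn.cursor() as cur:
        # pg_advisory_xact_lock() blocks until the lock for this key
        # is free, and the lock is released automatically at commit
        # or rollback -- there is no explicit unlock, and only the
        # keys actually touched by this transaction are held at once.
        cur.execute("SELECT pg_advisory_xact_lock(%s)", (42,))
        cur.execute("UPDATE my_table SET n = n + 1 WHERE id = %s",
                    (42,))

Because the lock is scoped to the transaction, the number of locks
held at any moment is bounded by what one transaction touches, not
by the total number of elements in the data set.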
> Merlin I'm afraid I don't understand what is "vanilla" LOCK
> TABLE.
See the LOCK TABLE command.
http://www.postgresql.org/docs/current/interactive/sql-lock.html
http://www.postgresql.org/docs/current/interactive/explicit-locking.html
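
For illustration, a transaction that takes a table-level lock up
front might look like this -- a sketch assuming psycopg2, with a
placeholder table name:

import psycopg2

conn = psycopg2.connect("dbname=test")
with conn:  # commits on success, rolls back on error
    with conn.cursor() as cur:
        # The lock is held until commit or rollback.  SHARE ROW
        # EXCLUSIVE conflicts with itself and with row-modifying
        # lock modes, so concurrent upserting transactions
        # serialize here while plain SELECTs still proceed.
        cur.execute("LOCK TABLE my_table IN SHARE ROW EXCLUSIVE MODE")
        cur.execute("UPDATE my_table SET n = n + 1 WHERE id = %s",
                    (42,))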
> I can't really use a lock table because each query upserts
> sequentially into 3 tables, doing lots of computing in between.
Now *that* I understand. :-) It's not an unusual requirement,
but can be a challenge when using snapshot isolation (where writes
don't block reads and reads don't block anything). There are two
main approaches -- introduce blocking to serialize some of the
operations, or use the SERIALIZABLE transaction isolation level to
ensure that the behavior of all concurrent transactions is
consistent with the behavior you would see if they were run one at
a time. The latter approach doesn't introduce any new blocking,
but it can cause transactions to get an ERROR with a SQLSTATE of
40001 at just about any point, so you need to be prepared to
recognize that and retry those transactions from the beginning (not
just the last statement of the transaction), ignoring any data read
during the failed attempt.
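
In client code, that retry loop might look something like this -- a
sketch, not anything from the thread, assuming psycopg2 (the helper
name and table are invented):

import psycopg2

conn = psycopg2.connect("dbname=test")
conn.set_session(isolation_level="SERIALIZABLE")

def run_with_retry(work):
    while True:
        try:
            with conn:  # commits on success, rolls back on error
                with conn.cursor() as cur:
                    return work(cur)
        except psycopg2.Error as e:
            if e.pgcode != "40001":
                raise
            # Serialization failure: the whole transaction was
            # rolled back; loop and retry it from the beginning,
            # discarding anything read in the failed attempt.

# Hypothetical usage:
run_with_retry(lambda cur: cur.execute(
    "UPDATE my_table SET n = n + 1 WHERE id = %s", (42,)))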
You may want to read the entire chapter on concurrency control:
http://www.postgresql.org/docs/current/interactive/mvcc.html
If you are considering using SERIALIZABLE transactions, you should
probably review the examples in the Wiki, to get an idea of how it
behaves in various cases:
http://wiki.postgresql.org/wiki/SSI
> I use parallel queries to compute faster (dividing the load).  I
> guess it would be very slow with about 8 parallel queries with locks.
Well, if you introduce blocking you reduce your parallelism, but if
you use serializable transactions and there are actually a lot of
conflicts you can see poor performance because of the errors
rolling back transactions and the need to retry them from the
start.  The techniques used to implement serializable transactions
in PostgreSQL are basically a refinement of Optimistic Concurrency
Control (OCC), but generally with far fewer retries needed -- the
point being that the system optimistically assumes there will be no
conflict, so concurrency is better, but it must cancel transactions
when that optimism proves unfounded.
To make related changes to multiple tables and maintain coherent
data, you will probably need to do one or the other.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company