From: Vegard Bønes <vegard(dot)bones(at)met(dot)no>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Large amount of serialization errors in transactions
Date: 2013-05-15 06:15:57
Message-ID: 2005185803.47549.1368598557211.JavaMail.root@imap1b
Lists: pgsql-general
Thanks, I tried increasing max_pred_locks_per_transaction by a factor of 20, and that seems to have helped. By the way, is there any reason, besides memory concerns, not to set max_pred_locks_per_transaction to a very high value?
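(For anyone hitting the same problem, the change boils down to something like the following in postgresql.conf -- the value shown is just 20 times the built-in default -- followed by a server restart, since this setting cannot be changed while the server is running:)

    # postgresql.conf
    max_pred_locks_per_transaction = 1280    # 20 x the default of 64; requires restart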
- Vegard
----- Original Message -----
Fra: "Kevin Grittner" <kgrittn(at)ymail(dot)com>
Til: "Vegard Bønes" <vegard(dot)bones(at)met(dot)no>, pgsql-general(at)postgresql(dot)org
Sendt: 10. mai 2013 20:35:05
Emne: Re: [GENERAL] Large amount of serialization errors in transactions
Vegard Bønes <vegard(dot)bones(at)met(dot)no> wrote:
> I have a problem understanding how transactions with serializable
> isolation level work in PostgreSQL. What exactly may cause a
> serialization error?
A pattern of read-write dependencies among concurrent transactions
which indicates that a serialization anomaly is possible. There is
a possibility of false positives -- mostly because of the coarse
granularity used to track the information, which limits the RAM
needed to track reads and read-write conflicts.
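As an illustration of the kind of pattern that gets flagged (a minimal sketch, assuming a hypothetical doctors(name, on_call) table with two on-call rows): two overlapping transactions that each read the set of on-call doctors and then take a different doctor off call form a read-write dependency cycle, so only one of them is allowed to commit:

    -- session A
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM doctors WHERE on_call;               -- reads both rows
    UPDATE doctors SET on_call = false WHERE name = 'alice';

    -- session B, before A commits
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM doctors WHERE on_call;               -- also reads both rows
    UPDATE doctors SET on_call = false WHERE name = 'bob';
    COMMIT;

    -- session A again
    COMMIT;  -- one of the two transactions fails here (or earlier) with SQLSTATE 40001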
The granularity promotion logic is pretty simple, and we may need
to make it more sophisticated, but I haven't yet seen a real-life
use case where that would help. So far a high serialization
failure rate with serializable transactions is either intrinsic to
the nature of the workload or could be solved by increasing the RAM
used to track locks. If you have a case where a more sophisticated
heuristic would help, I would be interested in more details, but
first let's see whether things can be improved to an acceptable
level with tuning.
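One rough way to see how coarse the tracking has become is to look at the predicate locks while the workload is running; SIReadLock entries with locktype 'page' or 'relation' mean that tuple-level locks have been combined into coarser ones. Something like:

    SELECT locktype, count(*)
      FROM pg_locks
     WHERE mode = 'SIReadLock'
     GROUP BY locktype;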
For details on how the logic works, you can review the examples and
an overview of the technique here:
http://wiki.postgresql.org/wiki/SSI
... which links to the documentation and to a Wiki page that was used
to manage development of the current implementation; that page goes
into more detail about how it works and in turn links to the academic
papers which were the foundation for the development.
> My problem is a system where process one adds data to a database.
> Shortly afterwards, process two reads and possibly modifies the
> same data (keys are not touched). When large amounts of data
> arrive at about the same time, I get loads of serialization
> errors in process two. In a perfect world this should not happen,
> since data that is entered by process one is almost always
> different from the data that at the same time is being read and
> written by process two.
It would be easier to tell what the cause is if you pasted the
exact messages which are getting logged most frequently, with
associated DETAIL.
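(They generally take a form something like the following; the DETAIL line, which varies, is the interesting part:)

    ERROR:  could not serialize access due to read/write dependencies among transactions
    DETAIL:  Reason code: Canceled on identification as a pivot, during write.
    HINT:  The transaction might succeed if retried.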
> I have tried increasing max_pred_locks_per_transaction, but it
> seems to have no effect.
How high have you tried to set it? I have seen some reports of
people needing to set it about 10 times higher than the default to
reduce false positives to a negligible level.
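(The built-in default is 64, so that means a value somewhere around 640 or more; you can check what the server is currently using with:)

    SHOW max_pred_locks_per_transaction;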
> I do retry the transactions, and eventually they succeed, but my
> concern here is the amount of errors I get at certain periods. Am
> I expecting too much of serializable isolation level
> transactions, or is there anything else that I am missing?
Have you reviewed the issues listed under "for optimal performance"
on this page?:
http://www.postgresql.org/docs/current/interactive/transaction-iso.html#XACT-SERIALIZABLE
If you allow large numbers of connections, you may see a large
benefit from using a connection pooler which funnels many logical
client connections down to a much smaller number of database
connections. This tends to help in general, but is especially
important when using serializable transactions.
http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company