From: | "Maria L(dot) Wilson" <Maria(dot)L(dot)Wilson-1(at)nasa(dot)gov> |
---|---|
To: | Craig Ringer <ringerc(at)ringerc(dot)id(dot)au> |
Cc: | "Wilson, Maria Louise (LARC-E301)[SCIENCE SYSTEMS AND APPLICATIONS, INC]" <m(dot)l(dot)wilson(at)nasa(dot)gov>, "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org> |
Subject: | Re: constraint triggers |
Date: | 2011-09-29 16:24:13 |
Message-ID: | 4E849BAD.1090405@nasa.gov |
Lists: pgsql-admin
Thanks, Craig - we are doing some testing with some of your ideas
now. Hopefully we can get this solved so that using triggers doesn't
cause so much contention. I also understand that foreign keys can
cause some of these issues. Do you have any thoughts on that?

thanks again, Maria Wilson
On 9/28/11 10:39 PM, Craig Ringer wrote:
> On 09/28/2011 08:54 PM, Maria L. Wilson wrote:
>
>> UPDATE dataset
>> SET gracount = gracount + 1
>> WHERE dataset.inv_id = NEW.inv_id;
> That'll serialize all transactions that touch the same inv_id, so only
> one may run at once. The second and subsequent ones will block waiting
> for an update lock on the `dataset' tuple for `inv_id'.
>
> When you think about it, that's necessary to prevent a wrong result when
> transactions A and then B run this statement, transaction B commits, and
> transaction A rolls back. What's the correct answer?
>
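> To make that concrete, here's a minimal sketch of the blocking as seen
> from two psql sessions (the inv_id value is just an example):
>
>     -- session A
>     BEGIN;
>     UPDATE dataset SET gracount = gracount + 1 WHERE inv_id = 42;
>     -- A now holds a row lock on that dataset row until COMMIT/ROLLBACK
>
>     -- session B, started while A's transaction is still open
>     BEGIN;
>     UPDATE dataset SET gracount = gracount + 1 WHERE inv_id = 42;
>     -- blocks here: B can't know whether to build on A's increment
>     -- until A either commits or rolls back
>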
> To fix this you'll need to change how you maintain your `dataset' table.
> Exactly how depends on your requirements.
>
> You can trade read performance off against write performance by
> INSERTing new rows instead of UPDATEing them, so you do a:
>
> SELECT count(inv_id) FROM dataset WHERE dataset.inv_id = ?
>
> instead of a
>
> SELECT gracount FROM dataset WHERE dataset.inv_id = ?
>
> to retrieve your count.
>
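> A rough sketch of that trade-off, assuming dataset then grows one row
> per increment instead of holding a single counter row per inv_id (the
> literal inv_id value is just an example):
>
>     -- writers append a row; plain INSERTs don't block each other
>     INSERT INTO dataset (inv_id) VALUES (42);
>
>     -- readers pay for the aggregation instead of reading gracount
>     SELECT count(inv_id) FROM dataset WHERE dataset.inv_id = 42;
>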
> You can have both fast reads and fast writes if you accept potentially
> out-of-date `gracount' data, maintaining `gracount' as a materialized
> view that you refresh using LISTEN/NOTIFY *after* a transaction commits.
> It's possible for it to be a little out of date, but writers no longer
> interfere with each other and readers no longer have to re-do the
> counting/aggregation work.
>
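> A rough sketch of that pattern (the inventory table name is only a
> guess based on inv_id, and the channel/function names are made up):
>
>     CREATE OR REPLACE FUNCTION queue_gracount_refresh() RETURNS trigger AS $$
>     BEGIN
>         -- notifications are delivered only after this transaction
>         -- commits, so rolled-back inserts are never counted
>         PERFORM pg_notify('gracount_dirty', NEW.inv_id::text);
>         RETURN NULL;
>     END;
>     $$ LANGUAGE plpgsql;
>
>     CREATE TRIGGER inventory_gracount_notify
>         AFTER INSERT ON inventory
>         FOR EACH ROW EXECUTE PROCEDURE queue_gracount_refresh();
>
> A separate listener connection then does LISTEN gracount_dirty and,
> for each notification it receives, refreshes the cached count in its
> own short transaction, e.g.:
>
>     UPDATE dataset
>        SET gracount = (SELECT count(*) FROM inventory i
>                        WHERE i.inv_id = dataset.inv_id)
>      WHERE inv_id = 42;  -- the inv_id from the notification payload
>
> Readers keep the cheap `SELECT gracount FROM dataset ...' query, at
> the cost of gracount lagging slightly behind the inserts.
>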
> You can live with serializing writes like you currently do in exchange
> for the greater read performance of maintaining counts. This may not be
> so bad once you understand what's happening and can keep transactions
> that trigger this problem short, preventing them from blocking others
> while they do unrelated work.
>
> In the end, this looks like a simple problem but it isn't when you
> consider the possibility of transactions rolling back.
>
>> Our first try to solve this problem has been to convert these triggers
>> into a constraint trigger which allows for DEFERRABLE INITIALLY DEFERRED
>> flags. This, we are finding, is forcing the trigger function to run
>> after the triggering transaction is completed. We believe this will fix
>> our locking problem and hopefully speed up our inserts again.
> That should help, but it's a form of trading timeliness off against
> performance. Queries within the same transaction won't see the updated
> `dataset' values, so if you're relying on them for correct operation
> later in the transaction you could have issues.
>
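> For reference, a deferred constraint trigger along those lines would
> look something like this (table and function names are guesses):
>
>     CREATE CONSTRAINT TRIGGER inventory_gracount_trg
>         AFTER INSERT ON inventory
>         DEFERRABLE INITIALLY DEFERRED
>         FOR EACH ROW
>         EXECUTE PROCEDURE update_gracount();
>
> The trigger function still runs the same UPDATE on dataset, but not
> until commit time, so the row lock on dataset is held only at the very
> end of the transaction instead of from the first insert onwards.
>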
> --
> Craig Ringer