From: Alessandro Gagliardi <alessandro(at)path(dot)com>
To: Samuel Gendler <sgendler(at)ideasculptor(dot)com>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Indexes and Primary Keys on Rapidly Growing Tables
Date: 2012-02-22 00:11:04
Message-ID: CAAB3BBK+oTf_25BpHy-dPrYH5Gh0k=sJN=bnqBqh0HdYoTXqSA@mail.gmail.com
Lists: pgsql-performance
True. I implemented the SAVEPOINTs solution across the board. We'll see
what kind of difference it makes. If it's fast enough, I may be able to do
without that.
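[A minimal sketch of the SAVEPOINT-per-insert pattern described above. It uses Python's stdlib sqlite3 driver in place of PostgreSQL purely so it is self-contained; the table and column names are invented for illustration. Note the savepoint is strictly required in PostgreSQL, where any error aborts the whole transaction; SQLite is more forgiving, but the pattern is the same.]

```python
import sqlite3

# Autocommit mode so we can manage the transaction explicitly with BEGIN/COMMIT.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE events (id TEXT PRIMARY KEY, payload TEXT)")

# One batch of rows; "a" appears twice, so one insert will violate the PK.
rows = [("a", "one"), ("b", "two"), ("a", "dup"), ("c", "three")]

conn.execute("BEGIN")
inserted = 0
for row_id, payload in rows:
    conn.execute("SAVEPOINT sp")  # mark a rollback point before each insert
    try:
        conn.execute(
            "INSERT INTO events (id, payload) VALUES (?, ?)",
            (row_id, payload),
        )
        inserted += 1
    except sqlite3.IntegrityError:
        # Undo only the failed insert, not the whole batch.
        conn.execute("ROLLBACK TO SAVEPOINT sp")
    conn.execute("RELEASE SAVEPOINT sp")
conn.execute("COMMIT")  # one commit for the entire batch

print(inserted)  # 3 (the duplicate "a" was skipped)
```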
On Tue, Feb 21, 2012 at 3:53 PM, Samuel Gendler
<sgendler(at)ideasculptor(dot)com> wrote:
>
> On Tue, Feb 21, 2012 at 9:59 AM, Alessandro Gagliardi
> <alessandro(at)path(dot)com> wrote:
>
>> I was thinking about that (as per your presentation last week) but my
>> problem is that when I'm building up a series of inserts, if one of them
>> fails (very likely in this case due to a unique_violation) I have to
>> rollback the entire commit. I asked about this in the novice forum
>> <http://postgresql.1045698.n5.nabble.com/execute-many-for-each-commit-td5494218.html>
>> and was advised to use SAVEPOINTs. That seems a little clunky to me but may
>> be the best way.
>> Would it be realistic to expect this to increase performance by ten-fold?
>>
>>
> If you insert into a different table first and do the bulk insert later, you
> can de-dupe before doing the insertion, eliminating the issue entirely.
>
>
>
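[A hedged sketch of the staging-table approach suggested above: bulk-load raw rows into an unconstrained staging table, then de-dupe with one set-based INSERT ... SELECT. Again, SQLite's stdlib driver stands in for PostgreSQL and all names are invented; MIN(payload) is just one way to pick a single row per key that is valid in both databases.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging (id TEXT, payload TEXT);            -- no constraints
    CREATE TABLE events  (id TEXT PRIMARY KEY, payload TEXT);
""")

# Raw batch with a duplicate key; the staging insert cannot fail on it.
rows = [("a", "one"), ("b", "two"), ("a", "dup"), ("c", "three")]
conn.executemany("INSERT INTO staging VALUES (?, ?)", rows)  # cheap bulk load

# One statement collapses duplicates and fills the constrained table.
conn.execute("""
    INSERT INTO events (id, payload)
    SELECT id, MIN(payload) FROM staging GROUP BY id
""")
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])  # 3
```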