From: Marti Raudsepp <marti(at)juffo(dot)org>
To: Robert Klemme <shortcutter(at)googlemail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Postgres for a "data warehouse", 5-10 TB
Date: 2011-09-13 15:13:53
Message-ID: CABRT9RAJvq0bzBnqtEzc=80H7oYPYC4LggKWZOYcDbRRMnskJA@mail.gmail.com
Lists: pgsql-performance

On Tue, Sep 13, 2011 at 00:26, Robert Klemme <shortcutter(at)googlemail(dot)com> wrote:
> In the case of PG this particular example will work:
> 1. TX inserts new PK row
> 2. TX tries to insert same PK row => blocks
> 1. TX commits
> 2. TX fails with PK violation
> 2. TX does the update (if the error is caught)
That goes against the point I was making in my earlier comment. To
implement this error-catching logic, you have to allocate a new
subtransaction (transaction ID) for EVERY ROW you insert. If you're
going to load billions of rows this way, you will invoke the wrath of
the "vacuum freeze" process, which sequentially scans every older
table and rewrites each row it hasn't frozen yet. You'll survive that
if your database is a few GB in size, but at terabyte scale it's
unacceptable; transaction IDs are a scarce resource there. In
addition, the blocking will limit the parallelism you get from
multiple concurrent inserters.
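
For concreteness, here is a rough, untested sketch of what such a
per-row error-catching load tends to look like in PL/pgSQL (the
staging/target tables and column names are invented for this
example). The key point is the implicit savepoint taken on every pass
through the inner block:

-- Sketch only: hypothetical tables staging(id, payload) and
-- target(id primary key, payload), invented for illustration.
CREATE OR REPLACE FUNCTION load_from_staging() RETURNS void AS $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT id, payload FROM staging LOOP
        -- Entering a BEGIN block that has an EXCEPTION clause takes
        -- an implicit savepoint, i.e. allocates a subtransaction ID,
        -- once per row of the loop.
        BEGIN
            INSERT INTO target (id, payload) VALUES (r.id, r.payload);
        EXCEPTION WHEN unique_violation THEN
            UPDATE target SET payload = r.payload WHERE id = r.id;
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

Every one of those subtransactions advances the same XID counter that
the freeze machinery watches; you can keep an eye on how fast it is
moving with: SELECT datname, age(datfrozenxid) FROM pg_database;
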
Regards,
Marti