From: | "Simon Riggs" <simon(at)2ndquadrant(dot)com> |
---|---|
To: | "Stephen Frost" <sfrost(at)snowman(dot)net>, "Markus Schaber" <schabios(at)logi-track(dot)com> |
Cc: | "PostgreSQL Performance List" <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: Data Warehouse Reevaluation - MySQL vs Postgres -- |
Date: | 2004-09-14 20:38:49 |
Message-ID: | NOEFLCFHBPDAFHEIPGBOEEHKCEAA.simon@2ndquadrant.com |
Lists: pgsql-performance
> Stephen Frost
> * Markus Schaber (schabios(at)logi-track(dot)com) wrote:
> > Generally, what is the fastest way for doing bulk processing of
> > update-if-primary-key-matches-and-insert-otherwise operations?
>
> This is a very good question, and I haven't seen much of an answer to it
> yet. I'm curious about the answer myself, actually. In the more recent
> SQL specs, from what I understand, this is essentially what the 'MERGE'
> command is for. This was recently added and unfortunately is not yet
> supported in Postgres. Hopefully it will be added soon.
>
Yes, I think it is an important feature for both Data Warehousing (used in
set-operation mode for bulk processing) and OLTP (saves a round-trip to the
database, so faster on single rows also). It's in my top 10 for 2005.
Best Regards,

Simon Riggs