| From: | Tommy Gildseth <tommy(at)gildseth(dot)com> |
|---|---|
| To: | ocean(at)ocean(dot)fraknet(dot)org, pgsql-general(at)postgresql(dot)org |
| Subject: | Re: UPDATE on two large datasets is very slow |
| Date: | 2007-04-04 09:34:22 |
| Message-ID: | 4613711E.6060904@gildseth.com |
| Lists: | pgsql-general |
Martijn van Oosterhout wrote:
> On Mon, Apr 02, 2007 at 08:24:46PM -0700, Steve Gerhardt wrote:
>
>> I've been working for the past few weeks on porting a closed source
>> BitTorrent tracker to use PostgreSQL instead of MySQL for storing
>> statistical data, but I've run into a rather large snag. The tracker in
>> question buffers its updates to the database, then makes them all at
>> once, sending anywhere from 1-3 MiB of query data. With MySQL, this is
>> accomplished using the INSERT INTO...ON DUPLICATE KEY UPDATE query,
>> which seems to handle the insert/update very quickly; generally it only
>> takes about a second for the entire set of new data to be merged.
>>
>
> For the record, this is what the SQL MERGE command is for... I don't
> think anyone is working on implementing that though...
>
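(For reference, a standard-SQL MERGE for the tracker's kind of bulk stats upsert would look something like the sketch below; the peers table and its columns are invented for illustration, and PostgreSQL does not implement this syntax.)

```sql
-- Hypothetical stats table: one row per peer, with running totals.
MERGE INTO peers AS p
USING (VALUES (1, 1024, 2048)) AS v(peer_id, uploaded, downloaded)
ON p.peer_id = v.peer_id
WHEN MATCHED THEN
    -- Key already present: fold the new transfer totals in.
    UPDATE SET uploaded   = p.uploaded + v.uploaded,
               downloaded = p.downloaded + v.downloaded
WHEN NOT MATCHED THEN
    -- New peer: insert a fresh row.
    INSERT (peer_id, uploaded, downloaded)
    VALUES (v.peer_id, v.uploaded, v.downloaded);
```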
In the meantime, the UPSERT example in the manual might provide a solution to this problem:
http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE
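The pattern shown there is roughly the following (the db table and merge_db function are the manual's own example):

```sql
CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- First try to update the key.
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- Not there, so try to insert the key. If someone else
        -- inserts the same key concurrently, we get a unique-key
        -- failure and loop back to retry the UPDATE.
        BEGIN
            INSERT INTO db(a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Do nothing; loop to try the UPDATE again.
        END;
    END LOOP;
END;
$$
LANGUAGE plpgsql;

-- First call inserts, second call updates the same key:
SELECT merge_db(1, 'david');
SELECT merge_db(1, 'dennis');
```

The retry loop handles the race where another session inserts the same key between the UPDATE and the INSERT, which is roughly what MySQL's INSERT ... ON DUPLICATE KEY UPDATE gives you in a single statement.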
--
Tommy