| From: | Martijn van Oosterhout <kleptog(at)svana(dot)org> |
|---|---|
| To: | Steve Gerhardt <ocean(at)ocean(dot)fraknet(dot)org> |
| Cc: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: UPDATE on two large datasets is very slow |
| Date: | 2007-04-04 06:20:43 |
| Message-ID: | 20070404062043.GB22542@svana.org |
| Lists: | pgsql-general |
On Mon, Apr 02, 2007 at 08:24:46PM -0700, Steve Gerhardt wrote:
> I've been working for the past few weeks on porting a closed source
> BitTorrent tracker to use PostgreSQL instead of MySQL for storing
> statistical data, but I've run into a rather large snag. The tracker in
> question buffers its updates to the database, then makes them all at
> once, sending anywhere from 1-3 MiB of query data. With MySQL, this is
> accomplished using the INSERT INTO...ON DUPLICATE KEY UPDATE query,
> which seems to handle the insert/update very quickly; generally it only
> takes about a second for the entire set of new data to be merged.
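(For readers unfamiliar with the MySQL idiom being described, it looks roughly like this; the table and column names here are invented for illustration, not taken from the tracker:)

```sql
-- Hypothetical tracker stats table; names are illustrative only.
-- VALUES(col) refers to the value that would have been inserted
-- for that column, so one statement inserts new peers and updates
-- existing ones in a single round trip.
INSERT INTO peers (peer_id, uploaded, downloaded)
VALUES (1, 4096, 8192),
       (2, 1024, 2048)
ON DUPLICATE KEY UPDATE
    uploaded   = VALUES(uploaded),
    downloaded = VALUES(downloaded);
```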
For the record, this is what the SQL MERGE command is for... I don't
think anyone is working on implementing that though...
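For reference, the standard's MERGE expresses the same insert-or-update in one statement. This is only a sketch of the SQL:2003 syntax, not something the PostgreSQL of this thread can run, and it reuses the invented table and column names from the example above:

```sql
-- SQL-standard MERGE: match incoming rows against the target table,
-- update the ones that exist, insert the ones that don't.
MERGE INTO peers AS p
USING (VALUES (1, 4096, 8192)) AS v(peer_id, uploaded, downloaded)
ON p.peer_id = v.peer_id
WHEN MATCHED THEN
    UPDATE SET uploaded = v.uploaded,
               downloaded = v.downloaded
WHEN NOT MATCHED THEN
    INSERT (peer_id, uploaded, downloaded)
    VALUES (v.peer_id, v.uploaded, v.downloaded);
```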
Have a nice day,
--
Martijn van Oosterhout <kleptog(at)svana(dot)org> http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.