From: Marko Tiikkaja <marko(dot)tiikkaja(at)cs(dot)helsinki(dot)fi>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, Daniel Loureiro <daniel(at)termasa(dot)com(dot)br>, Jaime Casanova <jaime(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Csaba Nagy <ncslists(at)googlemail(dot)com>
Subject: Re: DELETE with LIMIT (or my first hack)
Date: 2010-11-30 19:47:24
Message-ID: 4CF554CC.9040708@cs.helsinki.fi
Lists: pgsql-hackers
While reading this thread, I thought of two things we could do if this
feature were implemented:
1. Sort large UPDATEs/DELETEs so they are done in heap order
This is actually a TODO item. With this patch, I imagine it would be
possible to do something like:
DELETE FROM foo USING (...) ORDER BY ctid;
to help this case.
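For instance, as a sketch only (the table, the WHERE condition and the
batch size below are hypothetical, and the syntax assumes the patch
adds ORDER BY and LIMIT to DELETE):

  -- Delete matching rows in physical (heap) order, one batch at a
  -- time, so the table is scanned and dirtied sequentially rather
  -- than at random.
  DELETE FROM foo
  WHERE created < now() - interval '30 days'
  ORDER BY ctid
  LIMIT 10000;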
2. Reducing deadlocks in big UPDATEs/DELETEs
One problem that sometimes occurs when running multiple multi-row
UPDATEs or DELETEs concurrently is that the transactions end up working
on the same rows, but in a different order, and deadlock against each
other. With this patch, one could add an ORDER BY clause to make sure
all of the transactions lock the rows in the same order, so they don't
deadlock.
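As a sketch (the table and column names are hypothetical): if two
sessions both run

  -- Each worker claims a batch of finished rows.  Ordering by the
  -- primary key means both sessions lock rows in the same order, so
  -- at worst one waits for the other instead of deadlocking.
  DELETE FROM job_queue
  WHERE status = 'done'
  ORDER BY job_id
  LIMIT 1000;

the later one simply blocks behind the earlier one on the contested
rows instead of deadlocking.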
Thoughts?
Regards,
Marko Tiikkaja