Re: LONG delete with LOTS of FK's

From: David Kerr <dmk(at)mr-paradox(dot)net>
To: Larry Rosenman <ler(at)lerctr(dot)org>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-general(at)postgresql(dot)org
Subject: Re: LONG delete with LOTS of FK's
Date: 2013-05-16 22:52:25
Message-ID: 20130516225224.GA92863@mr-paradox.net
Lists: pgsql-general

On Fri, May 10, 2013 at 11:01:15AM -0500, Larry Rosenman wrote:
> On 2013-05-10 10:57, Tom Lane wrote:
>> Larry Rosenman <ler(at)lerctr(dot)org> writes:
>>> On 2013-05-10 09:14, Tom Lane wrote:
>>>> ... and verify you get a cheap plan for each referencing table.
>>>
>>> We don't :(
>>
>> Ugh. I bet the problem is that in some of these tables, there are lots
>> and lots of duplicate account ids, such that seqscans look like a good
>> bet when searching for an otherwise-unknown id. You don't see this
>> with a handwritten test for a specific id because then the planner can
>> see it's not any of the common values.
>>
>> 9.2 would fix this for you --- any chance of updating?
>>
>> regards, tom lane
>
> I'll see what we can do. I was looking for a reason, this may be it.
>
> Thanks for all your help.
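For the archives, here's roughly how to check the plan each FK check gets.
The names are made up (substitute your real referencing tables); on pre-9.2
the RI triggers run a prepared statement with a generic plan, which is
approximately what EXPLAIN EXECUTE shows you:

    -- hypothetical referencing table "child_table" with FK column account_id;
    -- this approximates the query the RI trigger runs on a parent delete
    PREPARE fkcheck(int) AS
      SELECT 1 FROM ONLY child_table x WHERE account_id = $1 FOR SHARE OF x;
    EXPLAIN EXECUTE fkcheck(12345);   -- pre-9.2: the generic plan the trigger uses
    DEALLOCATE fkcheck;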

I haven't seen an EXPLAIN for this bad boy, maybe I missed it (even just a
plain EXPLAIN might be useful), but you may be running into a situation where
the planner is trying to materialize or hash two big tables.
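One caveat: a plain EXPLAIN won't show the FK trigger work at all, but
EXPLAIN ANALYZE reports the time spent in each constraint trigger. Something
like this (the table name is a stand-in for your real parent table):

    -- EXPLAIN ANALYZE actually runs the delete, so wrap it in a
    -- transaction you roll back
    BEGIN;
    EXPLAIN ANALYZE DELETE FROM accounts WHERE account_id = 12345;
    ROLLBACK;

The output should include "Trigger for constraint ...: time=... calls=..."
lines, which tell you which FK checks are eating the time.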

I've actually run into that in the past and had some success on PG 9.1 by
running with enable_material=false for some queries.

It might be worth a shot to play with that and with enable_hashagg/enable_hashjoin=false,
as sketched below. (If you get a speedup, it points to some tuning or refactoring that
could happen.)
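Concretely, I mean a session-level experiment along these lines (the DELETE
is again a made-up stand-in for your real statement):

    -- these settings only affect the current session
    SET enable_material = off;
    SET enable_hashjoin = off;
    SET enable_hashagg = off;
    EXPLAIN DELETE FROM accounts WHERE account_id = 12345;
    -- put things back when you're done experimenting
    RESET enable_material;
    RESET enable_hashjoin;
    RESET enable_hashagg;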

Dave
