
From: "Carlo Stonebanks" <stonec(dot)register(at)sympatico(dot)ca>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Massive table (500M rows) update nightmare
Date: 2010-01-07 17:49:54
Message-ID: hi56qq$1igu$1@news.hub.org
Lists: pgsql-performance

> Got an explain analyze of the delete query?

UPDATE mdx_core.audit_impt
SET source_table = 'mdx_import.'||impt_name
WHERE audit_impt_id >= 319400001 AND audit_impt_id <= 319400010
AND coalesce(source_table, '') = ''

Index Scan using audit_impt_pkey on audit_impt (cost=0.00..92.63 rows=1 width=608) (actual time=0.081..0.244 rows=10 loops=1)
Index Cond: ((audit_impt_id >= 319400001) AND (audit_impt_id <= 319400010))
Filter: ((COALESCE(source_table, ''::character varying))::text = ''::text)
Total runtime: 372.141 ms

Hard to tell how reliable these numbers are, because the caches were likely
already warm for this WHERE clause: SELECT queries covering the same id range
had already been run to check whether the rows actually qualify for the update.
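
For reference, a test query along these lines (the exact statement isn't shown
here, this is only a sketch) would touch the same index and heap pages as the
UPDATE:

-- sketch only; same range and filter as the UPDATE above
SELECT count(*)
FROM mdx_core.audit_impt
WHERE audit_impt_id >= 319400001 AND audit_impt_id <= 319400010
AND coalesce(source_table, '') = '';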

The coalesce may be slowing things down slightly, but it is a necessary evil.
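
If that filter ever became a real cost, one option (not tested here, and the
index name is made up) would be a partial index covering only the rows that
still need updating, e.g.:

-- hypothetical partial index; predicate matches the UPDATE's filter
CREATE INDEX audit_impt_pending_idx
ON mdx_core.audit_impt (audit_impt_id)
WHERE coalesce(source_table, '') = '';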
