From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Andy Marden" <amarden(at)usa(dot)net>
Cc: pgsql-admin(at)postgresql(dot)org, pgsql-general(at)postgresql(dot)org
Subject: Re: Long update progress
Date: 2002-07-19 14:04:36
Message-ID: 12465.1027087476@sss.pgh.pa.us
Lists: pgsql-admin, pgsql-general
"Andy Marden" <amarden(at)usa(dot)net> writes:
> We have a database batch update process running. It normally takes around
> 6 hours, but this run is dealing with a much larger data set after an error
> correction. It's been running for 6 days now and people are getting twitchy
> that it might not finish. Is there any way (accepting that more preparation
> would, in retrospect, have been better) to tell how far we've got? The
> process iterates round a cursor and updates individual rows; the trouble is
> that it commits only once, at the end.
> The ideal would be to find a way of doing a dirty read against the table
> that is being updated. Then we'd know how many rows had been processed.
A quick and dirty answer is just to watch the physical file for the
table being updated, and see how fast it's growing.
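For instance, assuming the table is called "mytable" and the database "mydb"
(stand-in names, not from Andy's setup), and that you have shell access to
the server's data directory, something along these lines would locate the
file:

    -- on-disk file name of the table (its relfilenode)
    SELECT relfilenode FROM pg_class WHERE relname = 'mytable';

    -- OID of the database, i.e. the subdirectory under $PGDATA/base
    SELECT oid FROM pg_database WHERE datname = 'mydb';

and then, from the shell:

    # re-run this every few minutes to estimate the growth rate
    ls -l $PGDATA/base/<database-oid>/<relfilenode>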
If you're using 7.2, then the contrib/pgstattuple function would let you
get more accurate info (note it will count not-yet-committed tuples as
"dead", which is a tad misleading, but at least it counts 'em).
regards, tom lane