how to monitor the progress of really large bulk operations?

From: "Mike Sofen" <msofen(at)runbox(dot)com>
To: <pgsql-general(at)postgresql(dot)org>
Subject: how to monitor the progress of really large bulk operations?
Date: 2016-09-27 21:03:08
Message-ID: 00a301d21902$8e93de10$abbb9a30$@runbox.com
Lists: pgsql-general

Hi gang,

On PG 9.5.1 on Linux, I'm running some large ETL operations, migrating data from
a legacy MySQL system into PG, upwards of 250m rows in a single transaction (it's
on a big box). It's always a two-step operation: first, extract the raw MySQL data
and pull it to the target big box into staging tables that match the source;
second, read the landed dataset and transform it into the final formats, linking
to newly generated ids, compressing big subsets into jsonb documents, etc.
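To make the two steps concrete, here is a rough sketch of the kind of statements involved; all table, schema, file, and sequence names are hypothetical, and the jsonb shaping is just one plausible variant (to_jsonb and the jsonb `-` operator both require 9.5):

```sql
-- Step 1 (hypothetical names): land the raw MySQL extract into a
-- staging table that mirrors the source, e.g. via COPY:
COPY staging.customers FROM '/data/extract/customers.csv' WITH (FORMAT csv);

-- Step 2: transform the landed rows into the final shape, generating
-- new ids and collapsing each source row's remaining columns into jsonb:
INSERT INTO final.customers (id, legacy_id, details)
SELECT nextval('final.customers_id_seq'),
       s.customer_id,
       jsonb_agg(to_jsonb(s) - 'customer_id')
FROM   staging.customers s
GROUP  BY s.customer_id;
```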

While I could break it into smaller chunks, that hasn't been necessary, and it
wouldn't eliminate my need: how to view the state of a transaction in flight,
seeing how many rows have been read or inserted (is that even possible for a
transaction in flight?), memory allocations across the various PG processes,
etc.
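For context, what a second session can see on 9.5 is limited; a minimal sketch (the sequence name below is hypothetical):

```sql
-- pg_stat_activity shows each backend's state, current query text, and
-- transaction start time, but no row counts:
SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE state <> 'idle';

-- One partial workaround: sequences are non-transactional, so if the
-- target table's ids come from a sequence, its last_value advances as
-- rows are inserted, even inside an uncommitted transaction:
SELECT last_value FROM final.customers_id_seq;
```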

Possible or a hallucination?

Mike Sofen (Synthetic Genomics)
