| From: | Samuel Gendler <sgendler(at)ideasculptor(dot)com> | 
|---|---|
| To: | pgsql-performance(at)postgresql(dot)org | 
| Subject: | Re: poor performance when recreating constraints on large tables | 
| Date: | 2011-06-08 19:45:33 | 
| Message-ID: | BANLkTikpLO0Tz07LGK2G5cVXrG0DA1xCMA@mail.gmail.com | 
| Lists: | pgsql-performance | 
On Wed, Jun 8, 2011 at 12:28 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Mon, Jun 6, 2011 at 6:10 PM, Mike Broers <mbroers(at)gmail(dot)com> wrote:
> > Thanks for the suggestion, maintenance_work_mem is set to the default of
> > 16MB on the host that was taking over an hour as well as on the host that
> > was taking less than 10 minutes.  I tried setting it to 1GB on the faster
> > test server and it reduced the time from around 6-7 minutes to about 3:30.
> > This is a good start; if there are any other suggestions, please let me know
> > - is there any query to check estimated time remaining on long running
> > transactions?
> > - is there any query to check estimated time remaining on long running
> > transactions?
>
> Sadly, no.  I suspect that coming up with a good algorithm for that is
> a suitable topic for a PhD thesis.  :-(
>
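For the archives, here's a minimal sketch of the session-level tuning being discussed above. The table, column, and constraint names are made up, and the 1GB setting assumes the box has RAM to spare:

```sql
-- Raise maintenance_work_mem for this session only, then rebuild the
-- constraint.  All object names here are hypothetical.
SET maintenance_work_mem = '1GB';

ALTER TABLE child_table
    ADD CONSTRAINT child_parent_fk
    FOREIGN KEY (parent_id) REFERENCES parent_table (id);

RESET maintenance_work_mem;
```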
As for estimating time remaining: the planner knows how many rows it expects
at each step of the query plan, so it should be theoretically possible to
compute how far along the executor is based on those estimates, shouldn't it?
Combine the percentage complete with the elapsed time and you could get a
reasonably close estimate, at least when the statistics are accurate. Of
course, I have no clue about the internals of the planner and query executor,
which may or may not make such tracking of query execution feasible.
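Just to make the arithmetic concrete, a back-of-envelope sketch with made-up numbers; it assumes the planner's row estimate is accurate and progress is linear, neither of which is guaranteed:

```sql
-- Suppose the plan estimated 10,000,000 rows, 2,500,000 have been
-- processed so far, and 6 minutes have elapsed.  All figures are invented.
SELECT 2500000.0 / 10000000.0 AS fraction_done,                     -- 0.25
       interval '6 minutes' * (10000000.0 / 2500000.0 - 1) AS eta;  -- 18 min
```

The elapsed half is already observable via pg_stat_activity.query_start; it's the rows-processed half that the executor would have to expose.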