From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: buildfarm logging versus embedded nulls
Date: 2010-03-12 22:46:25
Message-ID: 20100312224625.GJ3663@alvh.no-ip.org
Lists: pgsql-hackers
Tom Lane wrote:
> Alvaro Herrera <alvherre(at)commandprompt(dot)com> writes:
> > I also think the autovacuum worker minimum timestamp may be playing
> > games with the retry logic too. Maybe a worker is requesting a new file
> > continuously because pgstat is not able to provide one before the
> > deadline is past, and thus overloading it. I still think that 500ms is
> > too much for a worker, but backing off all the way to 10ms seems too
> > much. Maybe it should just be, say, 100ms.
>
> But we don't advance the deadline within the wait loop, so (in theory)
> a single requestor shouldn't be able to trigger more than one stats file
> update.
Hmm, yeah.
> I wonder though if an autovac worker could make many such
> requests over its lifespan ...
Well, yes, but it will request fresh stats only for the recheck logic
before each table, so there will be one intervening vacuum (or none,
actually, if the table was vacuumed by some other autovac worker;
though given the default naptime of 1 min I find it unlikely that the
regression database will ever see more than one worker).
Since the warning comes from the launcher and not the worker, I wonder
if this is a red herring.
--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support