| From: | "Matthew T(dot) O'Connor" <matthew(at)zeut(dot)net> |
|---|---|
| To: | shridhar_daithankar(at)persistent(dot)co(dot)in |
| Cc: | pgsql-hackers(at)postgresql(dot)org |
| Subject: | Re: Auto Vacuum Daemon (again...) |
| Date: | 2002-11-28 08:11:33 |
| Message-ID: | 1038471093.3291.49.camel@zeutrh80 |
| Lists: | pgsql-hackers |
On Thu, 2002-11-28 at 01:58, Shridhar Daithankar wrote:
> There are differences in approach here. The reason I prefer polling rather than
> signalling is that, IMO, vacuum should always be a low-priority activity, and as
> such it does not deserve the overhead of signalling.
>
> A simpler way of integrating would be writing a C trigger on the pg_statistics
> table (forgot the exact name). For every insert/update, watch the value and
> trigger the vacuum daemon from a separate thread. (Assuming that you can create
> a trigger on a view.)
>
> But Tom has earlier pointed out that even a couple of lines of trigger code on
> such a table/view would be a huge performance hit in general.
>
> I would still prefer polling. It would serve the need for the foreseeable future.
Well, this is a debate that can probably only be settled after doing some
legwork, but I was envisioning something that just monitored the same
messages that get sent to the stats collector; I would think that would be
pretty lightweight. We could perhaps even extend the stats collector to
fire off the vacuum processes itself, since it already has all the
information we are polling for.
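
For what it's worth, here is a rough sketch of the polling side in C with
libpq. The poll interval, the 1000-tuple threshold, and the database name are
invented knobs, and it naively acts on the running counters in
pg_stat_all_tables rather than on the change since the last pass:

```c
/*
 * Minimal polling sketch (not pg_autovacuum itself): every POLL_INTERVAL
 * seconds, read the stats collector's per-table counters and issue a
 * VACUUM ANALYZE for any table whose updates + deletes exceed a threshold.
 * The interval, threshold, and dbname are placeholder knobs.
 */
#include <stdio.h>
#include <unistd.h>
#include <libpq-fe.h>

#define POLL_INTERVAL 300           /* seconds between polls */

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=mydb");      /* hypothetical DB */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    for (;;)
    {
        /* tables whose churn since the last stats reset crosses the limit */
        PGresult   *res = PQexec(conn,
                                 "SELECT schemaname, relname "
                                 "FROM pg_stat_all_tables "
                                 "WHERE n_tup_upd + n_tup_del > 1000");

        if (PQresultStatus(res) == PGRES_TUPLES_OK)
        {
            int         i;

            for (i = 0; i < PQntuples(res); i++)
            {
                char        cmd[512];
                PGresult   *vres;

                /* real code would quote the identifiers properly */
                snprintf(cmd, sizeof(cmd), "VACUUM ANALYZE %s.%s",
                         PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));
                vres = PQexec(conn, cmd);
                PQclear(vres);
            }
        }
        PQclear(res);
        sleep(POLL_INTERVAL);
    }
}
```

A real daemon would of course remember the counter values it last saw and
act on the delta, rather than re-vacuuming every table whose running total
sits above the threshold.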
> The reason I brought up the issue of multiple processes/connections is
> starvation of a DB.
>
> Say there are two DBs which are seriously hammered. Now if one DB starts
> vacuuming and takes a long time, the other DB just keeps waiting for its turn
> to be vacuumed, and by the time vacuum is triggered, it might already have
> suffered some performance hit.
>
> Of course these things are largely context-dependent and the admin should be
> able to make a better choice, but the app should be able to handle the worst
> situation.
Agreed.
> The other way round is to make AVD vacuum only one database. The DBA can launch
> multiple instances of AVD, one for each database, as he sees fit. That would be
> much simpler.
Interesting thought. I think this boils down to how many knobs we need to put
on this system. It might make sense to, say, allow up to X concurrent vacuums;
a 4-processor system might handle 4 concurrent vacuums very well. I understand
what you are saying about starvation; I was erring on the conservative side by
only allowing one vacuum at a time (also for simplicity of code :-), where the
worst-case scenario is that you "suffer some performance hit", but the hit
would be finite since vacuum will get to it fairly soon.
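
Just to make that knob concrete, here is a rough scheduling sketch (the
database names, the cap, and the interval are all invented): each round visits
every database, so none of them starves, and setting the cap to 1 gives
exactly the conservative one-vacuum-at-a-time behaviour.

```c
/*
 * Sketch of an "allow up to X concurrent vacuums" knob: each round visits
 * every database once and forks a worker per database, never letting more
 * than MAX_CONCURRENT_VACUUMS run at a time.  vacuum_database() stands in
 * for "connect to that DB and vacuum what needs it"; the names, cap, and
 * interval are hypothetical.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

#define MAX_CONCURRENT_VACUUMS 4    /* e.g. one per CPU on a 4-way box */
#define ROUND_INTERVAL         300  /* seconds between rounds */

static void
vacuum_database(const char *dbname)
{
    /* placeholder: connect to dbname and vacuum the tables that need it */
    printf("vacuuming %s (pid %d)\n", dbname, (int) getpid());
    sleep(1);
}

int
main(void)
{
    const char *dbs[] = {"accounts", "orders", "logging"};     /* hypothetical */
    int         ndbs = 3;
    int         running = 0;

    for (;;)
    {
        int         i;

        for (i = 0; i < ndbs; i++)
        {
            pid_t       pid;

            /* block until a slot frees up, so we never exceed the cap */
            while (running >= MAX_CONCURRENT_VACUUMS)
                if (wait(NULL) > 0)
                    running--;

            pid = fork();
            if (pid == 0)
            {
                vacuum_database(dbs[i]);
                _exit(0);
            }
            else if (pid > 0)
                running++;
        }

        /* reap the stragglers, then sleep until the next round */
        while (running > 0)
            if (wait(NULL) > 0)
                running--;
        sleep(ROUND_INTERVAL);
    }
}
```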
> Please send me the code off-list. I will go through it and get back to you by
> early next week (bit busy right now).
Already sent.