From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
Cc: Samuel Sieb <samuel(at)sieb(dot)net>, Jan Wieck <JanWieck(at)Yahoo(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance monitor signal handler
Date: 2001-03-17 19:11:51
Message-ID: 5086.984856311@sss.pgh.pa.us
Lists: pgsql-hackers
Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us> writes:
>>> Even better, have an SQL table updated with the per-table stats
>>> periodically.
>>
>> That will be horribly expensive, if it's a real table.
> But per-table stats aren't something that people will look at often,
> right?  They can sit in the collector's memory for quite a while.  I see
> people wanting to look at per-backend stuff frequently, and that is why
> I thought shared memory should be good, with a global area for aggregate
> stats for all backends.
>> I think you missed the point that somebody made a little while ago
>> about waiting for functions that can return tuple sets. Once we have
>> that, the stats tables can be *virtual* tables, ie tables that are
>> computed on-demand by some function. That will be a lot less overhead
>> than physically updating an actual table.
> Yes, but do we want to keep these stats between postmaster restarts?
> And what about writing them to tables when our storage of table stats
> gets too big?
All those points seem to me to be arguments in *favor* of a
virtual-table approach, not arguments against it.

Or are you confusing the method of collecting stats with the method
of making the collected stats available for use?

			regards, tom lane
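[Editor's note: the virtual-table idea discussed above — stats rows computed on demand by a function instead of being physically written to a real table — can be illustrated outside PostgreSQL. The following Python sketch is purely illustrative (table names, counters, and the `pg_stat_tables` function are invented, not PostgreSQL APIs); it contrasts cheap in-memory counter updates with an on-demand "virtual table" snapshot.]

```python
# Sketch: per-table stats kept in the collector's memory, exposed on demand.
# All names here are illustrative, not PostgreSQL APIs.

# The collector's in-memory counters (cheap to bump on every event).
collector_stats = {
    "orders":    {"seq_scans": 0, "tuples_read": 0},
    "customers": {"seq_scans": 0, "tuples_read": 0},
}

def record_scan(table, tuples):
    """Called on every scan: only updates in-memory counters."""
    s = collector_stats[table]
    s["seq_scans"] += 1
    s["tuples_read"] += tuples

def pg_stat_tables():
    """The 'virtual table': rows are computed on demand from the
    counters, so no physical table is ever written or maintained."""
    for name, s in sorted(collector_stats.items()):
        yield (name, s["seq_scans"], s["tuples_read"])

record_scan("orders", 100)
record_scan("orders", 50)
record_scan("customers", 7)

rows = list(pg_stat_tables())
# rows == [("customers", 1, 7), ("orders", 2, 150)]
```

A real table would instead require a write for every update cycle (plus eventual vacuuming of the dead rows); the generator materializes a snapshot only when someone asks, which is the overhead argument being made here.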