| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us> |
| Cc: | Samuel Sieb <samuel(at)sieb(dot)net>, Jan Wieck <JanWieck(at)Yahoo(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: Performance monitor signal handler |
| Date: | 2001-03-17 17:38:36 |
| Message-ID: | 1482.984850716@sss.pgh.pa.us |
| Lists: | pgsql-hackers |
Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us> writes:
> The only open issue is per-table stuff, and I would like to see some
> circular buffer implemented to handle that, with a collection process
> that has access to shared memory.
That will get us into locking/contention issues. OTOH, frequent trips
to the kernel to send stats messages --- regardless of the transport
mechanism chosen --- don't seem all that cheap either.
> Even better, have an SQL table updated with the per-table stats
> periodically.
That will be horribly expensive, if it's a real table.
I think you missed the point that somebody made a little while ago
about waiting for functions that can return tuple sets. Once we have
that, the stats tables can be *virtual* tables, ie tables that are
computed on-demand by some function. That will be a lot less overhead
than physically updating an actual table.
regards, tom lane