From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: Joel Jacobson <joel(at)trustly(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_stat_*_columns?
Date: 2015-06-29 21:14:34
Message-ID: 5591B53A.2060007@BlueTreble.com
Lists: pgsql-hackers
On 6/26/15 6:09 PM, Joel Jacobson wrote:
> Can't we just use the infrastructure of PostgreSQL to handle the few
> megabytes of data we are talking about here? Why not just store the data
> in a regular table? Why bother with special files and special data
> structures? If it's just a table we want to produce as output, why can't
> we just store it in a regular table, in the pg_catalog schema?
The problem is the update rate. I've never tried measuring it, but I'd
bet the stats collector can end up handling tens of thousands of updates
per second. MVCC would collapse under that kind of load: every counter
bump would be an UPDATE that leaves a dead row version behind, and
vacuum would never keep up with the resulting bloat.
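To put rough numbers on that, here's a hypothetical illustration (the
table and column names are invented; nothing like this exists in the
catalogs):

    -- Hypothetical: if column stats lived in a regular heap table,
    -- every counter bump would be an UPDATE like this:
    UPDATE pg_stat_columns_data        -- invented table name
       SET n_scans = n_scans + 1
     WHERE relid = 16384 AND attnum = 2;

    -- Under MVCC each UPDATE leaves the old row version behind as a
    -- dead tuple, so ~10,000 updates/sec means ~36 million dead row
    -- versions per hour for vacuum to reclaim.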
What might be interesting is setting things up so the collector simply
inserted into history tables every X seconds and a separate process
pruned that data (rough sketch below). The big problem with that is
that I see no easy way for it to also provide access to real-time data
(which is certainly needed sometimes).
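A minimal sketch of what I mean, with all names invented for
illustration (pg_stat_columns() stands in for whatever function would
expose the live counters):

    CREATE TABLE stats_history (
        snapshot_at timestamptz NOT NULL DEFAULT now(),
        relid       oid         NOT NULL,
        attnum      int2        NOT NULL,
        n_scans     bigint      NOT NULL
    );

    -- Every X seconds, snapshot the live counters:
    INSERT INTO stats_history (relid, attnum, n_scans)
    SELECT relid, attnum, n_scans
      FROM pg_stat_columns();          -- hypothetical function

    -- A separate process prunes old snapshots:
    DELETE FROM stats_history
     WHERE snapshot_at < now() - interval '1 day';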
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com