From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Joel Jacobson <joel(at)trustly(dot)com>
Cc: Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_stat_*_columns?
Date: 2015-07-06 13:35:26
Message-ID: 25207.1436189726@sss.pgh.pa.us
Lists: pgsql-hackers
Joel Jacobson <joel(at)trustly(dot)com> writes:
> On Mon, Jun 29, 2015 at 11:14 PM, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>
> wrote:
>> What might be interesting is setting things up so the collector simply
>> inserted into history tables every X seconds and then had a separate
>> process to prune that data. The big problem with that is I see no way for
>> that to easily allow access to real-time data (which is certainly necessary
>> sometimes)
> I think the idea sounds promising. If near real-time data is required, we
> could just update once every second, which should be often enough for
> everybody.
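
[For concreteness, a minimal sketch of the history-table scheme described
above; the table name, columns, and retention interval here are hypothetical
illustrations, not anything in the tree:

    -- history table the collector would append to every X seconds
    CREATE UNLOGGED TABLE pg_stat_history (
        snapshot_time  timestamptz NOT NULL,
        relid          oid         NOT NULL,
        seq_scan       bigint,
        n_tup_ins      bigint,
        n_tup_upd      bigint,
        n_tup_del      bigint
    );

    -- separate pruning process, run periodically
    DELETE FROM pg_stat_history
    WHERE snapshot_time < now() - interval '1 day';
]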
I'd bet a good lunch that performance will be absolutely disastrous.
Even with unlogged tables, the vacuuming cost would be intolerable. Why
would we insist on pushing the envelope in what's known to be Postgres'
weakest area performance-wise?
> Each backend process could then simply INSERT the stats for each txn that
> committed/rollbacked into an UNLOGGED table,
... and if its transaction failed, how would it do that?
Regular tables are *not* what we want here, either from a semantics or
a performance standpoint.
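
[To illustrate the point about failed transactions: an ordinary INSERT made
inside the very transaction it is trying to account for vanishes when that
transaction aborts, UNLOGGED or not. The txn_stats table below is
hypothetical:

    BEGIN;
    -- do some work, then try to record its stats in the same transaction
    INSERT INTO txn_stats (xid, n_tup_ins)
        VALUES (txid_current(), 42);
    -- the transaction subsequently fails
    ROLLBACK;
    -- txn_stats retains no trace of the aborted transaction's activity
]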
regards, tom lane