Re: pg_stat_*_columns?

From: Joel Jacobson <joel(at)trustly(dot)com>
To: Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_stat_*_columns?
Date: 2015-07-06 13:13:35
Message-ID: CAASwCXd1fLENdk1WPCCuqYZYoVqzCGR8fPzgDW1xEb61rzX4eg@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jun 29, 2015 at 11:14 PM, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>
wrote:

> What might be interesting is setting things up so the collector simply
> inserted into history tables every X seconds and then had a separate
> process to prune that data. The big problem with that is I see no way for
> that to easily allow access to real-time data (which is certainly necessary
> sometimes)

I think the idea sounds promising. If near real-time data is required, we
could just update once every second, which should be often enough for
everybody.

Each backend process could then simply INSERT the stats for each committed
or rolled-back transaction into an UNLOGGED table. Once a second, the
collector would do a single UPDATE of the collected stats based on the
aggregate of the rows inserted since the previous update, and then delete
the processed rows (naturally in one operation, using DELETE FROM ..
RETURNING *).
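
As a rough sketch of what that could look like at the SQL level (all
table and column names below are made-up placeholders, not anything
that exists today):

-- Hypothetical staging table the backends would INSERT into at
-- commit/rollback time.
CREATE UNLOGGED TABLE stats_staging (
    relid     oid    NOT NULL,
    n_tup_ins bigint NOT NULL DEFAULT 0,
    n_tup_upd bigint NOT NULL DEFAULT 0,
    n_tup_del bigint NOT NULL DEFAULT 0
);

-- Hypothetical table with the aggregated stats maintained by the
-- collector.
CREATE UNLOGGED TABLE stats_collected (
    relid     oid    PRIMARY KEY,
    n_tup_ins bigint NOT NULL DEFAULT 0,
    n_tup_upd bigint NOT NULL DEFAULT 0,
    n_tup_del bigint NOT NULL DEFAULT 0
);

-- Once a second, the collector consumes the staging rows and folds
-- their aggregate into the collected stats, all in one statement.
WITH consumed AS (
    DELETE FROM stats_staging RETURNING *
), summed AS (
    SELECT relid,
           sum(n_tup_ins) AS n_tup_ins,
           sum(n_tup_upd) AS n_tup_upd,
           sum(n_tup_del) AS n_tup_del
    FROM consumed
    GROUP BY relid
)
UPDATE stats_collected c
SET n_tup_ins = c.n_tup_ins + s.n_tup_ins,
    n_tup_upd = c.n_tup_upd + s.n_tup_upd,
    n_tup_del = c.n_tup_del + s.n_tup_del
FROM summed s
WHERE c.relid = s.relid;

(Relations not yet present in stats_collected would of course also need
an INSERT path; left out here for brevity.)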

That way we could get rid of the legacy communication protocol between the
backends and the collector, and instead rely on unlogged tables for the
submission of data.

INSERTing 100,000 rows into an unlogged table takes 70 ms on my laptop, so
it should be fast enough for the tens of thousands of updates per second we
need to handle.
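
For reference, something along these lines in psql is enough to get a
comparable number on one's own machine (the table is just a throwaway
example, and the timing will obviously vary with hardware):

CREATE UNLOGGED TABLE insert_bench (id bigint, val bigint);
\timing on
-- A single multi-row INSERT of 100,000 rows; \timing reports the
-- elapsed time.
INSERT INTO insert_bench
SELECT i, i FROM generate_series(1, 100000) AS i;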
