From: "Drouvot, Bertrand" <bdrouvot(at)amazon(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>, Greg Stark <stark(at)mit(dot)edu>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Melanie Plageman <melanieplageman(at)gmail(dot)com>, Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: shared-memory based stats collector - v70
Date: 2022-08-18 06:39:02
Message-ID: 394d309c-ece2-75b4-9959-9f4537001359@amazon.com
Lists: pgsql-hackers
Hi,
On 8/18/22 1:30 AM, Andres Freund wrote:
> Hi,
>
> On 2022-08-17 15:46:42 -0400, Greg Stark wrote:
>> Isn't there also a local hash table used to find the entries to reduce
>> traffic on the shared hash table? Even if you don't take a snapshot
>> does it get entered there? There are definitely still parts of this
>> I'm working on a pretty vague understanding of :/
> Yes, there is. But it's more about code that generates stats, rather than
> reporting functions. While there's backend local pending stats we need to have
> a refcount on the shared stats item so that the stats item can't be dropped
> and then revived when those local stats are flushed.
What do you think about something along these lines, for the reporting
part only?
Datum
pgstat_fetch_all_tables_stats(PG_FUNCTION_ARGS)
{
    int         dbid = PG_ARGISNULL(0) ? -1 : (int) PG_GETARG_OID(0);
    ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
    dshash_seq_status hstat;
    PgStatShared_HashEntry *p;
    PgStatShared_Common *stats_data;

    /* Only members of pg_read_all_stats can use this function */
    if (!has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS))
    {
        aclcheck_error(ACLCHECK_NO_PRIV, OBJECT_FUNCTION,
                       "pgstat_fetch_all_tables_stats");
    }

    pgstat_assert_is_up();

    SetSingleFuncCall(fcinfo, 0);

    /* Walk the shared hash directly and emit one row per relation entry */
    dshash_seq_init(&hstat, pgStatLocal.shared_hash, false);
    while ((p = dshash_seq_next(&hstat)) != NULL)
    {
        Datum       values[PG_STAT_GET_ALL_TABLES_STATS_COLS];
        bool        nulls[PG_STAT_GET_ALL_TABLES_STATS_COLS];
        PgStat_StatTabEntry *tabentry = NULL;

        MemSet(values, 0, sizeof(values));
        MemSet(nulls, false, sizeof(nulls));

        /* If looking for a specific dbid, ignore all the others */
        if (dbid != -1 && p->key.dboid != (Oid) dbid)
            continue;

        /* If the entry is not of kind relation then ignore it */
        if (p->key.kind != PGSTAT_KIND_RELATION)
            continue;

        /* If the entry has been dropped then ignore it */
        if (p->dropped)
            continue;

        stats_data = dsa_get_address(pgStatLocal.dsa, p->body);
        LWLockAcquire(&stats_data->lock, LW_SHARED);

        tabentry = pgstat_get_entry_data(p->key.kind, stats_data);

        values[0] = ObjectIdGetDatum(p->key.dboid);
        values[1] = ObjectIdGetDatum(p->key.objoid);
        values[2] = Int64GetDatum(tabentry->tuples_inserted);
        values[3] = Int64GetDatum(tabentry->tuples_updated);
        values[4] = Int64GetDatum(tabentry->tuples_deleted);
        ...

        tuplestore_putvalues(rsinfo->setResult, rsinfo->setDesc,
                             values, nulls);

        LWLockRelease(&stats_data->lock);
    }
    dshash_seq_term(&hstat);

    return (Datum) 0;
}
I also tried to make use of pgstat_get_entry_ref(), but I hit a failed
assertion: pgstat_get_entry_ref() -> dshash_find() ->
ASSERT_NO_PARTITION_LOCKS_HELD_BY_ME(hash_table), due to the partition
lock still held by dshash_seq_next().
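
To make the failure mode concrete, here is a minimal sketch of the pattern
that trips the assertion (illustrative only, not the exact code I tried, and
assuming pgstat_get_entry_ref()'s (kind, dboid, objoid, create, created_entry)
signature):

    dshash_seq_init(&hstat, pgStatLocal.shared_hash, false);
    while ((p = dshash_seq_next(&hstat)) != NULL)   /* holds a partition lock */
    {
        PgStat_EntryRef *entry_ref;

        /*
         * pgstat_get_entry_ref() goes through dshash_find(), which asserts
         * that the caller holds no partition lock, so this trips
         * ASSERT_NO_PARTITION_LOCKS_HELD_BY_ME() while the sequential scan
         * is still in progress.
         */
        entry_ref = pgstat_get_entry_ref(p->key.kind, p->key.dboid, p->key.objoid,
                                         false, NULL);
        ...
    }
    dshash_seq_term(&hstat);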
Regards,
--
Bertrand Drouvot
Amazon Web Services: https://aws.amazon.com