From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Simon Riggs <simon(at)2ndQuadrant(dot)com>, Martin Pihlak <martin(dot)pihlak(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: reducing statistics write overhead
Date: 2008-09-06 22:30:37
Message-ID: 20080906223037.GC4051@alvh.no-ip.org
Lists: pgsql-hackers
Tom Lane wrote:
> Alvaro Herrera <alvherre(at)commandprompt(dot)com> writes:
> > - Maybe we oughta have separate files, one for each database? That way
> > we'd reduce unnecessary I/O traffic for both the reader and the writer.
>
> The signaling would become way too complex, I think. Also what do you
> do about shared tables?
They are already stored in a "separate database" (denoted with an
InvalidOid dbid), and autovacuum grabs it separately. I admit I don't
know what regular backends do about it.
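
For reference, here's roughly how autovacuum already treats the shared
catalogs as their own pseudo-database: it fetches the stats entry keyed
by InvalidOid alongside the one for the database it is working on. A
minimal sketch (the wrapper function is made up for illustration;
pgstat_fetch_stat_dbentry() is the existing call):

#include "postgres.h"
#include "pgstat.h"
#include "miscadmin.h"		/* MyDatabaseId */

/*
 * Illustrative helper, not an actual backend function: grab both the
 * shared-catalog entry (keyed by InvalidOid) and the current database's
 * entry, the way autovacuum does when it collects table stats.
 */
static void
fetch_db_and_shared_stats(PgStat_StatDBEntry **dbentry,
						  PgStat_StatDBEntry **shared)
{
	*shared = pgstat_fetch_stat_dbentry(InvalidOid);
	*dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);
}
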
As for signalling, maybe we could implement something like what we do
for the postmaster signal stuff: the requestor stores a dbid in shared
memory and sends a SIGUSR2 to pgstat, or some such. We'd have enough
shmem space for a reasonable number of requests, and pgstat consumes
them from there into local memory (similar to what Andrew proposes for
LISTEN/NOTIFY); each request slot holds the dbid and PID of the
requestor. As soon as the request has been fulfilled, pgstat responds
by <fill in magical mechanism that Martin is about to propose> to that
particular backend.
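
A minimal sketch of what I have in mind; every name here is made up for
illustration, and the real thing would put the array in shared memory
guarded by a spinlock, much like the pmsignal machinery, rather than
using plain globals:

#include <signal.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

typedef unsigned int Oid;

#define MAX_STAT_REQUESTS 64

typedef struct PgStatRequest
{
	Oid		databaseid;		/* database whose stats file is wanted */
	pid_t	backend_pid;	/* requestor, to be told when it's done */
} PgStatRequest;

/* Would live in shared memory; plain globals only for the sketch. */
static PgStatRequest RequestQueue[MAX_STAT_REQUESTS];
static int	RequestCount = 0;
static pid_t pgstat_pid;	/* assume every backend knows the collector's PID */

/* Called by a backend that wants fresh stats for a database. */
static int
pgstat_request_dbstats(Oid dbid)
{
	if (RequestCount >= MAX_STAT_REQUESTS)
		return -1;			/* queue full; caller retries or gives up */

	RequestQueue[RequestCount].databaseid = dbid;
	RequestQueue[RequestCount].backend_pid = getpid();
	RequestCount++;

	/* wake the collector, as with the postmaster-signal mechanism */
	return kill(pgstat_pid, SIGUSR2);
}

/*
 * Called by pgstat when SIGUSR2 arrives: copy the pending requests into
 * local memory and reset the shared queue, then go write the file(s)
 * and answer each requestor.
 */
static int
pgstat_drain_requests(PgStatRequest *local, int maxreq)
{
	int		n = (RequestCount < maxreq) ? RequestCount : maxreq;

	memcpy(local, RequestQueue, n * sizeof(PgStatRequest));
	RequestCount = 0;
	return n;
}
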
--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.