From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Rod Taylor" <pg(at)rbt(dot)ca>
Cc: <pgsql-hackers(at)postgresql(dot)org>, "Neil Conway" <neilc(at)samurai(dot)com>, "Peter Brant" <Peter(dot)Brant(at)wicourts(dot)gov>
Subject: Re: fsutil ideas
Date: 2006-02-24 17:18:57
Message-ID: 43FEEBA1.EE98.0025.0@wicourts.gov
Lists: pgsql-hackers
>>> On Fri, Feb 24, 2006 at 10:57 am, in message
<1140800266(dot)5092(dot)144(dot)camel(at)home>,
Rod Taylor <pg(at)rbt(dot)ca> wrote:
>
> PostgreSQL seems to deal with out of diskspace situations pretty well
> when it impacts a tablespace (global stuff like WAL or subtransactions
> have issues -- but they grow slowly) as far as only interrupting service
> for the individual actions that ran out.
We haven't used tablespace features yet, as 3 of the 4 databases
running PostgreSQL so far are on Windows. We have run out of space a
couple of times, and PostgreSQL handled it well in that the database
was not corrupted and service resumed once some space was freed. The
messages were not that clear, though -- as I recall, we got a generic
I/O write error rather than a clear statement that the disk was full.
> You may wish to look at funding toggles that can configure the maximum
> memory usage and maximum temporary diskspace (different tablespaces with
> filesystem quotas) on a per user basis similar to the statement_timeout
> limitations in place today.
That wouldn't help, because the vast majority of the work is done
through a middle tier which uses a connection pool shared by all users.
It does take some human review and judgment to determine whether a
query that is running long and/or using a lot of temp table space is
really a problem, as opposed to one of our larger legitimate processes.
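For illustration, that review amounts to eyeballing something like the
following (a sketch, not our actual monitoring script; it assumes the
pg_stat_activity columns of this era -- procpid, usename, query_start,
current_query -- and that idle backends report '<IDLE>'):

    -- Show non-idle backends, longest-running query first; anything
    -- running unusually long gets a human look before we decide
    -- whether it is a problem or a legitimate big job.
    SELECT procpid, usename,
           now() - query_start AS runtime,
           current_query
      FROM pg_stat_activity
     WHERE current_query <> '<IDLE>'
     ORDER BY runtime DESC;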
> I'm curious as to how you monitor for total transaction time length to
> ensure that vacuum is able to do its thing, particularly when the
> transaction is active (not IDLE).
We run a database vacuum nightly and review the output the next day.
(Something will need to be done to automate this with summaries and
exception lists once we have more than a few databases on PostgreSQL;
we can't have a person reviewing 100 of these every day.) We've not had
any nightly vacuum fail to finish, although they did start running a
tad long at one point, until we did some aggressive maintenance.
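Concretely, the nightly job boils down to something like this, run
against each database (a sketch, not our actual script):

    -- Vacuum and analyze every table, emitting per-table detail
    -- (pages, removable and nonremovable tuples) that a person
    -- reviews the next morning.
    VACUUM VERBOSE ANALYZE;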
Our autovacuum is configured with fairly aggressive parameters compared
to the defaults; but even so, only a few small tables with high update
rates normally reach the thresholds. I haven't noticed autovacuum
getting held up on these.
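For reference, the settings involved can be inspected like this (a
sketch; by "aggressive" I mainly mean thresholds and scale factors set
well below the defaults, e.g. autovacuum_vacuum_scale_factor brought
down to something like 0.05):

    -- List the current autovacuum parameters; lower scale factors
    -- and thresholds make autovacuum fire sooner on busy tables.
    SELECT name, setting
      FROM pg_settings
     WHERE name LIKE 'autovacuum%'
     ORDER BY name;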
-Kevin