From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, David Kensiski <David(at)kensiski(dot)org>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: XID wraparound with huge pg_largeobject
Date: 2015-12-02 23:20:40
Message-ID: 565F7CC8.1070204@BlueTreble.com
Lists: pgsql-general
On 12/2/15 11:18 AM, Jeff Janes wrote:
> Is there a lot of free space in the pg_largeobject table (i.e. recently
> ran vacuumlo)? I wonder if it wasn't doing a very slow backwards
> scan over the table in order to truncate away unused space. The
> problem is that the backwards scan might not trigger the kernel's
> read-ahead code, so every page is read in as random I/O rather than
> sequential I/O. This can look a lot like doing nothing, depending on
> what monitoring tools you use: the disk is always busy, but just in a
> horribly inefficient way.
>
> Assuming you are on minor release 9.1.10 or later, if this is the case
> you should be able to just have another session do a `lock TABLE
> pg_largeobject in access share mode;` and hold the lock for a while.
> This will cause the vacuum to abandon the truncation scan and finish
> up the accounting for the freezing. You can then worry about
> finishing up the truncation once the wrap-around danger is over.
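(For anyone following along, that workaround would look roughly like this in a second session; the LOCK statement is straight from Jeff's suggestion, the rest is just framing.)

BEGIN;
LOCK TABLE pg_largeobject IN ACCESS SHARE MODE;
-- keep this transaction open long enough for the running vacuum to give
-- up the truncation scan, then:
COMMIT;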
Another option would be to manually read the heap files in at the OS
level so they're at least in the kernel cache. Looking at the "max()" of
the ctid column in the table would give you an idea of what's going on
here. (I put max() in quotes because there aren't real comparison
operators on ctid. I suspect the easiest way to simulate it would be to
create a composite type (block int, lp smallint), cast ctid to that, and
then do max(block*2^16+lp).)
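One rough way to simulate that without defining a composite type is to go through a text -> point cast to pull the block number and line pointer out of ctid. This is just a sketch, not tested against your data; the 65536 multiplier is the block*2^16+lp idea above:

-- Sketch: ctid::text looks like '(block,offset)', which casts to a point,
-- so the block number and line pointer can be extracted and combined.
SELECT max(block * 65536 + lp) AS approx_max_ctid,
       max(block)              AS max_block
FROM (SELECT ((ctid::text::point)[0])::bigint AS block,
             ((ctid::text::point)[1])::int    AS lp
      FROM pg_largeobject) AS t;

Note that this seq-scans the whole table, so it isn't free on a relation this size.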
I would *definitely* at least start a vacuum freeze on the table. You
really don't want to get backed into a corner on this.
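Concretely, something like this (VERBOSE is optional, it just lets you watch the scan; pg_largeobject is a system catalog, so you'll need to run it as a superuser):

VACUUM (FREEZE, VERBOSE) pg_largeobject;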
Something else to keep in mind is that a full-scan vacuum (which is what
this needs to be) will wait on a page if any other process has a pin on
it (and, by doing so, block anyone else from getting a pin on that page
until it's done). If you have other processes hitting this table
heavily, that could cause a problem. Unfortunately, I don't think this
waiting shows up in pg_locks, so it's hard to diagnose. (BTW, I think
the logic to optionally skip that lock was only added in 9.2; on older
versions *any* vacuum you run could run into this problem.)
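You can at least keep an eye on the vacuum session itself from another connection, e.g. with something like the query below (column names assume 9.2 or later; note that "waiting" only reflects heavyweight lock waits, so a buffer-pin wait will still show as false):

SELECT pid, waiting, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE query ILIKE 'vacuum%';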
You might want to watch the talk I gave at pgCon about the details of
how vacuum works. Note that's all based on 9.4 though, so YMMV.
http://www.pgcon.org/2015/schedule/events/829.en.html
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com