Re: pg_toast oid limits

From: Natalie Wenz <nataliewenz(at)ebureau(dot)com>
To: pgsql-admin <pgsql-admin(at)postgresql(dot)org>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>, david(dot)g(dot)johnston(at)gmail(dot)com
Subject: Re: pg_toast oid limits
Date: 2016-10-26 22:12:59
Message-ID: 8B647628-89BE-43FE-B538-973BF85A79A9@ebureau.com
Lists: pgsql-admin

Thanks, I'm running that now. (/twiddles thumbs) The disks are pretty much saturated at the moment, so everything is taking a long time.

I'm a little confused about the limit. Is it 4 billion oids for the whole database, or just per table? I keep coming back to this line, from https://wiki.postgresql.org/wiki/TOAST :

"If you exceed 4 billion of these rows across all tables (remember this is a global shared wrapping counter), then the OID counter "wraps" which might cause significant slowdown as you approach the 4B row limit."

We are actually in the process of attempting to create new tables (one for each of these large columns in this large table), and this process may have accelerated our use of oids, if it's a global pool.
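(For what it's worth, one rough way I've seen to gauge toast usage across all tables at once, without counting distinct chunk_ids table by table, is to look at the planner's row estimates for the toast relations themselves. This is only a sketch, and reltuples is an estimate that's refreshed by VACUUM/ANALYZE, not an exact count:

```sql
-- Approximate toast chunk rows per table, largest first.
-- reltuples is a planner estimate, not an exact count of chunk_ids.
SELECT c.relname AS parent_table,
       t.relname AS toast_table,
       t.reltuples::bigint AS approx_chunk_rows
FROM pg_class c
JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE c.reltoastrelid <> 0
ORDER BY t.reltuples DESC;
```

Each toasted value consumes one chunk_id drawn from the shared OID counter, though a single chunk_id can span multiple chunk rows, so this overestimates the number of distinct OIDs consumed.)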

Thanks for your help!
Natalie

> On Oct 26, 2016, at 2:22 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
> Natalie Wenz <nataliewenz(at)ebureau(dot)com> writes:
>> I am seeing some performance issues that I'm trying to track down on a large database. One of the things I'm beginning to suspect is a particularly large table with many columns, about 200 of which (type text) contain large chunks of data. For a given row, maybe 10-30 of those columns contain data, so not all 200 for each row, but the data can still be pretty sizable. There are currently around 750 million records in this table (which is about 22TB in size). I was trying to learn more about toast, and I see some references in the wiki and the hackers list to performance issues when you approach the 4 billion oids for a single table (which, I gather, are used when the data is toasted). Given my rudimentary understanding of how the whole toast thing works, I was wondering if there is a way to see how many oids are used for a table, or another way to know if we're running into toast limits for a single table.
>
> You could do
>
> select reltoastrelid::regclass from pg_class where relname = 'problem_table';
>
> which will give you something like
>
> reltoastrelid
> -------------------------
> pg_toast.pg_toast_78004
>
> and then poke into the contents of that table. Probably
>
> select count(distinct chunk_id) from pg_toast.pg_toast_78004;
>
> would answer your immediate question.
>
> regards, tom lane
