From: Greg Spiegelberg <gspiegelberg(at)gmail(dot)com>
To: pgsql-performance <pgsql-performance(at)postgresql(dot)org>, Craig James <cjames(at)emolecules(dot)com>
Subject: Millions of relations (from Maximum number of sequences that can be created)
Date: 2012-05-25 15:52:22
Message-ID: CAEtnbpV1Q0=e=OwG09L8YrMM3JJXG+dkCjwgLQZyBoeJczRHsQ@mail.gmail.com
Lists: pgsql-performance
On Fri, May 25, 2012 at 9:04 AM, Craig James <cjames(at)emolecules(dot)com> wrote:
> On Fri, May 25, 2012 at 4:58 AM, Greg Spiegelberg <gspiegelberg(at)gmail(dot)com>
> wrote:
>
>> On Sun, May 13, 2012 at 10:01 AM, Craig James <cjames(at)emolecules(dot)com>
>> wrote:
>>
>>>
>>> On Sun, May 13, 2012 at 1:12 AM, Віталій Тимчишин <tivv00(at)gmail(dot)com>
>>> wrote:
>>>
>>>>
>>>> The sequences AFAIK are accounted as relations. A large list of relations
>>>> may slow down different system utilities like vacuuming (or may not, depending
>>>> on queries and indexes on pg_class).
>>>>
>>>
>>> Not "may slow down." Change that to "will slow down and possibly
>>> corrupt" your system.
>>>
>>> In my experience (PG 8.4.x), the system can handle in the neighborhood
>>> of 100,000 relations pretty well. Somewhere over 1,000,000 relations, the
>>> system becomes unusable. It's not that it stops working -- day-to-day
>>> operations such as querying your tables and running your applications
>>> continue to work. But system operations that have to scan for table
>>> information seem to freeze (maybe they run out of memory, or are
>>> encountering an O(N^2) operation and simply cease to complete).
>>>
>>
>> Glad I found this thread.
>>
>> Is this 1M relation mark for the whole database cluster or just for a
>> single database within the cluster?
>>
>
> I don't know. When I discovered this, our system only had a few dozen
> databases, and I never conducted any experiments. We had to write our own
> version of pg_dump to get the data out of the damaged system, and then
> reload from scratch. And it's not a "hard" number. Even at a million
> relations, things work ... they just bog down dramatically. By the time I
> got to 5 million relations (a rogue script was creating 50,000 tables per
> day and not cleaning up), the system was effectively unusable.
>
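For the record, sequences really are entries in pg_class alongside tables and
indexes (relkind 'S'), so they count toward these totals. A minimal sketch for
checking the breakdown, run from psql in the database of interest:

  -- Count pg_class entries by relation kind in the current database;
  -- sequences show up as relkind 'S', tables as 'r', indexes as 'i'.
  SELECT relkind, count(*) AS total
  FROM pg_class
  GROUP BY relkind
  ORDER BY total DESC;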
In any case, this is somewhat disturbing. I need to throw out a question and
hope for an answer.
Has anyone ever witnessed similar behavior with a large number of
relations? Slowness? Corruption?
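
Worth noting for my earlier question: pg_class is per-database, so a count like
the one above covers a single database only. A cluster-wide total would mean
repeating it in every database the cluster allows connections to, e.g.:

  -- Databases in the cluster that accept connections; the pg_class count
  -- above would need to be run in each of these for a cluster-wide total.
  SELECT datname FROM pg_database WHERE datallowconn;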
-Greg