From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Hugo <hugo(dot)tech(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Thousands of schemas and ANALYZE goes out of memory
Date: 2012-10-04 16:17:34
Message-ID: CAMkU=1wLjAsmJNuB6ZObZmGHqi9jLbK6n1eSgnOc5J1-AUsvUA@mail.gmail.com
Lists: pgsql-general
On Tue, Oct 2, 2012 at 5:09 PM, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
> I don't know how the transactionality of analyze works. I was
> surprised to find that I could even run it in an explicit transaction
> block; I thought it would behave like vacuum and create index
> concurrently in that regard.
>
> However, I think that would not solve your problem. When I run
> analyze on each of 220,000 tiny tables by name within one session
> (using autocommit, so each in its own transaction), it does run about
> 4 times faster than a single database-wide vacuum covering those
> same tables. (Maybe this is the lock/resource manager issue that has
> been fixed for 9.3?)
For the record, the culprit that causes "analyze;" of a database with
a large number of small objects to be quadratic in time is
"get_tabstat_entry", and it is not fixed in 9.3.
Cheers,
Jeff
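[Editor's note: the per-table workaround described in the thread (running ANALYZE on each table by name in its own autocommit transaction, rather than one database-wide "ANALYZE;") can be sketched as follows. This is a minimal illustration, not code from the thread; the schema/table names and the quoting helper are assumptions. The generated statements would be fed to psql one at a time.]

```python
# Sketch: emit one ANALYZE statement per table so each runs in its own
# (autocommit) transaction, avoiding the quadratic get_tabstat_entry
# behavior of a single database-wide "ANALYZE;". In practice the
# (schema, table) pairs would come from pg_namespace/pg_class.

def quote_ident(name: str) -> str:
    # Minimal identifier quoting in the style of PostgreSQL's quote_ident():
    # wrap in double quotes and double any embedded double quotes.
    return '"' + name.replace('"', '""') + '"'

def per_table_analyze(tables):
    # tables: iterable of (schema, table) pairs; returns one statement each.
    return ['ANALYZE %s.%s;' % (quote_ident(s), quote_ident(t))
            for s, t in tables]

if __name__ == "__main__":
    # Example table list (hypothetical names).
    for stmt in per_table_analyze([("tenant_1", "orders"),
                                   ("tenant_2", "orders")]):
        print(stmt)
```

Each printed statement can then be run through psql with autocommit on, so every table's stats are committed independently.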