From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Scott Carey <scott(at)richrelevance(dot)com>
Cc: Merlin Moncure <mmoncure(at)gmail(dot)com>, Scott Otis <scott(dot)otis(at)intand(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Databases vs Schemas
Date: 2009-10-10 03:11:35
Message-ID: 9580.1255144295@sss.pgh.pa.us
Lists: pgsql-performance
Scott Carey <scott(at)richrelevance(dot)com> writes:
> I've got 200,000 tables in one db (8.4), and some tools barely work. The
> system catalogs get inefficient when large and psql especially has trouble.
> Tab completion takes forever, even if I make a schema "s" with one table in
> it and type "s." and try to tab-complete -- it's as if it's scanning everything
> without applying the schema qualifier or using an index.
The tab-completion queries have never been particularly vetted for
performance :-(
Just out of curiosity, how much does this help?
alter function pg_table_is_visible(oid) cost 10;
(You'll need to do it as superuser --- if it makes things worse, just
set the cost back to 1.)
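Spelled out, the experiment and its rollback look like this (a minimal sketch,
run as superuser; the only cost values involved are the ones suggested above):

    -- a higher estimated per-call cost encourages the planner to apply
    -- cheaper filters first and this one last
    ALTER FUNCTION pg_table_is_visible(oid) COST 10;

    -- if completion gets worse instead of better, restore the default
    ALTER FUNCTION pg_table_is_visible(oid) COST 1;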
> Sometimes it does not match
> valid tables at all, and sometimes regex matching fails too ('\dt
> schema.*_*_*' intermittently flakes out if it returns a lot of matches).
There are some arbitrary "LIMIT 1000" clauses in those queries, which
probably explains this ... but taking them out would likely cause
libreadline to get indigestion ...
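For reference, the table-name completion query is roughly of this shape (a
simplified sketch only -- the real query in psql's tab-complete.c is more
elaborate and varies by version), which shows where both the visibility check
and the LIMIT come in:

    SELECT pg_catalog.quote_ident(c.relname)
      FROM pg_catalog.pg_class c
     WHERE c.relkind IN ('r', 'v')
       -- prefix filter on whatever has been typed so far (illustrative)
       AND substring(pg_catalog.quote_ident(c.relname), 1, 1) = 's'
       -- evaluated per candidate row; this is the function whose cost
       -- is being raised above
       AND pg_catalog.pg_table_is_visible(c.oid)
     LIMIT 1000;

With 200,000 entries in pg_class, anything that makes the per-row work
cheaper, or lets the planner postpone it, matters.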
regards, tom lane