From: "Florian G. Pflug" <fgp(at)phlo(dot)org>
To: Christopher Browne <cbbrowne(at)acm(dot)org>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Scalability with large numbers of tables
Date: 2005-02-21 14:02:11
Message-ID: 4219E9E3.6050408@phlo.org
Lists: pgsql-general
Christopher Browne wrote:
> Oops! spam_from_postgresql_general(at)chezphil(dot)org (Phil Endecott) was seen spray-painting on a wall:
>>I have a single database with one schema per user. Each user has a
>>handful of tables, but there are lots of users, so in total the
>>database has thousands of tables.
> If you've got tens of thousands of relations, the tab completion code
> has to draw the whole list of relations from pg_class into memory and
> "marshal" it into a form usable by GNU Readline. THAT is what you're
> seeing slow down. As the number of tables, n, grows, the cost of that
> grows with order of complexity O(n).
>
> Actual queries on actual tables won't be slow; they will look up
> relation names directly in pg_class, and presumably go from there to
> get the file name(s) on the filesystem, which each represent
> operations of complexity of order O(n log n). Which remains fast even
> if there are millions of tables.
I guess you mean O(log n) in the second paragraph (which would imply
that there is an index on relname for pg_class). If the complexity were
O(n log n), it would be slower, or at least it would scale worse, than
the O(n) tab-completion code ;-)
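The distinction can be sketched in Python (the relation names and count
here are made up for illustration): a linear scan over every relation
name is O(n), much like the tab-completion code marshalling the whole
list from pg_class, while a binary search over a sorted list, the
in-memory analogue of an index lookup on pg_class.relname, is O(log n).

```python
import bisect

# Hypothetical relation names, kept sorted the way a btree index
# on pg_class.relname keeps its keys.
relations = sorted(f"user_{i:05d}_orders" for i in range(10000))

def linear_lookup(names, target):
    """O(n): examine every name, like marshalling the full relation
    list for tab completion."""
    for name in names:
        if name == target:
            return True
    return False

def indexed_lookup(names, target):
    """O(log n): binary search on the sorted list, the analogue of
    an index scan on relname."""
    i = bisect.bisect_left(names, target)
    return i < len(names) and names[i] == target

target = "user_09999_orders"
print(linear_lookup(relations, target))   # both find the name,
print(indexed_lookup(relations, target))  # but at very different cost
```

Both calls return True; the difference is that the first touches up to
n entries while the second touches about log2(n), which is why lookups
stay fast even with millions of tables.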
greetings, Florian Pflug