From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Steve Wampler <swampler(at)noao(dot)edu>
Cc: postgres-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Speed of locating tables?
Date: 2000-05-26 15:46:01
Message-ID: 2151.959355961@sss.pgh.pa.us
Lists: pgsql-general
Steve Wampler <swampler(at)noao(dot)edu> writes:
> To me, the most natural way to encode the sets is to
> create a separate table for each set, since the attributes
> can then be indexed and referenced quickly once the table
> is accessed. But I don't know how fast PG is at locating
> a table, given its name.
> So, to refine the question - given a DB with (say) 100,000
> tables, how quickly can PG access a table given its name?
Don't even think about 100000 separate tables in a database :-(.
It's not so much that PG's own data structures wouldn't cope,
as that very few Unix filesystems can cope with 100000 files
in a directory. You'd be killed on directory search times.
I don't see a good reason to be using more than one table for
your attributes --- add one more column to what you were going
to use, to contain an ID for each attribute set, and you'll be
a lot better off. You'll want to make sure there's an index
on the ID column, of course, or on whichever columns you plan
to search by.
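As a sketch of the single-table design Tom describes (table and column names here are illustrative, not from the original thread):

```sql
-- One table holds every attribute set; set_id says which set a row belongs to.
CREATE TABLE attributes (
    set_id     integer NOT NULL,  -- ID of the attribute set this row belongs to
    attr_name  text    NOT NULL,
    attr_value text
);

-- Index the ID column (or whichever columns you plan to search by).
CREATE INDEX attributes_set_id_idx ON attributes (set_id);

-- Retrieving one "set" is then an indexed lookup, not a table lookup:
SELECT attr_name, attr_value
  FROM attributes
 WHERE set_id = 42;
```

With this layout the number of sets only grows the row count of one table, which PostgreSQL handles far better than 100000 files in a directory.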
regards, tom lane