| From: | "Colin 't Hart" <cthart(at)yahoo(dot)com> |
|---|---|
| To: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: Max number of tables in a db? |
| Date: | 2001-08-17 15:40:12 |
| Message-ID: | 9ljdus$2j3t$1@news.tht.net |
| Lists: | pgsql-general |
> A couple of further notes --- there are Unix filesystems that don't suck
> with large directories, but I'm not sure whether any of the ones in
> common use have smart directory handling. The typical approach is that
> file opens, creates, deletes require a linear scan of the directory.
Linux kernels 2.2.x and later (including 2.4.x) have a directory entry
cache (hashed, I think, though it could be a btree), which means that
subsequent file opens are very fast and very large directories are not a
problem, provided the cache entries haven't aged enough to be discarded.
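The cold-versus-warm lookup behaviour described above can be observed
directly. Here is a minimal sketch (my own illustration, not anything
from Postgres) that creates a directory with many entries and times the
same lookup twice; on a filesystem with linear directory scans the first
lookup pays the scan cost, while the second is typically served from the
kernel's directory-entry cache:

```python
import os
import tempfile
import time

def time_lookup(n_files=1000):
    """Create a directory with n_files entries, then time two stat()
    calls on the same file: the first may walk the directory; the
    second is usually answered from the dentry cache."""
    d = tempfile.mkdtemp()
    for i in range(n_files):
        open(os.path.join(d, "f%05d" % i), "w").close()
    target = os.path.join(d, "f%05d" % (n_files - 1))

    t0 = time.perf_counter()
    os.stat(target)              # cold lookup
    cold = time.perf_counter() - t0

    t0 = time.perf_counter()
    os.stat(target)              # warm lookup, cache already populated
    warm = time.perf_counter() - t0
    return cold, warm

cold, warm = time_lookup(200)
print("cold: %.6fs  warm: %.6fs" % (cold, warm))
```

On a single run the difference can be noisy, so treat it as a
demonstration rather than a benchmark.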
How does Postgres do its file handling? How many files can it have open
simultaneously?
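Whatever the answer for Postgres itself, any per-backend file pool is
bounded by the operating system's per-process descriptor limit, which
can be inspected like so (a generic Unix sketch, not Postgres code):

```python
import resource

# Query this process's open-file-descriptor limits. A server process
# that keeps many files open must stay under the soft limit, or raise
# it (up to the hard limit) before opening more.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft limit:", soft, "hard limit:", hard)
```

The hard limit may report as `resource.RLIM_INFINITY` on some systems.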
Cheers,
Colin