> A couple of further notes --- there are Unix filesystems that don't suck
> with large directories, but I'm not sure whether any of the ones in
> common use have smart directory handling. The typical approach is that
> file opens, creates, deletes require a linear scan of the directory.
Linux kernels 2.2.x and later (including 2.4.x) have a directory entry
cache (I think it's hashed, but it could be a btree), which means that
subsequent file opens are (very) fast, and very large directories are not
a problem, provided the cached entries don't age out and get discarded.
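The effect of such a cache can be sketched roughly as follows. This is
only an illustration of the idea, not how the kernel actually implements
it; the names (DirCache, lookup) and the name-to-inode mapping are
hypothetical:

```python
# Illustrative sketch of a directory-entry cache. Without a cache, every
# open requires a linear scan of the directory's entries; with a hashed
# cache, repeat lookups of the same name avoid the scan entirely.

class DirCache:
    def __init__(self, directory_entries):
        # directory_entries: list of (name, inode) pairs, as stored on disk
        self.entries = directory_entries
        self.cache = {}   # hashed name -> inode, standing in for the cache
        self.scans = 0    # count linear scans, for demonstration

    def lookup(self, name):
        if name in self.cache:      # cache hit: no directory scan needed
            return self.cache[name]
        self.scans += 1             # cache miss: fall back to a linear scan
        for entry_name, inode in self.entries:
            if entry_name == name:
                self.cache[name] = inode
                return inode
        raise FileNotFoundError(name)

d = DirCache([("file%d" % i, 1000 + i) for i in range(100000)])
d.lookup("file99999")   # first open: pays for one linear scan
d.lookup("file99999")   # subsequent open: served from the cache
print(d.scans)          # -> 1
```

The point is that only the first open of a given name pays the linear
cost; as long as the entry stays cached, later opens are cheap no matter
how large the directory is.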
How does Postgres do its file handling? How many files can it have open
simultaneously?
Cheers,
Colin