From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Peter Haight <peterh(at)sapros(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Large object insert performance.
Date: 2000-08-24 04:18:41
Message-ID: 23076.967090721@sss.pgh.pa.us
Lists: pgsql-general

Peter Haight <peterh(at)sapros(dot)com> writes:
> All I'm doing is inserting the large objects.

How many LOs are we talking about here?

The current LO implementation creates a separate table, with index,
for each LO. That means two files in the database directory per LO.
On most Unix filesystems I've dealt with, performance will go to hell
in a handbasket for more than a few thousand files in one directory.

Denis Perchine did a reimplementation of LOs to store 'em in a single
table. This hasn't been checked or applied to current sources yet,
but if you're feeling adventurous see the pgsql-patches archives from
late June.

> Is there any way to speed this up? If the handling of large objects is this
> bad, I think I might just store these guys on the file system.

You could do that too, if you don't need transactional semantics for
large-object operations.
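
The filesystem alternative mentioned above can be sketched roughly as
follows; this is not from the original mail, and `store_blob` plus the
storage layout are hypothetical. It illustrates the trade-off Tom points
out: the write itself is cheap, but there is no transaction tying the
file to the database row that references it.

```python
# Minimal sketch (hypothetical): keep each large object as a plain file
# and store only its path in a database column. Unlike server-side large
# objects, this has no transactional semantics -- a crash between the
# file write and the row insert can leave an orphaned file behind.
import os
import tempfile

def store_blob(data: bytes, storage_dir: str) -> str:
    """Write data to a new file in storage_dir and return its path."""
    fd, path = tempfile.mkstemp(dir=storage_dir, suffix=".blob")
    with os.fdopen(fd, "wb") as f:
        f.write(data)
        f.flush()
        # Force the bytes to disk before the path gets recorded anywhere,
        # so a recorded path never points at a partially written file.
        os.fsync(f.fileno())
    return path
```

The path returned would then be inserted into an ordinary table column;
cleaning up orphaned files after a failed transaction is left to the
application, which is exactly the bookkeeping that server-side large
objects do for you.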

			regards, tom lane