overhead of "small" large objects

From: Philip Crotwell <crotwell(at)seis(dot)sc(dot)edu>
To: pgsql-general(at)postgresql(dot)org
Cc: Philip Crotwell <crotwell(at)seis(dot)sc(dot)edu>
Subject: overhead of "small" large objects
Date: 2000-12-10 19:13:12
Message-ID: Pine.GSO.4.10.10012101404140.4870-100000@tigger.seis.sc.edu
Lists: pgsql-general


Hi

I'm putting lots of small (~10 kB) chunks of binary seismic data into large
objects in postgres 7.0.2. Basically just arrays of 2500 or so ints that
represent about a minute's worth of data. I put in the data at the rate of
about 1.5 MB per hour, but the disk usage of the database is growing at
about 6 MB per hour! A factor of 4 seems a bit excessive.
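
For concreteness, my write path looks roughly like the sketch below (not
my actual code; the connection string and the missing error checks are
just for illustration, only the 2500-int chunk size is real):

    #include <stdio.h>
    #include "libpq-fe.h"
    #include "libpq/libpq-fs.h"     /* INV_READ / INV_WRITE */

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=seismic");  /* made-up dbname */
        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        int samples[2500] = {0};    /* ~one minute of data, ~10 kB;
                                       really filled from the instrument */

        /* large object calls must run inside a transaction */
        PQclear(PQexec(conn, "BEGIN"));

        Oid lobj = lo_creat(conn, INV_READ | INV_WRITE);
        int fd = lo_open(conn, lobj, INV_WRITE);
        lo_write(conn, fd, (char *) samples, sizeof(samples));
        lo_close(conn, fd);

        PQclear(PQexec(conn, "COMMIT"));
        printf("stored chunk as large object %u\n", lobj);

        PQfinish(conn);
        return 0;
    }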

Is there significant overhead involved in using large objects that aren't
very large?
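
(In case it helps with diagnosing this: my understanding, which may well
be wrong, is that in 7.0 each large object gets its own relation, named
xinv<oid> with an xinx<oid> index, so every object costs at least a
couple of 8 kB pages plus two files on disk. A quick sketch to see the
per-object page counts, assuming that naming:)

    #include <stdio.h>
    #include "libpq-fe.h"

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=seismic");  /* made-up dbname */
        /* list the heap/index relations backing each large object */
        PGresult *res = PQexec(conn,
            "SELECT relname, relpages FROM pg_class "
            "WHERE relname LIKE 'xinv%' OR relname LIKE 'xinx%' "
            "ORDER BY relname");
        for (int i = 0; i < PQntuples(res); i++)
            printf("%s\t%s pages\n",
                   PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));
        PQclear(res);
        PQfinish(conn);
        return 0;
    }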

What might I be doing wrong?

Is there a better way to store these chunks?

thanks,
Philip
