From: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
To: Hanno Schlichting <hanno(at)hannosch(dot)eu>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Tweaking bytea / large object block sizes?
Date: 2011-06-12 23:59:33
Message-ID: 4DF552E5.7070101@postnewspapers.com.au
Lists: pgsql-general

On 06/13/2011 12:00 AM, Hanno Schlichting wrote:
> But from what I read of Postgres, my best bet is to store data as
> large objects [2]. Going all the way down this means storing the
> binary data as 2kb chunks and adding table row overhead for each of
> those chunks. Using the bytea type and the toast backend [3] it seems
> to come down to the same: data is actually stored in 2kb chunks for a
> page size of 8kb.
This is probably much less of a concern than you expect. Consider that
your file system almost certainly stores file data in chunks of between
512 bytes and 4kb (the block size) and performs just fine.
Given the file sizes you're working with, I'd try using `bytea' and see
how you go. Put together a test or simulation that you can use to
evaluate performance if you're concerned.
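As a rough, untested starting point for such a test (the table and column names here are just made up for illustration), something like this lets you see how much space TOAST actually uses for your data:

-- Illustrative bytea test table; names are invented for this example
CREATE TABLE blob_test (
    id      serial PRIMARY KEY,
    name    text NOT NULL,
    content bytea NOT NULL
);

-- Load ~1 MB of generated data per row. Repeated md5 output compresses
-- very well under TOAST, so substitute your real files for a realistic test.
INSERT INTO blob_test (name, content)
SELECT 'file_' || g, decode(repeat(md5(g::text), 65536), 'hex')
FROM generate_series(1, 100) AS g;

-- Compare the logical size with what actually gets stored after TOAST
SELECT name,
       octet_length(content)   AS logical_bytes,
       pg_column_size(content) AS stored_bytes
FROM blob_test
LIMIT 5;

Time the inserts and a few reads at your real file sizes and you'll learn more than any theorising about chunk sizes will tell you.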
Maybe one day Linux systems will have a file system capable of
transactional behaviour, as NTFS is, so Pg could integrate with the
file system for transactional file management. In the meantime, `bytea'
or `lo' seems to be your best bet.
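If you want to experiment with large objects from SQL on 9.0 or later, an untested sketch along these lines is enough to get started; the OID it reports is what you'd store in your own tables, and for bulk loading you'd normally use lo_import or your client driver's large-object API instead:

-- Untested sketch: create and write a large object server-side (PG 9.0+ DO block)
DO $$
DECLARE
    blob_oid oid;
    fd       integer;
BEGIN
    blob_oid := lo_create(0);         -- 0 = let the server pick the OID
    fd := lo_open(blob_oid, 131072);  -- 131072 = INV_WRITE
    PERFORM lowrite(fd, decode(repeat(md5('x'), 1024), 'hex'));  -- ~16 kB of data
    PERFORM lo_close(fd);
    RAISE NOTICE 'created large object %', blob_oid;
END
$$;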
--
Craig Ringer