From: Jacqui Caren-home <jacqui(dot)caren(at)ntlworld(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: How Big is Too Big for Tables?
Date: 2010-07-29 15:59:07
Message-ID: 4C51A54B.6010509@ntlworld.com
Lists: pgsql-general
P Kishor wrote:
> On Wed, Jul 28, 2010 at 1:38 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
>> * P Kishor (punk(dot)kish(at)gmail(dot)com) wrote:
>>> Three. At least, in my case, the overhead is too much. My data are
>>> single bytes, but the smallest data type in Pg is smallint (2 bytes).
>>> That, plus the per-row overhead, adds up to a fair amount of overhead.
>> My first reaction to this would be: have you considered aggregating the
>> data before putting it into the database, in such a way that you put more
>> than 1 byte of data on each row? That could possibly reduce the
>> number of rows you have by quite a bit and also reduce the impact of the
>> per-tuple overhead in PG.
> each row is half a dozen single-byte values, so it is actually 6
> bytes per row (six columns).
Hmm, six single-byte values - this would not perchance be bio (sequence) or geospatial data?
If so, there are specialist lists out there that can help.
Also, quite a few people use Pg for this kind of data and there are some very neat Pg add-ons.
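
For what it's worth, here is a rough sketch of Stephen's aggregation idea, using
made-up names (obs, cell_id, day, a1..a6 - adjust to whatever your schema actually
looks like): pack a batch of the six-byte observations into one bytea per row, so
the per-tuple header (~24 bytes) is paid once per batch rather than once per
observation.

    -- One row per cell per year instead of one row per cell per day.
    CREATE TABLE obs_by_cell (
        cell_id integer  NOT NULL,
        year    smallint NOT NULL,
        vals    bytea    NOT NULL,   -- 365 days * 6 bytes, day 0 first
        PRIMARY KEY (cell_id, year)
    );

    -- Pack: build six bytes per day with set_byte(), then concatenate the
    -- days in order (string_agg over bytea and ORDER BY inside an aggregate
    -- need 9.0; on older releases you would pack on the client instead).
    INSERT INTO obs_by_cell (cell_id, year, vals)
    SELECT cell_id, 2009,
           string_agg(
               set_byte(set_byte(set_byte(set_byte(set_byte(set_byte(
                   decode('000000000000', 'hex'),
                   0, a1), 1, a2), 2, a3), 3, a4), 4, a5), 5, a6),
               ''::bytea ORDER BY day)
    FROM obs
    GROUP BY cell_id;

    -- Unpack: substring() returns one day's six bytes (1-based offsets),
    -- get_byte() returns a single value (0-based offsets).
    SELECT cell_id,
           substring(vals FROM 42*6 + 1 FOR 6) AS day42_bytes,
           get_byte(vals, 42*6)                AS day42_a1
    FROM obs_by_cell
    WHERE cell_id = 1001 AND year = 2009;

The data stay byte-for-byte the same, but the row count (and with it the per-tuple
overhead) drops by roughly the batch size.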
Jacqui