From: | Josh Jore <josh(at)greentechnologist(dot)org> |
---|---|
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: Efficient use of space in large table? |
Date: | 2002-08-10 12:30:45 |
Message-ID: | Pine.BSO.4.44.0208100727040.4480-100000@kitten.greentechnologist.org |
Lists: | pgsql-general |
On Fri, 5 Jul 2002, Manfred Koizar wrote:
> On Thu, 4 Jul 2002 21:43:10 -0500 (CDT), Josh Jore
> <josh(at)greentechnologist(dot)org> wrote:
> >I was just wondering - I've got two large tables and I was wondering
> >if there is any way to shrink them somewhat. I imagined compression for
> >non-indexed columns or something. Is varchar or char more efficient than
> >text?
> >
> Josh,
>
> first of all, text is ok. You might want to store NULL instead of ''
> to squeeze out a few bytes here and there.
I just thought I'd follow up - it turns out that most of my space was
going to tuple headers (roughly 40 bytes of header for only 16 bytes of
data, so well over two thirds of every row was overhead). I just took the
data out of PostgreSQL and stuck it into partitioned ASCII files, with
BerkeleyDB for the indexes. That happens to work excellently and doesn't
require as fancy a machine as PostgreSQL did.
So the answer is to sometimes question your choice of tool ;-)
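For the curious, here is a rough sketch of the kind of thing I mean (not
my actual code - it uses Python's standard dbm module as a stand-in for
the BerkeleyDB index, and the partitioning scheme, directory layout and
names are made up purely for illustration):

    # Rough sketch: append ASCII records to partitioned flat files and keep a
    # key -> "partition:offset" index in a dbm/BerkeleyDB-style store.
    import dbm
    import os
    import zlib

    PARTITIONS = 16          # number of flat-file partitions (arbitrary here)
    DATA_DIR = "data"        # directory holding the partition files

    os.makedirs(DATA_DIR, exist_ok=True)
    index = dbm.open(os.path.join(DATA_DIR, "index"), "c")

    def store(key, record):
        """Append one record to its partition and remember where it went."""
        part = zlib.crc32(key.encode("ascii")) % PARTITIONS
        path = os.path.join(DATA_DIR, "part.%02d" % part)
        with open(path, "ab") as f:
            offset = f.tell()                  # end of file = start of new record
            f.write(record.encode("ascii") + b"\n")
        index[key] = "%d:%d" % (part, offset)  # one small index entry per record

    def fetch(key):
        """Seek straight to the record using the index entry."""
        part, offset = index[key].decode("ascii").split(":")
        path = os.path.join(DATA_DIR, "part.%02d" % int(part))
        with open(path, "rb") as f:
            f.seek(int(offset))
            return f.readline().rstrip(b"\n").decode("ascii")

    store("12345", "sixteen bytes ok")
    print(fetch("12345"))
    index.close()

The point is that each record in the flat file carries essentially no
per-row header; the only fixed cost is the small index entry.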