From: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
To: Leonardo Francalanci <lfrancalanci(at)simtel(dot)ie>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: R: space taken by a row & compressed data
Date: 2004-08-26 16:08:43
Message-ID: 200408261608.i7QG8hL07139@candle.pha.pa.us
Lists: pgsql-general
Leonardo Francalanci wrote:
> > We have an FAQ item about this.
>
> Damn! I didn't see that one! Sorry...
>
> > Long data values are automatically compressed.
>
> The reason I'm asking is:
> we have a system that stores 200,000,000 rows per month
> (other tables store 10,000,000 rows per month)
> Every row has 400 columns of integers + 2 columns (date+integer) as index.
>
> Our system compresses rows before writing them to a binary file on disk.
> Data don't usually need to be updated/removed.
> We usually access all columns of a row (hence compression on a per-row basis
> makes sense).
>
> Is there any way to compress data on a per-row basis? Maybe with
> a User-Defined type?
Ah, we only compress long column values, which individual integers would
not be. I don't see any way to compress an entire row, even with a
user-defined type, unless you put multiple values into a single column
and compress those as a single value. In fact, if you used an array or
some special data type, the combined value would become a long value and
would be compressed automatically.
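A minimal sketch of that idea (the table and column names are hypothetical, and the ~2 kB figure is the usual TOAST compression threshold on the default 8 kB page size):

```sql
-- Hypothetical schema: the 400 integer columns collapsed into one
-- integer[] column, so each row carries a single wide value that
-- TOAST can consider for compression.  Values only become candidates
-- once the row exceeds roughly 2 kB on the default 8 kB page size.
CREATE TABLE measurements (
    sample_date date    NOT NULL,
    sample_id   integer NOT NULL,
    readings    integer[],   -- was: 400 separate integer columns
    PRIMARY KEY (sample_date, sample_id)
);

-- EXTENDED (the default for arrays) allows both compression and
-- out-of-line storage; MAIN prefers compression while keeping the
-- value in the main table where possible.
ALTER TABLE measurements ALTER COLUMN readings SET STORAGE MAIN;
```

Note the caveat: 400 four-byte integers are only about 1.6 kB of payload, so the row may stay under the threshold and never be compressed at all; the sketch just shows where the knobs are.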
However, with integer data there would have to be a lot of duplicate
values before compression would be a win.
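One way to check whether compression actually pays off, assuming a hypothetical table measurements with an integer[] column readings holding 400 elements, is pg_column_size() (added in a later release, PostgreSQL 8.1), which reports the stored, possibly compressed size of a value:

```sql
-- Compare the on-disk (post-TOAST) size of the array value with its
-- nominal payload; if stored_bytes is not well below raw_payload_bytes,
-- the integer data was not repetitive enough for compression to win.
SELECT pg_column_size(readings) AS stored_bytes,
       400 * 4                  AS raw_payload_bytes
FROM   measurements
LIMIT  5;
```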
--
Bruce Momjian | http://candle.pha.pa.us
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073