From: Bill Moran <wmoran(at)potentialtech(dot)com>
To: "Leonardo M(dot) Ramé" <l(dot)rame(at)griensu(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Compression function
Date: 2015-06-16 11:40:43
Message-ID: 20150616074043.71096d466cea363ab6af52a7@potentialtech.com
Lists: pgsql-general
On Tue, 16 Jun 2015 04:45:52 -0300
"Leonardo M. Ramé" <l(dot)rame(at)griensu(dot)com> wrote:
> Hi, does anyone know if there's a compression function that would let me
> store TEXT or Bytea fields in gzipped/deflate format.
>
> Please correct me if I'm wrong, but I also wonder if this function is
> really needed since I've read large objects are stored with TOAST, hence
> compression is already there.
The TOAST system does do compression but, depending on your expectations,
you may be disappointed.
The big thing that might let you down is that the TOAST code doesn't run
at all unless the tuple is larger than 2K. As a result, you could have
fairly large rows, nearly 2000 bytes long, that _could_ compress to
significantly less than that, but that PostgreSQL never tries to compress.
Additionally, PostgreSQL stops compressing fields once the row size drops
below 2K, so if you have multiple fields that would benefit from
compression, not all of them may end up compressed.
So if you understand your data well, take this into account: you may see
better results by doing your own compression.
Unfortunately, I don't know of any in-database function that can be used
to compress data; you'd have to write your own or do it at the application
level.
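At the application level, a minimal sketch of the deflate approach using
Python's standard zlib module (the function names are illustrative; the
compressed bytes would be handed to your driver as a bytea parameter):

```python
import zlib


def compress_text(value: str, level: int = 6) -> bytes:
    """Deflate-compress a string; the result can be stored in a bytea column."""
    return zlib.compress(value.encode("utf-8"), level)


def decompress_text(blob: bytes) -> str:
    """Reverse of compress_text: inflate the bytea contents back to text."""
    return zlib.decompress(blob).decode("utf-8")


# A repetitive payload just under the ~2K TOAST threshold: PostgreSQL would
# store it uncompressed, but zlib shrinks it substantially.
original = "PostgreSQL TOAST skips small rows. " * 50  # 1750 bytes
packed = compress_text(original)

print(len(original.encode("utf-8")), len(packed))
assert decompress_text(packed) == original
assert len(packed) < len(original.encode("utf-8"))
```

Since zlib's deflate format matches what most languages' standard libraries
produce, the data stays readable from other application stacks as well.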
--
Bill Moran <wmoran(at)potentialtech(dot)com>