Re: Any risk in increasing BLCKSZ to get larger tuples?

From: "Steve Wolfe" <steve(at)iboats(dot)com>
To: "PostgreSQL General" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Any risk in increasing BLCKSZ to get larger tuples?
Date: 2000-10-19 20:34:08
Message-ID: 000901c03a0b$f2015a00$50824e40@iboats.com
Lists: pgsql-general

> > A trick you can use in 7.0.* to squeeze out a little more space is
> > to declare your large text fields as "lztext" --- this invokes
> > inline compression, which might get you a factor of 2 or so on typical
> > mail messages. lztext will go away again in 7.1, since TOAST supersedes
> > it,
>
> Uh, why? Does TOAST do automatic compression? If people need to store
> huge blocks of text (like a DNA sequence), inline compression isn't just
> a hack to squeeze bigger text into a tuple.

I'd guess that it's a speed issue. Decompressing everything in the table
for every SELECT sounds like a great waste of CPU power to me, especially
when hard drives and RAM are cheap. Kind of like DriveSpace on Windows:
a nice idea, but it slowed things down quite a bit.

steve
