Re: Any risk in increasing BLCKSZ to get larger tuples?

From: Joseph Shraibman <jks(at)selectacast(dot)net>
To:
Cc: PostgreSQL General <pgsql-general(at)postgresql(dot)org>
Subject: Re: Any risk in increasing BLCKSZ to get larger tuples?
Date: 2000-10-19 22:11:26
Message-ID: 39EF718E.4DD8B9C@selectacast.net
Lists: pgsql-general

Steve Wolfe wrote:
>
> > > A trick you can use in 7.0.* to squeeze out a little more space is
> > > to declare your large text fields as "lztext" --- this invokes
> > > inline compression, which might get you a factor of 2 or so on typical
> > > mail messages. lztext will go away again in 7.1, since TOAST supersedes
> > > it,
> >
> > Uh, why? Does TOAST do automatic compression? If people need to store
> > huge blocks of text (like a DNA sequence), inline compression isn't just
> > a hack to squeeze bigger text into a tuple.
>
> I'd guess that it's a speed issue. Decompressing everything in the table
> for every select sounds like a great waste of CPU power, to me, especially
> when hard drives and RAM are cheap. Kind of like the idea of "drivespace"
> on Windows - nice idea, but it slowed things down quite a bit.

In some cases yes, in others no. Simple text should compress and decompress
quickly, and the CPU time spent is made up for by less disk access time and
smaller database files. If you have a huge database, the smaller files could
be critical.
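
For reference, a minimal sketch of the lztext trick quoted above (this
assumes PostgreSQL 7.0.*; the table and column names are just made up for
illustration):

    CREATE TABLE mail_archive (
        id      serial PRIMARY KEY,
        subject text,
        body    lztext  -- compressed inline in 7.0.*; in 7.1 use plain
                        -- "text", since TOAST compresses it automatically
    );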

--
Joseph Shraibman
jks(at)selectacast(dot)net
Increase signal to noise ratio. http://www.targabot.com
