Re: Column as arrays.. more efficient than columns?

From: Ow Mun Heng <Ow(dot)Mun(dot)Heng(at)wdc(dot)com>
To: Michael Glaesemann <grzm(at)seespotcode(dot)net>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Column as arrays.. more efficient than columns?
Date: 2007-09-07 02:26:26
Message-ID: 1189131986.17218.35.camel@neuromancer.home.net
Lists: pgsql-general

On Thu, 2007-09-06 at 20:57 -0500, Michael Glaesemann wrote:
> On Sep 6, 2007, at 20:46 , Ow Mun Heng wrote:

> > I would believe performance would be better it being denormalised. (in
> > this case)
>
> I assume you've arrived at the conclusion because you have
> (a) shown that the performance with a normalized schema does not meet your needs;
> (b) benchmarked the normalized schema under production conditions;
> (c) benchmarked the denormalized schema under production conditions; and
> (d) shown that performance is improved in the denormalized case to arrive at that conclusion.
> I'm interested to see the results of your comparisons.

> Regardless, it sounds like you've already made up your mind. Why ask
> for comments?

You've assumed wrong. I've not arrived at any conclusion; I'm merely
exploring my options on which way would be the best to tread. I'm
asking the list because I'm new to PG, and after reading all those
articles on high-scalability sites etc., the majority of them use some
kind of denormalised tables.

Right now there are 8 million rows of data in this one table, and it's
growing at a rapid rate of ~2 million/week. I can significantly reduce
this number down to ~200K (I think) by denormalising it and shrink the
table size.
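
Roughly, what I mean is something like the sketch below. The table and
column names (raw_measurements, serial_number, parameter, value) are
just placeholders, not my actual schema:

    -- Hypothetical normalized table: one row per (serial_number, parameter)
    -- CREATE TABLE raw_measurements (
    --     serial_number text,
    --     parameter     text,
    --     value         numeric
    -- );

    -- Fold the per-parameter rows into one array column per serial number,
    -- using an ARRAY(SELECT ...) subselect.
    CREATE TABLE measurements_denorm AS
    SELECT s.serial_number,
           ARRAY(SELECT r.value
                 FROM raw_measurements r
                 WHERE r.serial_number = s.serial_number
                 ORDER BY r.parameter) AS param_values
    FROM (SELECT DISTINCT serial_number FROM raw_measurements) s;

The trade-off, as I understand it, is that filtering on a single
parameter's value then means poking into the array by position instead
of a plain WHERE parameter = '...' clause, so the read queries get
uglier even if the table gets much smaller.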

I would appreciate your guidance on this before I go knock my head on
the wall. :-)
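
For reference, the comparison along the lines Michael suggests would be
something like this in psql, run against both schemas loaded with
representative, production-sized data (same placeholder names as above):

    \timing on

    -- Normalized schema: one row per (serial_number, parameter)
    EXPLAIN ANALYZE
    SELECT parameter, value
    FROM raw_measurements
    WHERE serial_number = 'ABC123';

    -- Denormalized sketch: one row per serial number, values in an array
    EXPLAIN ANALYZE
    SELECT param_values
    FROM measurements_denorm
    WHERE serial_number = 'ABC123';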
