From: Sam Mason <sam(at)samason(dot)me(dot)uk>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Putting many related fields as an array
Date: 2009-05-12 13:10:43
Message-ID: 20090512131043.GE22221@samason.me.uk
Lists: pgsql-general
On Tue, May 12, 2009 at 08:06:25PM +0800, Ow Mun Heng wrote:
> From: pgsql-general-owner(at)postgresql(dot)org [mailto:pgsql-general-
> On Tue, May 12, 2009 at 01:23:14PM +0800, Ow Mun Heng wrote:
> >Not sure why this is better than using separate columns though. Maybe a
> >new datatype and a custom aggregate would be easier to work with?
>
> The issue here is the # of columns needed to populate the table.
>
> The table I'm summarizing has somewhere between 50 and 100+ columns; if the
> 1:5x ratio is used as a yardstick, the table will get awfully wide quickly.
>
> I need to know how to do it first, then test accordingly for performance and
> corner cases.
Yes, those are going to be pretty wide tables! Maybe if you can make
the source tables a bit "narrower" it will help things; PG has to read
entire rows from the table, so if your queries only touch a few columns
it will need a lot more disk bandwidth to get a given number of rows
back.
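
As a rough illustration only (hypothetical table and column names, not
anything from the earlier mails), the two layouts under discussion might
look something like this:

    -- 1. Many related measurements packed into one array column:
    CREATE TABLE readings_array (
        unit_id   integer     NOT NULL,
        taken_at  timestamptz NOT NULL,
        measures  real[]      NOT NULL   -- the 50-100+ related values per row
    );

    -- summarising one position across all rows, e.g. the mean of value 3:
    SELECT avg(measures[3]) FROM readings_array;

    -- 2. A "narrower" split: keep the frequently queried columns in one
    --    table and push the rest into a side table joined on the same key,
    --    so scans of the hot columns read far fewer bytes per row:
    CREATE TABLE readings_hot (
        unit_id   integer     NOT NULL,
        taken_at  timestamptz NOT NULL,
        temp      real,
        pressure  real,
        PRIMARY KEY (unit_id, taken_at)
    );

    CREATE TABLE readings_rest (
        unit_id   integer     NOT NULL,
        taken_at  timestamptz NOT NULL,
        -- ...the remaining, rarely queried columns...
        PRIMARY KEY (unit_id, taken_at),
        FOREIGN KEY (unit_id, taken_at) REFERENCES readings_hot
    );

Queries that only need the hot columns then scan readings_hot alone and
only pay for the wide rows when they actually join to readings_rest.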
--
Sam http://samason.me.uk/