From: Greg Smith <greg(at)2ndquadrant(dot)com>
To: rama <rama(dot)rama(at)tiscali(dot)it>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: perf problem with huge table
Date: 2010-02-11 01:42:13
Message-ID: 4B736075.1080704@2ndquadrant.com
Lists: pgsql-performance

rama wrote:
> In that way, when I need to run a query over a long range (e.g. one year), I just take the rows contained in contab_y;
> if I need a query covering a couple of days, I can go to ymd; and if I need data for some other timeframe, I can do some
> intersection between the different tables using some huge (but fast) queries.
>
> Now, the matter is that this design is hard to maintain, and the tables are difficult to check
>

You sound like you're trying to implement something like materialized
views one at a time; have you considered adopting the more general
techniques used to maintain those, so that you're not doing custom
development for each new design?

http://tech.jonathangardner.net/wiki/PostgreSQL/Materialized_Views
http://www.pgcon.org/2008/schedule/events/69.en.html
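
For instance, the trigger-based approach described there keeps a summary
table current as detail rows arrive, instead of rebuilding rollup tables
by hand. A minimal sketch of the idea (the contab and contab_daily names
and columns here are hypothetical, just to echo your naming):

    -- Detail table and its per-day summary
    CREATE TABLE contab (
        ts     timestamptz NOT NULL,
        amount numeric     NOT NULL
    );

    CREATE TABLE contab_daily (
        day   date    PRIMARY KEY,
        total numeric NOT NULL
    );

    -- Fold each new detail row into the daily summary
    CREATE FUNCTION contab_summarize() RETURNS trigger AS $$
    BEGIN
        UPDATE contab_daily SET total = total + NEW.amount
         WHERE day = NEW.ts::date;
        IF NOT FOUND THEN
            INSERT INTO contab_daily (day, total)
            VALUES (NEW.ts::date, NEW.amount);
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER contab_summarize_trg
        AFTER INSERT ON contab
        FOR EACH ROW EXECUTE PROCEDURE contab_summarize();

Note that the update-then-insert logic above has a race under concurrent
inserts; the material linked above discusses how to handle that.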

I think that sort of approach is more practical in PostgreSQL than it
would have been in MySQL, so maybe it wasn't on your list of
possibilities before.

--
Greg Smith 2ndQuadrant Baltimore, MD
PostgreSQL Training, Services and Support
greg(at)2ndQuadrant(dot)com www.2ndQuadrant.com
