Re: query rewrite using materialized views

From: Rod Taylor <pg(at)rbt(dot)ca>
To: "Wager, Ryan D [NTK]" <Ryan(dot)D(dot)Wager(at)mail(dot)sprint(dot)com>
Cc: Postgresql Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: query rewrite using materialized views
Date: 2005-01-04 19:54:22
Message-ID: 1104868462.37702.114.camel@home
Lists: pgsql-performance

On Tue, 2005-01-04 at 13:26 -0600, Wager, Ryan D [NTK] wrote:
> Rod,
> I do this; PG gets forked many times, and it is tough to find the max
> number of COPYs I can run at once, but I have a Proc::Queue manager
> Perl driver that handles all of the COPY calls. I have a quad-CPU
> machine. Each COPY only hits one CPU at about 2.1%, but anything over
> about 5 kicks the load average up.

Sounds like disk IO is slowing down the copy then.

> I'll get some EXPLAIN ANALYSE output and table structures out there
> pronto.
>
> -----Original Message-----
> From: Rod Taylor [mailto:pg(at)rbt(dot)ca]
> Sent: Tuesday, January 04, 2005 1:02 PM
> To: Wager, Ryan D [NTK]
> Cc: Postgresql Performance
> Subject: Re: [PERFORM] query rewrite using materialized views
>
> > 1) The 250 million records are currently wiped and reinserted as a
> > "daily snapshot", and the fastest way I have found to do this from a
> > file, COPY, is nowhere near fast enough. SQL*Loader from Oracle does
> > some things that I need, i.e. direct-path access to the db files
> > (skipping the RDBMS) and inherently ignoring indexing rules, saving a
> > ton of time (dropping the index, COPYing 250 million records, then
> > recreating the index just takes way too long).
>
> If you have the hardware for it, instead of doing one COPY, run one
> COPY command per CPU (until your IO is maxed out, anyway) and divide
> the work amongst them. I can push through 100MB/sec using methods like
> this, which makes loading 100GB of data much faster.
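
A minimal sketch of this parallel-COPY approach, assuming Python with
psycopg2 and an input file pre-split into one chunk per worker; the DSN,
table name, and chunk file names are all hypothetical:

    # Parallel COPY sketch: one loader process per CPU, each issuing
    # its own COPY on a separate backend. Names below are invented.
    import multiprocessing
    import psycopg2

    DSN = "dbname=snapshot"        # assumed connection string
    TABLE = "daily_snapshot"       # hypothetical table name

    def load_chunk(path):
        conn = psycopg2.connect(DSN)
        cur = conn.cursor()
        with open(path) as f:
            # Each process gets its own PostgreSQL backend, so the
            # COPYs run concurrently until disk I/O saturates.
            cur.copy_expert(f"COPY {TABLE} FROM STDIN", f)
        conn.commit()
        conn.close()

    if __name__ == "__main__":
        # e.g. chunk files produced beforehand with split(1), one per CPU
        chunks = ["chunk_aa", "chunk_ab", "chunk_ac", "chunk_ad"]
        with multiprocessing.Pool(len(chunks)) as pool:
            pool.map(load_chunk, chunks)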
>
> Ditto for indexes. Don't create a single index on one CPU and wait --
> send off one index creation command per CPU.
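
The same pattern works for the index builds; a sketch with invented
index, table, and column names:

    # Concurrent index builds: one CREATE INDEX per connection, so each
    # build can occupy its own CPU. All object names are invented.
    import multiprocessing
    import psycopg2

    INDEX_DDL = [
        "CREATE INDEX snap_cust_idx ON daily_snapshot (customer_id)",
        "CREATE INDEX snap_date_idx ON daily_snapshot (snap_date)",
    ]

    def build_index(ddl):
        conn = psycopg2.connect("dbname=snapshot")
        conn.cursor().execute(ddl)
        conn.commit()
        conn.close()

    if __name__ == "__main__":
        with multiprocessing.Pool(len(INDEX_DDL)) as pool:
            pool.map(build_index, INDEX_DDL)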
>
> > 2) Finding a way to keep this many records in a fashion that can be
> > easily queried. I even tried breaking it up into almost 2,800
> > separate tables, basically pre-broken-down views of the data. If this
> > is a workable method it can be done this way, but when I tried it,
> > VACUUM and the COPYs all seemed to slow down extremely.
>
> Can you send us EXPLAIN ANALYSE output for the slow selects and a
> little insight into what you're doing? A basic table structure and the
> indexes involved would be handy. You may change column and table names
> if you like.
>
> > -----Original Message-----
> > From: pgsql-performance-owner(at)postgresql(dot)org
> > [mailto:pgsql-performance-owner(at)postgresql(dot)org] On Behalf Of
> > Josh Berkus
> > Sent: Tuesday, January 04, 2005 12:06 PM
> > To: pgsql-performance(at)postgresql(dot)org
> > Cc: Yann Michel
> > Subject: Re: [PERFORM] query rewrite using materialized views
> >
> > Yann,
> >
> > > are there any plans for rewriting queries to use preexisting
> > > materialized views? I mean, rewrite a query (within the optimizer)
> > > to use a materialized view instead of the originating table?
> >
> > Automatically, and by default, no. Using the RULES system? Yes, you
> > can already do this, and the folks on the MattView project on
> > pgFoundry are working to make it easier.
> >
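
For reference, a rough sketch of the hand-rolled RULES technique Josh
alludes to: keep a summary table in sync with a rewrite rule on the base
table. All object names are invented, and a real setup would also need
UPDATE and DELETE rules:

    # RULES-based "materialized view" sketch: a pre-aggregated summary
    # table maintained by an ON INSERT rule. Invented names throughout;
    # this is the manual technique, not a built-in feature.
    import psycopg2

    DDL = """
    CREATE TABLE order_totals AS
        SELECT customer_id, sum(amount) AS total
        FROM orders GROUP BY customer_id;

    CREATE RULE orders_ins AS ON INSERT TO orders
        DO ALSO UPDATE order_totals
            SET total = total + NEW.amount
            WHERE customer_id = NEW.customer_id;
    """

    conn = psycopg2.connect("dbname=snapshot")
    conn.cursor().execute(DDL)
    conn.commit()
    conn.close()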
