Re: Table partition for very large table

From: Scott Marlowe <smarlowe(at)g2switchworks(dot)com>
To: Yudie Gunawan <yudiepg(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Table partition for very large table
Date: 2005-03-28 20:18:23
Message-ID: 1112041102.22988.4.camel@state.g2switchworks.com
Lists: pgsql-general

On Mon, 2005-03-28 at 13:50, Yudie Gunawan wrote:
> > Hold on, let's diagnose the real problem before we look for solutions.
> > What does explain <query> tell you? Have you analyzed the database?
>
>
> This is the QUERY PLAN
> Hash Left Join  (cost=25.00..412868.31 rows=4979686 width=17)
>   Hash Cond: (("outer".groupnum = "inner".groupnum) AND
>              (("outer".sku)::text = ("inner".sku)::text))
>   Filter: (("inner".url IS NULL) OR (("inner".url)::text = ''::text))
>   ->  Seq Scan on prdt_old mc  (cost=0.00..288349.86 rows=4979686 width=17)
>   ->  Hash  (cost=20.00..20.00 rows=1000 width=78)
>         ->  Seq Scan on prdt_new mi  (cost=0.00..20.00 rows=1000 width=78)
>
>
> > What are your postgresql.conf settings?
>
> Which specific setting do you suspect needs to be changed?

sort_mem, which is called work_mem in 8.0.
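
For example, you could raise it just for the session running this query
(the 64MB figure below is only a guess, size it to your RAM):

  -- 8.0: the value is in kB, so this is roughly 64MB
  SET work_mem = 65536;
  -- on 7.x the same setting is called sort_mem
  -- SET sort_mem = 65536;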

Also, and this is important, have you analyzed the table? I'm guessing no,
since the estimates are 1,000 rows, but the hash join is getting quite a
bit more than that. :)

Analyze your database and then run the query again.
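
Something along these lines should do it (table names taken from your plan
above):

  ANALYZE prdt_old;   -- the ~5M row table
  ANALYZE prdt_new;
  -- or just ANALYZE;  to do every table in the current database
  -- then re-run EXPLAIN on the query and compare the row estimates

Once the planner has real statistics for prdt_new, that default 1,000-row
estimate should go away and you'll get a much better picture of where the
time is really going.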
