Re: how to improve perf of 131MM row table?

From: AJ Weber <aweber(at)comcast(dot)net>
To: Shaun Thomas <sthomas(at)optionshouse(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: how to improve perf of 131MM row table?
Date: 2014-06-26 13:26:06
Message-ID: 53AC1F6E.4000700@comcast.net
Lists: pgsql-performance

OK, the sample query is attached (hopefully attachments are allowed) as
"query.sql".
The "master table" definition is attached as "table1.sql".
The "detail table" definition is attached as "table2.sql".
The EXPLAIN (ANALYZE, BUFFERS) output is here:
http://explain.depesz.com/s/vd5
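
For anyone reading along without the attachments: the query is shaped
roughly like the sketch below, and the plan linked above was produced by
prefixing it with EXPLAIN (ANALYZE, BUFFERS). The table and column names
here are placeholders, not the actual definitions in table1.sql and
table2.sql, and the id list is trimmed.

EXPLAIN (ANALYZE, BUFFERS)
SELECT m.id, m.name, d.attr_name, d.attr_value
FROM   master m
JOIN   detail d ON d.master_id = m.id   -- join on the master PK / detail FK
WHERE  m.id IN (101, 102, 103 /* ... roughly 50 integer PK values ... */);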

Let me know if I can provide anything else, and thank you again.

-AJ

On 6/25/2014 5:55 PM, Shaun Thomas wrote:
> On 06/25/2014 04:40 PM, Aaron Weber wrote:
>
>> In the meantime, I guess I wasn't clear about some other particulars.
>> The query's where clause is only an "IN", with a list of id's (those
>> I mentioned are the PK), and the join is explicitly on the PK (so,
>> indexed).
>
> Indexed doesn't mean indexed if the wrong datatypes are used. We need
> to see the table and index definitions, and a sample query with
> EXPLAIN ANALYZE output.
>
>> An IN with 50 int values took 23 seconds to return (by way of example).
>
> To me, this sounds like a sequential scan, or one of your keys matches
> so many rows that the random seeks are throwing off your performance.
> Of course, I can't confirm that without EXPLAIN output.
>
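
Regarding the datatype point in the quote above, a quick way to
sanity-check this kind of setup (again with placeholder table and column
names, not the real schema from the attachments) is to confirm that the
PK and the joining key really are the same type, and then to see whether
the planner picks an index scan or a sequential scan for the IN lookup:

-- 1. Verify the types on both sides of the join match; a mismatch can
--    force casts that keep the planner off the index.
SELECT table_name, column_name, data_type
FROM   information_schema.columns
WHERE  table_name IN ('master', 'detail')
  AND  column_name IN ('id', 'master_id')
ORDER  BY table_name, column_name;

-- 2. Look at the plan for the IN lookup by itself; "Seq Scan on detail"
--    would point at a missing or unusable index on master_id, while
--    "Index Scan" / "Bitmap Index Scan" means the index is being used.
EXPLAIN (ANALYZE, BUFFERS)
SELECT d.*
FROM   detail d
WHERE  d.master_id IN (101, 102, 103);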

Attachment Content-Type Size
query.sql text/plain 1.4 KB
table1.sql text/plain 2.6 KB
table2.sql text/plain 1.5 KB
