From: | Wojciech Strzałka <wstrzalka(at)gmail(dot)com> |
---|---|
To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: Give me a HINT or I'll got crazy ;) |
Date: | 2009-10-09 07:56:41 |
Message-ID: | 538398034.20091009095641@gmail.com |
Lists: | pgsql-general |
In my madness I:
- set the statistics target to 1000 for all join & filter columns
- clustered the tables involved
- reindexed them
- analyzed them
and that did the trick. I'm at ~50ms now, which satisfies me completely.
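For reference, the steps above correspond roughly to these commands. This is only a sketch: the table and column names are taken from the plan quoted below, and the index name in CLUSTER is a placeholder, since CLUSTER needs an index to order by:

```sql
-- Raise the per-column statistics target (repeat for every join/filter column;
-- table and column names here come from the plan below)
ALTER TABLE message_address_link ALTER COLUMN message_id SET STATISTICS 1000;

-- Physically reorder the table by an index (index name is a placeholder)
CLUSTER message_address_link USING message_address_link_message_id_idx;

-- Rebuild the table's indexes
REINDEX TABLE message_address_link;

-- Refresh planner statistics; required for the new statistics target to take effect
ANALYZE message_address_link;
```

Note that ANALYZE is the step that actually regenerates the statistics, so it must run after SET STATISTICS.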
If there are to be no hints, some debug output for EXPLAIN would be great, so that lame developers like me ;) could track down what's going wrong.
The problem is solved, but I can't say I understand why it was wrong before - or why it's OK now :(
> wstrzalka <wstrzalka(at)gmail(dot)com> writes:
>> Prior to playing with the statistics target (it was 100 by default) I
>> was able to get the time down to 30ms by adding a condition like this
>> to the query:
> So what sort of "playing" did you do? It looks to me like the core of
> the problem is the sucky join size estimate here:
>> -> Hash Join (cost=101.53..15650.39 rows=95249 width=8) (actual
>> time=1102.977..1342.675 rows=152 loops=1)
>> Hash Cond: (mal.message_id = m.messageid)
> If it were correctly estimating that only a few message_address_link
> rows would join to each messages row, it'd probably do the right thing.
> But it seems to think there will be thousands of joins for each one...
> regards, tom lane
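A hedged aside on checking what the planner bases such join estimates on: the per-column statistics it uses are visible in the pg_stats view. The table and column names below are taken from the plan quoted above; this is a way to inspect the inputs, not a diagnosis:

```sql
-- n_distinct drives the estimated number of matches per join key;
-- a badly wrong value here leads to the kind of misestimate seen above
SELECT tablename, attname, n_distinct, null_frac
FROM pg_stats
WHERE (tablename, attname) IN (('message_address_link', 'message_id'),
                               ('messages', 'messageid'));
```

Raising the statistics target and re-running ANALYZE, as described above, gives the planner a larger sample from which to derive these numbers.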
--
Regards,
Wojciech Strzałka