From: "Sorbara, Giorgio (CIOK)" <Giorgio(dot)Sorbara(at)fao(dot)org>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tomas Vondra <tv(at)fuzzy(dot)cz>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Strange query plan
Date: 2011-11-04 16:14:48
Message-ID: 9B1BED361D2D674EB623A26D1F045D001AD67D18BF@HQEXDB01.hq.un.fao.org
Lists: pgsql-performance
> -----Original Message-----
> From: Robert Haas [mailto:robertmhaas(at)gmail(dot)com]
> Sent: 04 November 2011 5:07 PM
> To: Sorbara, Giorgio (CIOK)
> Cc: Tomas Vondra; pgsql-performance(at)postgresql(dot)org
> Subject: Re: [PERFORM] Strange query plan
>
> On Mon, Oct 31, 2011 at 9:52 AM, Sorbara, Giorgio (CIOK)
> <Giorgio(dot)Sorbara(at)fao(dot)org> wrote:
> > Group (cost=0.00..4674965.80 rows=200 width=17) (actual time=13.375..550943.592 rows=1 loops=1)
> >   ->  Append (cost=0.00..4360975.94 rows=125595945 width=17) (actual time=13.373..524324.817 rows=125595932 loops=1)
> >         ->  Index Scan using f_suipy_pkey on f_suipy (cost=0.00..5.64 rows=1 width=58) (actual time=0.019..0.019 rows=0 loops=1)
> >               Index Cond: ((fk_theme)::text = 'main_py_six_scxc'::text)
> >         ->  Seq Scan on f_suipy_main_py_six_scxc f_suipy (cost=0.00..4360970.30 rows=125595944 width=17) (actual time=13.352..495259.117 rows=125595932 loops=1)
> >               Filter: ((fk_theme)::text = 'main_py_six_scxc'::text)
> > Total runtime: 550943.699 ms
>
> How fast do you expect this to run? It's aggregating 125 million
> rows, so that's going to take some time no matter how you slice it.
> Unless I'm misreading this, it's actually taking only about 4
> microseconds per row, which does not obviously suck.
Well, the problem is not how long it takes to process one row, but rather what the best query plan should be. I don't mean the planner is wrong; I was expecting a feature that isn't there (yet).
We don't have a pure index scan. Fair enough. So I have approached the problem in a different way: getting rid of the degenerate dimension and exploiting a "useless" dimension table.
It's a workaround, but it actually seems to work :) Now I have a ~350 million row fact table and no partitions, but I am happy to get the data I want in 1 second or less.
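In case it helps anyone else, the idea looks roughly like the sketch below: pull the degenerate dimension (here the wide `fk_theme` text column) out of the fact table into a small dimension table and join on a narrow surrogate key, so the fact rows and their indexes shrink. All table and column names here are illustrative, not the actual schema:

```sql
-- Hypothetical dimension table holding the former degenerate dimension.
CREATE TABLE d_theme (
    theme_id   smallint PRIMARY KEY,
    theme_name text NOT NULL UNIQUE   -- e.g. 'main_py_six_scxc'
);

-- The fact table keeps only the compact key instead of the text value.
CREATE TABLE f_suipy (
    theme_id smallint NOT NULL REFERENCES d_theme (theme_id),
    value    numeric
    -- ... other measures ...
);
CREATE INDEX f_suipy_theme_idx ON f_suipy (theme_id);

-- Queries then filter through the dimension rather than a wide text column:
SELECT sum(f.value)
FROM f_suipy f
JOIN d_theme d USING (theme_id)
WHERE d.theme_name = 'main_py_six_scxc';
```

The planner can resolve the `theme_name` predicate against the tiny dimension table and scan the fact table via the narrow `smallint` key, which is much cheaper than filtering 125M rows on a repeated text value.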
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company