From: higepon <higepon(at)gmail(dot)com>
To: Greg Stark <stark(at)enterprisedb(dot)com>
Cc: Itagaki Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Extra cost of "lossy mode" Bitmap Scan plan
Date: 2009-04-28 08:32:14
Message-ID: f07386410904280132h7a2579f2te260896bc43390dd@mail.gmail.com
Lists: pgsql-hackers
Hi.
> There is something odd in this concern. Normally people aren't raising
> and lowering their work_mem so the comparison would be between a plan
> where the planner expects to see n records and a plan where the
> planner expects to see n+1 records where n would fit and n+1 wouldn't.
I see.
> It seems like an awfully narrow corner case where n records would be
> faster as a bitmap index scan but n+1 records would be faster as an
> index scan because the bitmap becomes lossy. The whole point of bitmap
> scans is that they're faster for large scans than index scans.
Hmm. Is this really a narrow corner case?
At least ITAGAKI-san and I have both run into it.
I think the default work_mem value (1MB) may cause these issues
(a sketch of how to reproduce this is below).
Do you think there's no need for the planner to know
whether the plan is lossy or not?
If the planner knew the cost of lossy mode, it might choose
better plans than it does now.
Alternatively, a second option is to give some hints to people doing
performance tuning:
(a) Write a trace log entry when a bitmap scan falls into "lossy" mode.
(b) Show whether the scan was lossy in EXPLAIN output.
Best regards,
-----
MINOWA Taro (Higepon)
Cybozu Labs, Inc.
http://www.monaos.org/
http://code.google.com/p/mosh-scheme/
On Tue, Apr 28, 2009 at 4:51 PM, Greg Stark <stark(at)enterprisedb(dot)com> wrote:
> On Tue, Apr 28, 2009 at 7:45 AM, higepon <higepon(at)gmail(dot)com> wrote:
>> "How much extra cost should we add for lossy mode?".
>
> There is something odd in this concern. Normally people aren't raising
> and lowering their work_mem so the comparison would be between a plan
> where the planner expects to see n records and a plan where the
> planner expects to see n+1 records where n would fit and n+1 wouldn't.
>
> It seems like an awfully narrow corner case where n records would be
> faster as a bitmap index scan but n+1 records would be faster as an
> index scan because the bitmap becomes lossy. The whole point of bitmap
> scans is that they're faster for large scans than index scans.
>
> If the logic you're suggesting would kick in at all it would be for a
> narrow range of scan sizes, so the net effect would be to use an index
> scan for small scans, then switch to a bitmap scan, then switch back
> to an index scan when the bitmap scan becomes lossy, then switch back
> to a lossy bitmap scan for large scans. I'm thinking that even if it's
> slightly faster when the planner has perfect inputs the downsides of
> switching back and forth might not be worth it. Especially when you
> consider that the planner is often going on approximate estimates and
> it is probably not switching in precisely the right spot.
>
> --
> greg
>