From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: GIN improvements part 3: ordering in index
Date: 2013-06-25 15:31:39
Message-ID: 51C9B7DB.1080706@vmware.com
Lists: pgsql-hackers
On 25.06.2013 01:24, Alexander Korotkov wrote:
> On Wed, Jun 19, 2013 at 1:21 AM, Alexander Korotkov <aekorotkov(at)gmail(dot)com> wrote:
>
>> On Mon, Jun 17, 2013 at 10:27 PM, Heikki Linnakangas
>> <hlinnakangas(at)vmware(dot)com> wrote:
>>
>>> That has some obvious limitations. First of all, you can run out of
>>> memory.
>>
>> Yes, it is so. qsort should be replaced with tuplesort.
>
> In attached patch qsort is replaced with tuplesort. As expected, it leads
> to some performance drawback, but it's not dramatic.
> Also, some doc is added for new distance method of GIN.
Thanks. But the fact remains that with this patch, you fetch all the
tuples and then you sort them, which is exactly the same thing you do
without the patch. The way it happens without the patch just seems to be
slower. Time to break out the profiler...
I downloaded what I believe to be the same DBLP titles dataset that you
tested with. Without the patch:
postgres=# explain analyze select * from dblp_titles where tsvector @@
to_tsquery('english', 'statistics') order by ts_rank(tsvector,
to_tsquery('english', 'statistics')) limit 10;
                                                         QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=42935.81..42935.84 rows=10 width=64) (actual time=57.945..57.948 rows=10 loops=1)
   ->  Sort  (cost=42935.81..42980.86 rows=18020 width=64) (actual time=57.943..57.944 rows=10 loops=1)
         Sort Key: (ts_rank(tsvector, '''statist'''::tsquery))
         Sort Method: top-N heapsort  Memory: 28kB
         ->  Bitmap Heap Scan on dblp_titles  (cost=211.66..42546.41 rows=18020 width=64) (actual time=13.061..46.358 rows=15142 loops=1)
               Recheck Cond: (tsvector @@ '''statist'''::tsquery)
               ->  Bitmap Index Scan on gin_title  (cost=0.00..207.15 rows=18020 width=0) (actual time=8.339..8.339 rows=15142 loops=1)
                     Index Cond: (tsvector @@ '''statist'''::tsquery)
 Total runtime: 57.999 ms
(9 rows)
And the profile looks like this:
6,94% postgres tbm_iterate
6,12% postgres hash_search_with_hash_value
4,40% postgres tbm_comparator
3,79% libc-2.17.so __memcpy_ssse3_back
3,68% postgres heap_hot_search_buffer
2,62% postgres slot_deform_tuple
2,47% postgres nocachegetattr
2,37% postgres heap_page_prune_opt
2,27% libc-2.17.so __memcmp_sse4_1
2,21% postgres heap_fill_tuple
2,18% postgres pg_qsort
1,96% postgres tas
1,88% postgres palloc0
1,83% postgres calc_rank_or
Drilling into that, the tbm_iterate, tbm_comparator and pg_qsort calls
come from the tidbitmap code, as do about 1/3 of the
hash_search_with_hash_value calls. There seems to be a fair amount of
overhead in building and iterating the tid bitmap. Is that what's
killing us?
For comparison, this is the same with your patch:
postgres=# explain analyze select * from dblp_titles where tsvector @@
to_tsquery('english', 'statistics') order by tsvector ><
to_tsquery('english', 'statistics') limit 10;
                                                         QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=16.00..52.81 rows=10 width=136) (actual time=9.957..9.980 rows=10 loops=1)
   ->  Index Scan using gin_title on dblp_titles  (cost=16.00..52198.94 rows=14175 width=136) (actual time=9.955..9.977 rows=10 loops=1)
         Index Cond: (tsvector @@ '''statist'''::tsquery)
         Order By: (tsvector >< '''statist'''::tsquery)
 Total runtime: 10.084 ms
(5 rows)
9,57% postgres scanGetItemFast
7,02% postgres calc_rank_or
5,71% postgres FunctionCall10Coll
5,59% postgres entryGetNextItem
5,19% postgres keyGetOrdering
5,13% postgres ginDataPageLeafReadItemPointer
4,89% postgres entryShift
4,85% postgres ginCompareItemPointers
3,44% postgres ginDataPageLeafRead
3,28% postgres AllocSetAlloc
3,27% postgres insertScanItem
3,18% postgres gin_tsquery_distance
2,38% postgres puttuple_common
2,26% postgres checkcondition_gin
2,20% postgres cmpEntries
2,17% postgres AllocSetFreeIndex
2,11% postgres calc_rank
Unsurprisingly, the tidbitmap overhead is not visible in the profile
with your patch.
To see how much of the difference is caused by the tidbitmap overhead, I
wrote the attached quick & dirty patch (it will crash and burn with
queries that require tidbitmap unions or intersects etc.). When there
are fewer than 10 items on a page, the tidbitmap keeps the offsets of
those items in an ordered array of offsets, instead of setting the bits
in the bitmap. In the test query, most pages have only 1, maybe 2 hits,
so that's a lot cheaper. Here's what the results look like with that patch:
postgres=# explain analyze select * from dblp_titles where tsvector @@
to_tsquery('english', 'statistics') order by ts_rank(tsvector,
to_tsquery('english', 'statistics')) limit 10;
                                                         QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=36396.80..36396.82 rows=10 width=136) (actual time=19.532..19.532 rows=0 loops=1)
   ->  Sort  (cost=36396.80..36432.23 rows=14175 width=136) (actual time=19.531..19.531 rows=0 loops=1)
         Sort Key: (ts_rank(tsvector, '''statist'''::tsquery))
         Sort Method: quicksort  Memory: 25kB
         ->  Bitmap Heap Scan on dblp_titles  (cost=169.86..36090.48 rows=14175 width=136) (actual time=19.525..19.525 rows=0 loops=1)
               Recheck Cond: (tsvector @@ '''statist'''::tsquery)
               ->  Bitmap Index Scan on gin_title  (cost=0.00..166.31 rows=14175 width=0) (actual time=8.310..8.310 rows=15142 loops=1)
                     Index Cond: (tsvector @@ '''statist'''::tsquery)
 Total runtime: 19.583 ms
(9 rows)
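To make the idea in the quick & dirty patch concrete, here is a
simplified, self-contained sketch of a per-page entry that keeps a small
sorted array of offsets and only falls back to a bitmap when the page
accumulates more hits. The PageEntry name, the threshold of 10, and the
page-size constant are illustrative, not what the attached patch or the
real tidbitmap code uses:

```c
#include <stdbool.h>
#include <string.h>

/*
 * Sketch: a page-table entry that stores up to a handful of item
 * offsets in a sorted array, converting to a bitmap only when the
 * array overflows.  Illustrative names and limits, not the real code.
 */
#define MAX_OFFSETS_PER_PAGE 10
#define MAX_TUPLES_PER_PAGE  256

typedef struct PageEntry
{
    bool        use_array;      /* still using the small-array form? */
    int         noffsets;       /* valid entries in offsets[] */
    int         offsets[MAX_OFFSETS_PER_PAGE];
    unsigned char bitmap[MAX_TUPLES_PER_PAGE / 8];
} PageEntry;

static void
entry_init(PageEntry *e)
{
    e->use_array = true;
    e->noffsets = 0;
}

/* Add one offset, keeping the array sorted; spill to bitmap when full. */
static void
entry_add(PageEntry *e, int off)
{
    if (e->use_array)
    {
        if (e->noffsets < MAX_OFFSETS_PER_PAGE)
        {
            int         i = e->noffsets++;

            /* insert at the sorted position */
            while (i > 0 && e->offsets[i - 1] > off)
            {
                e->offsets[i] = e->offsets[i - 1];
                i--;
            }
            e->offsets[i] = off;
            return;
        }

        /* array full: convert the existing members to bitmap form */
        memset(e->bitmap, 0, sizeof(e->bitmap));
        for (int i = 0; i < e->noffsets; i++)
            e->bitmap[e->offsets[i] / 8] |= 1 << (e->offsets[i] % 8);
        e->use_array = false;
    }
    e->bitmap[off / 8] |= 1 << (off % 8);
}

/* Emit members in sorted order into out[]; returns the count. */
static int
entry_members(const PageEntry *e, int *out)
{
    int         n = 0;

    if (e->use_array)
    {
        for (int i = 0; i < e->noffsets; i++)
            out[n++] = e->offsets[i];
    }
    else
    {
        for (int off = 0; off < MAX_TUPLES_PER_PAGE; off++)
            if (e->bitmap[off / 8] & (1 << (off % 8)))
                out[n++] = off;
    }
    return n;
}
```

With 1-2 hits per page, as in the test query, iteration walks a
two-entry sorted array instead of scanning a mostly-empty bitmap word
by word, which is where the savings come from.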
So, that's about 3x faster than with no patch, but your patch is still
2x faster than this. However, I believe there's still room for
improvement with this approach. The profile with this quick & dirty
patch looks like this:
13,01% postgres hash_search_with_hash_value
11,03% postgres tbm_comparator
7,10% postgres heap_page_prune_opt
6,60% postgres pg_qsort
4,47% postgres expand_table
4,06% postgres scanGetItemFast
3,82% postgres tas
3,40% postgres tbm_iterate
2,70% postgres PinBuffer
2,30% postgres hash_any
2,16% postgres hash_seq_search
2,07% postgres tbm_get_pageentry
1,86% postgres calc_bucket
So, a lot of time is still spent in tidbitmap. The overhead of
tbm_iterate is gone, but much overhead remains elsewhere. The crux of
the overhead is that when you have only 1-2 hits per page, the tidbitmap
is really expensive, as it allocates a PagetableEntry struct for each
page. I believe it would be much cheaper to just accumulate the
ItemPointers into a simple array in tbm_add_tuples, sort it once, and
return results.
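The accumulate-then-sort alternative could look roughly like this. This
is a hypothetical sketch with made-up SimpleTid/TidArray names; a real
version inside tidbitmap would still have to cope with lossy pages,
unions, intersections and memory limits:

```c
#include <stdlib.h>

/*
 * Sketch: append every TID to a flat array (analogous to
 * tbm_add_tuples), then sort once, lazily, before the first fetch
 * (analogous to tbm_iterate).  No hash table, no per-page structs.
 */
typedef struct SimpleTid
{
    unsigned int block;         /* heap block number */
    unsigned int offset;        /* line pointer offset on that block */
} SimpleTid;

typedef struct TidArray
{
    SimpleTid  *tids;
    int         ntids;
    int         maxtids;
    int         sorted;         /* already qsorted? */
} TidArray;

static int
tid_cmp(const void *a, const void *b)
{
    const SimpleTid *ta = (const SimpleTid *) a;
    const SimpleTid *tb = (const SimpleTid *) b;

    if (ta->block != tb->block)
        return (ta->block < tb->block) ? -1 : 1;
    if (ta->offset != tb->offset)
        return (ta->offset < tb->offset) ? -1 : 1;
    return 0;
}

/* O(1) append, doubling the array as needed */
static void
tidarray_add(TidArray *t, unsigned int block, unsigned int offset)
{
    if (t->ntids == t->maxtids)
    {
        t->maxtids = t->maxtids ? t->maxtids * 2 : 64;
        t->tids = realloc(t->tids, t->maxtids * sizeof(SimpleTid));
    }
    t->tids[t->ntids].block = block;
    t->tids[t->ntids].offset = offset;
    t->ntids++;
    t->sorted = 0;
}

/* Sort once on first call, then hand out TIDs in heap order. */
static const SimpleTid *
tidarray_next(TidArray *t, int *pos)
{
    if (!t->sorted)
    {
        qsort(t->tids, t->ntids, sizeof(SimpleTid), tid_cmp);
        t->sorted = 1;
    }
    if (*pos >= t->ntids)
        return NULL;
    return &t->tids[(*pos)++];
}
```

The appeal is that the per-TID cost drops to an array store plus an
O(n log n) sort amortized over the whole scan, versus a hash probe and
possible PagetableEntry allocation per page in the current code.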
In summary: The test case you presented as motivation for this patch is
a bit of a worst-case scenario for the current tidbitmap implementation.
The speedup from your patch comes from avoiding the tidbitmap. However,
it would be fairly easy to optimize the tidbitmap to handle this
scenario better, which would benefit all kinds of queries that use
bitmap scans. There is really no reason to complicate the GIN API for
this. Let's just optimize tidbitmap.
I'm not sure if I fully understand your patch, though. Is there some
other test scenario where it performs significantly better, which can
not be attributed to a tidbitmap overhead? I'm assuming 'no' for now,
and marking this patch as rejected in the commitfest app, but feel free
to reopen if there is.
- Heikki
Attachment: tidbitmap-hack-1.patch (text/x-diff, 4.6 KB)