From: desmodemone <desmodemone(at)gmail(dot)com>
To: knizhnik <knizhnik(at)garret(dot)ru>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Abhijit Menon-Sen <ams(at)2ndquadrant(dot)com>, Oleg Bartunov <obartunov(at)gmail(dot)com>
Subject: Re: In-Memory Columnar Store
Date: 2013-12-12 00:06:44
Message-ID: CAEs9oFn920CSw_0k+TTa79cdF6zQC+TdHz3xwvbdXmXa_iEMZQ@mail.gmail.com
Lists: pgsql-hackers
2013/12/9 knizhnik <knizhnik(at)garret(dot)ru>
> Hello!
>
> I want to announce my implementation of an In-Memory Columnar Store
> extension for PostgreSQL:
>
> Documentation: http://www.garret.ru/imcs/user_guide.html
> Sources: http://www.garret.ru/imcs-1.01.tar.gz
>
> Any feedback, bug reports and suggestions are welcome.
>
> The vertical (columnar) representation of the data is stored in PostgreSQL
> shared memory. This is why it is important to be able to utilize all
> available physical memory.
> Nowadays servers with 1TB or more of RAM are not exotic, especially in the
> financial world.
> But Linux with standard 4KB pages imposes a limit of 256GB on the maximal
> size of a mapped memory segment.
> It is possible to overcome this limitation either by creating multiple
> segments - but that requires too many changes in the PostgreSQL memory
> manager - or by simply setting the MAP_HUGETLB flag (assuming that huge
> pages have been allocated in the system).
>
> I found several messages related to the MAP_HUGETLB flag; the most recent
> one was from November 21:
> http://www.postgresql.org/message-id/20131125032920.GA23793@toroid.org
>
> I wonder what is the current status of this patch?
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>
Hello,
excellent work! I have begun testing and it's very fast. By the way, I
found a strange case of an "endless" query that pegs the CPU at 100% when
the value used as a filter does not exist.
I am testing with PostgreSQL 9.3.1 on Debian, using the default settings
for the extension except for memory (512MB).
How to recreate the test case:
## create a table:
create table endless (col1 int, col2 char(30), col3 int);
## insert some values:
insert into endless values (1, 'ahahahaha', 3);
insert into endless values (2, 'ghghghghg', 4);
## create the column store objects:
select cs_create('endless','col1','col2');
cs_create
-----------
(1 row)
## try and test the column store:
select cs_avg(col3) from endless_get('ahahahaha');
cs_avg
--------
3
(1 row)
select cs_avg(col3) from endless_get('ghghghghg');
cs_avg
--------
4
(1 row)
## now select with a value that does not exist:
select cs_avg(col3) from endless_get('testing');
# the query now spins at 100% CPU and never seems to end; I had to
terminate the backend
Bye
Mat