From: dennis jenkins <dennis(dot)jenkins(dot)75(at)gmail(dot)com>
To: Andy Colson <andy(at)squeakycode(dot)net>
Cc: Stefan Keller <sfkeller(at)gmail(dot)com>, pgsql-general List <pgsql-general(at)postgresql(dot)org>
Subject: Re: Call for Google Summer of Code (GSoC) 2012: Project ideas?
Date: 2012-03-08 20:11:43
Message-ID: CAAEzAp80mROGLXOt81DM5LXJrG54bKKx25FdPN8NOKaB6xG9XQ@mail.gmail.com
Lists: pgsql-general
>> Now I have the "burden" to look for a cool project... Any ideas?
>>
>> -Stefan
>>
>
> How about one of:
>
> 1) On-disk page-level compression (maybe with LZF or Snappy; maybe not page
> level, any level really).
>
> I know TOAST compresses, but I believe it works on only one row at a time.
> Page-level compression would compress better because there is more data, and
> it would also decrease the amount of IO, so it might speed up disk access.
>
> 2) Better partitioning support. Something much more automatic.
>
> 3) Take a nice big table and have it inserted/updated a few times a second.
> Then make "select * from bigtable where indexed_field = 'somevalue';" work
> 10 times faster than it does today.
>
>
> I think there is also a wish list on the wiki somewhere.
>
> -Andy
>
Ability to dynamically resize the shared-memory segment without taking
PostgreSQL down :)
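As an aside on Andy's point (1): the intuition that compressing a whole page beats compressing each row separately is easy to demonstrate. The sketch below is not PostgreSQL code, just an illustration using zlib on made-up rows (the data and sizes are assumptions); per-row compression pays a fixed header cost per datum and cannot exploit redundancy *between* rows, while compressing the concatenated "page" can.

```python
import zlib

# Hypothetical short, mutually redundant records, as you might find on one
# heap page of a table. The content is invented purely for illustration.
rows = [
    f"user_{i:04d},active,2012-03-08,some repeated payload text".encode()
    for i in range(200)
]

# Per-row compression: roughly the situation when each datum is compressed
# independently (as TOAST does for individual values).
per_row_total = sum(len(zlib.compress(r)) for r in rows)

# Page-level compression: all rows compressed together as one block, so the
# compressor can exploit redundancy shared across rows.
page_total = len(zlib.compress(b"".join(rows)))

print(f"per-row: {per_row_total} bytes, whole page: {page_total} bytes")
```

On data like this the whole-page result is dramatically smaller, which is exactly why page-level (or any larger-granularity) compression is attractive for reducing IO.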