From: Rahila Syed <rahilasyed90(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Abhijit Menon-Sen <ams(at)2ndquadrant(dot)com>, Rahila Syed <rahilasyed(dot)90(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
Subject: Re: [REVIEW] Re: Compression of full-page-writes
Date: 2014-08-19 18:36:45
Message-ID: CAH2L28t-nn42mOyAoDujVLu3cd38AE56Ue830c-rbFqWd-2E9Q@mail.gmail.com
Lists: pgsql-hackers
> So, it seems like you're basically using malloc to work around the
> fact that a palloc failure is an error, and we can't throw an error in
> a critical section. I don't think that's good; we want all of our
> allocations, as far as possible, to be tracked via palloc. It might
> be a good idea to add a new variant of palloc or MemoryContextAlloc
> that returns NULL on failure instead of throwing an error; I've wanted
> that once or twice. But in this particular case, I'm not quite seeing
> why it should be necessary
I am using malloc because it returns NULL on failure, which lets us
proceed without compression of the FPW when the allocation fails.
Proceeding without compression seems more appropriate than throwing an
error and exiting merely because memory for compression could not be
allocated.
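For illustration, a minimal sketch of that fallback (the identifier buf
is hypothetical, not from the patch; sizeof(BkpBlock) and BLCKSZ are
existing PostgreSQL symbols):

/* Illustrative sketch only, not the patch itself: malloc returns NULL
 * on failure instead of throwing ERROR, so the critical section can
 * degrade gracefully to an uncompressed full-page write. */
char *buf = malloc(sizeof(BkpBlock) + BLCKSZ);

if (buf != NULL)
{
    /* compress the full-page image into buf and emit the
     * compressed backup block */
}
else
{
    /* out of memory: emit the full-page image uncompressed */
}
free(buf);                      /* free(NULL) is a no-op */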
> the number of backup blocks per record is
> limited to some pretty small number, so it ought to be possible to
> preallocate enough memory to compress them all, perhaps just by
> declaring a global variable like char wal_compression_space[8192]; or
> whatever.
In the updated patch, a static global variable is added, and memory is
allocated to it from the heap using malloc outside the critical section.
The memory block is sized for the worst case of four backup blocks:
4 * sizeof(BkpBlock) + 4 * BLCKSZ.
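Roughly, the allocation looks like this (a sketch with hypothetical
names; XLR_MAX_BKP_BLOCKS, which is 4, is the existing limit on backup
blocks per WAL record):

/* Sketch of the updated approach; compression_scratch and
 * AllocateCompressionScratch are illustrative names, not the actual
 * patch. */
static char *compression_scratch = NULL;

void
AllocateCompressionScratch(void)
{
    /* Called once, outside any critical section: worst case is four
     * BkpBlock headers plus four full pages. */
    compression_scratch = malloc(XLR_MAX_BKP_BLOCKS *
                                 (sizeof(BkpBlock) + BLCKSZ));

    /* On failure compression_scratch stays NULL and full-page writes
     * are simply emitted uncompressed later on. */
}

Inside the critical section, the code then only needs to test whether
the buffer is NULL before attempting compression.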
Thank you,
On Mon, Aug 18, 2014 at 10:40 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Thu, Jul 3, 2014 at 3:58 PM, Rahila Syed <rahilasyed90(at)gmail(dot)com>
> wrote:
> > Updated version of patches are attached.
> > Changes are as follows
> > 1. Improved readability of the code as per the review comments.
> > 2. Addition of block_compression field in BkpBlock structure to store
> > information about compression of block. This provides for switching
> > compression on/off and changing compression algorithm as required.
> > 3. Handling of OOM in critical section by checking for return value of
> > malloc and proceeding without compression of FPW if return value is NULL.
>
> So, it seems like you're basically using malloc to work around the
> fact that a palloc failure is an error, and we can't throw an error in
> a critical section. I don't think that's good; we want all of our
> allocations, as far as possible, to be tracked via palloc. It might
> be a good idea to add a new variant of palloc or MemoryContextAlloc
> that returns NULL on failure instead of throwing an error; I've wanted
> that once or twice. But in this particular case, I'm not quite seeing
> why it should be necessary - the number of backup blocks per record is
> limited to some pretty small number, so it ought to be possible to
> preallocate enough memory to compress them all, perhaps just by
> declaring a global variable like char wal_compression_space[8192]; or
> whatever.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>