From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cleaning up orphaned files using undo logs
Date: 2019-08-16 04:14:25
Message-ID: CAFiTN-ubmAzjdm7=kp6agjsEFNaiLj4ZNEcTisbu4xvGRuC1aQ@mail.gmail.com
Lists: pgsql-hackers
On Wed, Aug 14, 2019 at 2:48 PM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
>
> On Wed, Aug 14, 2019 at 12:27 PM Andres Freund <andres(at)anarazel(dot)de> wrote:
> > I think that batch reading should just copy the underlying data into a
> > char* buffer. Only the records that currently are being used by
> > higher layers should get exploded into an unpacked record. That will
> > reduce memory usage quite noticeably (and I suspect it also drastically
> > reduces the overhead due to a large context with a lot of small
> > allocations that then get individually freed).
>
> Ok, I got your idea. I will analyze it further and work on this if
> there is no problem.
I think there is one problem: currently, while unpacking an undo
record, if the record is compressed (i.e. some of the fields do not
exist in the record) then we read those fields from the first record
on the page. But if we just memcpy the undo pages into the buffers and
delay the unpacking until it is needed, it seems we would need to know
the page boundary, and also the offset of the first complete record on
the page from which we can get that information (which is currently in
the undo page header).
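
To make the problem concrete, the per-page bookkeeping I think we would
need alongside the raw buffer would look roughly like this (the names
here are purely illustrative, not from the patch):

/*
 * Illustrative only: if the undo pages are kept as one raw char * buffer,
 * then to unpack a compressed record later we still need, per page, where
 * the page starts and the offset of the first complete record, since the
 * omitted fields have to be read from that record.
 */
#include "postgres.h"

typedef struct BatchReadPageInfo
{
	char	   *page_start;			/* start of this page within the buffer */
	uint16		first_full_record;	/* offset of the first complete record */
} BatchReadPageInfo;

typedef struct BatchReadBuffer
{
	char	   *data;				/* contiguous copy of the undo pages */
	int			npages;
	BatchReadPageInfo pages[FLEXIBLE_ARRAY_MEMBER];
} BatchReadBuffer;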
Even if we leave this issue aside for now, I am not very clear what
benefit you see in the approach you are describing compared to the way
I am doing it now.
a) Is it the multiple pallocs? If so, then we can allocate the memory
at once and flatten the undo records into it. Earlier I was doing
exactly that, but we need to align each unpacked undo record so that we
can access them directly, and based on Robert's suggestion I changed it
to multiple pallocs. (A rough sketch of that single-allocation variant
is below, after point c.)
b) Is it the memory size problem, i.e. that the unpacked undo records
will take more memory compared to the packed records?
c) Do you think that we will not need to unpack all the records? I
think eventually, at the higher level, we will have to unpack all the
undo records (I understand that it will be one at a time).
Or am I completely missing something here?
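
For reference, the single-allocation variant I mentioned in (a) would
look roughly like this; UnpackedSize() and CopyUnpacked() are only
placeholders for however the flattened size and the copy would actually
be computed/done:

/*
 * Sketch only, with placeholder helpers: lay the unpacked records out back
 * to back in a single palloc'd chunk, MAXALIGN-ing each start offset so
 * that callers can take direct pointers into the chunk.
 */
#include "postgres.h"

static char *
flatten_unpacked_records(UnpackedUndoRecord *recs, int nrecs)
{
	Size		total = 0;
	char	   *chunk;
	char	   *pos;

	for (int i = 0; i < nrecs; i++)
		total += MAXALIGN(UnpackedSize(&recs[i]));

	pos = chunk = palloc(total);
	for (int i = 0; i < nrecs; i++)
	{
		CopyUnpacked((UnpackedUndoRecord *) pos, &recs[i]);
		pos += MAXALIGN(UnpackedSize(&recs[i]));
	}

	return chunk;
}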
>
> > That will make the
> > sorting of undo a bit more CPU inefficient, because individual records
> > will need to be partially unpacked for comparison, but I think that's
> > going to be a far smaller loss than the win.
> Right.
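
If it helps, the comparison step I understand you are describing would
be something minimal like this, where undo_packed_get_block() is a
made-up helper standing in for decoding just the ordering key from the
packed bytes:

/*
 * Sketch of a comparator that decodes only the field(s) needed for
 * ordering from the packed record instead of exploding the whole record.
 */
#include "postgres.h"
#include "storage/block.h"

static int
undo_record_cmp(const void *a, const void *b)
{
	BlockNumber blka = undo_packed_get_block(*(char *const *) a);
	BlockNumber blkb = undo_packed_get_block(*(char *const *) b);

	if (blka < blkb)
		return -1;
	if (blka > blkb)
		return 1;
	return 0;
}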
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com