From: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>
To: Dmitry Dolgov <9erthalion6(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Alexander Korotkov <a(dot)korotkov(at)postgrespro(dot)ru>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Pluggable Storage - Andres's take
Date: 2018-11-02 00:17:29
Message-ID: CAJrrPGc5xE_sJbGNZJ7h8d5kWwS-0SspUKa2Vmi+nwEaBUYQFw@mail.gmail.com
Lists: pgsql-hackers
On Wed, Oct 31, 2018 at 9:34 PM Dmitry Dolgov <9erthalion6(at)gmail(dot)com> wrote:
> > On Mon, 29 Oct 2018 at 05:56, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com> wrote:
> >
> >> This problem couldn't be reproduced on the master branch, so I've tried to
> >> investigate it. It comes from nodeModifyTable.c:1267, when we've got
> >> HeapTupleInvisible as a result, and this value in turn comes from
> >> table_lock_tuple. Everything points to the new way of handling the
> >> HeapTupleUpdated result from heap_update, when the table_lock_tuple call
> >> was introduced. Since I don't see anything similar in the master branch,
> >> can anyone clarify why this lock is necessary here?
> >
> > In the master branch, a tuple lock is also taken, inside the
> > EvalPlanQual() function. In the pluggable-storage code, the lock is kept
> > outside of it (with some function-call rearrangement) to make it easier
> > for table access methods to provide their own MVCC implementation.
>
> Yes, now I see it, thanks. Also I can confirm that the attached patch
> solves this issue.
>
Thanks for the testing and confirmation.
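
For reference, the rearranged control flow can be sketched roughly like
this. This is only an illustrative, self-contained simplification — the
enum values, function names, and signatures below are stand-ins, not the
actual PostgreSQL code:

```c
#include <assert.h>
#include <stdio.h>

/* Illustrative stand-ins for the result codes returned by heap_update and
 * table_lock_tuple in the pluggable-storage patches; the real enum names
 * and call signatures differ. */
typedef enum
{
    RES_OK,         /* operation succeeded */
    RES_UPDATED,    /* tuple was concurrently updated */
    RES_INVISIBLE   /* tuple version not visible: unexpected, error out */
} SketchResult;

/* Pretend lock call: in the patches, when the update reports a concurrent
 * modification, the executor (not the access method) locks the newest
 * tuple version before re-checking the quals via EvalPlanQual. */
static SketchResult
sketch_lock_latest_version(void)
{
    return RES_OK;              /* assume the lock succeeds in this sketch */
}

/* Simplified executor-side handling of an update result.
 * Returns 0 on success, -1 on an "invisible tuple" error. */
static int
sketch_exec_update(SketchResult update_result)
{
    switch (update_result)
    {
        case RES_OK:
            return 0;           /* nothing else to do */

        case RES_UPDATED:
            /* The lock is taken here, in executor code rather than inside
             * the AM, so each table AM can keep its own MVCC behaviour. */
            if (sketch_lock_latest_version() == RES_OK)
                return 0;       /* would then re-check quals (EvalPlanQual) */
            return -1;

        case RES_INVISIBLE:
            /* This is the case the reported pgbench failure hit. */
            fprintf(stderr, "unexpected invisible tuple\n");
            return -1;
    }
    return -1;
}
```

The point of the rearrangement is that the concurrency handling (the
switch above) lives in the executor, while the AM only reports what
happened to the tuple.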
> FYI, alongside reviewing the code changes I've run a few performance tests
> (that's why I hit this issue with pgbench in the first place). At high
> concurrency, so far I see a small performance degradation compared to the
> master branch (about 2-5% of average latency, depending on the level of
> concurrency), but I can't really say why exactly (perf just shows barely
> noticeable overhead here and there, so maybe what I'm seeing is actually
> a cumulative impact).
>
Thanks for sharing your observations. I will also analyze the code and try
to find the performance bottlenecks that are causing the overhead.
Attached are the cumulative fixes to the patches, the new API additions
for zheap, and a basic outline of the documentation.
Regards,
Haribabu Kommi
Fujitsu Australia
Attachments:
- 0003-First-draft-of-pluggable-storage-documentation.patch (application/octet-stream, 31.5 KB)
- 0002-New-API-s-are-added.patch (application/octet-stream, 9.0 KB)
- 0001-Further-fixes-and-cleanup.patch (application/octet-stream, 13.6 KB)