From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>
Cc: Beena Emerson <memissemerson(at)gmail(dot)com>, Sameer Thakur <samthakur74(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Priority table or Cache table
Date: 2015-08-10 05:09:38
Message-ID: CAA4eK1L3HkZ8-M=ksVYTc98A4tOi0=Tg-HW_6bHj9EJo81f_6A@mail.gmail.com
Lists: pgsql-hackers
On Thu, Aug 6, 2015 at 12:24 PM, Haribabu Kommi
<kommi(dot)haribabu(at)gmail(dot)com> wrote:
>
> On Mon, Jun 30, 2014 at 11:08 PM, Beena Emerson
> <memissemerson(at)gmail(dot)com> wrote:
> >
> > I also ran the test script after making the same configuration changes
> > that you have specified. I found that I was not able to get the same
> > performance difference that you have reported.
> >
> > Following table lists the tps in each scenario and the % increase in
> > performance.
> >
> > Threads   Head   Patched   Diff
> >       1   1669      1718     3%
> >       2   2844      3195    12%
> >       4   3909      4915    26%
> >       8   7332      8329    14%
> >
>
>
> Coming back to this old thread.
>
> I just tried a new approach for this priority table: instead of an
> entirely separate buffer pool, it reserves some portion of shared
> buffers for priority tables, using a GUC variable
> "buffer_cache_ratio" (0-75) to specify what percentage of shared
> buffers to use.
>
> Syntax:
>
> create table tbl(f1 int) with(buffer_cache=true);
>
> Compared to the earlier approach, I thought this approach would be
> easier to implement. But during the performance run, it didn't show
> much improvement in performance.
> Here are the test results.
>
What is the configuration for the test (RAM of the machine,
shared_buffers, scale_factor, etc.)?
> Threads   Head   Patched    Diff
>       1   3123      3238   3.68%
>       2   5997      6261   4.40%
>       4  11102     11407   2.75%
>
> I suspect the problem may be caused by contention on the buffer locks,
> whereas in the older approach of separate buffer pools, each buffer
> pool had its own locks.
> I will try to collect the profile output and analyze it.
>
> Any better ideas?
>
I think you should try to find out, during the test, for how many pages
it needs to perform a clock sweep (add a new counter like
numBufferBackendClocksweep in BufferStrategyControl to measure this).
In theory, your patch should reduce the number of times it needs to
perform a clock sweep.
I think in this approach, even if you make some buffers non-replaceable
(buffers for which BM_BUFFER_CACHE_PAGE is set), the clock sweep still
needs to visit all the buffers. We might want to find some way to
reduce that cost if this idea proves helpful.
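
To make that cost concrete, here is a minimal, self-contained toy model
in C (an editorial sketch, not PostgreSQL code and not the patch). It
simulates a clock sweep in which some fraction of the buffers is marked
non-replaceable, in the spirit of BM_BUFFER_CACHE_PAGE, and counts how
many descriptors each victim search visits, which is the kind of number
a counter like numBufferBackendClocksweep would expose. The buffer
count, usage-count cap, and allocation count are arbitrary assumptions.

/*
 * Toy model of clock-sweep cost with non-replaceable "cache" buffers.
 * Not PostgreSQL code; all constants are illustrative.
 */
#include <stdio.h>
#include <stdlib.h>

#define NBUFFERS    1024
#define MAX_USAGE   5
#define ALLOCS      100000

static int  usage[NBUFFERS];      /* usage counts, as in the clock sweep */
static int  cachepage[NBUFFERS];  /* 1 = non-replaceable "cache" buffer */
static int  next_victim = 0;
static long sweep_visits = 0;     /* analogue of the proposed counter */

/* Find a victim: decrement usage counts until one reaches zero.
 * Cache pages are never evicted but are still visited, which is
 * exactly the cost discussed above. */
static int
clock_sweep(void)
{
    for (;;)
    {
        int b = next_victim;

        next_victim = (next_victim + 1) % NBUFFERS;
        sweep_visits++;

        if (cachepage[b])
            continue;           /* skipped, but still inspected */
        if (usage[b] > 0)
        {
            usage[b]--;         /* give it another trip around the clock */
            continue;
        }
        return b;
    }
}

int
main(void)
{
    int pct;

    for (pct = 0; pct <= 75; pct += 25)   /* like buffer_cache_ratio */
    {
        int i;

        sweep_visits = 0;
        next_victim = 0;
        for (i = 0; i < NBUFFERS; i++)
        {
            usage[i] = rand() % (MAX_USAGE + 1);
            cachepage[i] = (i < NBUFFERS * pct / 100);
        }

        for (i = 0; i < ALLOCS; i++)
            usage[clock_sweep()] = MAX_USAGE;  /* victim gets reused */

        printf("cache ratio %2d%%: %.2f buffers visited per allocation\n",
               pct, (double) sweep_visits / ALLOCS);
    }
    return 0;
}

As the non-replaceable fraction grows toward the 75% cap of
buffer_cache_ratio, the visits per allocation grow with it; a real
counter in BufferStrategyControl would confirm or refute whether the
patch actually shortens the sweep.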
Another thing is that this idea looks somewhat similar (although not
the same) to the current ring buffer concept, where particular types of
scan use buffers from a private ring. I think it is okay to prototype
as you have done in the patch, and we can consider doing something
along those lines if this patch's idea turns out to help.
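
For comparison, here is roughly how existing code opts into such a
ring. The buffer-access-strategy calls below (GetAccessStrategy,
ReadBufferExtended, FreeAccessStrategy) are the real bufmgr API, but
the fragment itself is an illustrative piece of backend code, not part
of the patch, and the function name is made up.

#include "postgres.h"
#include "storage/bufmgr.h"
#include "utils/rel.h"

/* Illustrative fragment: read a relation through a small private ring
 * (BAS_BULKREAD) instead of competing for all of shared_buffers. */
static void
scan_with_ring(Relation rel, BlockNumber nblocks)
{
    BufferAccessStrategy strategy = GetAccessStrategy(BAS_BULKREAD);
    BlockNumber blkno;

    for (blkno = 0; blkno < nblocks; blkno++)
    {
        Buffer  buf = ReadBufferExtended(rel, MAIN_FORKNUM, blkno,
                                         RBM_NORMAL, strategy);

        /* ... inspect the page here ... */
        ReleaseBuffer(buf);
    }

    FreeAccessStrategy(strategy);
}

The difference from the per-table idea is the direction of confinement:
a ring limits a scan's working set to a few buffers, while the patch
pins a table's working set into shared buffers and leaves every
backend's clock sweep to walk over the pinned pages.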
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com