From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Speed up Clog Access by increasing CLOG buffers
Date: 2015-09-12 03:01:51
Message-ID: CAA4eK1JxL0zfqNxX=a-bRyNbCfXeL9Pq8v5oeoPb8Z_u2sjL+Q@mail.gmail.com
Lists: pgsql-hackers

On Fri, Sep 11, 2015 at 9:21 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Fri, Sep 11, 2015 at 10:31 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > > Could you perhaps try to create a testcase where xids are accessed
> > > that are so far apart on average that they're unlikely to be in
> > > memory? And then test that across a number of client counts?
> > >
> >
> > Now, about the test: create a table with a large number of rows (say
> > 11617457; I tried to create a larger one, but it was taking too much
> > time (more than a day)) and give each row a different transaction id.
> > Then have each transaction update rows that are at least 1048576 (the
> > number of transactions whose status can be held in 32 CLOG buffers)
> > apart; that way, ideally, each update will try to access a CLOG page
> > that is not in memory. However, as the value to update is selected
> > randomly, in practice roughly every 100th access ends up being a disk
> > access.
>
> What about just running a regular pgbench test, but hacking the
> XID-assignment code so that we increment the XID counter by 100 each
> time instead of 1?
>
If I am not wrong, we need a difference of 1048576 transactions between
the records accessed to make each CLOG access a disk access, so if we
increment the XID counter by 100, then probably only every 10000th
transaction (or some multiple of 10000) would go to disk.
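
For reference, a minimal sketch of that hack (assuming the current
GetNewTransactionId() in src/backend/access/transam/varsup.c; the
count of 100 is just the illustrative figure from your suggestion)
could be:

    /*
     * Benchmarking hack only: burn 100 XIDs per assignment so that
     * consecutive transactions land ~100 slots apart in the CLOG.
     * This consumes the XID space 100 times faster than normal.
     */
    int         i;

    for (i = 0; i < 100; i++)
        TransactionIdAdvance(ShmemVariableCache->nextXid);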
The number 1048576 is derived from the calculation below:
#define CLOG_XACTS_PER_BYTE 4
#define CLOG_XACTS_PER_PAGE (BLCKSZ * CLOG_XACTS_PER_BYTE)
XID difference required for each transaction-status lookup to go to disk:
CLOG_XACTS_PER_PAGE * num_clog_buffers = (8192 * 4) * 32 = 1048576
(with the default BLCKSZ of 8192).
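
As a quick sanity check, the same arithmetic in a trivial standalone
program (8192 is the default BLCKSZ build setting, and 32 is the
current number of CLOG buffers assumed above):

    #include <stdio.h>

    #define BLCKSZ              8192    /* default PostgreSQL block size */
    #define CLOG_XACTS_PER_BYTE 4       /* two status bits per transaction */
    #define CLOG_XACTS_PER_PAGE (BLCKSZ * CLOG_XACTS_PER_BYTE)
    #define NUM_CLOG_BUFFERS    32      /* buffer count assumed in the test */

    int
    main(void)
    {
        /* XID gap needed so every lookup misses the cached CLOG pages */
        printf("%d\n", CLOG_XACTS_PER_PAGE * NUM_CLOG_BUFFERS);   /* 1048576 */
        return 0;
    }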
I think making every 100th transaction-status access a disk access is
sufficient to prove that there is no regression with the patch for the
scenario Andres asked about, or do you think it is not?
Another possibility here could be to try commenting out the fsync in
the CLOG path to see how much it impacts the performance of this test,
and then of the pgbench test. I am not sure there will be any impact,
because even if every 100th transaction goes to disk, that is still
small compared to the WAL fsync we have to perform for each transaction.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com