From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Speed up Clog Access by increasing CLOG buffers
Date: 2015-09-11 15:51:36
Message-ID: CA+TgmoYE4kj=fRNwPPL6+Qm-oD-JYX+RnxFjVaGGgOjT1aj70Q@mail.gmail.com
Lists: pgsql-hackers
On Fri, Sep 11, 2015 at 10:31 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > Could you perhaps try to create a testcase where xids are accessed that
> > are so far apart on average that they're unlikely to be in memory? And
> > then test that across a number of client counts?
> >
>
> Now, about the test: create a table with a large number of rows (say
> 11617457; I tried to create a larger one, but it was taking too much
> time, more than a day) and have each row inserted by a different
> transaction, so each row carries a distinct transaction id. Each test
> transaction should then update rows that are at least 1048576 apart
> (the number of transactions whose status fits in 32 CLOG buffers);
> that way, ideally, every update would touch a CLOG page that is not
> in memory. In practice, though, the row to update is chosen at
> random, and that works out to only about every 100th access actually
> hitting disk.
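If I'm following, the pgbench side of that boils down to something
like this (untested sketch; table and column names are mine, not from
your test, and \setrandom is the 9.5-era pgbench syntax):

    -- Load with one autocommitted INSERT per transaction, so each
    -- row's xmin is a distinct XID; generating 11617457 rows this
    -- way is the part that takes so long.
    CREATE TABLE t (id int PRIMARY KEY, val int);
    -- INSERT INTO t VALUES (1, 0);
    -- INSERT INTO t VALUES (2, 0);
    -- ... one statement per row, up to 11617457

    -- Custom pgbench script: picking id at random means consecutive
    -- updates hit rows whose xmins are, on average, millions of XIDs
    -- apart, well beyond the ~1048576 XIDs that 32 CLOG buffers cover.
    \setrandom aid 1 11617457
    UPDATE t SET val = val + 1 WHERE id = :aid;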
What about just running a regular pgbench test, but hacking the
XID-assignment code so that we increment the XID counter by 100 each
time instead of 1?
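Untested, but I'm imagining something like this in place of the single
TransactionIdAdvance() call in GetNewTransactionId()
(src/backend/access/transam/varsup.c):

    /*
     * After handing out "xid", advance ShmemVariableCache->nextXid
     * by 100 instead of 1.  Looping TransactionIdAdvance() rather
     * than doing nextXid += 100 keeps its wraparound handling, which
     * skips the special XIDs below FirstNormalTransactionId.
     */
    int         i;

    for (i = 0; i < 100; i++)
        TransactionIdAdvance(ShmemVariableCache->nextXid);

Each transaction then consumes 100 XIDs, so a plain pgbench run sweeps
through the XID space, and therefore the CLOG, 100 times faster than
normal, without any special table setup.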
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company