From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Simulating Clog Contention
Date: 2012-01-27 21:45:16
Message-ID: CAMkU=1z9H-1shJpxeuuf12tZSmu2USsAb587DikVsVW1=GK_Og@mail.gmail.com
Lists: pgsql-hackers
On Thu, Jan 12, 2012 at 4:31 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> The following patch adds a pgbench option -I to load data using
> INSERTs, so that we can begin benchmark testing with rows that have
> large numbers of distinct un-hinted transaction ids. With a database
> pre-created using this we will be better able to simulate and thus
> more easily measure clog contention. Note that current clog has space
> for 1 million xids, so a scale factor of greater than 10 is required
> to really stress the clog.
Running with this patch with a non-default scale factor generates the
spurious notice:
"Scale option ignored, using pgbench_branches table count = 10"
In fact the scale option is not being ignored, because it was used to
initialize the pgbench_branches table count earlier in this same
invocation.
I think that even in normal (non-initialization) usage, this message
should be suppressed when the provided scale factor
is equal to the pgbench_branches table count.
Cheers,
Jeff