From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Lucas Lersch <lucaslersch(at)gmail(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Buffer Requests Trace
Date: 2014-10-15 17:41:46
Message-ID: CAMkU=1z5n6FZVcb_7bHAU8QXAmsE=-Vv8x3CAOkVN3XrjrSRuw@mail.gmail.com
Lists: pgsql-hackers
On Wed, Oct 15, 2014 at 6:22 AM, Lucas Lersch <lucaslersch(at)gmail(dot)com> wrote:
> So is it a possible normal behavior that running tpcc for 10min only
> access 50% of the database? Furthermore, is there a guideline of parameters
> for tpcc (# of warehouses, execution time, operations weight)?
>
>
I'm not familiar with your benchmarking tool. With the one I am most
familiar with, pgbench, if you run it against a database which is too big
to fit in memory, it can take a very long time to touch each page once,
because the constant random disk reads make it run very slowly. Maybe
that is something to consider here--how many transactions were actually
executed during your 10 min run?
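As a rough back-of-the-envelope illustration (these numbers are assumptions on
my part, not measurements from your setup): a single spinning disk doing on the
order of 100-200 random 8kB reads per second touches at most 60,000-120,000
distinct pages in ten minutes, which is less than 1GB of data. If the database
is several times larger than that and larger than RAM, a short run simply
cannot visit all of it.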
Also, the tool might build tables that are only used under certain run
options. Perhaps you just aren't choosing the options which invoke usage
of those tables. Since you have the trace data, it should be pretty easy
to count how many distinct blocks are accessed in each relation and
compare that to each relation's size, to see which relations are unused
or lightly used.
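For example, if each trace line were just a relfilenode and a block number
(that format is only a guess on my part; adapt the parsing to whatever you
actually log), a few lines of Python along these lines would give per-relation
counts:

#!/usr/bin/env python3
# Sketch only: assumes each trace line is "relfilenode blocknumber",
# e.g. "16396 10234".  Adjust the parsing to the real trace format.
import sys
from collections import defaultdict

distinct = defaultdict(set)   # relfilenode -> distinct block numbers seen
requests = defaultdict(int)   # relfilenode -> total buffer requests

with open(sys.argv[1]) as trace:
    for line in trace:
        fields = line.split()
        if len(fields) < 2:
            continue                      # skip blank or malformed lines
        rel, blk = fields[0], int(fields[1])
        distinct[rel].add(blk)
        requests[rel] += 1

for rel in sorted(distinct, key=lambda r: len(distinct[r]), reverse=True):
    print("%s: %d distinct blocks, %d requests"
          % (rel, len(distinct[rel]), requests[rel]))

You could then compare the distinct-block counts against relpages in pg_class
(or pg_relation_size(oid) divided by the 8kB block size) to see which relations
the benchmark never touches.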
Cheers,
Jeff