From: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [WIP PATCH] for Performance Improvement in Buffer Management
Date: 2012-10-22 17:51:33
Message-ID: 6C0B27F7206C9E4CA54AE035729E9C3828542A28@szxeml509-mbx
Lists: pgsql-hackers
On Sunday, October 21, 2012 1:29 PM Amit Kapila wrote:
On Saturday, October 20, 2012 11:03 PM Jeff Janes wrote:
On Fri, Sep 7, 2012 at 6:14 AM, Amit Kapila <amit(dot)kapila(at)huawei(dot)com> wrote:
>>>> The results for the updated code are attached with this mail.
>>>> The scenario is the same as in the original mail:
>>>> 1. Load all the files into OS buffers (using pg_prewarm with the 'read' operation) for all tables and indexes.
>>>> 2. Try to load all shared buffers with "pgbench_accounts" table and "pgbench_accounts_pkey" pages (using pg_prewarm with the 'buffers' operation).
>>>> 3. Run pgbench with select-only transactions for 20 minutes.
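For reference, the three steps above can be sketched roughly as below; the (relation, mode) call form of the WIP pg_prewarm function and the database name "pgbench" are assumptions, and the commands are printed rather than executed here:

```shell
DB=pgbench   # assumed database name

# Step 1: pull every table and index into the OS cache ('read' mode).
step1="psql -d $DB -c \"SELECT pg_prewarm(oid, 'read') FROM pg_class WHERE relkind IN ('r','i');\""

# Step 2: fill shared buffers with the accounts table and its primary key ('buffers' mode).
step2="psql -d $DB -c \"SELECT pg_prewarm('pgbench_accounts', 'buffers'); SELECT pg_prewarm('pgbench_accounts_pkey', 'buffers');\""

# Step 3: a 20-minute (1200 s) select-only pgbench run.
step3="pgbench -S -c 8 -j 8 -T 1200 $DB"

printf '%s\n' "$step1" "$step2" "$step3"
```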
>
>>>> Platform details:
>>>> Operating System: Suse-Linux 10.2 x86_64
>>>> Hardware : 4 core (Intel(R) Xeon(R) CPU L5408 @ 2.13GHz)
>>>> RAM : 24GB
>
>>>> Server Configuration:
>>>> shared_buffers = 5GB (about 1/4th of RAM size)
>>>> Total data size = 16GB
>>>> Pgbench configuration:
>>>> transaction type: SELECT only
>>>> scaling factor: 1200
>>>> query mode: simple
>>>> number of clients: <varying from 8 to 64 >
>>>> number of threads: <varying from 8 to 64 >
>>>> duration: 1200 s
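The client/thread sweep in the configuration above could be driven by a loop along these lines; the intermediate counts (16, 32) are an assumption, since the mail only states the 8-64 range, and the pgbench commands are printed rather than executed:

```shell
runs=""
for c in 8 16 32 64; do
  # One select-only run per client count, with threads matched to clients.
  cmd="pgbench -S -c $c -j $c -T 1200 pgbench"
  runs="$runs$cmd;"
  echo "$cmd"
done
```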
>
>>>> I shall take further readings for the following configurations and post the same:
>>>> 1. The intention behind the configuration below is that, with the defined test case, there will be some cases where I/O can happen, so I wanted to check
>>>> its impact.
>
>>>> Shared_buffers - 7 GB
>>>> number of clients: <varying from 8 to 64 >
>>>> number of threads: <varying from 8 to 64 >
>>>> transaction type: SELECT only
>
>>> The data for shared_buffers = 7GB is attached with this mail, along with the scripts used to collect it.
>> Is this result reproducible? Did you monitor IO (with something like
>> vmstat) to make sure there was no IO going on during the runs?
> Yes, I have reproduced it 2 times. However, I shall reproduce it once more and use vmstat as well.
> I have not observed it with vmstat, but it is observable in the data.
> When shared_buffers is 5GB, the tps is higher; when I increased it to 7GB, the tps dropped, which shows that some I/O has started happening.
> When I increased it to 10GB, the tps dropped drastically, which shows there is a lot of I/O. Tomorrow I will post the 10GB shared_buffers data as well.
Today I have again collected the data for the shared_buffers = 7GB configuration, along with vmstat.
The data and the vmstat information (the 'bi' column) are attached with this mail. It is observed from the vmstat info that I/O happens in both cases; however, after running for a
long time, the I/O is comparatively less with the new patch.
I have attached the vmstat report for only one configuration, but I have data for the others as well.
Please let me know if you want to have a look at that data as well.
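As a rough illustration of how the 'bi' figures can be compared between runs, the snippet below averages the 'bi' (blocks read in) column of a vmstat log; the two sample lines stand in for a real log captured with something like "vmstat 1 1200 > vmstat.log" alongside the benchmark:

```shell
# Fabricated sample standing in for a captured vmstat log.
cat > /tmp/vmstat_sample.log <<'EOF'
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 4  0      0 123456 789012 345678    0    0   120    10 1000 2000 30  5 60  5  0
 2  1      0 123456 789012 345678    0    0   240    12 1100 2100 35  5 55  5  0
EOF
# Skip the two header lines; in this layout, field 9 is 'bi'.
avg_bi=$(awk 'NR > 2 { sum += $9; n++ } END { printf "%.0f", sum / n }' /tmp/vmstat_sample.log)
echo "average bi: $avg_bi"
```

A lower average 'bi' during the steady-state part of a run indicates less read I/O reaching the disks.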
With Regards,
Amit Kapila.
Attachment | Content-Type | Size
---|---|---
Perf_Results_SharedBuffers_7G.html | text/html | 41.3 KB
vmstat_results_SharedBuffers_7GB.html | text/html | 784.8 KB