From: Kouhei Kaigai <kaigai(at)ak(dot)jp(dot)nec(dot)com>
To: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>
Cc: Kohei KaiGai <kaigai(at)kaigai(dot)gr(dot)jp>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PgHacker <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: contrib/cache_scan (Re: What's needed for cache-only table scan?)
Date: 2014-02-26 09:23:38
Message-ID: 9A28C8860F777E439AA12E8AEA7694F8F7FC3D@BPXM15GP.gisp.nec.co.jp
Lists: pgsql-hackers
> Thanks for the information; I will apply the other patches as well and
> start testing.
>
>
> When I try to run the pgbench test, the cache-scan plan is not chosen by
> default because of its higher cost. So I increased cpu_index_tuple_cost to
> a very large value, or turned off enable_indexscan, so that the planner
> chooses cache_scan as the cheapest plan.
>
It's expected. When an index scan is available, its cost is obviously
cheaper than the cache scan's, even though the cache scan does not issue
any disk I/O.
> The configuration parameters changed during the test are:
>
> shared_buffers - 2GB, cache_scan.num_blocks - 1024, wal_buffers - 16MB,
> checkpoint_segments - 255, checkpoint_timeout - 15 min,
> cpu_index_tuple_cost - 100000 (or enable_indexscan = off)
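(Side note: a minimal sketch of how those settings split into per-session and
server-wide ones, assuming the values quoted above; cache_scan.num_blocks is
the GUC added by the contrib module.)

    -- Per-session planner knobs used to force the cache-scan path:
    SET cpu_index_tuple_cost = 100000;
    -- or alternatively:
    SET enable_indexscan = off;

    -- Settings placed in postgresql.conf for the test:
    --   shared_buffers        = '2GB'
    --   wal_buffers           = '16MB'
    --   checkpoint_segments   = 255
    --   checkpoint_timeout    = '15min'
    --   cache_scan.num_blocks = 1024    -- added by contrib/cache_scan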
>
> Test procedure:
> 1. Initialize the database with pgbench at scale factor 75.
> 2. Create the triggers on pgbench_accounts.
> 3. Use a select query to load all the data into the cache.
> 4. Run a simple update pgbench test.
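(For reference, a minimal sketch of how I read steps 2 and 3; the synchronizer
trigger function name below is an assumption, so please adjust it to whatever
the patch actually installs.)

    -- Step 2: row-level and TRUNCATE triggers keep the cache in sync with
    -- writes to pgbench_accounts (function name assumed):
    CREATE TRIGGER pgbench_accounts_cache_row_sync
        AFTER INSERT OR UPDATE OR DELETE ON pgbench_accounts
        FOR EACH ROW EXECUTE PROCEDURE cache_scan_synchronizer();
    CREATE TRIGGER pgbench_accounts_cache_stmt_sync
        AFTER TRUNCATE ON pgbench_accounts
        FOR EACH STATEMENT EXECUTE PROCEDURE cache_scan_synchronizer();

    -- Step 3: a full scan pushes every visible tuple into the cache, so that
    -- later scans can be served without touching the heap:
    SELECT count(*) FROM pgbench_accounts;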
>
> Plan details of pgbench simple update queries:
>
> postgres=# explain update pgbench_accounts set abalance = abalance where aid = 100000;
>                                                QUERY PLAN
> -----------------------------------------------------------------------------------------------------------
>  Update on pgbench_accounts  (cost=0.43..100008.44 rows=1 width=103)
>    ->  Index Scan using pgbench_accounts_pkey on pgbench_accounts  (cost=0.43..100008.44 rows=1 width=103)
>          Index Cond: (aid = 100000)
>  Planning time: 0.045 ms
> (4 rows)
>
> postgres=# explain select abalance from pgbench_accounts where aid = 100000;
>                                      QUERY PLAN
> ------------------------------------------------------------------------------------
>  Custom Scan (cache scan) on pgbench_accounts  (cost=0.00..99899.99 rows=1 width=4)
>    Filter: (aid = 100000)
>  Planning time: 0.042 ms
> (3 rows)
>
> I am observing a significant performance degradation in the results. The
> performance test script is attached to this mail.
>
I would like you to compare two different cases: a sequential scan where
part of the buffers has to be loaded from storage, versus a cache-only scan.
That comparison should show a difference.
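For example, a rough sketch of such a comparison (timings via EXPLAIN ANALYZE;
exactly how to force the plain heap scan in the first case depends on the
module's GUCs, so treat the SET line as an illustration only):

    -- Case 1: sequential scan, with only part of the table resident in
    -- shared_buffers (e.g. right after a server restart, before the cache
    -- has been loaded):
    SET enable_indexscan = off;
    EXPLAIN ANALYZE SELECT abalance FROM pgbench_accounts WHERE aid = 100000;

    -- Case 2: cache-only scan, after the warm-up SELECT has populated the
    -- cache; the same query should now show "Custom Scan (cache scan)":
    EXPLAIN ANALYZE SELECT abalance FROM pgbench_accounts WHERE aid = 100000;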
Thanks,
--
NEC OSS Promotion Center / PG-Strom Project
KaiGai Kohei <kaigai(at)ak(dot)jp(dot)nec(dot)com>