From: rudi <rudolone(at)gmail(dot)com>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: High CPU usage after partitioning
Date: 2013-01-22 13:34:55
Message-ID: CAMxPiKFW1VaMXo7HoH6fd3deQ1LJ4wU3DEAWG0sYjyg1nRq-_Q@mail.gmail.com
Lists: pgsql-performance
On Tue, Jan 22, 2013 at 1:41 AM, Andrew Dunstan <andrew(at)dunslane(dot)net> wrote:
>
> On 01/21/2013 10:05 AM, rudi wrote:
>
>> Hello,
>>
>> I'm running PostgreSQL 9.0. After partitioning a big table, CPU usage
>> rose from an average of 5-10% to an average of 70-80%.
>>
>> - the table contains about 20,000,000 rows
>> - partitions are selected by a trigger, based on an indexed date field
>> (IF date_taken >= x AND date_taken < y)
>> - I created 5 partitions; the 2012 one now contains most of the rows. The
>> 2013 partition is the "live" partition: mostly inserts, plus a few selects
>> based on the indexed field above. The 2014 and 2015 partitions are empty
>> - constraint exclusion is on.
>> I have two weeks of CPU usage reports and the pattern definitely changed
>> after I made the partitions. Any idea?
>>
>>
>>
> Well, the first question that comes to my mind is whether it's the inserts
> that are causing the load or the reads. If it's the inserts then you should
> show us the whole trigger. Does it by any chance use 'execute'?
I think I found the culprit. The insert trigger doesn't seem to be the
issue: it is a trivial IF-ELSE and inserts seem fast.
IF (NEW.date_taken < DATE '2013-01-01') THEN
    INSERT INTO sb_logs_2012 VALUES (NEW.*);
ELSIF (NEW.date_taken >= DATE '2013-01-01'
       AND NEW.date_taken < DATE '2014-01-01') THEN
    INSERT INTO sb_logs_2013 VALUES (NEW.*);
[...]
END IF;
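For completeness, this routing presumably sits in a trigger function attached
to the parent table along these lines; only the IF/ELSIF body above is the
real one, while the function and trigger names and the closing ELSE guard are
placeholders for the sketch:

CREATE OR REPLACE FUNCTION sb_logs_insert_trigger()
RETURNS trigger AS $$
BEGIN
    IF (NEW.date_taken < DATE '2013-01-01') THEN
        INSERT INTO sb_logs_2012 VALUES (NEW.*);
    ELSIF (NEW.date_taken >= DATE '2013-01-01'
           AND NEW.date_taken < DATE '2014-01-01') THEN
        INSERT INTO sb_logs_2013 VALUES (NEW.*);
    -- ... one ELSIF per remaining yearly partition ...
    ELSE
        RAISE EXCEPTION 'date_taken out of range: %', NEW.date_taken;
    END IF;
    RETURN NULL;  -- the row has already been redirected to a child table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sb_logs_partition_insert
    BEFORE INSERT ON sb_logs
    FOR EACH ROW EXECUTE PROCEDURE sb_logs_insert_trigger();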
Every query has been carefully optimized and the child tables are indexed.
The tables have a UNIQUE index on ("date_taken", "device_id"), and
"date_taken" is the partitioning column (one partition per year).
There are a few well-known access paths to this table: INSERTs
(40,000-50,000 each day) and SELECTs on a specific device_id AND a specific
day.
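In sketch form, that layout is roughly the following (9.0-style inheritance
partitioning; the column types and the exact CHECK constraints here are
illustrative rather than the real ones, while the index naming matches the
plans below):

-- Parent table; the application reads and writes through it.
-- Column types are a guess, other columns are omitted.
CREATE TABLE sb_logs (
    device_id  integer   NOT NULL,
    date_taken timestamp NOT NULL
    -- ... remaining log columns ...
);

-- One child table per year. The CHECK constraint is what lets
-- constraint_exclusion skip partitions when date_taken is constrained.
CREATE TABLE sb_logs_2013 (
    CHECK (date_taken >= DATE '2013-01-01'
       AND date_taken <  DATE '2014-01-01')
) INHERITS (sb_logs);

-- The per-child unique index on (date_taken, device_id) described above.
CREATE UNIQUE INDEX sb_logs_2013_on_date_taken_and_device_id
    ON sb_logs_2013 (date_taken, device_id);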
BUT I discovered another access path, used by a process every few seconds to
fetch the last log for a given device, and this query became really slow
after partitioning:
Result  (cost=341156.04..341182.90 rows=4 width=86) (actual time=1132.326..1132.329 rows=1 loops=1)
  InitPlan 1 (returns $0)
    ->  Aggregate  (cost=341156.03..341156.04 rows=1 width=8) (actual time=1132.295..1132.296 rows=1 loops=1)
          ->  Append  (cost=0.00..341112.60 rows=17371 width=8) (actual time=45.600..1110.057 rows=19016 loops=1)
                ->  Seq Scan on sb_logs  (cost=0.00..0.00 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=1)
                      Filter: (device_id = 901)
                ->  Index Scan using sb_logs_2012_on_date_taken_and_device_id on sb_logs_2012 sb_logs  (cost=0.00..319430.51 rows=16003 width=8) (actual time=45.599..1060.143 rows=17817 loops=1)
                      Index Cond: (device_id = 901)
                ->  Index Scan using sb_logs_2013_on_date_taken_and_device_id on sb_logs_2013 sb_logs  (cost=0.00..21663.39 rows=1363 width=8) (actual time=0.022..47.661 rows=1199 loops=1)
                      Index Cond: (device_id = 901)
                ->  Bitmap Heap Scan on sb_logs_2014 sb_logs  (cost=10.25..18.71 rows=4 width=8) (actual time=0.011..0.011 rows=0 loops=1)
                      Recheck Cond: (device_id = 901)
                      ->  Bitmap Index Scan on sb_logs_2014_on_date_taken_and_device_id  (cost=0.00..10.25 rows=4 width=0) (actual time=0.008..0.008 rows=0 loops=1)
                            Index Cond: (device_id = 901)
  ->  Append  (cost=0.00..26.86 rows=4 width=86) (actual time=1132.325..1132.328 rows=1 loops=1)
        ->  Seq Scan on sb_logs  (cost=0.00..0.00 rows=1 width=90) (actual time=0.002..0.002 rows=0 loops=1)
              Filter: ((device_id = 901) AND (date_taken = $0))
        ->  Index Scan using sb_logs_2012_on_date_taken_and_device_id on sb_logs_2012 sb_logs  (cost=0.00..10.20 rows=1 width=90) (actual time=1132.314..1132.314 rows=0 loops=1)
              Index Cond: ((date_taken = $0) AND (device_id = 901))
        ->  Index Scan using sb_logs_2013_on_date_taken_and_device_id on sb_logs_2013 sb_logs  (cost=0.00..8.39 rows=1 width=91) (actual time=0.007..0.008 rows=1 loops=1)
              Index Cond: ((date_taken = $0) AND (device_id = 901))
        ->  Index Scan using sb_logs_2014_on_date_taken_and_device_id on sb_logs_2014 sb_logs  (cost=0.00..8.27 rows=1 width=72) (actual time=0.002..0.002 rows=0 loops=1)
              Index Cond: ((date_taken = $0) AND (device_id = 901))
Total runtime: 1132.436 ms
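The statement behind this plan is essentially of the following form (a
reconstruction from the plan shape, with 901 standing in for the device
parameter, not the literal query text):

SELECT *
FROM sb_logs
WHERE device_id = 901
  AND date_taken = (SELECT max(date_taken)
                    FROM sb_logs
                    WHERE device_id = 901);

The InitPlan is the inner max(date_taken) lookup: on 9.0 it has to walk every
partition's index for the device, and that is where almost all of the 1132 ms
goes.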
I must find a better way to get that information, but I wonder if there could
be a better plan. The same query over a table with the same structure but not
partitioned gives a far better plan:
Index Scan using index_iv_logs_on_date_taken_and_device_id on iv_logs  (cost=12.35..21.88 rows=1 width=157) (actual time=0.065..0.066 rows=1 loops=1)
  Index Cond: ((date_taken = $1) AND (device_id = 1475))
  InitPlan 2 (returns $1)
    ->  Result  (cost=12.34..12.35 rows=1 width=0) (actual time=0.059..0.059 rows=1 loops=1)
          InitPlan 1 (returns $0)
            ->  Limit  (cost=0.00..12.34 rows=1 width=8) (actual time=0.055..0.056 rows=1 loops=1)
                  ->  Index Scan Backward using index_iv_logs_on_date_taken_and_device_id on iv_logs  (cost=0.00..261052.53 rows=21154 width=8) (actual time=0.055..0.055 rows=1 loops=1)
                        Index Cond: ((date_taken IS NOT NULL) AND (device_id = 1475))
Total runtime: 0.110 ms
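One candidate for that "better way" (just a sketch, not something benchmarked
here) is to ask each child table directly for its newest row for the device
and keep the latest of those, since on 9.0 an ORDER BY ... LIMIT 1 on the
parent cannot use the children's indexes for ordering:

SELECT *
FROM (
    (SELECT * FROM sb_logs_2012 WHERE device_id = 901
       ORDER BY date_taken DESC LIMIT 1)
    UNION ALL
    (SELECT * FROM sb_logs_2013 WHERE device_id = 901
       ORDER BY date_taken DESC LIMIT 1)
    UNION ALL
    (SELECT * FROM sb_logs_2014 WHERE device_id = 901
       ORDER BY date_taken DESC LIMIT 1)
) AS latest_per_partition
ORDER BY date_taken DESC
LIMIT 1;

An index leading with device_id, i.e. (device_id, date_taken), would fit this
kind of lookup better than the existing (date_taken, device_id) one, at the
cost of an extra index per partition.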
--
rd
This is the way the world ends.
Not with a bang, but a whimper.