From:       Prabhjot Sheena <prabhjot(dot)sheena(at)rivalwatch(dot)com>
To:         John R Pierce <pierce(at)hogranch(dot)com>
Cc:         pgsql <pgsql-general(at)postgresql(dot)org>
Subject:    Re: Checkpoint_segments optimal value
Date:       2014-07-26 00:58:43
Message-ID: CAOf_bQaAx1uRH+GAMReYwOGWgOLgZJUyPabUS2V5-e_CB13Y=g@mail.gmail.com
Lists:      pgsql-admin pgsql-general
Thanks a lot, Kevin. Here is what I did to improve query performance: I recreated all the indexes on the work_unit table and have been running VACUUM ANALYZE from a cron job three times a day on the two tables involved in the query. Query performance is now between 2 and 3 seconds. The strange thing I noticed is that today, at one point, the query dropped to under 1 second and started using this query plan:
http://explain.depesz.com/s/h5q8
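For the record, the maintenance amounts to roughly the following statements (a sketch; work_unit and run are the tables mentioned in this thread, and the REINDEX was a one-off rather than part of the cron job):

  REINDEX TABLE work_unit;    -- one-off rebuild of all indexes on work_unit
  VACUUM ANALYZE work_unit;   -- run from cron three times a day
  VACUUM ANALYZE run;         -- run from cron three times a day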
But then the cron job ran VACUUM ANALYZE on the work_unit and run tables, and once it completed the query switched to another plan, which made it slow again, back to between 2 and 3 seconds. This is the plan being used now, the slower one:
http://explain.depesz.com/s/AiG
Why is that happening, and what should I do to get it back to this query plan?
http://explain.depesz.com/s/h5q8
No changes have been made to the table structure or indexes between those two query plans.
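If it helps, this is what I can compare right before and right after the cron VACUUM ANALYZE, since the statistics are the only planner input the cron job touches (a sketch; the column list is just illustrative):

  SELECT relname, reltuples, relpages
  FROM pg_class
  WHERE relname IN ('work_unit', 'run');

  SELECT tablename, attname, n_distinct, correlation
  FROM pg_stats
  WHERE tablename IN ('work_unit', 'run');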
My other question is: when a manual vacuum runs on a table, like VACUUM ANALYZE work_unit or VACUUM ANALYZE run, does that also clean up the indexes or not? And if not, does it do any damage to the indexes?
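For context, the check I can run on my side to watch the index sizes before and after a manual vacuum (a sketch; these views and functions should be available on 8.3):

  SELECT indexrelname AS index_name,
         pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
  FROM pg_stat_user_indexes
  WHERE relname IN ('work_unit', 'run');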
We do have an I/O problem during heavy loads because we share a mount point from a NetApp filer across different machines. We are getting rid of that in a few weeks by moving everything to Amazon AWS, and the I/O issues should go away with faster IOPS.
I have copied some more information below. *Can you please recommend what values I should set for temp_buffers and work_mem, and also what query I should run to check for bloat?*
caesius=# SELECT version();
version
----------------------------------------------------------------------------------------------------------
PostgreSQL 8.3.7 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-9)
(1 row)
caesius=# show max_connections ;
max_connections
-----------------
600
caesius=# select count(*) from pg_stat_activity;
count
-------
165
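In case it is useful, this is how I can break those 165 connections down into busy versus idle (a sketch; on 8.3 idle backends report '<IDLE>' in current_query):

  SELECT (current_query = '<IDLE>') AS idle, count(*)
  FROM pg_stat_activity
  GROUP BY 1;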
#autovacuum = on
log_autovacuum_min_duration = 10000
autovacuum_max_workers = 1
#autovacuum_naptime = 1min
#autovacuum_vacuum_threshold = 50
#autovacuum_analyze_threshold = 50
autovacuum_vacuum_scale_factor = 0.1
#autovacuum_analyze_scale_factor = 0.05
#autovacuum_freeze_max_age = 200000000
#autovacuum_vacuum_cost_delay = 20
#autovacuum_vacuum_cost_limit = -1
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0 # 0-1000 milliseconds
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
[caesius(at)clientdb01 tmp]$ cat /proc/meminfo
MemTotal: 12582912 kB
MemFree: 204748 kB
Buffers: 4540 kB
Cached: 9541024 kB
SwapCached: 5324 kB
Active: 5218556 kB
Inactive: 6554684 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 12582912 kB
LowFree: 204748 kB
SwapTotal: 16777208 kB
SwapFree: 16755516 kB
Dirty: 36584 kB
Writeback: 20 kB
AnonPages: 2227364 kB
Mapped: 1093452 kB
Slab: 101396 kB
PageTables: 206692 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 23068664 kB
Committed_AS: 3796932 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 9196 kB
VmallocChunk: 34359729063 kB
iostat -d -x 5
Device:   rrqm/s  wrqm/s     r/s     w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvda        0.00    7.40    0.20    1.20    4.80   68.80     52.57      0.01    4.57   2.29   0.32
xvda1       0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00
xvda2       0.00    7.40    0.20    1.20    4.80   68.80     52.57      0.01    4.57   2.29   0.32
dm-0        0.00    0.00    0.20    8.60    4.80   68.80      8.36      0.04    4.09   0.36   0.32
dm-1        0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00

Device:   rrqm/s  wrqm/s     r/s     w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvda        0.00    3.80    0.00    0.60    0.00   35.20     58.67      0.00    6.67   5.33   0.32
xvda1       0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00
xvda2       0.00    3.80    0.00    0.60    0.00   35.20     58.67      0.00    6.67   5.33   0.32
dm-0        0.00    0.00    0.00    4.40    0.00   35.20      8.00      0.02    4.36   0.73   0.32
dm-1        0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00

Device:   rrqm/s  wrqm/s     r/s     w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvda        0.00    9.40    0.00    1.80    0.00   89.60     49.78      0.00    2.22   0.44   0.08
xvda1       0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00
xvda2       0.00    9.40    0.00    1.80    0.00   89.60     49.78      0.00    2.22   0.44   0.08
dm-0        0.00    0.00    0.00   11.20    0.00   89.60      8.00      0.03    3.00   0.07   0.08
dm-1        0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00

Device:   rrqm/s  wrqm/s     r/s     w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvda        0.00   12.38    5.79   13.17  106.99  204.39     16.42      0.20   10.78   1.85   3.51
xvda1       0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00
xvda2       0.00   12.38    5.79   13.17  106.99  204.39     16.42      0.20   10.78   1.85   3.51
dm-0        0.00    0.00    5.79   25.55  106.99  204.39      9.94      0.31    9.83   1.12   3.51
dm-1        0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00

Device:   rrqm/s  wrqm/s     r/s     w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvda        0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00
xvda1       0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00
xvda2       0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00
dm-0        0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00
dm-1        0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00

Device:   rrqm/s  wrqm/s     r/s     w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvda        0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00
xvda1       0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00
xvda2       0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00
dm-0        0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00
dm-1        0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00

Device:   rrqm/s  wrqm/s     r/s     w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvda        0.00    0.00    0.00    0.20    0.00    1.60      8.00      0.00    4.00   4.00   0.08
xvda1       0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00
xvda2       0.00    0.00    0.00    0.20    0.00    1.60      8.00      0.00    4.00   4.00   0.08
dm-0        0.00    0.00    0.00    0.20    0.00    1.60      8.00      0.00    4.00   4.00   0.08
dm-1        0.00    0.00    0.00    0.00    0.00    0.00      0.00      0.00    0.00   0.00   0.00
Thanks a lot
On Mon, Jul 21, 2014 at 2:04 PM, John R Pierce <pierce(at)hogranch(dot)com> wrote:
> On 7/21/2014 1:51 PM, Kevin Grittner wrote:
>
>> The above might help, but I think the biggest problem may be your
>> VM. You show very low disk I/O numbers, but a pretty significant
>> fraction of the time waiting for I/O. The previously-suggested
>> iostat output may help nail it down more specifically, but
>> basically you seem to have a big problem with bandwidth for storage
>> access. It's pretty disturbing to see lines in vmstat output which
>> show zero disk in or out, but over 10% of CPU time waiting for
>> storage?!?
>>
>
> that would suggest to me a large number of VMs sharing a single SATA
> drive, or similar.
>
>
>
>
> --
> john r pierce 37N 122W
> somewhere on the middle of the left coast
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>