From: "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: autovacuum can't keep up, bloat just continues to rise
Date: 2017-07-21 03:58:12
Message-ID: 67f31b47-9886-b59a-17b7-1cdbbe8975ac@commandprompt.com
Lists: pgsql-hackers
On 07/19/2017 07:57 PM, Tom Lane wrote:
> Peter Geoghegan <pg(at)bowt(dot)ie> writes:
>> My argument for the importance of index bloat to the more general
>> bloat problem is simple: any bloat that accumulates, that cannot be
>> cleaned up, will probably accumulate until it impacts performance
>> quite noticeably.
>
> But that just begs the question: *does* it accumulate indefinitely, or
> does it eventually reach a more-or-less steady state? The traditional
> wisdom about btrees, for instance, is that no matter how full you pack
> them to start with, the steady state is going to involve something like
> 1/3rd free space. You can call that bloat if you want, but it's not
> likely that you'll be able to reduce the number significantly without
> paying exorbitant costs.
>
> I'm not claiming that we don't have any problems, but I do think it's
> important to draw a distinction between bloat and normal operating
> overhead.
Agreed, but I don't think we are talking about 30%. Here is where I am
at; it took until 30 minutes ago for the tests to finish:
name | setting
-------------------------------------+-----------
autovacuum | on
autovacuum_analyze_scale_factor | 0.1
autovacuum_analyze_threshold | 50
autovacuum_freeze_max_age | 200000000
autovacuum_max_workers | 3
autovacuum_multixact_freeze_max_age | 400000000
autovacuum_naptime | 60
autovacuum_vacuum_cost_delay | 20
autovacuum_vacuum_cost_limit | -1
autovacuum_vacuum_scale_factor | 0.2
autovacuum_vacuum_threshold | 50
autovacuum_work_mem | -1
log_autovacuum_min_duration | -1
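For the index side of Tom's point, pgstattuple's pgstatindex() reports leaf density directly, so it should show whether the indexes are near the classic ~2/3-full steady state or well below it. A sketch (the index name here is just a placeholder for whichever pgbench index you want to check):

```sql
-- Requires the pgstattuple contrib extension.
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- avg_leaf_density around 67% is normal b-tree operating overhead;
-- substantially lower numbers suggest genuine bloat.
SELECT avg_leaf_density, leaf_fragmentation
FROM pgstatindex('pgbench_accounts_pkey');
```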
Test 1: 55G /srv/main
TPS: 955
Test 2: 112G /srv/main
TPS: 531 (Not sure what happened here, long checkpoint?)
Test 3: 109G /srv/main
TPS: 868
Test 4: 143G
TPS: 840
Test 5: 154G
TPS: 722
I am running the query here:
https://wiki.postgresql.org/wiki/Index_Maintenance#Summarize_keyspace_of_a_B-Tree_index
and will post a follow-up. Once the query finishes, I am going to rerun
the tests with autovacuum_vacuum_cost_limit set to 5000. Is there
anything else you folks would like me to change?
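For reference, I'd apply that change with something like the following (autovacuum_vacuum_cost_limit is a sighup-level GUC, so a reload rather than a restart should be enough):

```sql
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 5000;
SELECT pg_reload_conf();
```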
JD
--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
PostgreSQL Centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://pgconf.us
***** Unless otherwise stated, opinions are my own. *****