From: Hugh Ranalli <hugh(at)whtc(dot)ca>
To: Justin Pryzby <pryzby(at)telsasoft(dot)com>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Perplexing, regular decline in performance
Date: 2019-07-18 20:01:46
Message-ID: CAAhbUMO3rt9QoGr8z14zfTGHqayhBGMQU=Jgsv1=sfq=PA=egQ@mail.gmail.com
Lists: pgsql-performance
On Tue, 25 Jun 2019 at 12:23, Justin Pryzby <pryzby(at)telsasoft(dot)com> wrote:
> It's possible that the "administrative" queries are using up lots of your
> shared_buffers, which are (also/more) needed by the customer-facing queries.
> I would install pg_buffercache to investigate. Or, just pause the admin
> queries and see if the issue goes away during that interval?
>
> SELECT 1.0*COUNT(1)/sum(count(1)) OVER (), COUNT(1),
>        COUNT(nullif(isdirty,'f')), datname,
>        COALESCE(c.relname, b.relfilenode::text), d.relname TOAST,
>        1.0*COUNT(nullif(isdirty,'f'))/count(1) dirtyfrac, avg(usagecount)
> FROM pg_buffercache b
> JOIN pg_database db ON b.reldatabase = db.oid
> LEFT JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
> LEFT JOIN pg_class d ON c.oid = d.reltoastrelid
> GROUP BY 4,5,6 ORDER BY 1 DESC LIMIT 9;
>
I've been going by a couple of articles I found about interpreting
pg_buffercache
(https://www.keithf4.com/a-large-database-does-not-mean-large-shared_buffers),
and so far shared buffers look okay. Our database is 486 GB, with
shared_buffers set to 32 GB. The article suggests a query that can provide a
guideline for sizing shared_buffers:
SELECT pg_size_pretty(count(*) * 8192) AS ideal_shared_buffers
FROM pg_class c
INNER JOIN pg_buffercache b ON b.relfilenode = c.relfilenode
INNER JOIN pg_database d
    ON b.reldatabase = d.oid AND d.datname = current_database()
WHERE usagecount >= 3;
This comes out to 25 GB, and even dropping the usagecount threshold to 1 only
raises it to 30 GB. I realise this is only a guideline, but I may bump the
setting to 36 GB, to give a bit more headroom.
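For anyone following along: shared_buffers can only be changed with a server
restart, so the bump would be a configuration change along these lines (the
36 GB value is my tentative choice, not a recommendation):

```sql
-- On PostgreSQL 9.4+, ALTER SYSTEM writes to postgresql.auto.conf;
-- alternatively, edit shared_buffers = 36GB in postgresql.conf directly.
ALTER SYSTEM SET shared_buffers = '36GB';
-- A full restart (not just a reload) is required for this to take effect.
```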
I did run some further queries to look at usage (based on the same article),
and most of the tables whose buffered data has very high usage are also close
to 100% buffered, so, if I understand it correctly, there should be little
churn there. The others seem to have enough less-accessed space to make room
for the data they need to buffer:
         relname          | buffered | buffers_percent | percent_of_relation
--------------------------+----------+-----------------+---------------------
 position                 | 8301 MB  |            25.3 |                99.2
 stat_position_click      | 7359 MB  |            22.5 |                76.5
 url                      | 2309 MB  |             7.0 |               100.0
 pg_toast_19788           | 1954 MB  |             6.0 |                49.3  (harvested_job)
 stat_sponsored_position  | 1585 MB  |             4.8 |                92.3
 location                 |  927 MB  |             2.8 |                98.7
 pg_toast_20174           |  866 MB  |             2.6 |                 0.3  (page)
 pg_toast_20257           |  678 MB  |             2.1 |                92.9  (position_index)
 harvested_job            |  656 MB  |             2.0 |               100.0
 stat_employer_click      |  605 MB  |             1.8 |               100.0
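For completeness, the per-relation figures above can be produced with a query
along these lines (a sketch adapted from the article cited earlier; it assumes
pg_buffercache is installed in the current database, and relies on
shared_buffers being reported in 8 kB blocks in pg_settings):

```sql
-- Per-relation view of shared_buffers usage: how much of each relation is
-- cached, what share of shared_buffers it occupies, and what fraction of the
-- relation itself that represents.
SELECT c.relname,
       pg_size_pretty(count(*) * 8192) AS buffered,
       round(100.0 * count(*) /
             (SELECT setting::integer FROM pg_settings
               WHERE name = 'shared_buffers'), 1) AS buffers_percent,
       round(100.0 * count(*) * 8192 / pg_table_size(c.oid), 1)
           AS percent_of_relation
FROM pg_class c
INNER JOIN pg_buffercache b ON b.relfilenode = c.relfilenode
INNER JOIN pg_database d
    ON b.reldatabase = d.oid AND d.datname = current_database()
GROUP BY c.oid, c.relname
ORDER BY 3 DESC
LIMIT 10;
```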
usagecount >= 5
relname | pg_size_pretty
-------------------------+----------------
harvested_job | 655 MB
location | 924 MB
pg_toast_19788 | 502 MB
pg_toast_20174 | 215 MB
pg_toast_20257 | 677 MB
position | 8203 MB
stat_employer_click | 605 MB
stat_position_click | 79 MB
stat_sponsored_position | 304 kB
url | 2307 MB
usagecount >= 3
relname | pg_size_pretty
-------------------------+----------------
harvested_job | 656 MB
location | 927 MB
pg_toast_19788 | 1809 MB
pg_toast_20174 | 589 MB
pg_toast_20257 | 679 MB
position | 8258 MB
stat_employer_click | 605 MB
stat_position_click | 716 MB
stat_sponsored_position | 2608 kB
url | 2309 MB
usagecount >= 1
relname | pg_size_pretty
-------------------------+----------------
harvested_job | 656 MB
location | 928 MB
pg_toast_19788 | 3439 MB
pg_toast_20174 | 842 MB
pg_toast_20257 | 680 MB
position | 8344 MB
stat_employer_click | 605 MB
stat_position_click | 4557 MB
stat_sponsored_position | 86 MB
url | 2309 MB
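The three breakdowns above came from re-running essentially the same
per-relation query with a usagecount filter; a sketch, with the threshold
shown for the >= 3 case (adjust the WHERE clause for the other two):

```sql
-- Size of each relation's buffered pages at or above a given usagecount.
SELECT c.relname,
       pg_size_pretty(count(*) * 8192)
FROM pg_class c
INNER JOIN pg_buffercache b ON b.relfilenode = c.relfilenode
INNER JOIN pg_database d
    ON b.reldatabase = d.oid AND d.datname = current_database()
WHERE b.usagecount >= 3
GROUP BY c.relname
ORDER BY c.relname;
```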
If I'm misreading this, please let me know. I know people also asked about
query plans and schema, which I'm going to look at next; I've just been
knocking off one thing at a time.
Thanks,
Hugh