From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Marc G(dot) Fournier" <scrappy(at)postgresql(dot)org>
Cc: Justin Clift <justin(at)postgresql(dot)org>, pgsql-www(at)postgresql(dot)org
Subject: Re: pg_autovacuum is nice ... but ...
Date: 2004-11-04 01:53:43
Message-ID: 19430.1099533223@sss.pgh.pa.us
Lists: pgsql-hackers, pgsql-www
"Marc G. Fournier" <scrappy(at)postgresql(dot)org> writes:
> Here is a vacuum verbose on gborg's database:
> INFO: free space map: 1000 relations, 7454 pages stored; 23072 total pages needed
> DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 178 kB shared memory.
> and this is portal:
> INFO: free space map: 1000 relations, 7425 pages stored; 23024 total pages needed
> DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 178 kB shared memory.
> so, you tell me ... should I increase them?
Yup. 20000 < 23072, so you're losing some fraction of your FSM entries.
What's worse, the FSM relation table is maxed out (1000 = 1000), which
suggests that there are relations not being tracked at all; you have
no idea how much space is leaking in those.
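For reference, you can double-check the currently allocated limits from
any session with SHOW (no special privileges needed):

	SHOW max_fsm_relations;
	SHOW max_fsm_pages;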
You can determine the number of relations potentially needing FSM
entries by running

	select count(*) from pg_class where relkind in ('r','i','t');

--- sum the result over all databases in the cluster to get the right total.
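If you want to script that, something along these lines should do it
(just a sketch, assuming psql is in your path and you can connect to
every database):

	for db in $(psql -At -c "select datname from pg_database where datallowconn" template1); do
	    psql -At -c "select count(*) from pg_class where relkind in ('r','i','t')" $db
	done | awk '{sum += $1} END {print sum}'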
Once you've raised max_fsm_relations, run VACUUM in all databases, and
then VACUUM VERBOSE should give you a usable lower bound for
max_fsm_pages.
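Roughly, that amounts to something like this --- the numbers below are
only placeholders, pick values based on the counts above:

	# postgresql.conf --- both settings require a postmaster restart
	max_fsm_relations = 2000
	max_fsm_pages = 50000

	# after restarting, vacuum every database:
	vacuumdb --all --verbose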
regards, tom lane