From: John Naylor <john(dot)naylor(at)2ndquadrant(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: WIP: Avoid creation of the free space map for small tables
Date: 2019-01-28 19:06:49
Message-ID: CACPNZCv4THn_a5oXNt0mJppn03sJ-tZtKMwsRPXEZqXmLAu1Dg@mail.gmail.com
Lists: pgsql-hackers
On Mon, Jan 28, 2019 at 12:10 PM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
> On Mon, Jan 28, 2019 at 10:03 AM John Naylor
> <john(dot)naylor(at)2ndquadrant(dot)com> wrote:
> >
> 1.
> @@ -26,7 +26,7 @@
> pg_relation_size('fsm_check_size', 'fsm') AS fsm_size;
> heap_size | fsm_size
> -----------+----------
> - 24576 | 0
> + 32768 | 0
> (1 row)
>
> -- Extend table with enough blocks to exceed the FSM threshold
> @@ -56,7 +56,7 @@
> SELECT pg_relation_size('fsm_check_size', 'fsm') AS fsm_size;
> fsm_size
> ----------
> - 16384
> + 24576
> (1 row)
>
>
> As discussed on another thread, this seems to be because a parallel
> auto-analyze doesn't allow vacuum to remove dead-row versions. To fix
> this, I think we should avoid having a dependency on vacuum to remove
> dead rows.
Ok, to make the first test here more reliable, I will try Andrew's idea
of using fillfactor to reserve free space. As I said earlier, I think
the second test isn't helpful and can be dropped.
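To illustrate, a rough sketch of what the fillfactor approach might look
like in the regression test (the row count and column definitions here
are illustrative, not the actual test's):

```sql
-- Reserve free space at insert time via fillfactor, so the test does
-- not depend on vacuum removing dead-row versions under concurrent
-- auto-analyze.
CREATE TABLE fsm_check_size (num int, str text) WITH (fillfactor = 10);

-- Each heap page is filled to only ~10%, leaving free space
-- immediately and deterministically, with no dead rows involved.
INSERT INTO fsm_check_size
SELECT g, 'text' FROM generate_series(1, 500) g;

SELECT pg_relation_size('fsm_check_size', 'fsm') AS fsm_size;
```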
> 2.
> @@ -15,13 +15,9 @@
> SELECT octet_length(get_raw_page('test_rel_forks', 'main', 100)) AS main_100;
> ERROR: block number 100 is out of range for relation "test_rel_forks"
> SELECT octet_length(get_raw_page('test_rel_forks', 'fsm', 0)) AS fsm_0;
> - fsm_0
> --------
> - 8192
> -(1 row)
> -
> +ERROR: could not open file "base/50769/50798_fsm": No such file or directory
> SELECT octet_length(get_raw_page('test_rel_forks', 'fsm', 10)) AS fsm_10;
> -ERROR: block number 10 is out of range for relation "test_rel_forks"
> +ERROR: could not open file "base/50769/50798_fsm": No such file or directory
>
> This indicates that even though vacuum is executed, the FSM doesn't
> get created. This could be due to a different BLCKSZ, but the failing
> machines don't seem to have a non-default value of it. I am not sure
> why this could happen; maybe we need to check the size of the relation
> in the failed regression database?
I'm also having a hard time imagining why this failed. Just in case, we
could return ctid in a plpgsql loop and stop as soon as we see the 5th
block. I've done that for some tests during development, and it's a
safer method anyway.
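Something along these lines (a sketch only -- I'm assuming a
single-integer-column table here, and the actual test table's schema may
differ):

```sql
-- Insert until the table reaches its 5th block (block number 4,
-- counting from 0), rather than relying on a fixed row count to
-- produce a relation of the desired size.
DO $$
DECLARE
    t tid;
BEGIN
    LOOP
        INSERT INTO test_rel_forks VALUES (0) RETURNING ctid INTO t;
        -- ctid is (block, offset); the text-to-point cast is a
        -- convenient way to extract the block number.
        EXIT WHEN (t::text::point)[0] >= 4;
    END LOOP;
END;
$$;
```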
> <timing failures in 3 and 4>
>
> @@ -377,20 +383,9 @@ RelationGetBufferForTuple(Relation relation, Size len,
> +
> + /*
> + * In case we used an in-memory map of available blocks, reset it
> + * for next use.
> + */
> + if (targetBlock < HEAP_FSM_CREATION_THRESHOLD)
> + FSMClearLocalMap();
> +
>
> I think here you need to clear the map if it exists, or else clear it
> unconditionally; the former would be better.
Ok, maybe all callers should call it unconditionally, but within the
function, check "if (FSM_LOCAL_MAP_EXISTS)"?
Thanks for investigating the failures -- I'm a bit pressed for time this week.
--
John Naylor https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services