From: Mark Dilger <hornschnorter(at)gmail(dot)com>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, Alexander Kukushkin <cyberdemn(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Speeding up pg_upgrade
Date: 2017-12-08 17:15:52
Message-ID: 3AF2BA4A-7A8F-48D9-9DB5-B256F7AE7EC9@gmail.com
Lists: pgsql-hackers
> On Dec 7, 2017, at 10:24 PM, Bruce Momjian <bruce(at)momjian(dot)us> wrote:
>
> On Thu, Dec 7, 2017 at 10:37:30AM -0500, Stephen Frost wrote:
>> Alexander,
>>
>> * Alexander Kukushkin (cyberdemn(at)gmail(dot)com) wrote:
>>> A couple of months ago we at Zalando upgraded a few databases of different
>>> sizes to 9.6.
>>
>> Thanks for sharing your experience!
>>
>>> During preparations for the upgrade I've found 2.5 pain-points:
>>>
>>> 1. We are using schema-based API deployment. Basically ~every week we
>>> create a new schema in the database and hundreds of stored procedures in it.
>>> Of course we remove old API schemas, trying not to keep more than the
>>> last 10. Before the upgrade we basically dropped all API schemas except the
>>> one used in production.
>>> Even in this case the dump-restore phase took much more time than the
>>> relinking of data files.
>>> Unfortunately I don't have any numbers right now, but usually a run of
>>> pg_upgrade took about 30-35 seconds, and about 2/3 of the time was
>>> spent in dump-restore.
>>
>> Ok, so eliminating 2/3 of the time would mean bringing it down to more
>> like 10 seconds. That certainly seems worthwhile to me. With the
>> linking time being much less than the dump/restore, we could at least
>> consider moving forward with Bruce's original idea, where we do the
>> dump/restore while the system is online but then do the linking with it
>> offline, and get a serious performance boost out of it. That also avoids
>> the issue of new files showing up while the system is running, which I
>> brought up when we were talking about having the linking done with the
>> system online.
>>
>>> 2. The ANALYZE phase is a pain. I think everybody agrees on that.
>>>
>>> 2.5 Usually ANALYZE stage 1 completes quite fast and performance becomes
>>> reasonable, except in one case: some of the columns might have a non-default
>>> statistics target.
>>
>> Ok, if stage 1 is very fast and performance is reasonable enough
>> after that, then perhaps it's not so bad to keep it as-is for now and
>> focus on the dump/restore time. That said, we should certainly
>> work on improving this too.
>
> I think the big problem with two-stage pg_upgrade is that the user steps
> are more complex, so what percentage of users are going to use the
> two-stage method? The bad news is that only a small percentage of users
> who will benefit from it will use it, and some who will not benefit from it
> will use it. Also, this is going to require significant server changes,
> which will have to be maintained.
In my fork of the project, back when I was tracking 9.5, I added an option
to vacuum/analyze to make it behave a bit more like autovac, so that I could
run
ANALYZE CONDITIONALLY;
and it would only analyze those tables in the system which autovac would
analyze. In the grammar, CONDITIONALLY gets translated into a
VacuumOption flag. In vacuum() (src/backend/commands/vacuum.c), inside
the "Loop to process each selected relation", if this flag is set, it checks the
table's PgStat_StatTabEntry to determine whether to vacuum or analyze
the table.
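To make that concrete, here is a rough sketch of what the per-table check
might look like, modeled on autovacuum's analyze threshold test. The helper
name and the VACOPT_CONDITIONAL flag are just names I use for illustration,
not existing PostgreSQL symbols; a real patch would share autovacuum's logic
(including per-table reloptions) rather than duplicate it:

    #include "postgres.h"
    #include "pgstat.h"
    #include "postmaster/autovacuum.h"
    #include "utils/rel.h"

    /*
     * Sketch: decide whether a CONDITIONALLY-flagged ANALYZE should
     * process this relation, using the same threshold shape autovacuum
     * uses: threshold = base_threshold + scale_factor * reltuples.
     */
    static bool
    conditional_analyze_needed(Relation rel)
    {
        PgStat_StatTabEntry *tabentry;
        float4      reltuples = rel->rd_rel->reltuples;
        float4      anlthresh;

        tabentry = pgstat_fetch_stat_tabentry(RelationGetRelid(rel));
        if (tabentry == NULL)
            return true;        /* no stats collected yet: analyze it */

        /* autovacuum_anl_thresh and autovacuum_anl_scale are the GUC
         * variables autovacuum itself consults */
        anlthresh = (float4) autovacuum_anl_thresh +
                    autovacuum_anl_scale * reltuples;

        return tabentry->changes_since_analyze > anlthresh;
    }

The same PgStat_StatTabEntry also carries the dead-tuple counters, so the
vacuum side of the CONDITIONALLY test could be written analogously.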
I think this extension would be helpful in the context of the current conversation.
In those cases where pg_upgrade was able to migrate the statistics to the
new database, as long as it also set the PgStat_StatTabEntry for each table
whose statistics were migrated, the user would just have to execute
"VACUUM CONDITIONALLY" after the upgrade, and the database would
do a lot of analyze work, a little analyze work, or no analyze work,
depending on which tables needed analyzing.
The main advantage here is that the user would always run this command
after pg_upgrade, without having to think about whether pg_upgrade had
migrated statistics or not.
If the community thinks this is useful, I could put together a patch.
mark