From: Melvin Davidson <melvin6925(at)gmail(dot)com>
To: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
Cc: senor <frio_cervesa(at)hotmail(dot)com>, "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: pg_upgrade --jobs
Date: 2019-04-07 20:09:00
Message-ID: CANu8Fiz0+8jA3LNYPfugESrrUG-jzgy-uvF1cpxOq7Agb9T-Vg@mail.gmail.com
Lists: pgsql-general
> The original scheduled downtime for one installation was 24 hours. By 21
> hours it had not completed the pg_dump schema-only so it was returned to
> operation.
To me, your best option is to create a Slony cluster with the version you
need to upgrade to. When Slony is in sync, simply make it the master and
switch to it. It may take a while for Slony replication to catch up, but
once it has, there will be very little downtime for the switchover.
Slony <http://www.slony.info/>
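A rough slonik sketch of that flow, for illustration only: the cluster name, conninfo strings, and table name below are hypothetical, every table must be added to the set individually, and a slon daemon must be running for each node before the subscription can sync.

```
# Hypothetical names/conninfo -- adjust to the actual installation.
cluster name = upgrade;
node 1 admin conninfo = 'dbname=mydb host=old-server user=slony';
node 2 admin conninfo = 'dbname=mydb host=new-server user=slony';

# One-time setup: replicate from the old origin (node 1) to the new node.
init cluster (id = 1, comment = 'old primary');
create set (id = 1, origin = 1, comment = 'all tables');
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'public.some_table');
store node (id = 2, comment = 'new version', event node = 1);
store path (server = 1, client = 2,
            conninfo = 'dbname=mydb host=old-server user=slony');
store path (server = 2, client = 1,
            conninfo = 'dbname=mydb host=new-server user=slony');
subscribe set (id = 1, provider = 1, receiver = 2, forward = no);

# Once the subscription is in sync, the switchover itself is brief:
lock set (id = 1, origin = 1);
move set (id = 1, old origin = 1, new origin = 2);
```

The initial sync can take a long time on a 1-2TB database, but it happens while the old server stays in service; only the lock/move step at the end needs a maintenance window.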
On Sun, Apr 7, 2019 at 3:36 PM Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
wrote:
> On 4/7/19 12:05 PM, senor wrote:
> > Thank you Adrian. I'm not sure if I can provide as much as you'd need
> > for a definite answer but I'll give you what I have.
> >
> > The original scheduled downtime for one installation was 24 hours. By 21
> > hours it had not completed the pg_dump schema-only so it was returned to
> > operation.
>
> So this is more than one cluster?
>
> I am assuming the below was repeated at different sites?
>
> > The amount of data per table is widely varied. Some daily tables are
> > 100-200GB, and the thousands of report tables with stats are much smaller.
> > I'm not connected to check now but I'd guess 1GB max. We chose the
> > --link option partly because some servers do not have the disk space to
> > copy. The time necessary to copy 1-2TB was also going to be an issue.
> > The vast majority of activity is current-day inserts and stats
> > reports of that data. All previous days and existing reports are read-only.
> > As is all too common, the DB usage grew with no redesign, so it is a
> > single database on a single machine with a single schema.
> > I get the impression there may be an option of getting the schema dump
> > while in service, but possibly not in this scenario. Plan B is to drop a
> > lot of tables and deal with imports later.
>
> I take the above to mean that a lot of the tables are cruft, correct?
>
> >
> > I appreciate the help.
> >
>
>
> --
> Adrian Klaver
> adrian(dot)klaver(at)aklaver(dot)com
>
>
>
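For reference, the link-mode upgrade discussed above is typically invoked along these lines; the version numbers and directories here are hypothetical, not taken from the thread:

```
# Hypothetical bin/data directories -- adjust to the actual installation.
# --link hard-links data files into the new cluster instead of copying them
# (no extra disk space needed); --jobs parallelizes the per-database schema
# dump/restore and the file linking, so it helps most with many databases.
pg_upgrade \
  --old-bindir=/usr/pgsql-9.6/bin \
  --new-bindir=/usr/pgsql-11/bin \
  --old-datadir=/var/lib/pgsql/9.6/data \
  --new-datadir=/var/lib/pgsql/11/data \
  --link --jobs=8
```

Note that with everything in a single database and schema, as described above, the per-database parallelism of --jobs has little to work with, which is consistent with the long schema-dump times reported in this thread.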
--
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!