From: stan <stanb(at)panix(dot)com>
To: Ron <ronljohnsonjr(at)gmail(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Isolation of multiple database instances provided by a single postgres server
Date: 2019-11-21 11:55:49
Message-ID: 20191121115549.GA3417@panix.com
Lists: pgsql-general
On Wed, Nov 20, 2019 at 04:24:40PM -0600, Ron wrote:
> On 11/20/19 4:03 PM, stan wrote:
> > I am working on a fairly small application to use for managing a company's
> > business.
> >
> > I have a "production" instance hosted by one of the cloud providers, and 2
> > other instances. This is fairly new to me. In the past, I have created
> > applications by keeping a set of scripts that can be used to rebuild the
> > database, and pg_dump to restore the data. Based on some recommendations I
> > am using pg_basebackup to backup the production instance nightly. My
> > background is primarily Oracle. I realize, looking at the way pg_basebackup
> > works, that multiple database instances provided by one server are actually
> > stored in the same set of physical OS files.
> >
> >
> > We have traditionally (in the Postgres world) had a sandbox, that we used
> > for upgrades, and testing development methodologies, and this seems to be
> > supported pretty well by pg_dump.
> >
> > Now that I know "too much" I am concerned about hosting the sandbox on the
> > same Postgres instance.
>
> What specifically do you mean by "instance"??? (I know what it means in the
> SQL Server world, and in Postgres all the databases accessible via a single
> $PGDATA are called a *cluster*.)
Sorry for my incorrect terminology. I am probably confused about the
technology here. Let me try to explain what I think I understand.
It seems to me that I can have one Postgres "server" running listening on a
single port on a single machine. It appears that the data files for this
"server" are managed internally by the Postgres server instance, and I
have no control of what is stored where in them. In an Oracle world, I can
create tablespaces, which have a relationship to OS files, and I can
explicitly control what objects are stored in which tablespaces (OS file),
thus, for example, when I do a hot backup, I can put a specific tablespace in
backup mode, and can then safely copy its OS file (yes, I have to properly
deal with archive logs). Thus I would be somewhat comfortable having two
distinct "instances" provided by that one Oracle "server".
It appears to me that, within this one Postgres "instance", there are 2
levels of "isolation": databases and schemas. Is this correct? If so, how
does this correspond to physical on-disk storage?
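For what it's worth, I poked around a bit; assuming a superuser connection,
something like the following seems to show where things physically live
(the table name is just a placeholder for illustration):

```sql
-- Show the on-disk path of a table, relative to the data directory.
-- 'mytable' is a made-up name; substitute a real table.
SELECT pg_relation_filepath('mytable');

-- Each database gets its own subdirectory under base/, keyed by OID:
SELECT oid, datname FROM pg_database;
```

If I read the output correctly, databases are separate subdirectories under
the one data directory, while schemas are purely a namespace within a
database and share its files, but I would appreciate confirmation.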
>
> > Recognizing that this is a fairly small application, what do wiser folks
> > than I recommend?
> >
> > Should I run the sandbox from a different Postgres server, possibly even on
> > a different machine? Is pg_dump still a good way to move the production
> > instance to the sandbox, and perhaps even the other way around?
>
> Running CAT, STG, UAT, DEV, etc on different VMs is certainly one solution,
> isolating them from each other.
Makes sense.
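As a side note, I gather that moving the production contents into the sandbox
with pg_dump could look something like this (hostnames, user, and database
name below are made up for the example):

```shell
# Dump production in custom format (compressed, usable with pg_restore).
pg_dump -Fc -h prod.example.com -U app_owner mydb -f mydb.dump

# Restore into the sandbox, dropping any existing objects first.
pg_restore --clean --if-exists -h sandbox.example.com -U app_owner -d mydb mydb.dump
```

Please correct me if that is not the recommended approach.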
>
> OTOH, you can initdb multiple clusters on the same host, accessing them via
> different $PGDATA variables and port numbers.
That is consistent with what I thought I understood.
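Just to confirm my understanding, a second cluster on the same host would be
something like the following (the path, port, and log file name are made up):

```shell
# Create a second, fully independent cluster in its own data directory.
initdb -D /srv/pgsql/sandbox

# Start it on a non-default port so it doesn't collide with the main cluster.
pg_ctl -D /srv/pgsql/sandbox -o "-p 5433" -l sandbox.log start

# Connect to the sandbox cluster explicitly by port.
psql -p 5433 -d postgres
```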
Thanks for taking time to educate me.
--
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin