From: | Stephen Froehlich <s(dot)froehlich(at)cablelabs(dot)com> |
---|---|
To: | pgsql-novice <pgsql-novice(at)postgresql(dot)org> |
Subject: | Replicate a table through a client? |
Date: | 2018-03-20 14:53:56 |
Message-ID: | CY1PR0601MB1927183D2E70FB6573EEA934E5AB0@CY1PR0601MB1927.namprd06.prod.outlook.com |
Lists: | pgsql-novice |
Hi All,
I have a kind of odd need: I am about to create a database with a private portion that is stored locally, plus a cleansed public version that we share with a partner. The plan is to host this cleansed public version either through Amazon RDS as a Postgres instance or to run a small Amazon instance that will be a Postgres server.
The database is relatively small, less than 20 GB compressed (it's currently a series of .RData files).
What is the best way to architect this setup? I can always copy fresh each time through R, but I get the feeling that there must be a more elegant solution.
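For the "copy fresh each time" approach, one option that skips R entirely is to dump the cleansed schema and pipe it straight to the remote instance. A minimal sketch, assuming live local and remote servers; the hostnames, user names, database names, and the `public_cleansed` schema are placeholders:

```shell
# Dump only the cleansed schema from the local server and restore it on
# the AWS endpoint in one pipeline. --clean/--if-exists drop the remote
# objects first, so each run is a full refresh. sslmode=require keeps
# the connection encrypted without a VPN.
pg_dump --host=localhost --username=local_user \
        --schema=public_cleansed --clean --if-exists \
        my_local_db \
  | psql "host=mydb.example.us-east-1.rds.amazonaws.com \
          user=aws_user dbname=my_public_db sslmode=require"
```

This is simple and stateless, at the cost of retransmitting the full 20 GB on every refresh.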
I would like to avoid running a VPN client on the AWS instance (e.g., putting a foreign data wrapper on the AWS instance) and would instead have the local server reach out to AWS and push the data somehow.
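One way to get that push direction is to install postgres_fdw on the *local* server and point it at the AWS instance, so the only connection is outbound from your network. A sketch, assuming live servers; the server name, credentials, and table definitions are placeholders:

```shell
# Run on the LOCAL server. postgres_fdw here makes the outbound
# connection originate locally -- nothing on AWS reaches back in.
psql my_local_db <<'SQL'
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- Define the AWS instance as a foreign server.
CREATE SERVER aws_public
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'mydb.example.us-east-1.rds.amazonaws.com',
             port '5432', dbname 'my_public_db');

CREATE USER MAPPING FOR CURRENT_USER SERVER aws_public
    OPTIONS (user 'aws_user', password 'secret');

-- Map a remote table locally (columns must match the remote table).
CREATE FOREIGN TABLE remote_measurements (
    id    integer,
    value numeric
) SERVER aws_public
  OPTIONS (schema_name 'public', table_name 'measurements');

-- Push a full refresh in one transaction.
BEGIN;
DELETE FROM remote_measurements;
INSERT INTO remote_measurements
    SELECT id, value FROM cleansed_measurements;
COMMIT;
SQL
```

With the foreign tables in place, the refresh can be a plain SQL script on a local cron job, and per-table pushes (or `WHERE`-filtered incremental pushes) avoid resending the whole database.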
--Stephen
________________________________
Stephen Froehlich
Sr. Strategist, CableLabs®
s(dot)froehlich(at)cablelabs(dot)com<mailto:s(dot)froehlich(at)cablelabs(dot)com>
Tel: +1 (303) 661-3708