From: | Pavan Teja <pavan(dot)postgresdba(at)gmail(dot)com> |
---|---|
To: | "Alex O'Ree" <spyhunter99(at)gmail(dot)com> |
Cc: | pgsql-general(at)lists(dot)postgresql(dot)org |
Subject: | Re: Merging two database dumps |
Date: | 2018-06-13 11:23:52 |
Message-ID: | CACh9nsYEChyYEDDcF8wuE2Oh8qgv8xB2JhpNSV_5z_d_j53FvQ@mail.gmail.com |
Lists: | pgsql-general |
Hi Alex,
To store duplicate rows, dropping the primary key and unique constraints is the only
way.
One alternative is to add a timestamp column that is updated on every
insert/update and make that timestamp the primary key. Hope it helps.
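The timestamp-column idea could look roughly like this (a minimal sketch; the table name `measurements` and constraint name are hypothetical, and `clock_timestamp()` is used rather than `now()` so rows inserted in the same transaction still get distinct values, though collisions are theoretically possible at high insert rates):

```sql
-- Drop the existing primary key (constraint name is hypothetical).
ALTER TABLE measurements DROP CONSTRAINT measurements_pkey;

-- Add a timestamp column that defaults to the wall-clock time of each insert.
ALTER TABLE measurements
    ADD COLUMN loaded_at timestamptz NOT NULL DEFAULT clock_timestamp();

-- Make the timestamp the new primary key, so rows that share the old
-- business key can coexist as separate records.
ALTER TABLE measurements ADD PRIMARY KEY (loaded_at);
```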
Regards,
Pavan
On Wed, Jun 13, 2018, 4:47 PM Alex O'Ree <spyhunter99(at)gmail(dot)com> wrote:
> I have a situation with multiple postgres servers all running with the
> same databases and table structure. I need to periodically export the data
> from each of them and then merge it all into a single server. On occasion,
> it's possible for the same record (primary key) to be stored on two or more
> servers.
>
> I was using pg_dump without the --inserts option; however, I just noticed
> that pg_restore will stop inserting into a table when a conflict occurs,
> leaving me with an incomplete set.
>
> Question is what are my other options to skip over the conflicting record
> when merging?
>
> From the docs, it appears that making dumps with the --inserts option may
> be the only way to go; however, performance is an issue. In this case, would
> dropping all indexes help?
>
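One way to skip conflicting rows during the merge (a sketch, assuming PostgreSQL 9.5 or later for `ON CONFLICT`; the table name `events` and key column `id` are hypothetical) is to restore each dump into a staging table without constraints, then copy rows across while ignoring duplicates:

```sql
-- Staging table with the same columns but no primary key or indexes,
-- so the dump loads without conflicts and at full speed.
CREATE TABLE events_staging (LIKE events INCLUDING DEFAULTS);

-- (Load the dump's data into events_staging here, e.g. via pg_restore
-- or COPY redirected at the staging table.)

-- Merge into the real table, silently skipping rows whose primary key
-- already exists from an earlier server's dump.
INSERT INTO events
SELECT * FROM events_staging
ON CONFLICT (id) DO NOTHING;

TRUNCATE events_staging;
```

This keeps the fast COPY-based dump format and avoids dropping constraints on the real table; the cost is one extra copy of the data in the staging table per merge.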
| From | Date | Subject |
---|---|---|---|
Next Message | Vijaykumar Jain | 2018-06-13 11:29:26 | Re: [External] Merging two database dumps |
Previous Message | Alex O'Ree | 2018-06-13 11:17:00 | Merging two database dumps |