Re: Question about loading up a table

From: vinny <vinny(at)xs4all(dot)nl>
To: Alex Samad <alex(at)samad(dot)com(dot)au>
Cc: PostgreSQL mailing lists <pgsql-general(at)postgresql(dot)org>
Subject: Re: Question about loading up a table
Date: 2017-07-31 09:31:09
Message-ID: f8a27af49c1594c927de38373fde43dc@xs4all.nl
Lists: pgsql-general

On 2017-07-31 11:02, Alex Samad wrote:
> Hi
>
> I am using pg_dump | psql to transfer data from my old 9.2 psql into a
> 9.6 psql.
>
> The new DB server is setup as master replicating to a hot standby
> server.
>
> What I have noticed is that the rows don't get replicated over until
> the copy from stdin is finished... hard to test when you have M+ lines
> of rows.

If you are "just testing", you could use the COPY command
(https://www.postgresql.org/docs/9.2/static/sql-copy.html)
to export a smaller dataset and load that instead.
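For example, something along these lines gives you a 100k-row sample to
play with (the table name and the row count are just placeholders):

    -- on the old 9.2 server: export a sample to a file
    \copy (SELECT * FROM my_table LIMIT 100000) TO 'sample.copy'

    -- on the new 9.6 master: load it into the already-created table
    \copy my_table FROM 'sample.copy'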

>
> Is there a way to tell the master to replicate earlier

I highly doubt it, because the master cannot know what to replicate
until your transaction ends with a COMMIT. If you end with a ROLLBACK,
or your last query is DELETE FROM <your_table>;
then there isn't even anything to replicate at all...
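You can see that for yourself on the master/standby pair; rows written
inside a transaction only show up on the standby once the COMMIT goes
through (the table here is just a throwaway example):

    -- on the master
    CREATE TABLE repl_test (id int);
    BEGIN;
    INSERT INTO repl_test SELECT generate_series(1, 1000);
    -- at this point the standby does not see these rows
    COMMIT;
    -- only after the COMMIT do the rows appear on the standby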

> or is there a
> way to get pg_dump to bundle into say 100K rows at a time ?

I'm not aware of such a feature; it would be quite tricky because
of dependencies between records. You cannot simply dump the first 100k
rows from table A and the first 100k rows from table B, because row #9
from table A may have a relation to row #100,001 from table B.
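If you really want batches for one big table, you could script it
yourself by key range, so that every batch is its own COPY and commits
(and replicates) on its own. Roughly like this, where the host, database,
table and column names and the range boundaries are all placeholders:

    # repeat for each id range, adjusting the boundaries
    psql -h oldserver -d olddb -c \
      "\copy (SELECT * FROM my_table WHERE id BETWEEN 1 AND 100000) TO STDOUT" \
    | psql -h newmaster -d newdb -c "\copy my_table FROM STDIN"

You still have to make sure the related tables are loaded in an order
that keeps any foreign keys happy, which is exactly the part pg_dump
normally handles for you.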
