Re: Generate test data inserts - 1GB

From: Shital A <brightuser2019(at)gmail(dot)com>
To: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Generate test data inserts - 1GB
Date: 2019-08-09 16:51:21
Message-ID: CAMp7vw__HWnAU-g2vVpFPg=8S_FHR7i0wpWUEqphwTYmNtKH8w@mail.gmail.com
Lists: pgsql-general

On Fri, 9 Aug 2019, 21:25 Adrian Klaver, <adrian(dot)klaver(at)aklaver(dot)com> wrote:

> On 8/9/19 8:14 AM, Shital A wrote:
> >
>
> > Hello,
>
> >
> > 4) What techniques have you tried?
> > Insert into with With statement, inserting 2000000 rows at a time. This
> > takes 40 mins.
> >
>
> To add to my previous post. If you already have data in a Postgres
> database then you could do:
>
> pg_dump -d db -t some_table -a -f test_data.sql
>
> That will dump the data only for the table in COPY format. Then you
> could apply that to your test database (after TRUNCATE on table, assuming
> you want to start fresh):
>
> psql -d test_db -f test_data.sql
>
>
>
>
> --
> Adrian Klaver
> adrian(dot)klaver(at)aklaver(dot)com

Thanks for the reply Adrian.

Missed one requirement: will these methods generate the WAL records needed
for replication?

Actually, the data is to check whether replication catches up. Below is the scenario:

1. Have a master slave cluster with replication setup

2. Kill the master so that the standby takes over. We are using Pacemaker for
automatic failover. Insert 1 GB of data into the new master while replication
is broken.

3. Start the old node as a standby and check whether the 1 GB of data gets replicated.

As such testing might be frequent, we need to keep the data-generation time
to a minimum. The master and slave are on the same network.
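
For reference, one way to cut the generation time is a single set-based
INSERT over generate_series() rather than many batched WITH-based inserts.
A minimal sketch, assuming a hypothetical table t(id, payload); adjust names
and row count to the real schema and target size:

```sql
-- Hypothetical table; adjust to the real schema.
CREATE TABLE IF NOT EXISTS t (id bigint, payload text);

-- One set-based INSERT avoids per-batch round trips. repeat(md5(...), 32)
-- pads each row to roughly 1 kB, so ~1,000,000 rows is on the order of 1 GB.
INSERT INTO t (id, payload)
SELECT g, repeat(md5(g::text), 32)
FROM generate_series(1, 1000000) AS g;
```

Being an ordinary INSERT, this is fully WAL-logged, so it exercises
replication the same way application writes would.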

Thanks !
