From: Michael Paquier <michael(at)paquier(dot)xyz>
To: "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>
Cc: Ravi Krishna <srkrishna(at)yahoo(dot)com>, Alban Hertroys <haramrae(at)gmail(dot)com>, PG mailing List <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Load data from a csv file without using COPY
Date: 2018-06-20 03:17:09
Message-ID: 20180620031709.GF20245@paquier.xyz
Lists: pgsql-general
On Tue, Jun 19, 2018 at 02:32:10PM -0700, David G. Johnston wrote:
> You really need to describe what you consider to be a "real life
> scenario", and probably give a better idea of how these csv files are
> created and how many there are, in addition to describing the relevant
> behavior of the application you are testing.
>
> If you want maximum realism you should probably write integration tests for
> your application and then execute those at high volume.
>
> Or at minimum give an example of the output you would want from this
> unknown program...
Hard to say what you are especially looking for that psql's \copy cannot
do, but perhaps you have an interest in pg_bulkload? Here is a link to
the project:
https://github.com/ossc-db/pg_bulkload/
It has a couple of fancy features as well, like the ability to skip bad
rows instead of aborting the whole load when loading a large file, etc.
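For comparison, the psql \copy route mentioned above looks like this (a
minimal sketch; the database name "mydb", table "measurements", and file
"data.csv" are placeholders):

```shell
# Client-side CSV load via psql's \copy meta-command: the file is read
# by the client and streamed over the connection, so no server-side
# filesystem access is needed (unlike server-side COPY).
# "mydb", "measurements", and "data.csv" are hypothetical names.
psql -d mydb -c "\copy measurements FROM 'data.csv' WITH (FORMAT csv, HEADER true)"
```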
--
Michael