Re: Breaking up a PostgreSQL COPY command into chunks?

From: Francisco Olarte <folarte(at)peoplecall(dot)com>
To: Victor Hooi <victorhooi(at)yahoo(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Breaking up a PostgreSQL COPY command into chunks?
Date: 2013-11-08 07:36:00
Message-ID: CA+bJJbwjJ2iuVDqKtMe=e6aOoNNLOhHsOHcxCCUosDtD2DB6-Q@mail.gmail.com
Lists: pgsql-general

On Fri, Nov 8, 2013 at 5:09 AM, Victor Hooi <victorhooi(at)yahoo(dot)com> wrote:
> They think that it might be limited by the network, and how fast the
> PostgreSQL server can push the data across the internet. (The Postgres
> server and the box running the query are connected over the internet).

You previously said you had 600Mb. Over the internet. Is it a very
fat pipe? Because otherwise the limiting factor is probably not the
speed at which postgres can push the results, but the throughput of
your link.
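
Just to put numbers on it (assuming that 600Mb means megabytes and the
link is, say, a 10 Mbit/s one): 600 MB is about 4800 Mbit, so that is on
the order of 8 minutes of raw transfer time before any compression, and
proportionally longer on a slower pipe.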

If, as you stated, you need a single transaction to get a 600Mb
snapshot, I would recommend dumping it to disk, compressing on the fly
(you should easily get a four- or five-fold reduction on a CSV file
with any decent compressor), and then sending the file. If you do not
have disk for the dump but can run programs near the server, you can
try compressing on the fly as you send it. If you have got neither of
those but do have space for a spare table, select the snapshot into it,
read it out in pages, and drop it afterwards (rough sketches of both
approaches below). Or just look at the configs and allow longer query
times; if your app NEEDS two-hour queries, they can be enabled. But
anyway, doing a long transaction over the internet does not seem like a
good idea to me.
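
Something like this is what I mean by dumping and compressing on the fly
(an untested sketch; the host, table and file names are invented, adjust
them to your setup):

  # stream the snapshot out as CSV and gzip it in one pass, no temp file needed
  psql -h dbhost -U dbuser mydb \
      -c "\copy (SELECT * FROM big_table) TO STDOUT WITH CSV HEADER" \
      | gzip > snapshot.csv.gz

  # then move the (much smaller) file however suits you, e.g.
  scp snapshot.csv.gz client_host:/some/dir/

And the spare-table route would be roughly (again invented names, and it
assumes an indexed column you can slice the rows on):

  psql mydb -c "CREATE TABLE snapshot_tmp AS SELECT * FROM big_table"
  psql mydb -c "\copy (SELECT * FROM snapshot_tmp WHERE id BETWEEN 1 AND 100000) TO STDOUT WITH CSV" >> snapshot.csv
  # ... repeat for the next id ranges, each a short query, then clean up ...
  psql mydb -c "DROP TABLE snapshot_tmp"

If it is the server itself cutting the query off, statement_timeout is
the usual knob to look at.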

Francisco Olarte
