Re: COPY: row is too big

From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: vod vos <vodvos(at)zoho(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: COPY: row is too big
Date: 2017-01-02 17:13:45
Message-ID: 02e8b1dc-ae5a-a600-85fa-bfce558748ea@aklaver.com
Lists: pgsql-general

On 01/02/2017 09:03 AM, vod vos wrote:
> You know, the csv file was exported from another database on a different
> machine, so I really don't want to break it; it was hard work. Every csv
> file contains headers and values. If I redesign the table, then I have to
> cut all the csv files into pieces one by one.

If it helps:

http://csvkit.readthedocs.io/en/latest/tutorial/1_getting_started.html#csvcut-data-scalpel
>
>
> ---- On Monday, 02 January 2017 08:21:29 -0800 *Tom Lane
> <tgl(at)sss(dot)pgh(dot)pa(dot)us>* wrote ----
>
> vod vos <vodvos(at)zoho(dot)com <mailto:vodvos(at)zoho(dot)com>> writes:
> > When I copy data from a csv file, there are very long values for
> > many columns (about 1100 columns). The error appears:
> > ERROR: row is too big: size 11808, maximum size 8160
>
> You need to rethink your table schema so you have fewer columns.
> Perhaps you can combine some of them into arrays, for example.
> JSON might be a useful option, too.
>
> regards, tom lane
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org
> <mailto:pgsql-general(at)postgresql(dot)org>)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
>

--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com
