Re: COPY: row is too big

From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, vod vos <vodvos(at)zoho(dot)com>
Cc: John McKown <john(dot)archie(dot)mckown(at)gmail(dot)com>, Rob Sargent <robjsargent(at)gmail(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: COPY: row is too big
Date: 2017-01-04 15:11:06
Message-ID: 68008319-bebf-b2a8-b503-f8c3aa42887e@aklaver.com
Lists: pgsql-general

On 01/04/2017 06:54 AM, Pavel Stehule wrote:
> Hi
>
> 2017-01-04 14:00 GMT+01:00 vod vos <vodvos(at)zoho(dot)com
> <mailto:vodvos(at)zoho(dot)com>>:
>
> Now I am confused: I can create 1100 columns in a table in
> PostgreSQL, but I can't COPY 1100 values into that table. And I
> really don't want to split the CSV file into pieces, to avoid
> mistakes after this action.
>
>
> The PostgreSQL limit is "Maximum Columns per Table: 250 - 1600,
> depending on column types" - the limit comes from having to fit the
> values, or pointers to values, of one row onto a single page (8KB).
>
> You can hit this limit not at CREATE TABLE time, but at INSERT time.
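
A quick way to see this - a rough sketch only, with made-up table and
column names - is to build a wide varchar table and insert one row of
modest values; the CREATE TABLE succeeds, but the row does not fit on
an 8KB page:

  DO $$
  DECLARE
      cols text;
      vals text;
  BEGIN
      -- 1100 varchar columns, roughly like the OP's table (names invented).
      SELECT string_agg(format('c%s varchar', i), ', ') INTO cols
        FROM generate_series(1, 1100) AS i;
      EXECUTE format('CREATE TABLE wide_demo (%s)', cols);  -- succeeds

      -- 1100 ten-character values: about 1100 * (10 bytes + 1 byte header)
      -- plus the tuple header, well past the ~8160 usable bytes on a page,
      -- and each value is far too small for TOAST to be of any help.
      SELECT string_agg(quote_literal(repeat('x', 10)), ', ') INTO vals
        FROM generate_series(1, 1100) AS i;
      EXECUTE format('INSERT INTO wide_demo VALUES (%s)', vals);
      -- ERROR:  row is too big: size ..., maximum size 8160
  END;
  $$;
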
>
>
>
> I created a table with 1100 columns, all of data type varchar, and
> hoped the COPY command would automatically convert the CSV data,
> which contains some character and date fields but is mostly numeric.
>
>
> Numeric is an expensive type - try to use float instead, maybe
> double precision.
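
For scale, something like this (purely illustrative) shows the
per-value difference; numeric is variable-length and grows with the
precision, while real and double precision are fixed at 4 and 8 bytes:

  SELECT pg_column_size(12345.678::numeric)          AS numeric_bytes,
         pg_column_size(12345.678::real)             AS real_bytes,
         pg_column_size(12345.678::double precision) AS double_bytes;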

If I am following the OP correctly, the table itself has all the columns
declared as varchar. The data in the CSV file is a mix of text, date and
numeric values, presumably cast to text on entry into the table.
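
If most of the fields really are numbers, one option - a sketch only,
since the real column list has to come from the CSV's actual layout and
the names below are invented - is to declare the columns with their
proper types and let COPY do the conversion on input, so each numeric
field is stored as a 4-byte real rather than as text:

  CREATE TABLE rius (
      sample_id   integer,
      label       varchar,
      sampled_on  date,
      m1          real,
      m2          real
      -- ... the remaining, mostly numeric, columns as real
  );

  COPY rius FROM '/var/www/test/test.csv' WITH DELIMITER ';';

At roughly 4 bytes per real column, ~1100 columns works out to about
4-5 KB per row, which does fit on an 8KB page; how close the real table
gets to the limit depends on how many columns have to stay as text.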

>
> Regards
>
> Pavel
>
>
> I use the command: COPY rius FROM '/var/www/test/test.csv' WITH
> DELIMITER ';' ;
>
> Then it shows:
>
> ERROR: row is too big: size 11808, maximum size 8160
>
>
>
> ---- On Tue, 03 Jan 2017 05:24:18 -0800 *John McKown
> <john(dot)archie(dot)mckown(at)gmail(dot)com
> <mailto:john(dot)archie(dot)mckown(at)gmail(dot)com>>* wrote ----
>
> On Mon, Jan 2, 2017 at 2:57 PM, Rob Sargent
> <robjsargent(at)gmail(dot)com <mailto:robjsargent(at)gmail(dot)com>>wrote:
>
> Perhaps this is your opportunity to correct someone else's
> mistake. You need to show the table definition to convince
> us that it cannot be improved. That it may be hard work
> really doesn't mean it's not the right path.
>
>
> This may not be possible. The data might be coming in from an
> external source. I imagine you've run into the old "well, _we_
> don't have any problems, so it must be on your end!" scenario.
>
> Example: we receive CSV files from an external source. These
> files are _supposed_ to be validated. But we have often received
> files where NOT NULL fields have "nothing" in them. E.g. a
> customer bill which has _everything_ in it _except_ the customer
> number (or an invalid one such as "123{"); or missing some other
> vital piece of information.
>
> In this particular case, the OP might want to do what we did in
> a similar situation. We had way too many columns in a table and
> the performance was horrible. We did an analysis and, as usual,
> the majority of the SELECTs were for a subset of the columns,
> about 15% of the total. We "split" the table into a "high use"
> columns table and a "low use" columns table. We then used
> triggers to make sure that if we added a new row to / deleted an
> old row from one table, the corresponding row in the other was
> created / deleted.
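
For anyone wanting to try that route, a rough sketch of such a vertical
split (table, column and trigger names are invented; the real column
lists come out of the usage analysis). It uses one trigger for inserts
and a cascading foreign key, rather than a second trigger, for deletes:

  -- The ~15% of columns that most SELECTs touch.
  CREATE TABLE bill_hot (
      bill_id   bigint PRIMARY KEY,
      customer  text,
      amount    numeric
  );

  -- Everything else, kept 1:1 with bill_hot.
  CREATE TABLE bill_cold (
      bill_id   bigint PRIMARY KEY
                REFERENCES bill_hot ON DELETE CASCADE,
      details   text
      -- ... the remaining low-use columns
  );

  -- Inserting a hot row creates its cold twin; the foreign key above
  -- removes the cold row again when the hot row is deleted.
  CREATE FUNCTION bill_hot_sync() RETURNS trigger AS $$
  BEGIN
      INSERT INTO bill_cold (bill_id) VALUES (NEW.bill_id);
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER bill_hot_sync_trg
      AFTER INSERT ON bill_hot
      FOR EACH ROW EXECUTE PROCEDURE bill_hot_sync();
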
>
>
> --
> There’s no obfuscated Perl contest because it’s pointless.
>
> —Jeff Polk
>
> Maranatha! <><
> John McKown
>
>
>

--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com
