Re: COPY: row is too big

From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: rob stone <floriparob(at)gmail(dot)com>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, vod vos <vodvos(at)zoho(dot)com>
Cc: John McKown <john(dot)archie(dot)mckown(at)gmail(dot)com>, Rob Sargent <robjsargent(at)gmail(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: COPY: row is too big
Date: 2017-01-04 16:13:58
Message-ID: 70aee7e1-7cee-9efd-2b64-05a14a81b1f1@aklaver.com
Lists: pgsql-general

On 01/04/2017 08:00 AM, rob stone wrote:
> Hello,
> On Wed, 2017-01-04 at 07:11 -0800, Adrian Klaver wrote:
>> On 01/04/2017 06:54 AM, Pavel Stehule wrote:
>>> Hi
>>>
>>> 2017-01-04 14:00 GMT+01:00 vod vos <vodvos(at)zoho(dot)com
>>> <mailto:vodvos(at)zoho(dot)com>>:
>>>
>>>     Now I am confused: I can create 1100 columns in a table in
>>>     PostgreSQL, but I can't copy 1100 values into it. And I really
>>>     don't want to split the CSV file into pieces, to avoid mistakes
>>>     after doing so.
>>>
>>>
>>> The PostgreSQL limit is "Maximum Columns per Table: 250-1600
>>> depending on column types" - this limit is related to placing the
>>> values, or pointers to values, on one page (8KB).
>>>
>>> You can hit this limit not at CREATE TABLE time, but at INSERT
>>> time.
>>>
>>>
>>>
>>>     I created a table with 1100 columns, all of type varchar, and
>>>     hope the COPY command will automatically load the CSV data,
>>>     which contains some character and date values but is mostly
>>>     numeric.
>>>
>>>
>>> Numeric is an expensive type - try to use float instead, maybe
>>> double precision.
>>
>> If I am following the OP correctly, the table itself has all the
>> columns declared as varchar. The data in the CSV file is a mix of
>> text, date and numeric values, presumably cast to text on entry into
>> the table.
>>
>>>
>>> Regards
>>>
>>> Pavel
>>>
>>>
>>>     I use the command: COPY rius FROM '/var/www/test/test.csv' WITH
>>>     DELIMITER ';';
>>>
>>> Then it shows:
>>>
>>> ERROR: row is too big: size 11808, maximum size 8160
>>>
>>>
>
>
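As an aside, the numbers in that error message are easy to sanity-check with
a bit of back-of-the-envelope arithmetic (shell, sizes taken from the error
above; "8160" is the usable space in an 8192-byte page after the page header):

```shell
# Sizes copied from the error message: row is too big: size 11808, maximum size 8160
row_size=11808   # actual heap tuple size of the 1100-value row
max_size=8160    # usable space in one 8kB page
cols=1100

echo $(( row_size / cols ))      # average bytes per value, headers included -> 10
echo $(( row_size - max_size ))  # how far the row overshoots the page      -> 3648
```

So even at roughly ten bytes per value, 1100 short varchar values overshoot a
single 8kB page by a few kilobytes, which is exactly the limit Pavel describes.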
> Assuming this is a brand new database instance and not an existing
> application, could the OP not compile from source and specify
> --with-blocksize=16 (the value is in kilobytes) so as to overcome the
> 8kB default page size limit?

Well, I was thinking along those lines also, but then I searched for
BLCKSZ in the docs and saw all the configuration parameters that are
keyed off it. I know I would have to do a lot more homework to
understand the implications for the database instance as a whole, and
whether it would be worth it just to accommodate a single table.

>
> My 2 cents.
> Rob
>

--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com
