From: | rob stone <floriparob(at)gmail(dot)com> |
---|---|
To: | Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, vod vos <vodvos(at)zoho(dot)com> |
Cc: | John McKown <john(dot)archie(dot)mckown(at)gmail(dot)com>, Rob Sargent <robjsargent(at)gmail(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: COPY: row is too big |
Date: | 2017-01-04 16:00:16 |
Message-ID: | 1483545616.2970.1.camel@gmail.com |
Lists: | pgsql-general |
Hello,
On Wed, 2017-01-04 at 07:11 -0800, Adrian Klaver wrote:
> On 01/04/2017 06:54 AM, Pavel Stehule wrote:
> > Hi
> >
> > 2017-01-04 14:00 GMT+01:00 vod vos <vodvos(at)zoho(dot)com>:
> >
> > Now I am confused: I can create 1100 columns in a table in
> > PostgreSQL, but I can't copy 1100 values into the table. And I
> > really don't want to split the CSV file into pieces, to avoid
> > mistakes after this action.
> >
> >
> > The PostgreSQL limit is "Maximum Columns per Table250 - 1600
> > depending
> > on column types" - this limit is related to placing values or
> > pointers
> > to values to one page (8KB).
> >
> > You can hit this limit not at CREATE TABLE time, but at INSERT
> > time.
> >
> >
> >
> > I created a table with 1100 columns, all of data type varchar,
> > hoping the COPY command would automatically transfer the CSV
> > data, which contains some character and date values but is
> > mostly numeric.
> >
> >
> > Numeric is an expensive type - try to use float instead, maybe
> > double precision.
>
> If I am following the OP correctly, the table itself has all the
> columns declared as varchar. The data in the CSV file is a mix of
> text, date, and numeric values, presumably cast to text on entry
> into the table.
>
> >
> > Regards
> >
> > Pavel
> >
> >
> > I use the command: COPY rius FROM '/var/www/test/test.csv' WITH
> > DELIMITER ';' ;
> >
> > Then it shows:
> >
> > ERROR: row is too big: size 11808, maximum size 8160
> >
> >
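A rough back-of-the-envelope sketch (my own approximate numbers, not
PostgreSQL's exact tuple accounting) shows why 1100 varchar values
averaging around 10 bytes each blow past the 8160-byte limit in the
error message:

```python
# Approximate heap-row size for a wide table of varchar columns.
# The constants roughly mirror PostgreSQL's on-disk layout; treat
# them as illustrative assumptions, not authoritative figures.

TUPLE_HEADER = 23  # approximate fixed per-row header size, in bytes

def estimated_row_size(n_cols: int, avg_payload: int) -> int:
    """Estimate row size: 1-byte short varlena header + payload per column."""
    per_col = 1 + avg_payload
    return TUPLE_HEADER + n_cols * per_col

# 1100 columns of ~9-byte values already exceed the 8160-byte page limit.
print(estimated_row_size(1100, 9))  # -> 11023
```

With values of that size the row lands in the same ballpark as the
11808 bytes reported by COPY, well above what an 8KB page can hold.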
Assuming this is a brand new database instance and not an existing
application, could the OP not compile from source and specify
--with-blocksize=16 (the configure option takes the block size in
kilobytes) so as to overcome the 8KB default page size limit?
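Something along these lines (a sketch only - the install prefix is
illustrative, and it assumes an unpacked PostgreSQL source tree):

```shell
# Build with a 16 kB block size; --with-blocksize is given in kilobytes
# (8 is the default, 32 the maximum). The prefix here is hypothetical.
./configure --with-blocksize=16 --prefix=/usr/local/pgsql-16k
make
make install
```

Note that a cluster built this way is on-disk incompatible with a
default 8KB build, so it only suits a fresh instance.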
My 2 cents.
Rob