From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: vod vos <vodvos(at)zoho(dot)com>
Cc: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>, Steve Crawford <scrawford(at)pinpointresearch(dot)com>, John McKown <john(dot)archie(dot)mckown(at)gmail(dot)com>, Rob Sargent <robjsargent(at)gmail(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: COPY: row is too big
Date: 2017-01-05 13:19:56
Message-ID: CAFj8pRB0J=Ln97yQCY8tq4DK7GO5jOzH8BvXXHS7UZERN0ApAg@mail.gmail.com
Lists: pgsql-general
2017-01-05 13:44 GMT+01:00 vod vos <vodvos(at)zoho(dot)com>:
> I finally figured it out as follows:
>
> 1. modified the data types of the columns to match the csv file
>
> 2. where null values existed, defined the data type as varchar. The null
> values caused problems too.
>
int, float, and double columns can be null too - a null needs the same space
(one bit in the tuple's null bitmap) for every type
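You can see this with `pg_column_size`, which reports the storage size of a value. A minimal check, assuming a running PostgreSQL session:

```sql
-- Compare an all-non-null row with one containing a NULL.
-- The NULL column contributes no data bytes; nulls are tracked in the
-- tuple header's null bitmap, one bit per column, regardless of type.
SELECT pg_column_size(ROW(1::int, 2.0::float8)) AS no_nulls,
       pg_column_size(ROW(1::int, NULL::float8)) AS one_null;
```

The second value comes out smaller: the float8 payload is simply absent, and only a bitmap bit in the header marks the NULL.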
Regards
Pavel
> so 1100 columns work well now.
>
> This problem cost me three days. I have lots of csv data to COPY.
>
>
>
>