Re: COPY: row is too big

From: Steve Crawford <scrawford(at)pinpointresearch(dot)com>
To: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
Cc: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, vod vos <vodvos(at)zoho(dot)com>, John McKown <john(dot)archie(dot)mckown(at)gmail(dot)com>, Rob Sargent <robjsargent(at)gmail(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: COPY: row is too big
Date: 2017-01-04 16:32:42
Message-ID: CAEfWYyy5WXU__oRf0iMWthpRWZdJRNW2kt9v3da1BswtFKFP1Q@mail.gmail.com
Lists: pgsql-general

...

> Numeric is expensive type - try to use float instead, maybe double.
>>
>
> If I am following the OP correctly the table itself has all the columns
> declared as varchar. The data in the CSV file is a mix of text, date and
> numeric, presumably cast to text on entry into the table.
>

But a CSV *is* purely text - no casting to text is needed. Conversion is
only needed when the strings in the CSV are text representations of
*non*-text data.
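To illustrate (a minimal sketch): whether any conversion happens depends entirely on the target column's type, not on the CSV itself. The same string is kept verbatim in a text column but run through the type's input function otherwise:

```sql
-- The CSV string '3.14' stored in a varchar column is kept verbatim;
-- targeting numeric invokes numeric's input function, which is where
-- a malformed value such as 'abc' would raise an error.
SELECT '3.14'::varchar AS kept_as_text,
       '3.14'::numeric AS converted;
```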

I'm guessing that the OP is using all-text fields to cope with possibly
flawed input data, then validating and migrating the data in subsequent
steps. In that case, an ETL solution may be a better approach. Many
open-, closed-, and hybrid-source options exist.
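If that is the workflow, one common pattern (sketched here with hypothetical table and column names) is to COPY into an all-text staging table, then cast only the rows that validate into the typed target:

```sql
-- Staging table: every column is text, so COPY never fails on type errors.
CREATE TABLE staging_events (event_date text, amount text);

-- Load the raw CSV (path is hypothetical; use \copy in psql for a
-- client-side file).
COPY staging_events FROM '/path/to/data.csv' WITH (FORMAT csv, HEADER true);

-- Migrate only rows whose strings are valid representations of the
-- target types; everything else stays behind for inspection.
INSERT INTO typed_events (event_date, amount)
SELECT event_date::date, amount::numeric
FROM staging_events
WHERE event_date ~ '^\d{4}-\d{2}-\d{2}$'
  AND amount ~ '^-?\d+(\.\d+)?$';
```

Keeping the rejected rows in the staging table, rather than discarding them at load time, is what makes the validate-then-migrate step auditable.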

Cheers,
Steve
