From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: Steve Crawford <scrawford(at)pinpointresearch(dot)com>
Cc: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, vod vos <vodvos(at)zoho(dot)com>, John McKown <john(dot)archie(dot)mckown(at)gmail(dot)com>, Rob Sargent <robjsargent(at)gmail(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: COPY: row is too big
Date: 2017-01-04 16:39:42
Message-ID: 591360d9-00db-4e83-0feb-0b22109414f1@aklaver.com
Lists: pgsql-general
On 01/04/2017 08:32 AM, Steve Crawford wrote:
> ...
>
> Numeric is an expensive type - try using float instead, or maybe double.
>
>
> If I am following the OP correctly the table itself has all the
> columns declared as varchar. The data in the CSV file is a mix of
> text, date and numeric, presumably cast to text on entry into the table.
>
>
> But a CSV *is* purely text - no casting to text is needed. Conversion is
> only needed when the strings in the CSV are text representations of
> *non*-text data.
Yeah, muddled thinking.
>
> I'm guessing that the OP is using all text fields to deal with possibly
> flawed input data and then validating and migrating the data in
> subsequent steps. In that case, an ETL solution may be a better
> approach. Many options exist, both open-, closed-, and hybrid-source.
>
> Cheers,
> Steve
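
For what it's worth, the staging-then-validate pattern described above (load
everything as text, then check each field before casting) can be sketched
outside the database as well. This is a minimal, hypothetical example; the
column names and formats are made up for illustration, not taken from the
OP's data:

```python
# Sketch of "stage as text, validate, then cast": every CSV field arrives
# as a string; we attempt the real conversions and collect failures
# instead of aborting the whole load.
import csv
import io
from datetime import datetime

def validate_row(row):
    """Return (cleaned_row, None) on success, (None, error_message) on failure."""
    try:
        name = row["name"]                                        # stays text
        when = datetime.strptime(row["when"], "%Y-%m-%d").date()  # text -> date
        amount = float(row["amount"])                             # text -> numeric
        return ({"name": name, "when": when, "amount": amount}, None)
    except (KeyError, ValueError) as exc:
        return (None, str(exc))

def split_rows(csv_text):
    """Partition CSV rows into (good, bad) lists; bad rows keep their error."""
    good, bad = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        cleaned, err = validate_row(row)
        if err is None:
            good.append(cleaned)
        else:
            bad.append((row, err))
    return good, bad
```

The good rows can then be inserted into a properly typed table while the bad
ones go to a reject file for inspection, which is essentially what most ETL
tools automate.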
--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com