Re: performance of loading CSV data with COPY is 50 times faster than Perl::DBI

From: Ravi Krishna <ravikrishna(at)vivaldi(dot)net>
To: Steven Lembark <lembark(at)wrkhors(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org, Matthias Apitz <guru(at)unixarea(dot)de>
Subject: Re: performance of loading CSV data with COPY is 50 times faster than Perl::DBI
Date: 2020-02-03 16:57:14
Message-ID: B45795C7-ACB5-4CB2-AF8E-FAF8C7757978@vivaldi.net
Lists: pgsql-general

>
> Depending on who wrote the code, they may have extracted the rows
> as hashrefs rather than arrays; that can be a 10x slowdown right
> there. [I have no idea why so many people are so addicted to storing
> rows in hashes, but it is always a significant slowdown; and
> array slices are no more complicated than hash slices!]

I have not written Perl code in a while, but most Perl coders, already
self-conscious about working in a language with a reputation for
unreadability, prefer not to make it worse by using arrays, which are
position dependent; reading that code can be a nightmare when a large
number of columns is selected.

Also, isn't fetchrow_arrayref even better than a plain array fetch, since
it avoids copying the data into a local array in your code?
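For what it's worth, both points can coexist: fetchrow_arrayref skips the
per-row copy, and bind_columns gives each column a readable name without
the hashref overhead. A minimal sketch of the two idioms, assuming the
DBD::SQLite driver is available for a self-contained example (the same
calls work against a PostgreSQL handle):

```perl
use strict;
use warnings;
use DBI;

# In-memory SQLite database; assumes DBD::SQLite is installed.
my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
                       { RaiseError => 1, AutoCommit => 1 });

$dbh->do("CREATE TABLE t (id INTEGER, name TEXT)");
$dbh->do("INSERT INTO t VALUES (1, 'alice'), (2, 'bob')");

my $sth = $dbh->prepare("SELECT id, name FROM t ORDER BY id");

# fetchrow_arrayref hands back a reference to DBI's internal row
# buffer, so no per-row copy is made -- but the buffer is reused on
# each fetch, so copy the contents (here with [@$row]) if you need to
# keep a row around.
$sth->execute;
my @kept;
while (my $row = $sth->fetchrow_arrayref) {
    push @kept, [@$row];
}

# bind_columns attaches a named variable to each column; $id and $name
# are refreshed on every fetch, so positional indexing disappears from
# the loop body entirely.
$sth->execute;
$sth->bind_columns(\my ($id, $name));
my @names;
while ($sth->fetch) {
    push @names, "$id:$name";
}

$dbh->disconnect;
```

The bind_columns form is the one the DBI documentation itself recommends
for tight fetch loops, and it answers the readability complaint without
paying for a hash per row.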
