From: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>
To: Emmanuel Cecchet <manu(at)frogthinker(dot)org>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Emmanuel Cecchet <manu(at)asterdata(dot)com>, Emmanuel Cecchet <Emmanuel(dot)Cecchet(at)asterdata(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: COPY enhancements
Date: 2009-10-13 16:30:27
Message-ID: 87aazv8dek.fsf@hi-media-techno.com
Lists: pgsql-hackers
Emmanuel Cecchet <manu(at)frogthinker(dot)org> writes:
> Tom was also suggesting 'refactoring COPY into a series of steps that the
> user can control'. What would these steps be? Would that be per row and
> allow to discard a bad tuple?
The idea is to make COPY usable from a general SELECT query, so that the
user controls what happens. Think of an SRF returning bytea[], or some
variation on that theme.
Maybe WITH to the rescue:
WITH csv AS (
  -- no error here, as the destination "table" is an in-memory tuple store,
  -- assuming we have adunstan's patch to ignore rows with too few or
  -- too many columns
  COPY csv(a, b, c, d) FROM STDIN WITH CSV HEADER -- and said options
)
INSERT INTO destination
SELECT a, b, f(a + b - d), strange_timestamp_reader(c)
  FROM csv
 WHERE validity_check_passes(a, b, c, d);
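For comparison, a similar staged pipeline is already achievable today with a
temporary staging table (a sketch only; destination, f,
strange_timestamp_reader and validity_check_passes are the same hypothetical
names as above, and the text-typed staging columns are an assumption):

```sql
-- Stage the raw CSV into a loosely-typed temp table first, so that
-- parsing and validation happen in SQL rather than inside COPY itself.
CREATE TEMP TABLE csv_stage (a text, b text, c text, d text);

COPY csv_stage(a, b, c, d) FROM STDIN WITH CSV HEADER;

-- Then transform and filter on the way into the real table,
-- discarding rows that fail the user-defined check.
INSERT INTO destination
SELECT a::int, b::int,
       f(a::int + b::int - d::int),
       strange_timestamp_reader(c)
  FROM csv_stage
 WHERE validity_check_passes(a, b, c, d);

DROP TABLE csv_stage;
```

The difference is that the staging table goes through the regular storage
path and needs several statements, whereas the WITH form would keep the
tuples in memory within a single statement.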
That offers the user complete control over the stages that transform the
data. A previous thread offered some further ideas for user control, but I
forget the details and don't have the time right now to search the archives.
Regards,
--
Dimitri Fontaine
PostgreSQL DBA, Architect