From: John McKown <jmckown(at)prodigy(dot)net>
To: PostgreSQL general mailing list <pgsql-general(at)postgresql(dot)org>
Subject: Loading the database, question about my method
Date: 2000-08-18 23:49:37
Message-ID: Pine.LNX.4.21.0008181840590.6484-100000@linux2.johnmckown.net
Lists: pgsql-general
I have a sequential file on my work system (OS/390). This file contains
multiple record types which I want to load into multiple tables in a
PostgreSQL database. I have written a program which runs on the OS/390
system. This program reads the sequential file and reformats it. What I do
is reformat it so that a "copy <tablename> from stdin;" is generated
followed by the reformatted records for that table. This works and is
quite fast. However, I was wondering if anybody knows how much slower it
would be to use standard SQL INSERT statements instead. Basically, my
input file would look something like:
BEGIN;
INSERT .....
INSERT .....
COMMIT;
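
For comparison, the COPY form that my program currently generates looks
roughly like the following (the table name and row values here are just
an illustration, not my real data). In a psql script, COPY FROM stdin is
followed by tab-delimited rows, and the data block ends with \. on a
line by itself:

copy claims from stdin;
1001	2000-08-15	A
1002	2000-08-16	B
\.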
Again, I kinda like this approach because it is "standard" SQL and not
specific to PostgreSQL (not that I plan to use anything else,
personally). The reason I'd like a standard method is that there might
be some interest from other OS/390 users in this code for other
databases that they might have.
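
In case it helps anyone picture the reformatting step, here is a minimal
sketch (in Python, just for illustration; my actual program runs on
OS/390 and is not shown) of the dispatch-by-record-type idea. The record
layouts, one-character type codes, and table names are all made up:

import sys

# Hypothetical fixed-layout record types, keyed by a one-character
# type code in column 1. Offsets and table names are illustrative only.
LAYOUTS = {
    "C": ("customers", [(1, 9), (9, 39)]),   # id, name
    "O": ("orders",    [(1, 9), (9, 17)]),   # id, order_date
}

def quote(value):
    # Minimal SQL string quoting: double any embedded single quotes.
    return "'" + value.strip().replace("'", "''") + "'"

def main():
    print("BEGIN;")
    for line in sys.stdin:
        rectype = line[0:1]
        if rectype not in LAYOUTS:
            continue  # skip record types we do not load
        table, fields = LAYOUTS[rectype]
        values = ", ".join(quote(line[a:b]) for a, b in fields)
        print("INSERT INTO %s VALUES (%s);" % (table, values))
    print("COMMIT;")

if __name__ == "__main__":
    main()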
Just curious,
John McKown