From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Philip Warner <pjw(at)rhyme(dot)com(dot)au>
Cc: Pavel(dot)Janik(at)linux(dot)cz, pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: Re: pg_dump and LOs (another proposal)
Date: 2000-07-05 17:06:33
Message-ID: 3345.962816793@sss.pgh.pa.us
Lists: pgsql-hackers
Philip Warner <pjw(at)rhyme(dot)com(dot)au> writes:
> The thing that bugs me about this is, for 30,000 rows, I do 30,000 updates
> after the restore. It seems *really* inefficient, not to mention slow.
Shouldn't be a problem. For one thing, I can assure you there are no
databases with 30,000 LOs in them ;-) --- the existing two-tables-per-LO
infrastructure won't support it. (I think Denis Perchine has started
to work on a replacement one-table-for-all-LOs solution, btw.) Possibly
more to the point, there's no reason for pg_restore to grovel through
the individual rows for itself. Having identified a column that
contains (or might contain) LO OIDs, you can do something like
    UPDATE userTable SET oidcolumn = tmptable.newLOoid
        FROM tmptable
        WHERE userTable.oidcolumn = tmptable.oldLOoid;
which should be quick enough, especially given indexes.
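For concreteness, the mapping-table scheme described above could be sketched roughly as follows. The table and column names (tmptable, userTable, oidcolumn) follow the example in the message; the CREATE statement and the use of a temp table are illustrative assumptions, not something specified in the original:

```sql
-- Hypothetical mapping table that pg_restore would populate while
-- re-importing large objects: each row pairs the LO OID recorded in
-- the dump with the new OID assigned by lo_import() at restore time.
CREATE TEMP TABLE tmptable (
    oldLOoid oid,
    newLOoid oid
);

-- One set-oriented UPDATE per LO-bearing column, instead of one
-- UPDATE per user-table row; an index on userTable.oidcolumn helps.
UPDATE userTable SET oidcolumn = tmptable.newLOoid
    FROM tmptable
    WHERE userTable.oidcolumn = tmptable.oldLOoid;
```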
> I'll also have to modify pg_restore to talk to the database directly (for
> lo import). As a result I will probably send the entire script directly
> from within pg_restore. Do you know if comment parsing ('--') is done in
> the backend, or psql?
Both, I believe --- psql discards comments, but so will the backend.
Not sure you really need to abandon use of psql, though.
regards, tom lane