Re: pg_dump: largeobject behavior issues (possible bug)

From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Andrew Gierth <andrew(at)tao11(dot)riddles(dot)org(dot)uk>, "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_dump: largeobject behavior issues (possible bug)
Date: 2015-04-25 17:14:13
Message-ID: 553BCB65.6090706@dunslane.net
Views: Raw Message | Whole Thread | Download mbox | Resend email
Thread:
Lists: pgsql-hackers


On 04/25/2015 12:32 PM, Tom Lane wrote:
> Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
>> On 04/24/2015 06:41 PM, Tom Lane wrote:
>>> Yeah, this was brought up when we added per-large-object metadata; it was
>>> obvious that that patch would cause pg_dump to choke on large numbers of
>>> large objects. The (perhaps rather lame) argument was that you wouldn't
>>> have that many of them.
>>> Given that large objects don't have any individual dependencies,
>>> one could envision fixing this by replacing the individual large-object
>>> DumpableObjects by a single placeholder to participate in the sort phase,
>>> and then when it's time to dump that, scan the large objects using a
>>> cursor and create/print/delete the information separately for each one.
>>> This would likely involve some rather painful refactoring in pg_dump
>>> however.
>> I think we need to think about this some more, TBH, I'm not convinced
>> that the changes made back in 9.0 were well conceived. Having separate
>> TOC entries for each LO seems wrong in principle, although I understand
>> why it was done.
> Perhaps. One advantage of doing it this way is that you can get
> pg_restore to extract a single LO from an archive file; though it's
> debatable whether that's worth the potential resource-consumption hazards.

In my view it isn't worth it.

> Another issue is that restore options such as --no-owner and
> --no-privileges would not work for LOs (at least not without messy hacks)
> if we go back to a scheme where all the LO information is just SQL
> commands inside a single TOC object.
>
> After further thought I realized that if we simply hack pg_dump to emit
> the LOs in a streaming fashion, but keep the archive-file representation
> the same as it is now, then we haven't really fixed the problem because
> pg_restore is still likely to choke when it tries to read the archive's
> TOC. So my proposal above isn't enough either.

Yep, that's certainly true.

>
> Perhaps what we need is some sort of "second-level TOC" which is only ever
> processed in a streaming fashion, by both pg_dump and pg_restore. This
> would not support dependency resolution or re-ordering, but we don't need
> those abilities for LOs.
>
>

+1, I had a similar thought, half-formed, but you've expressed it better
than I could have.

cheers

andrew

In response to

Browse pgsql-hackers by date

  From Date Subject
Next Message Andrew Dunstan 2015-04-25 17:19:17 Re: forward vs backward slashes in msvc build code
Previous Message Tom Lane 2015-04-25 16:32:22 Re: pg_dump: largeobject behavior issues (possible bug)