From: "Daniel Verite" <daniel(at)manitou-mail(dot)org>
To: "Adrian Klaver" <adrian(dot)klaver(at)aklaver(dot)com>
Cc: "Charles Martin" <ssappeals(at)gmail(dot)com>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "pgsql-general" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Trouble Upgrading Postgres
Date: 2018-11-06 16:27:57
Message-ID: 89b5b622-4c79-4c95-9ad4-b16d0d0daf9b@manitou-mail.org
Lists: pgsql-general

Adrian Klaver wrote:
> > So there's no way it can deal with the contents over 500MB, and the
> > ones just under that limit may also be problematic.
>
> To me that looks like a bug, putting data into a record you cannot get out.
Strictly speaking, the data could probably be extracted with COPY in
binary format, but pg_dump doesn't use that.
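For illustration, this is the kind of manual workaround meant here; the
table and file names are hypothetical:

```sql
-- Export the oversized rows in binary format (avoids the textual
-- escaping that inflates bytea output past the 1GB limit):
COPY bigtable TO '/tmp/bigtable.bin' WITH (FORMAT binary);

-- Restore into the upgraded cluster the same way:
COPY bigtable FROM '/tmp/bigtable.bin' WITH (FORMAT binary);
```

Unlike pg_dump output, such a file is tied to the table's exact column
layout, which is one reason pg_dump sticks to the text format.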
It's undoubtedly very annoying that a database can end up with
non-pg_dump'able contents, but it's not an easy problem to solve.
Some time ago, work was done to extend the 1GB limit,
but eventually it was abandoned. The thread in [1] discusses
many details of the problem and why the proposed solutions
were mostly a band-aid. Basically, the specs of COPY
and other internal aspects of Postgres date from the 32-bit era, when
putting the contents of an entire CD-ROM into a single row/column was not
anticipated as a valid use case.
It's still a narrow use case today, and applications that need to store
big pieces of data like that should slice them into chunks, a bit like
pg_largeobject does, except in much larger chunks, such as 1MB.
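As a minimal sketch of that chunking idea (all names here are
hypothetical, not an existing schema):

```sql
-- One row per ~1MB chunk, keyed by blob id and chunk position.
CREATE TABLE blob_chunk (
  blob_id  bigint  NOT NULL,
  chunk_no integer NOT NULL,
  data     bytea   NOT NULL,  -- kept to ~1MB per row by the application
  PRIMARY KEY (blob_id, chunk_no)
);

-- Reassemble one blob server-side:
SELECT string_agg(data, ''::bytea ORDER BY chunk_no)
FROM blob_chunk
WHERE blob_id = 1;
```

Note that reassembling server-side recreates a single large value and so
runs into the same 1GB ceiling; a client would normally fetch the chunks
ordered by chunk_no and concatenate them on its side, which also keeps
each row comfortably dumpable by pg_dump.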
[1] pg_dump / copy bugs with "big lines" ?
    https://www.postgresql.org/message-id/1836813.YmyOrS99PX%40ronan.dunklau.fr
Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite