From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: boleslaw(dot)ziobrowski(at)yahoo(dot)pl
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #14384: pg_dump uses excessive amounts of memory for LOBs
Date: 2016-10-20 13:23:27
Message-ID: 29613.1476969807@sss.pgh.pa.us
Lists: pgsql-bugs

boleslaw(dot)ziobrowski(at)yahoo(dot)pl writes:
> pg_dump seems to allocate memory proportional to the number of rows in
> pg_largeobject (not necessarily correlated with size of these objects),

Yes, it does. It also allocates memory proportional to the number of,
eg, tables, or any other DB object for that matter.
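
For instance, a quick way to gauge how many per-blob entries pg_dump
will have to track (pg_largeobject_metadata, added in 9.0, has one row
per large object; the database name here is just a placeholder):

    psql -d mydb -c "SELECT count(*) FROM pg_largeobject_metadata;"
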
This is a consequence of the fact that blobs grew owners and privileges
in 9.0. pg_dump uses its usual per-object infrastructure to keep track
of that. The argument was that this'd be okay because if your large
objects are, well, large, then there couldn't be so many of them that
the space consumption would be fatal. I had doubts about that at the
time, but I think we're more or less locked into it now. It would
take a lot of restructuring to change it, and we'd lose functionality
too, because we couldn't have a separate TOC entry per blob. That
means no ability to select out individual blobs during pg_restore.
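
As a rough sketch of what that per-blob selection looks like (archive
and list-file names are illustrative): list the TOC of a custom-format
dump, trim the listing down to the blob entries you want, and restore
from the edited list:

    pg_restore -l dump.custom | grep 'BLOB' > blobs.list
    # edit blobs.list down to the desired blobs, then:
    pg_restore -L blobs.list -d mydb dump.custom
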
TL;DR: blobs are not exactly lightweight objects. If you want something
with less overhead, maybe you should just store the data in a plain
bytea column.
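
A minimal sketch of that alternative (table and column names are
illustrative); each value is then just an ordinary row, dumped and
restored along with the table's data rather than as a separate TOC
entry:

    psql -d mydb -c "CREATE TABLE documents
        (id serial PRIMARY KEY, payload bytea NOT NULL);"

The tradeoff is that bytea values are read and written whole and are
limited to 1GB apiece, while large objects support streaming and
random access through the lo_* interface.
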
regards, tom lane