From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: "Greg Smith" <greg(at)2ndquadrant(dot)com>, "KaiGai Kohei" <kaigai(at)ak(dot)jp(dot)nec(dot)com>, "Robert Haas" <robertmhaas(at)gmail(dot)com>, "KaiGai Kohei" <kaigai(at)kaigai(dot)gr(dot)jp>, "Takahiro Itagaki" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org, "Jaime Casanova" <jcasanov(at)systemguards(dot)com(dot)ec>
Subject: Re: Largeobject Access Controls (r2460)
Date: 2010-01-25 18:24:23
Message-ID: 24518.1264443863@sss.pgh.pa.us
Lists: pgsql-hackers
"Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> writes:
> Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> It might be better to try a test case with lighter-weight objects,
>> say 5 million simple functions.
> Said dump ran in about 45 minutes with no obvious stalls or
> problems. The 2.2 GB database dumped to a 1.1 GB text file, which
> was a little bit of a surprise.
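[For reference, a minimal sketch of how such a test database could be
generated. This is not from the thread, and it uses DO blocks and
format(), which postdate the 8.4-era servers under discussion, so it
is purely illustrative:]

    -- Hypothetical generator for the test case described above:
    -- 5 million trivial SQL functions, each of which becomes one
    -- object in the dump.
    DO $$
    BEGIN
      FOR i IN 1..5000000 LOOP
        EXECUTE format(
          'CREATE FUNCTION f%s() RETURNS int LANGUAGE sql AS $f$SELECT %s$f$;',
          i, i);
      END LOOP;
    END
    $$;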
Did you happen to notice anything about pg_dump's memory consumption?
For an all-DDL case like this, I'd sort of expect the memory usage to
be comparable to the output file size.
Anyway, this seems to suggest that we don't have any huge problem with
large numbers of archive TOC objects, so the next step probably is to
look at how big a code change it would be to switch over to
TOC-per-blob.
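[A blob-heavy counterpart to the test case above is also easy to
construct if one wants to gauge what TOC-per-blob would cost. Again a
sketch, not from the thread:]

    -- Allocate one million empty large objects; under a TOC-per-blob
    -- scheme, each of these would get its own archive TOC entry.
    SELECT lo_create(0) FROM generate_series(1, 1000000);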
regards, tom lane