From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: "KaiGai Kohei" <kaigai(at)kaigai(dot)gr(dot)jp>, "Greg Smith" <greg(at)2ndquadrant(dot)com>, "KaiGai Kohei" <kaigai(at)ak(dot)jp(dot)nec(dot)com>, "Robert Haas" <robertmhaas(at)gmail(dot)com>, "Takahiro Itagaki" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org, "Jaime Casanova" <jcasanov(at)systemguards(dot)com(dot)ec>
Subject: Re: Largeobject Access Controls (r2460)
Date: 2010-01-24 17:53:35
Message-ID: 28203.1264355615@sss.pgh.pa.us
Lists: pgsql-hackers

"Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> writes:
> I'm afraid pg_dump didn't get very far with this before:
> pg_dump: WARNING: out of shared memory
> pg_dump: SQL command failed
> Given how fast it happened, I suspect that it was 2672 tables into
> the dump, versus 26% of the way through 5.5 million tables.
Yeah, I didn't think about that. You'd have to bump
max_locks_per_transaction up awfully far to get to where pg_dump
could dump millions of tables, because it wants to lock each one.
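
For scale: the shared lock table has room for roughly
max_locks_per_transaction * (max_connections + max_prepared_transactions)
locked objects, and pg_dump takes an AccessShareLock on each table it
dumps. A rough sketch of the arithmetic with stock settings (the raised
value at the end is illustrative only, not a recommendation):

    -- Shared lock table capacity is roughly
    --   max_locks_per_transaction * (max_connections + max_prepared_transactions)
    SHOW max_locks_per_transaction;   -- default 64
    SHOW max_connections;             -- default 100
    -- 64 * (100 + 0) = 6400 lock slots, nowhere near 5.5 million tables.
    -- Covering millions of tables means editing postgresql.conf and
    -- restarting, e.g.:
    --   max_locks_per_transaction = 60000   # illustrative value only
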
It might be better to try a test case with lighter-weight objects,
say 5 million simple functions.
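
A minimal sketch of such a test case, assuming a server new enough to
have DO blocks and format() (9.1 or later); the f_<n> naming is
arbitrary:

    -- Create 5 million trivial functions; pg_dump does not take
    -- heavyweight locks on functions, so the lock table is no longer
    -- the bottleneck.
    DO $$
    BEGIN
      FOR i IN 1..5000000 LOOP
        EXECUTE format(
          'CREATE FUNCTION f_%s() RETURNS int LANGUAGE sql AS $f$ SELECT 1 $f$;',
          i);
      END LOOP;
    END
    $$;

In practice one might commit in batches rather than run all five million
CREATEs in one transaction, but this shows the shape of the test.
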
regards, tom lane