From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: arnaulist(at)andromeiberica(dot)com
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Problems restoring big tables
Date: 2007-01-06 03:02:45
Message-ID: 25554.1168052565@sss.pgh.pa.us
Lists: pgsql-admin
Arnau <arnaulist(at)andromeiberica(dot)com> writes:
> I have to restore a database whose custom-format dump (-Fc) is about
> 2.3GB. To speed up the restore, I first restored everything except
> (using pg_restore -l) the contents of the tables where most of the
> data is stored.
I think you've outsmarted yourself by creating indexes and foreign keys
before loading the data. That's *not* the way to make it faster.
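The faster ordering can be sketched like this; only statistics_operators comes from the error message below, and the columns, index, referenced table, and file path are all hypothetical — this illustrates the principle, not the actual dump:

```shell
# Hedged sketch: bulk-load first, build indexes and FKs afterward.
# With no index in place, COPY avoids per-row index maintenance, and
# adding the FK afterward validates all rows in one pass instead of
# queueing a deferred-trigger event per row.
psql -d mydb <<'EOF'
CREATE TABLE statistics_operators (op_id bigint, stat_id bigint);

COPY statistics_operators FROM '/tmp/statistics_operators.dat';

CREATE INDEX statistics_operators_op_idx
    ON statistics_operators (op_id);

ALTER TABLE statistics_operators
    ADD FOREIGN KEY (op_id) REFERENCES operators (op_id);
EOF
```

Doing it in the reverse order means every COPY row updates each index and enqueues an FK check, which is both slower and (as here) can exhaust memory.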
> pg_restore: ERROR: out of memory
> DETAIL: Failed on request of size 32.
> CONTEXT: COPY statistics_operators, line 25663678: "137320348 58618027
I'm betting you ran out of memory for deferred-trigger event records.
It's best to load the data and then establish foreign keys ... indexes
too. See
http://www.postgresql.org/docs/8.2/static/populate.html
for some of the underlying theory. (Note that pg_dump/pg_restore
gets most of this stuff right already; it's unlikely that you will
improve matters by manually fiddling with the load order. Instead,
think about increasing maintenance_work_mem and checkpoint_segments,
which pg_restore doesn't risk fooling with.)
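For reference, both settings can be raised just for the duration of the restore; the values below are illustrative, not from the original message (8.2-era parameter names — checkpoint_segments was later replaced by max_wal_size):

```shell
# Illustrative values only, assuming $PGDATA points at the cluster.
# Both parameters take effect on reload; maintenance_work_mem can
# also be set per session instead.
cat >> "$PGDATA/postgresql.conf" <<'EOF'
maintenance_work_mem = 512MB  # speeds CREATE INDEX / ADD FOREIGN KEY passes
checkpoint_segments = 30      # fewer forced checkpoints during the bulk load
EOF
pg_ctl reload -D "$PGDATA"
```

Restore the previous values once the load finishes; an oversized maintenance_work_mem on a busy server can starve other backends.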
regards, tom lane