Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> writes:
>> Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>> Do you have the opportunity to try an experiment on hardware
>>> similar to what you're running that on? Create a database with
>>> 7 million tables and see what the dump/restore times are like,
>>> and whether pg_dump/pg_restore appear to be CPU-bound or
>>> memory-limited when doing it.
>
>> If these can be empty (or nearly empty) tables, I can probably
>> swing it as a background task. You didn't need to match the
>> current 1.3 TB database size I assume?
>
> Empty is fine.
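
For concreteness, mass-creating empty tables with primary keys can be
driven by a loop along these lines (just a sketch; the table names,
the count per batch, and the batching itself are illustrative rather
than the exact script used here):

do $$
begin
  -- one batch; repeat over higher ranges so that each batch commits
  -- on its own instead of piling up millions of locks in a single
  -- transaction
  for i in 1 .. 10000 loop
    execute format('create table t%s (id int primary key)', i);
  end loop;
end
$$;
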
After about 15 hours of run time it was around 5.5 million tables;
the rate of creation had slowed rather dramatically. I did create
them with primary keys (out of habit), which was probably the wrong
thing. I canceled the table creation process and started a VACUUM
ANALYZE, figuring that we didn't want any hint-bit writing or bad
statistics confusing the results. That has been running for 30
minutes with 65 MB to 140 MB per second disk activity, mixed read
and write. After a few minutes of that I got curious just how big
the database was, so I tried:
select pg_size_pretty(pg_database_size('test'));
I did a Ctrl+C after about five minutes and got:
Cancel request sent
but it didn't return for 15 or 20 minutes. Any attempt to query
pg_locks stalls. Tab completion stalls. (By the way, this is not
related to the false alarm on that yesterday; that was a result of
my attempting tab completion from within a failed transaction, which
just found nothing rather than stalling.)
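
To be concrete about the pg_locks stall: even a minimal query such as
the one below just hangs (only an example; any form of it behaves the
same way).

select locktype, relation::regclass, mode, granted
  from pg_locks
 where not granted;
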
So I'm not sure whether I can get to a state suitable for starting
the desired test, but I'll stay with it for a while.
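
If it does get to that point, the dump and restore would be run along
these lines (illustrative commands only; test_restore is just a
placeholder for an empty database to restore into), watching top and
vmstat while they run to see whether pg_dump and pg_restore look
CPU-bound or memory-limited:

time pg_dump -Fc -f test.dump test
time pg_restore -d test_restore test.dump
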
-Kevin