From: Stephen Frost <sfrost(at)snowman(dot)net>
To: John R Pierce <pierce(at)hogranch(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: a back up question
Date: 2017-12-06 14:57:15
Message-ID: 20171206145715.GA4628@tamriel.snowman.net
Lists: pgsql-general
John, all,
* John R Pierce (pierce(at)hogranch(dot)com) wrote:
> On 12/5/2017 2:09 PM, Martin Mueller wrote:
> >Time is not really a problem for me, if we talk about hours rather
> >than days. On a roughly comparable machine I’ve made backups of
> >databases less than 10 GB, and it was a matter of minutes. But I
> >know that there are scale problems. Sometimes programs just hang
> >if the data are beyond some size. Is that likely in Postgres if
> >you go from ~ 10 GB to ~100 GB? There isn’t any interdependence
> >among my tables beyond queries I construct on the fly, because I
> >use the database in a single-user environment.
>
> Another factor is restore time. Restores have to create indexes.
> Creating indexes on multi-million-row tables can take a while.
> (Hint: be sure to set maintenance_work_mem to 1GB before doing
> this!)
I'm sure you're aware of this John, but for others following along, just
to be clear: indexes have to be recreated when restoring from a
*logical* (e.g. pg_dump-based) backup. Indexes don't have to be
recreated for *physical* (e.g. file-based) backups.
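For anyone who wants to see the difference concretely, here's a rough
sketch (the database name "mydb" and the paths are placeholders, not
anything specific to Martin's setup):

    # logical backup: a SQL-level dump; indexes are rebuilt on restore
    pg_dump -Fc -f mydb.dump mydb
    pg_restore -d mydb mydb.dump

    # physical backup: copies the cluster's files; indexes come along as-is
    pg_basebackup -D /path/to/backup -Ft -z -P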
Neither pg_dump nor the various physical-backup utilities should hang or
have issues with larger data sets.
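As a quick illustration of John's hint, you can also raise
maintenance_work_mem for just the restore instead of editing
postgresql.conf; something along these lines (again, the database and
file names are placeholders):

    # apply the setting only to the restoring session(s)
    PGOPTIONS='-c maintenance_work_mem=1GB' pg_restore -j 4 -d mydb mydb.dump

The -j option runs the restore, including the index builds, across
multiple jobs, which also helps quite a bit with larger tables.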
Thanks!
Stephen