From: Alan Hodgson <ahodgson(at)simkin(dot)ca>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: How to store text files in the postgresql?
Date: 2009-06-12 16:07:47
Message-ID: 200906120907.47698@hal.medialogik.com
Lists: pgsql-general
On Friday 12 June 2009, Scott Ribe <scott_ribe(at)killerbytes(dot)com> wrote:
> > It's far easier to backup and restore a database than millions of small
> > files. Small files = random disk I/O. The real downside is the CPU time
> > involved in storing and retrieving the files. If it isn't a show
> > stopper, then putting them in the database makes all kinds of sense.
>
> On the contrary, I think backup is one of the primary reasons to move
> files *out* of the database. Decent incremental backup software greatly
> reduces the I/O & time needed for backup of files as compared to a pg
> dump. (Of course this assumes the managed files are long-lived.)
We'll have to just disagree on that. You still have to do level 0 backups
occasionally. Scanning a directory tree of millions of files to decide what
to back up for an incremental can take forever. And restoring millions of
small files can take days.
But I concede there are good arguments for the filesystem approach;
certainly it's not a one-size-fits-all problem. If your files are mostly
bigger than a few MB each, then the filesystem approach is probably better.
And of course big database tables get unwieldy too, for indexing and
vacuuming; I wouldn't necessarily put most files into the large object
interface, just the ones too big to fetch comfortably in one piece.
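
For illustration, here's a rough sketch of the two in-database options; the
table and column names and the file paths are made up for the example:

  -- bytea approach: the whole file lives in a regular table row
  CREATE TABLE documents (
      id       serial PRIMARY KEY,
      filename text NOT NULL,
      content  bytea NOT NULL      -- stored and fetched in one piece
  );

  -- large object approach: the row only holds an OID that points
  -- into pg_largeobject, so the data can be streamed in chunks
  CREATE TABLE big_documents (
      id       serial PRIMARY KEY,
      filename text NOT NULL,
      content  oid NOT NULL
  );

  -- server-side import; the path is just an example
  INSERT INTO big_documents (filename, content)
  VALUES ('backup.tar', lo_import('/tmp/backup.tar'));

  -- and writing it back out again
  SELECT lo_export(content, '/tmp/backup_copy.tar')
  FROM big_documents WHERE filename = 'backup.tar';

The bytea column comes back in one piece (TOASTed behind the scenes if it's
large), while the large object route lets a client open the OID and read or
write it in chunks via lo_open/loread/lowrite instead of pulling the whole
file at once.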
--
WARNING: Do not look into laser with remaining eye.