Re: Backing up 16TB of data (was Re: > 16TB worth of

From: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>
To: Jan Wieck <JanWieck(at)Yahoo(dot)com>
Cc: Ron Johnson <ron(dot)l(dot)johnson(at)cox(dot)net>, postgres list <pgsql-general(at)postgresql(dot)org>
Subject: Re: Backing up 16TB of data (was Re: > 16TB worth of
Date: 2003-04-25 22:42:16
Message-ID: Pine.LNX.4.33.0304251641080.2484-100000@css120.ihs.com
Lists: pgsql-general

On Fri, 25 Apr 2003, Jan Wieck wrote:

> Ron Johnson wrote:
> >
> > On Mon, 2003-04-21 at 13:23, Jeremiah Jahn wrote:
> > > I have a system that will store about 2TB+ of images per year in a PG
> > > database. Linux unfortunately has the 16TB limit for 32-bit systems. Not
> > > really sure what should be done here. Would life be better if we didn't
> > > store the images as BLOBs, and instead came up with some complicated way
> > > to only store their locations in the database, or is there some way to
> > > have postgres handle this somehow? What are other people out there doing
> > > about this sort of thing?
> >
> > Now that the hard disk and file system issues have been hashed around,
> > have you thought about how you are going to back up this much data?
>
> Legato showed a couple of years ago that Networker can back up more
> than a terabyte per hour. They used an RS/6000 with over 100 disks
> and 36 DLT 7000 drives on 16 controllers, if I recall correctly ... not
> your average backup solution, but it's possible. I doubt one could
> configure something like this with x86 hardware, though.
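[The quoted terabyte-per-hour figure is plausible on the numbers. A quick back-of-the-envelope check, assuming a DLT 7000's roughly 5 MB/s native transfer rate and the commonly claimed 2:1 hardware compression (both assumptions, not figures from the thread):]

```python
# Rough aggregate throughput of the described Legato/Networker setup.
drives = 36            # DLT 7000 drives in the quoted configuration
native_mb_s = 5.0      # assumed: DLT 7000 native rate, ~5 MB/s per drive
compression = 2.0      # assumed: typical 2:1 hardware compression claim

aggregate_mb_s = drives * native_mb_s * compression   # 360 MB/s
tb_per_hour = aggregate_mb_s * 3600 / 1_000_000       # ~1.3 TB/hour

print(f"{tb_per_hour:.2f} TB/hour")
```

[At ~1.3 TB/hour, even a 16TB store is a 12+ hour full backup window, which is why configurations like this lean on many drives in parallel.]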

I'm sure you could, but it might well involve 12 PII-350's running a trio
of DLTs each, with a RAID array for caching. :-)
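[For what it's worth, Jeremiah's "store the location in the database" alternative needn't be that complicated. A minimal sketch of the usual approach (the `image_path` function, the `/data/images` root, and the `.img` suffix are all hypothetical names, not anything from this thread): the database row holds only a derived filesystem path, the image bytes live on disk, and hashing the id into two directory levels keeps any single directory from holding millions of files. The files can then be spread across as many filesystems as needed, sidestepping the per-filesystem 16TB limit:]

```python
import hashlib
import os


def image_path(image_id: int, root: str = "/data/images") -> str:
    """Derive a sharded filesystem path for an image from its database id.

    Splitting the hex digest of the id into two directory levels
    (256 * 256 = 65536 buckets) bounds the size of each directory.
    """
    digest = hashlib.sha1(str(image_id).encode()).hexdigest()
    return os.path.join(root, digest[:2], digest[2:4], f"{image_id}.img")
```

[The path is then what goes into the `images` table instead of a BLOB; retrieval is an ordinary `open()` on the stored path.]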
