Re: Question re large objects

From: Stephen van Egmond <svanegmond(at)bang(dot)dhs(dot)org>
To: Mitch Vincent <mitch(at)venux(dot)net>
Cc: svanegmond(at)home(dot)com, chriswood(at)wvda(dot)com, pgsql-php(at)postgresql(dot)org
Subject: Re: Question re large objects
Date: 2000-11-28 19:34:10
Message-ID: 20001128143410.A30737@bang.dhs.org
Lists: pgsql-php

Mitch Vincent (mitch(at)venux(dot)net) wrote:

> > Because you will lose the images when you do a restore from backup.
> > And you will have to restore from backup eventually, count on it.
>
> I think one should always plan for the worst-case scenario; that's exactly
> why you do backups, so you don't lose data. Why would he lose data if he's
> performing backups and restoring from that backup?

BLOBs are not dumped from pgsql.

This might be because there's no valid SQL to create BLOBs, and since
pgsql dumps are supposed to be SQL, it just doesn't work.
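
If you need the images to survive a dump-and-restore, you have to copy
the large objects out yourself. A minimal PHP sketch, assuming a
hypothetical images(id, loid) table holding the OIDs and using
pg_lo_export() from the pgsql module (spelled pg_loexport() in older
PHP releases):

<?php
// Export each large object referenced by the (hypothetical) images table
// to a flat file named after its row id, so the files can be backed up
// alongside the SQL dump.
$conn = pg_connect("dbname=mydb");   // adjust the connection string

pg_query($conn, "BEGIN");            // large-object calls must run in a transaction
$res = pg_query($conn, "SELECT id, loid FROM images");
while ($row = pg_fetch_assoc($res)) {
    // pg_lo_export() writes the large object identified by its OID to a file
    pg_lo_export($conn, $row["loid"], "/backup/images/" . $row["id"] . ".img");
}
pg_query($conn, "COMMIT");
?>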

> Again, I always think one should make an application scaleable but having
> said that, I think what you're mentioning here is a cart before the horse
> situation. Even saying he needed to load balance I'd never use NFS, ever.
> Large RAID arrays and such could provide all the storage needed --
> especially since we're just talking about images here.

I'm referring to multiple serving machines due to CPU or local disk
capacity.

> I'd suggest that you don't use OIDs as binding record IDs, make another
> integer field for that. There is an option to pg_dump to preserve OIDs even
> if you do.

I don't think you understand large objects. When you create one, you
get an OID. When you want to retrieve it, you hand it the OID.
End of story. And, again, they are not dumped by pg_dump.
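
For anyone following along on the PHP side, the round trip looks roughly
like this (a sketch using the pg_lo_* functions from the pgsql module;
older PHP releases spell them pg_locreate, pg_loopen, and so on):

<?php
// Store an image as a large object; the OID is the only handle you get back.
$conn = pg_connect("dbname=mydb");

pg_query($conn, "BEGIN");            // large-object operations must run in a transaction
$oid = pg_lo_create($conn);          // create the large object, returns its OID
$lo  = pg_lo_open($conn, $oid, "w");
pg_lo_write($lo, file_get_contents("photo.jpg"));
pg_lo_close($lo);
pg_query($conn, "COMMIT");

// Later, to retrieve it, you hand back that same OID.
pg_query($conn, "BEGIN");
$lo = pg_lo_open($conn, $oid, "r");
header("Content-type: image/jpeg");
pg_lo_read_all($lo);                 // streams the object straight to the client
pg_lo_close($lo);
pg_query($conn, "COMMIT");
?>

In practice you save that OID in an ordinary column of your own table so
you can find the image again later.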
