From: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>
To: Doug McNaught <doug(at)mcnaught(dot)org>
Cc: Adam Kessel <adam(at)bostoncoop(dot)net>, <pgsql-general(at)postgresql(dot)org>
Subject: Re: Caching Websites
Date: 2003-05-12 15:42:24
Message-ID: Pine.LNX.4.33.0305120941040.26708-100000@css120.ihs.com
Lists: pgsql-general
On 12 May 2003, Doug McNaught wrote:
> Adam Kessel <adam(at)bostoncoop(dot)net> writes:
>
> > Based on the documentation, I don't immediately see any disadvantage to
> > using these large objects--does anyone else see why I might not want to
> > store archived websites in large objects?
>
> It's going to be (probably) a little slower than the filesystem
> solution, and backups are a little more involved (you can't use
> pg_dumpall) but everything works--I have been using LOs with success
> for a couple years now.
If the files aren't too big (under a meg or so each), you can either store
them in a bytea field, or base64 encode them, escape the result, and store
it in a text field. Since pgsql automatically compresses text fields, the
fact that base64 is a little larger is no big deal.
The advantage of storing them in bytea or base64-encoded text is that
pg_dump backs up your whole database.
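For concreteness, here is a minimal sketch of both approaches (a bytea column
versus base64 in a text column) using Python and psycopg2. The driver, the
table name archived_pages, and the column names are my assumptions for
illustration only; they are not from the original discussion.

    # Sketch: store a fetched page both as raw bytea and as base64 text.
    # Assumes a local database named "webarchive" (hypothetical).
    import base64
    import psycopg2

    conn = psycopg2.connect("dbname=webarchive")
    cur = conn.cursor()

    # Hypothetical schema with both storage options side by side.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS archived_pages (
            url      text PRIMARY KEY,
            body_raw bytea,   -- raw bytes; the driver handles escaping
            body_b64 text     -- base64 copy; TOAST compresses long text values
        )
    """)

    with open("index.html", "rb") as f:
        page = f.read()

    # Approach 1: bytea -- pass the bytes through the driver's Binary wrapper.
    # Approach 2: text  -- base64 is ~33% larger, but the stored value is
    # compressed automatically, so the overhead stays small.
    cur.execute(
        "INSERT INTO archived_pages (url, body_raw, body_b64) VALUES (%s, %s, %s)",
        ("http://example.com/index.html",
         psycopg2.Binary(page),
         base64.b64encode(page).decode("ascii")),
    )
    conn.commit()

Either column can then be dumped and restored with an ordinary pg_dump /
pg_restore cycle, which is the point of avoiding large objects here.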