Re: Caching Websites

From: Ericson Smith <eric(at)did-it(dot)com>
To: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>
Cc: Doug McNaught <doug(at)mcnaught(dot)org>, Adam Kessel <adam(at)bostoncoop(dot)net>, Postgresql General <pgsql-general(at)postgresql(dot)org>
Subject: Re: Caching Websites
Date: 2003-05-12 17:27:42
Message-ID: 1052760462.6710.13.camel@localhost.localdomain
Lists: pgsql-general

Maybe a little out of the loop... but if you're caching website content
(HTML? XML?), it might be best not to use the database. If your DB goes
down, your content site goes down with it.

I remember a project a while back where we actually used plain ol' DBM
files to cache the content. It was tens of times faster than the
database, and it stayed up no matter what.

I see what you're saying about the LOs, but IMHO the DB is not the
best place for cached content.
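For reference, the large-object backup Doug describes downthread looks roughly like this (a sketch; `mydb` is a placeholder database name, and on newer PostgreSQL releases large objects are included in custom-format dumps by default, with `-b`/`--blobs` as the explicit flag):

```shell
# Dump the whole database in the custom format; -o (--oids) preserves
# the OIDs that large-object references point at. ("mydb" is a
# placeholder name.)
pg_dump -Fc -o mydb > mydb.dump

# Restore, large objects included:
pg_restore -d mydb mydb.dump
```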

- Ericson Smith
eric(at)did-it(dot)com

On Mon, 2003-05-12 at 12:04, scott.marlowe wrote:
> On 12 May 2003, Doug McNaught wrote:
>
> > "scott.marlowe" <scott(dot)marlowe(at)ihs(dot)com> writes:
> >
> > > The advantage to storing them in bytea or text with base64 is that
> > > pg_dump backs up your whole database.
> >
> > It does with LOs too; you just have to use the -o option and either
> > the 'custom' or 'tar' format rather than straight SQL.
>
> Cool. I could have sworn that you had to back them up separately. Was that
> the case at one time?
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 6: Have you searched our list archives?
>
> http://archives.postgresql.org
--
Ericson Smith <eric(at)did-it(dot)com>
