From: Adam Kessel <adam(at)bostoncoop(dot)net>
To: pgsql-general(at)postgresql(dot)org
Subject: Caching Websites
Date: 2003-05-09 20:48:49
Message-ID: 20030509204849.GC8583@bostoncoop.net
Lists: pgsql-general
I'm writing a Python script that (among other things) caches websites.
Ultimately, the data is all stored in a string (pickled, possibly
zipped). (Lots of related stuff is stored in PostgreSQL tables.)
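Roughly what I mean, as a simplified sketch (the page structure here is
made up; the real script stores more):

```python
import pickle
import zlib

# A cached page as a plain Python structure (placeholder example data).
page = {"url": "http://example.com/", "body": "<html>...</html>"}

# Pickle it to a byte string, then compress it before storage.
blob = zlib.compress(pickle.dumps(page))

# Reversing the two steps recovers the original structure.
restored = pickle.loads(zlib.decompress(blob))
```
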
I am wondering whether it would be better to store each website as a
record in a table, or instead to have a table that links URLs to
filenames (with the file containing the pickled website). The sites will
of course vary greatly in size, but will typically be between 1k and
200k (I probably won't store anything bigger than that).
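To be concrete, here is a minimal sketch of the two alternatives I'm
weighing (the schemas and names are just placeholders; only the
filesystem half of the second option actually runs here, since it needs
no database):

```python
import os
import tempfile

# Option A: keep the blob in the database itself, e.g.:
#   CREATE TABLE cache (url text PRIMARY KEY, page bytea);
# and INSERT the compressed pickle as the bytea value.

# Option B: the table only maps URL -> filename, e.g.:
#   CREATE TABLE cache (url text PRIMARY KEY, filename text);
# and the blob itself lives in an ordinary file:
blob = b"pickled, zipped page data"
fd, filename = tempfile.mkstemp(suffix=".cache")
with os.fdopen(fd, "wb") as f:
    f.write(blob)

# Reading it back is just a file read keyed off the table lookup.
with open(filename, "rb") as f:
    round_trip = f.read()

os.unlink(filename)
```
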
This seems like a simple question, and I suspect there's an obvious
answer as to which storage method makes more sense; I just don't know
how to go about researching it. What would be the considerations for
choosing one method of data storage over the other?
Any suggestions for me?
--Adam