From: Ulrich Cech <ulrich-news(at)cech-privat(dot)de>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Large objects performance
Date: 2007-04-21 07:27:03
Message-ID: 4629BCC7.1090900@cech-privat.de
Lists: pgsql-performance
Hello Alexandre,
> We have an application meant to sign documents and store them
> somewhere.
I developed a relatively simple "file archive" with PostgreSQL (a web
application with JSF for the user interface). The main structure is one
table with some keyword fields and three blob fields (because exactly
three files belong to one record). I have to deal with millions of files
(95% are about 2-5 KB; 5% are larger than 1 MB).
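
In case it helps, here is a minimal sketch of what such a table and
insert could look like over JDBC, assuming bytea columns (my real
schema differs, and the table, column, file names and connection
settings here are made up for illustration):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class ArchiveDemo {
    public static void main(String[] args) throws Exception {
        // Connection settings are placeholders, not my real ones.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/archive", "archive", "secret")) {

            // One record per document set: a few searchable keyword
            // fields plus three bytea columns, one per file.
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE documents ("
                        + "id serial PRIMARY KEY, "
                        + "keyword1 text, keyword2 text, "
                        + "file1 bytea, file2 bytea, file3 bytea)");
            }

            // Store the three files that belong to one record.
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO documents "
                    + "(keyword1, keyword2, file1, file2, file3) "
                    + "VALUES (?, ?, ?, ?, ?)")) {
                ps.setString(1, "invoice");
                ps.setString(2, "2007-04");
                ps.setBytes(3, Files.readAllBytes(Paths.get("doc.pdf")));
                ps.setBytes(4, Files.readAllBytes(Paths.get("doc.sig")));
                ps.setBytes(5, Files.readAllBytes(Paths.get("doc.xml")));
                ps.executeUpdate();
            }
        }
    }
}

Keeping the files in bytea columns means everything lives in one
transactional store, which is exactly what avoids the file system
problems below.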
The great advantage is that I don't have to "communicate" with the file
system (try to open a directory with 300,000 files on a Windows
system... it's horrible, even on the command line).
The database is now 12 GB, but searches through the web interface take
at most 5 seconds (most searches are faster). The one disadvantage is
the backup (I use pg_dump once a week, which takes about 10 hours), but
for now this is acceptable to me. I also want to look at Slony, or port
everything to a Linux machine.
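
Reading a record back is just an ordinary SELECT, since bytea columns
come back as plain byte arrays. Again only a sketch, with the same
made-up names as above:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ArchiveSearchDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/archive", "archive", "secret");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT file1 FROM documents WHERE keyword1 = ?")) {
            ps.setString(1, "invoice");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // bytea is returned as a plain byte array
                    Files.write(Paths.get("out.pdf"), rs.getBytes("file1"));
                }
            }
        }
    }
}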
Ulrich