From: David Garamond <lists(at)zara(dot)6(dot)isreserved(dot)com>
To: Greg Stark <gsstark(at)mit(dot)edu>, pgsql-general(at)postgresql(dot)org
Subject: Re: how many record versions
Date: 2004-05-24 15:40:10
Message-ID: 40B2175A.4070909@zara.6.isreserved.com
Lists: pgsql-general
Greg Stark wrote:
> Another option is simply logging this data to a text file. Or multiple text
> files one per server. Then you can load the text files with batch loads
> offline. This avoids slowing down your servers handling the transactions in
> the critical path. But it's yet more complex with more points for failure.

Yes, this is what we've been doing recently. We write to a set of text
files and there's a process to commit to MySQL every 2-3 minutes (and if
the commit fails, we write to another text file to avoid the data being
lost). It works, but I keep thinking how ugly the whole thing is :-)
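
For illustration only, here is a minimal sketch of that spool-and-batch-load
loop. It assumes things not in the original mail: Python with psycopg2
against PostgreSQL (rather than MySQL), a tab-separated spool file, and a
hypothetical target table named "hits".

    #!/usr/bin/env python
    """Sketch: rotate a spool file, bulk-load it, park the batch on failure."""
    import os
    import shutil
    import psycopg2

    SPOOL = "/var/spool/app/hits.log"      # front-end servers append records here
    FAILED = "/var/spool/app/hits.failed"  # fallback file if the load fails
    DSN = "dbname=stats user=stats"        # hypothetical connection string

    def load_batch():
        if not os.path.exists(SPOOL) or os.path.getsize(SPOOL) == 0:
            return
        # Rotate the spool first so writers can keep appending while we load.
        batch = SPOOL + ".loading"
        os.rename(SPOOL, batch)
        try:
            conn = psycopg2.connect(DSN)
            try:
                with conn.cursor() as cur, open(batch) as f:
                    # COPY is the fast bulk path; one transaction per batch.
                    cur.copy_from(f, "hits", sep="\t")
                conn.commit()
            finally:
                conn.close()
            os.remove(batch)
        except Exception:
            # Commit failed: append the batch to a fallback file so the
            # data is not lost and can be re-loaded later.
            with open(FAILED, "a") as dst, open(batch) as src:
                shutil.copyfileobj(src, dst)
            os.remove(batch)

    if __name__ == "__main__":
        load_batch()   # e.g. run from cron every 2-3 minutes

Run from cron, the rotate-then-load step keeps the writers and the loader
off each other's toes, and the fallback file plays the role of the "other
text file" mentioned above.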
--
dave