From: Csaba Nagy <nagy(at)ecircle-ag(dot)com>
To: "Jim C. Nasby" <jnasby(at)pervasive(dot)com>
Cc: Shelby Cain <alyandon(at)yahoo(dot)com>, SCassidy(at)overlandstorage(dot)com, Postgres general mailing list <pgsql-general(at)postgresql(dot)org>, pgsql-general-owner(at)postgresql(dot)org
Subject: Re: allow LIMIT in UPDATE and DELETE
Date: 2006-05-23 08:30:35
Message-ID: 1148373035.3114.17.camel@coppola.muc.ecircle.de
Lists: pgsql-general
On Tue, 2006-05-23 at 00:04, Jim C. Nasby wrote:
> I can't imagine how bad this would be if the database actually had
> hour-long reports that had to run... and luckily the system is quiet at
> night when pg_dump runs.
BTW, pg_dump is the only thing that still stands in the way of my CLUSTER
strategy against long-running transactions, because it locks all dumped
tables (in shared mode, but that's enough) at the beginning. It would be
nice to have an "exclude-table" option for it. I've actually started
working on a patch to allow that; I'll make it just good enough for my
purpose (very poor C skills here). Would that be interesting for others?
> > Regardless, this type of queue problem can also be tackled by having your data layer persisting the input from the web in memory (which maintains a low perceived response time to the client) and posting to the table as fast as the database allows.
>
> Uh, and just what happens when your web front-end crashes then??
Well, actually we do have some processes where we use a buffer in the
web layer which collects ~1000 entries (or 5 minutes' worth, whichever
comes first) in memory and then flushes them to the DB. But the
processing still happens in the DB: we have a sizable cluster of web
machines, and it is still more efficient to handle the processing
centrally than on each machine separately. So we still insert a batch of
raw entries and bulk-process them in the DB using stored procedures.
If a web machine crashes, we lose some entries. But in that case we
have bigger problems anyway...
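The size-or-age flush rule above can be sketched roughly like this (a minimal illustration, not our actual code; the class name, thresholds, and flush callback are all assumptions for the example):

```python
import time

class BatchBuffer:
    """Collects entries and flushes them when either a size threshold
    (~1000 entries) or an age threshold (~5 minutes) is reached,
    whichever comes first."""

    def __init__(self, flush, max_size=1000, max_age_secs=300):
        self.flush = flush          # called with the list of buffered entries
        self.max_size = max_size
        self.max_age = max_age_secs
        self.entries = []
        self.first_at = None        # timestamp of the oldest buffered entry

    def add(self, entry, now=None):
        now = time.monotonic() if now is None else now
        if not self.entries:
            self.first_at = now
        self.entries.append(entry)
        # Flush on whichever threshold trips first.
        if len(self.entries) >= self.max_size or now - self.first_at >= self.max_age:
            batch, self.entries = self.entries, []
            self.flush(batch)

# Small thresholds just to show the behaviour:
batches = []
buf = BatchBuffer(batches.append, max_size=3, max_age_secs=300)
for i in range(7):
    buf.add(i, now=0.0)
# batches → [[0, 1, 2], [3, 4, 5]]; entry 6 is still buffered
```

In the real setup the flush callback would do the batch INSERT of raw entries, and the in-DB stored procedures would take it from there.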
We do, however, have a job table where the whole cluster puts tasks, and
any machine with free resources takes some to process. We first used JMS
for this purpose, but it was just not flexible enough to accommodate our
constraints and rules for load distribution across the cluster. This is
the really bitchy queue table.
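The claim pattern on that job table looks roughly like the sketch below, with a lock standing in for the row locking a real Postgres queue table would need (the names and the `limit` parameter are illustrative assumptions, not our schema):

```python
import threading

class JobTable:
    """Sketch of a shared job table: producers enqueue tasks, and a
    worker with free resources claims a limited batch under a lock
    (in Postgres this would be row locking on the queue table)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._pending = []

    def put(self, task):
        with self._lock:
            self._pending.append(task)

    def claim(self, worker, limit):
        # "The machine which has free resources takes some to process":
        # atomically remove up to `limit` pending tasks.
        with self._lock:
            taken, self._pending = self._pending[:limit], self._pending[limit:]
        return [(worker, t) for t in taken]

jobs = JobTable()
for t in ("send-a", "send-b", "send-c"):
    jobs.put(t)
claimed = jobs.claim("web1", limit=2)
# claimed → [("web1", "send-a"), ("web1", "send-b")]; "send-c" remains pending
```

The flexibility JMS lacked lives in `claim`: that is where per-machine capacity and load-distribution rules can be applied when deciding how many tasks (and which ones) a worker takes.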
Cheers,
Csaba.