| From: | Hannu Krosing <hannu(at)tm(dot)ee> |
|---|---|
| To: | Josh Berkus <josh(at)agliodbs(dot)com> |
| Cc: | Tambet Matiisen <t(dot)matiisen(at)aprote(dot)ee>, pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: One tuple per transaction |
| Date: | 2005-03-17 21:27:25 |
| Message-ID: | 1111094845.6281.1.camel@fuji.krosing.net |
| Lists: | pgsql-performance |
On Sat, 2005-03-12 at 14:05 -0800, Josh Berkus wrote:
> Tambet,
>
> > In one of our applications we have a database function which
> > recalculates COGS (cost of goods sold) for a certain period. This
> > involves deleting a bunch of rows from one table, inserting them again
> > in the correct order, and updating them one by one (sometimes one row
> > twice) to reflect the current state. The problem is that this generates
> > an enormous number of dead tuples in that table.
>
> Sounds like you have an application design problem ... how about
> rewriting your function so it's a little more sensible?
Also, you could use a temp table for the intermediate steps. That would
at least save the WAL traffic, since writes to temporary tables are not
WAL-logged.
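A minimal sketch of that approach (the `cogs` table, its `period` column, and the recalculation step are all hypothetical stand-ins for the original poster's schema): do the churny delete/insert/update work against a temp table, then replace the affected rows in the permanent table in a single pass, so the permanent table accumulates only one dead tuple per row instead of several.

```sql
BEGIN;

-- ON COMMIT DROP cleans the temp table up automatically at commit.
-- Writes here are not WAL-logged, so repeated updates are cheap.
CREATE TEMP TABLE cogs_work ON COMMIT DROP AS
    SELECT * FROM cogs WHERE period = '2005-02';

-- ... run the row-by-row recalculation against cogs_work here,
-- however many deletes/inserts/updates it takes ...

-- Apply the final result to the real table in one delete + one insert.
DELETE FROM cogs WHERE period = '2005-02';
INSERT INTO cogs SELECT * FROM cogs_work;

COMMIT;
```

The permanent table still sees one delete and one insert per row, so a VACUUM of that period's rows is eventually needed, but all the intermediate versions stay in the temp table.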
--
Hannu Krosing <hannu(at)tm(dot)ee>