From: Nicolai Petri <nicolai(at)catpipe(dot)net>
To: pgsql-hackers(at)postgresql(dot)org
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PFC <lists(at)peufeu(dot)com>
Subject: Re: Faster Updates
Date: 2006-06-03 19:05:02
Message-ID: 200606032105.03321.nicolai@catpipe.net
Lists: pgsql-hackers
On Saturday 03 June 2006 17:27, Tom Lane wrote:
> PFC <lists(at)peufeu(dot)com> writes:
> > [snip - complicated update logic proposal]
> > What do you think ?
>
> Sounds enormously complicated and of very doubtful net win --- you're
>
> [snip - ... bad idea reasoning] :)
What if every backend, while processing a transaction, collected a list of
touched records - probably with a maximum number of entries (a GUC) per
transaction. When the transaction completes, the list of tuples is sent to
pg_autovacuum, or possibly a new process, which selectively visits only those
tuples. Of course it should have some logic attached so that we don't visit
the tuples for vacuum unless we are quite sure no running transaction would
block adding the blocks to the FSM. We might even be able to queue up the
blocks until a later time (GUCs: queue-max-time + queue-size-limit) if we
cannot determine that it is safe to add them to the FSM right now.
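To make the idea concrete, here is a minimal sketch of what the per-backend
tracking could look like. All names here (TouchedList, MAX_TRACKED_TUPLES,
the stubbed visibility check in touched_list_flush) are hypothetical
illustrations, not actual PostgreSQL symbols:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the proposed per-transaction limit GUC. */
#define MAX_TRACKED_TUPLES 4

typedef struct { unsigned block; unsigned offset; } TupleId;

typedef struct {
    TupleId items[MAX_TRACKED_TUPLES];
    int     count;
    bool    overflowed;     /* too many updates: fall back to plain vacuum */
} TouchedList;

static void
touched_list_add(TouchedList *list, unsigned block, unsigned offset)
{
    if (list->overflowed)
        return;
    if (list->count >= MAX_TRACKED_TUPLES)
    {
        /* Hit the GUC limit: drop the list, let normal vacuum handle it. */
        list->overflowed = true;
        list->count = 0;
        return;
    }
    list->items[list->count].block = block;
    list->items[list->count].offset = offset;
    list->count++;
}

/*
 * At commit, hand the list to the reaper process only if it stayed within
 * the limit and no running transaction could still see the old tuple
 * versions (a real implementation would do a proper visibility check here;
 * the xid comparison below is a crude stand-in).  Returns the number of
 * tuples handed over, 0 if the list was dropped or deferred.
 */
static int
touched_list_flush(TouchedList *list, unsigned oldest_running_xid,
                   unsigned my_xid)
{
    int sent;

    if (list->overflowed)
        return 0;
    if (my_xid >= oldest_running_xid)
        return 0;               /* not provably safe: defer/queue instead */
    sent = list->count;
    list->count = 0;
    return sent;
}
```

The overflow flag is the important design point: once a transaction touches
more tuples than the limit, tracking them individually stops paying off, so
the backend simply gives up and leaves the work to the ordinary vacuum path.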
I guess this has probably been suggested before, and there is probably a
reason why it cannot be done or wouldn't be effective. But it could be a big
win for common workloads such as webpages. It would be troublesome on systems
with long-running transactions, where it might as well just be disabled.
Best regards,
Nicolai Petri