From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: stange(at)rentec(dot)com
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: understanding the interaction with delete/select/vacuum
Date: 2005-08-29 19:52:13
Message-ID: 12520.1125345133@sss.pgh.pa.us
Lists: pgsql-novice
Alan Stange <stange(at)rentec(dot)com> writes:
> I have a long running process which does a 'SELECT ID FROM T'. The
> results are being streamed to the client using a fetch size limit. This
> process will take 26 hours to run. It turns out that all the "C" and
> "P" rows are going to be deleted when the SELECT gets to them.
> Several hours into this process, after the "C" rows have been deleted in
> a separate transaction but we haven't yet gotten to the "P" rows, a
> vacuum is begun on table T.
> What happens?
VACUUM can't remove any rows that are still potentially visible to any
open transaction ... so those rows will stay. It's best to avoid having
single transactions that take 26 hours to run --- there are a lot of
other inefficiencies that will show up in such a situation. Can you
break the long-running process into shorter transactions?
regards, tom lane
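To illustrate the suggestion of breaking the scan into shorter transactions, here is a minimal sketch assuming T has an indexed id column; the table and column names, the batch size, and the :last_seen_id placeholder are illustrative, not taken from this thread:

    -- Each batch runs in its own short transaction, so its snapshot is
    -- released at COMMIT and VACUUM can reclaim rows deleted while the
    -- overall job is still in progress.
    BEGIN;
    SELECT id FROM t WHERE id > :last_seen_id ORDER BY id LIMIT 10000;
    COMMIT;
    -- The client records the largest id returned as :last_seen_id and
    -- repeats until the SELECT returns no rows.

With a single 26-hour transaction, by contrast, every row deleted after the SELECT began remains potentially visible to it and cannot be removed until that transaction ends.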
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Alan Stange | 2005-08-29 20:09:51 | Re: understanding the interaction with delete/select/vacuum |
| Previous Message | Oren Mazor | 2005-08-29 19:18:48 | Re: understanding the interaction with delete/select/vacuum |