From: | "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com> |
---|---|
To: | "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | jay <jackem(dot)mojx(at)alibaba-inc(dot)com>, "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com>, pgsql-performance(at)postgresql(dot)org |
Subject: | Re: [PERFORM] 答复: [PERFORM] Postgresql update op is very very slow |
Date: | 2008-06-26 16:02:31 |
Message-ID: | dcc563d10806260902o581ec3afud773867c1d0d2c69@mail.gmail.com |
Lists: pgsql-performance
2008/6/26 Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>:
> "jay" <jackem(dot)mojx(at)alibaba-inc(dot)com> writes:
>> I know the problem: there are about 35 million rows, which take about
>> 12GB of disk space, and checkpoint_segments is set to 64. The update
>> runs in a single transaction, which fills the checkpoint segments
>> quickly and forces frequent checkpoints. Checkpoints cost a lot of
>> resources, so the update gets slower and slower, and the bgwriter
>> won't write the pages because the transaction hasn't committed yet.
>> Creating a new table may be a quick solution, but it isn't appropriate
>> in some cases.
>> If we could commit every 1000 rows per round, it might resolve the
>> problem.
>
> No, that's utterly unrelated. Transaction boundaries have nothing to do
> with checkpoints.
True. But if you update 10000 rows at a time and vacuum between batches,
you can keep the bloat to something reasonable.
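
A minimal sketch of what I mean, assuming a hypothetical table "foo" with a
bigint primary key "id" (the names and batch size are made up); each UPDATE
is its own transaction, and a VACUUM between batches lets the dead row
versions be reused instead of bloating the table:

    -- batch 1
    UPDATE foo SET col = 0
     WHERE id BETWEEN 1 AND 10000
       AND col <> 0;
    VACUUM foo;

    -- batch 2
    UPDATE foo SET col = 0
     WHERE id BETWEEN 10001 AND 20000
       AND col <> 0;
    VACUUM foo;

    -- ...and so on across the key range, run in autocommit mode
    -- (VACUUM can't run inside a transaction block).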
On another note, I haven't seen anyone suggest adding the appropriate
WHERE clause to keep from updating rows that already match. That's cheap
compared to updating the whole table, even if a large chunk of the rows
aren't a match, i.e.:
... set col = 0 where col <> 0;
That should be the first thing you reach for in this situation, if it can help.
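
Spelled out against the same hypothetical table "foo", the full statement
would look something like:

    UPDATE foo SET col = 0 WHERE col <> 0;

Rows where col is already 0 are never rewritten, so no new row versions are
created for them. Note that col <> 0 also skips NULLs; if NULLs should
become 0 as well, the condition needs to be "col IS DISTINCT FROM 0".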