Re: Re: The fastest way to update thousands of rows in moderately sized table

From: 林士博 <lin(at)repica(dot)co(dot)jp>
To: twoflower <standa(dot)kurik(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Re: The fastest way to update thousands of rows in moderately sized table
Date: 2015-07-24 08:04:10
Message-ID: CACudzGiMtb5xhNrD9H_c3dVMdrdNcEbjZdksoO9KtdF0XCav1g@mail.gmail.com
Lists: pgsql-general

OK.
In your case, I cannot see any reason why
using a temp table to do a join-update would be faster than updating directly.

And from the execution plan, the index scan takes very little time;
most of the time is spent doing the insert/delete.

As you mentioned, fk_assignment is updated frequently,
and since disabling indexes before updating is not an option you can choose,
try setting the fillfactor of this table to a smaller value.
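For example, something like the following (a sketch only; the table name "TRANSLATION" is taken from your plan, and 70 is just an illustrative value — pick one based on your update rate):

```sql
-- Leave ~30% free space in each heap page so updated row versions
-- can stay on the same page (enables HOT updates).
ALTER TABLE "TRANSLATION" SET (fillfactor = 70);

-- The new fillfactor only applies to newly written pages;
-- rewrite the table once so existing pages get the free space too.
VACUUM FULL "TRANSLATION";
```

Note that VACUUM FULL takes an exclusive lock and rewrites the whole table, so run it in a maintenance window.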

And maybe you can also check how long it takes to insert 8920 rows into this table.
If it is far less than 35 s,
you can consider doing the update this way:
1. create a temp table with all columns of "TRANSLATION",
like
create unlogged table "temp_table" as
select id, ..... , 1000 as fk_job
from "TRANSLATION"
where fk_job = 1000;
2. delete the rows from the original table.
3. insert the rows from the temp table into the original table.
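Putting the three steps together, roughly like this (a sketch, assuming fk_job = 1000, that the column list in step 1 matches the table's column order, and that no concurrent writers touch these rows in between):

```sql
BEGIN;

-- 1. materialize the new row versions (unlogged: no WAL overhead)
CREATE UNLOGGED TABLE temp_table AS
    SELECT id, /* ... the remaining columns ..., */ 1000 AS fk_job
    FROM "TRANSLATION"
    WHERE fk_job = 1000;

-- 2. remove the old versions
DELETE FROM "TRANSLATION" WHERE fk_job = 1000;

-- 3. put the new versions back
INSERT INTO "TRANSLATION" SELECT * FROM temp_table;

DROP TABLE temp_table;
COMMIT;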

Finally, maybe you can check whether PostgreSQL can start inserting from the
position of the HWM (high-water mark).

2015-07-24 15:58 GMT+09:00 twoflower <standa(dot)kurik(at)gmail(dot)com>:

> 林士博 wrote
> Can you post execution plan of the original update sql. EXPLAIN (ANALYZE
> ON, BUFFERS ON) update "TRANSLATION" set fk_assignment = … where fk_job = 1000;
>
> Here it is:
>
> Update on "TRANSLATION" (cost=0.56..9645.13 rows=3113 width=391) (actual
> time=35091.036..35091.036 rows=0 loops=1)
> Buffers: shared hit=74842343 read=7242 dirtied=7513
> -> Index Scan using "TRANSLATION_idx_composite_job_last_revision" on
> "TRANSLATION" (cost=0.56..9645.13 rows=3113 width=391) (actual
> time=0.042..24.147 rows=8920 loops=1)
> Index Cond: (fk_job = 59004)
> Buffers: shared hit=626
>
> Planning time: 0.362 ms
> Execution time: 35091.192 ms
>

--
─repica group──────────────────
▼Points × e-money × email: all the features you need for in-store promotion!
【point+plus】http://www.repica.jp/pointplus/

▼Supports follow-up emails and external integrations!
【mail solution】http://ms.repica.jp/

▼No. 1 market share for 9 consecutive years: personal-information-leak prevention software
【P-Pointer】http://ppointer.jp/

▼Deployable by the month! AR video playback app
【marcs】http://www.arappli.com/service/marcs/

▼Creating the future while creating IT business
【VARCHAR】http://varchar.co.jp/
───────────────────────────
