From: Dilip kumar <dilip(dot)kumar(at)huawei(dot)com>
To: Jan Lentfer <Jan(dot)Lentfer(at)web(dot)de>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Cc: Euler Taveira <euler(at)timbira(dot)com(dot)br>
Subject: Re: TODO : Allow parallel cores to be used by vacuumdb [ WIP ]
Date: 2013-11-08 09:20:10
Message-ID: 4205E661176A124FAF891E0A6BA913526592482E@SZXEML507-MBS.china.huawei.com
Lists: pgsql-hackers
On 08 November 2013 13:38, Jan Lentfer wrote:
> For this use case, would it make sense to queue work (tables) in order of their size, starting with the largest one?
> For the case where you have tables of varying size, this would reduce overall processing time, since it prevents large (read: long-processing-time) tables from being processed in the last step. Processing the large tables first, then backfilling "processing slots/jobs" with smaller tables as they free up, would save overall execution time.
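The effect of this largest-first queuing can be sketched with a small simulation (a hypothetical illustration only, not code from the attached patch; table sizes and job counts are made up, and processing time is assumed proportional to table size):

```python
import heapq

def makespan(table_sizes, jobs):
    """Simulate parallel vacuum dispatch: each table in queue order goes to
    whichever worker frees up first; returns the total wall-clock time."""
    workers = [0] * jobs              # per-worker finish times, kept as a min-heap
    heapq.heapify(workers)
    for size in table_sizes:
        finish = heapq.heappop(workers)        # earliest-free worker
        heapq.heappush(workers, finish + size)
    return max(workers)

tables = [100, 10, 10, 10, 10]        # one big table, several small ones

# Largest-first: the big table starts immediately; small ones backfill slots.
lpt = makespan(sorted(tables, reverse=True), jobs=2)   # -> 100
# Worst case: the big table is dequeued last and dominates the tail.
worst = makespan([10, 10, 10, 10, 100], jobs=2)        # -> 120
assert lpt < worst
```

With the big table queued first, the small tables overlap with it on the other slot; queued last, it runs alone after everything else has finished.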
Good point. I have made the change and attached the modified patch.
Regards,
Dilip
| Attachment | Content-Type | Size |
|---|---|---|
| vacuumdb_parallel_v2.patch | application/octet-stream | 35.4 KB |
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Etsuro Fujita | 2013-11-08 09:23:37 | Improve code in tidbitmap.c |
| Previous Message | Jan Lentfer | 2013-11-08 08:07:55 | Re: TODO : Allow parallel cores to be used by vacuumdb [ WIP ] |