From: Dilip Kumar <dilip(dot)kumar(at)huawei(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Magnus Hagander <magnus(at)hagander(dot)net>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Jan Lentfer <Jan(dot)Lentfer(at)web(dot)de>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, Sawada Masahiko <sawada(dot)mshk(at)gmail(dot)com>, Euler Taveira <euler(at)timbira(dot)com(dot)br>
Subject: Re: TODO : Allow parallel cores to be used by vacuumdb [ WIP ]
Date: 2014-12-15 10:48:24
Message-ID: 4205E661176A124FAF891E0A6BA9135266390868@szxeml509-mbs.china.huawei.com
Lists: pgsql-hackers
On December 2014 17:31, Amit Kapila wrote:
>I suggest rather than removing, edit the comment to indicate
>the idea behind code at that place.
Done.
>Okay, I think this part of code is somewhat similar to what
>is done in pg_dump/parallel.c with some differences related
>to handling of inAbort. One thing I have noticed here which
>could lead to a problem is that caller of select_loop() function
>assumes that return value is less than zero only if there is a
>cancel request which I think is wrong, because select system
>call could also return -1 in case of error. I am referring below
>code in above context:
+		if (i < 0)
+		{
+			/*
+			 * This can only happen if the user has sent a cancel request
+			 * using Ctrl+C. Cancel is handled by the 0th slot, so fetch
+			 * the error result.
+			 */
+			GetQueryResult(pSlot[0].connection, dbname, progname,
+						   completedb);
For the abort case I am now using a special error code; in any other case where select_loop returns a negative value we assert. This behavior is the same as in pg_dump.
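To illustrate the distinction being discussed, here is a minimal sketch (not the patch's actual code; the sentinel value, the in_abort flag name, and the loop shape are assumptions) of a select_loop that treats select() returning -1 as a cancel only when a cancel request was actually recorded, and otherwise surfaces it as a genuine error:

```c
#include <assert.h>
#include <errno.h>
#include <signal.h>
#include <sys/select.h>
#include <unistd.h>

/*
 * Hypothetical sentinel meaning "user requested cancel"; distinct from
 * select()'s own return values, so the caller can tell the cases apart.
 */
#define SELECT_CANCELED (-2)

/* Assumed to be set from a SIGINT handler when the user presses Ctrl+C. */
static volatile sig_atomic_t in_abort = 0;

/*
 * select() returning -1 is a cancel only when errno is EINTR *and* a
 * cancel was recorded; any other -1 is a real error (e.g. EBADF) and
 * must not be mistaken for a cancel.  A positive return is the usual
 * count of ready descriptors.
 */
static int
select_loop(int maxfd, fd_set *workerset)
{
	fd_set		saveset = *workerset;

	for (;;)
	{
		int			i;

		*workerset = saveset;
		i = select(maxfd + 1, workerset, NULL, NULL, NULL);

		if (i < 0 && errno == EINTR)
		{
			if (in_abort)
				return SELECT_CANCELED; /* Ctrl+C: caller fetches error result */
			continue;			/* unrelated signal: just retry */
		}
		return i;				/* ready count, or -1 on a real error */
	}
}
```

With this shape the caller can assert on plain -1 (an unexpected error) while handling SELECT_CANCELED as the one expected negative outcome, mirroring what pg_dump/parallel.c does.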
>Hmm, theoretically I think the new behaviour could lead to more I/O in
>certain cases compared to the existing behaviour. The reason for more I/O
>is that in the new behaviour, while doing Analyze for a particular table at
>different targets, in between it has Analyze of a different table as well,
>so the pages in shared buffers or OS cache for a particular table need to
>be reloaded again for a new target, whereas currently it does all stages
>of Analyze for a particular table in one go, which means that each stage
>of Analyze can benefit from the pages of the table loaded by the previous
>stage. If you agree, then we should try to avoid this change in the new
>behaviour.
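The two orderings under discussion can be sketched as two loop nestings (a hypothetical illustration; the function names and the stage count of three, matching vacuumdb's statistics-target stages, are assumptions, and the real code issues ANALYZE via libpq rather than a callback):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define NSTAGES 3				/* analyze-in-stages runs three passes */

/*
 * Table-major order: run all three stages for one table before moving to
 * the next.  Later stages can reuse the table's pages that the earlier
 * stage already pulled into shared buffers / the OS cache.
 */
static void
analyze_table_major(const char **tables, int ntables,
					void (*run) (int stage, const char *table))
{
	for (int t = 0; t < ntables; t++)
		for (int s = 0; s < NSTAGES; s++)
			run(s, tables[t]);
}

/*
 * Stage-major order: run stage 0 for every table, then stage 1, and so
 * on.  Between a table's stages, the other tables' pages can evict its
 * pages, forcing re-reads at the next stage -- the extra I/O described
 * above.
 */
static void
analyze_stage_major(const char **tables, int ntables,
					void (*run) (int stage, const char *table))
{
	for (int s = 0; s < NSTAGES; s++)
		for (int t = 0; t < ntables; t++)
			run(s, tables[t]);
}
```

Keeping the table-major nesting while still distributing whole tables across parallel connections preserves the cache locality of the existing behaviour.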
I will work on this comment and post the updated patch.
I will move this patch to the latest commitfest.
Regards,
Dilip
Attachment: vacuumdb_parallel_v20.patch (application/octet-stream, 27.6 KB)