From: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>
To: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Increasing parallel workers at runtime
Date: 2017-05-15 14:06:52
Message-ID: CAJrrPGdMs8wzKEWuWgcpjrmopb9rUfnW2UMdzkzMqUMEn3BZxg@mail.gmail.com
Lists: pgsql-hackers
In the current parallel implementation, if the planned number of workers
is not available at the start of query execution, the query runs with
whatever workers it could get until the query finishes.
The required number of workers may become available while the query is
still being processed, so how about increasing the parallel workers up to
the planned number, but only at the point where the main backend is about
to wait for the workers to send tuples, or is about to execute the plan
by itself?
The leader having to wait for workers to send tuples may simply be a
symptom of running with too few workers, so launching the additional
workers at that point may improve performance.
A POC patch for this is attached.
The patch still needs some adjustments to handle the case where the main
backend also participates in the scan instead of waiting for the workers
to finish, as the worker-increase logic shouldn't add any overhead in
that case.
Regards,
Hari Babu
Fujitsu Australia
Attachment: parallel_workers_increase_at_runtime.patch (application/octet-stream, 2.9 KB)