From: Paul Ramsey <pramsey(at)cleverelephant(dot)ca>
To: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
Cc: James Sewell <james(dot)sewell(at)lisasoft(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Parallel Aggregate
Date: 2016-03-14 19:56:14
Message-ID: CACowWR1CNsuCv4t=C-eRmtx3vH34fqpK8boCOOptO413Ar5eyQ@mail.gmail.com
Lists: pgsql-hackers
On Sun, Mar 13, 2016 at 7:31 PM, David Rowley
<david(dot)rowley(at)2ndquadrant(dot)com> wrote:
> On 14 March 2016 at 14:52, James Sewell <james(dot)sewell(at)lisasoft(dot)com> wrote:
>> One question - how is the upper limit of workers chosen?
>
> See create_parallel_paths() in allpaths.c. Basically the bigger the
> relation (in pages) the more workers will be allocated, up until
> max_parallel_degree.
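
(For anyone following along, the sizing heuristic reads roughly like the
standalone sketch below. It is my own simplification of what I understand
create_parallel_paths() to do; the 1000-page starting threshold, the
tripling step, and the helper name guess_parallel_degree are illustrative,
not the committed source.)

#include <limits.h>

/* Simplified sketch: pick a worker count from the relation size in pages,
 * capped by max_parallel_degree. Constants are assumptions, not the
 * actual values in allpaths.c. */
static int
guess_parallel_degree(unsigned int heap_pages, int max_parallel_degree)
{
    int parallel_threshold = 1000;   /* minimum pages before going parallel */
    int parallel_degree = 1;

    if (heap_pages < (unsigned int) parallel_threshold)
        return 0;                    /* relation too small: stay serial */

    /* Add one worker each time the relation is about 3x larger again. */
    while (heap_pages >= (unsigned int) parallel_threshold * 3)
    {
        parallel_degree++;
        parallel_threshold *= 3;
        if (parallel_threshold > INT_MAX / 3)
            break;                   /* guard against integer overflow */
    }

    return parallel_degree < max_parallel_degree
           ? parallel_degree
           : max_parallel_degree;
}
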
Does the cost of the aggregate function come into this calculation at
all? In PostGIS land, much smaller row counts can generate workloads
that are worth parallelizing (per-row worker time >> startup cost).
P