Re: CREATE TABLE with parallel workers, 10.0?

From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: pgsql-hackers(at)postgresql(dot)org, Robert Haas <robertmhaas(at)gmail(dot)com>, Joshua Chamberlain <josh(at)zephyri(dot)co>
Subject: Re: CREATE TABLE with parallel workers, 10.0?
Date: 2017-02-16 01:28:43
Message-ID: 20170216012843.GZ9812@tamriel.snowman.net
Lists: pgsql-hackers

Andres,

* Andres Freund (andres(at)anarazel(dot)de) wrote:
> On February 15, 2017 5:20:20 PM PST, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> >In many cases, I expect this would work just as well, if not better,
> >than trying to actually do writes in parallel.
>
> Why? IPCing tuples around is quite expensive. Or do you mean because it'll be more suitable because of the possible plans?

Because I've seen some serious problems with the relation extension lock
when multiple processes write into the same relation, including cases
where it was much faster to have each process write into its own table.
Admittedly, we've improved things there, so perhaps that isn't an issue
any longer. But we also don't yet really know what the locking
implementation looks like for having multiple parallel workers writing
into the same relation, so it may be that sending a few records back to
the leader is cheaper than working out the locking to allow parallel
workers to write to the same relation, or at least not any more
expensive.
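
To make that concrete, here's a rough sketch of the workaround pattern I
mean, with made-up table and column names (an illustration only, not
anything from a real workload): each session fills its own table, and the
pieces are then attached under an empty parent via inheritance rather
than being rewritten into one big relation.

    -- Each of, say, four sessions loads its own slice of the input into a
    -- private table, so none of them contend on a shared relation
    -- extension lock (table/column names here are hypothetical):
    CREATE TABLE target_part_1 AS
      SELECT * FROM source_data WHERE id % 4 = 0;
    -- sessions 2..4 do the same with "% 4 = 1", "% 4 = 2", "% 4 = 3"

    -- Afterwards, one session attaches the pieces under an empty parent,
    -- which avoids copying the data a second time:
    CREATE TABLE target (LIKE target_part_1);
    ALTER TABLE target_part_1 INHERIT target;
    ALTER TABLE target_part_2 INHERIT target;
    ALTER TABLE target_part_3 INHERIT target;
    ALTER TABLE target_part_4 INHERIT target;

Whether that sort of approach actually beats parallel workers writing
into one relation will, of course, depend on how bad the extension-lock
contention turns out to be in practice.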

Thanks!

Stephen
