From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Joachim Wieland <joe(at)mcknight(dot)de>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Andrew Dunstan <andrew(at)dunslane(dot)net>
Subject: Re: patch for parallel pg_dump
Date: 2012-03-29 10:33:38
Message-ID: CA+TgmoaBbtaiQLmjgDqy=9aJJOFyA6Ugt2BY-B5ds2BuZ_pr_A@mail.gmail.com
Lists: pgsql-hackers
On Wed, Mar 28, 2012 at 9:54 PM, Joachim Wieland <joe(at)mcknight(dot)de> wrote:
> On Wed, Mar 28, 2012 at 1:46 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> I'm wondering if we really need this much complexity around shutting
>> down workers. I'm not sure I understand why we need both a "hard" and
>> a "soft" method of shutting them down. At least on non-Windows
>> systems, it seems like it would be entirely sufficient to just send a
>> SIGTERM when you want them to die. They don't even need to catch it;
>> they can just die.
>
> At least on my Linux test system, even if all pg_dump processes are
> gone, the server happily continues sending data. When I strace an
> individual backend process, I see a lot of writes failing with
> "Broken pipe", but that doesn't stop it from just writing out the
> whole table to a closed file descriptor. This is a 9.0-latest server.
Wow, yuck. At least now I understand why you're doing it like that.
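For the record, the shutdown I was picturing on the pg_dump side was
nothing more complicated than this (a sketch with made-up names;
worker_pids/n_workers stand in for whatever bookkeeping the parent
already keeps, and none of this is the patch's actual code):

#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>

static pid_t worker_pids[64];   /* assumed parent-side bookkeeping */
static int   n_workers;

static void
shutdown_workers(void)
{
    int     i;

    /* default SIGTERM disposition: workers die without a handler */
    for (i = 0; i < n_workers; i++)
        kill(worker_pids[i], SIGTERM);

    /* reap them so we don't leave zombies behind */
    for (i = 0; i < n_workers; i++)
        waitpid(worker_pids[i], NULL, 0);
}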
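For the archives: I believe the reason the backend shrugs off all
those failed writes is that it ignores SIGPIPE, so a write to a dead
peer just fails with EPIPE instead of killing the process, and a
sender that doesn't check the return value keeps right on going. A
standalone demonstration of the OS behavior (not PostgreSQL code):

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    int     fds[2];
    char    buf[8192];
    int     i;

    signal(SIGPIPE, SIG_IGN);   /* as the backend does */
    if (pipe(fds) != 0)
        return 1;
    close(fds[0]);              /* reader goes away, like a dead worker */

    memset(buf, 'x', sizeof(buf));
    for (i = 0; i < 3; i++)
    {
        /* each write fails with EPIPE, but the process survives */
        if (write(fds[1], buf, sizeof(buf)) < 0)
            fprintf(stderr, "write %d: %s\n", i, strerror(errno));
    }
    return 0;
}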
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company