From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: pg_background (and more parallelism infrastructure patches) |
Date: | 2014-09-26 18:48:03 |
Message-ID: | CA+TgmoYz-PkawVnLLYES5MszPWthu=g8ynTDmVHUECh4Cns7Bg@mail.gmail.com |
Lists: | pgsql-hackers |
On Sat, Sep 20, 2014 at 3:03 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> Okay, but since there is no predictability about which GUC value the
> background worker will use (it can be the same as what the launching
> process had when it launched the background worker, or a different
> value if it was changed later via SIGHUP), it might be good to
> clarify this in the pg_background docs (which are not there currently,
> but I assume they will eventually be part of this patch).
OK, I will mention that in the documentation when I write it. I
didn't sweat that too much originally because I wasn't sure how much
churn there was going to be in the user-visible API, but so far
everybody seems happy with that, so maybe it's time to go document it.
It's a pretty simple API but, as you say, there are a few details
worth mentioning. I still need some review of the earlier patches in
the series before this really gets urgent, though: so far no one has
commented on #1, #2, #4, or #5, and I'm not entirely sure whether my
revised version of #3 passed muster.
> Keeping transaction control (Start/commit) outside the function
> execute_sql_string() could lead to EndCommand() message being
> sent before transaction end which could be a problem in case
> transaction commit fails due to any reason. exec_simple_query() takes
> care of the same by calling finish_xact_command() before reporting
> command-complete for last parse tree. It even has comment indicating
> that we should follow this protocol.
Fixed in the attached version.
> Won't CommandCounterIncrement() be required after every command, like
> we do in exec_simple_query()?
Fixed in the attached version.
> What's the need to reverse the order of execution for EndCommand()
> and PortalDrop()? Any error in PortalDrop() will lead to a wrong
> message being sent to the client.
Fixed in the attached version.
> What is the reason for not logging statements executed via
> pg_background, even though it allows reporting the SQL statement
> to various monitoring tools by setting debug_query_string?
I wasn't really sure whether core GUCs should affect the behavior of a
contrib module, and I wasn't excited about duplicating the code.
> Isn't it better to add a comment explaining why execute_sql_string()
> uses binary format for the result?
Done in the attached version.
> Sure, please take a call based on what you feel is right here; I
> mentioned it because I felt it might be a little bit easier for other
> people to understand that code.
I played around with this a bit but it didn't seem like it worked out
to a win. There were a bunch of things that had to be passed down
into that function and it didn't seem like it was really reducing
complexity. What I think makes sense is to keep an eye on the
complexity of the handling for each individual message type and move
any handlers that get complex to their own functions.
> There can be certain scenarios where the user might like to invoke
> this again. Assume the user calls
> pg_background_launch('select count(*) from t1') and this statement's
> execution via the background worker is going to take a very long time
> before it returns anything. After some time the user tries to retrieve
> data via pg_background_result(), but the call doesn't return as it is
> waiting for results, so the user presses cancel; on trying again after
> some time, he won't get any data. I think this behaviour is a bit
> annoying.
Yep. I don't have a better solution at the moment, but there may be one.
> To avoid making the user wait, pg_background_result() could take an
> additional parameter where the user can specify whether to WAIT in
> case results are not available.
That gets complicated. Until any results are available? Until all
results are available? What if we try to read from the queue to find
out if results are available, and the first message in the queue is
long enough that it wraps the queue, so that we have to block and wait
for the background worker to send more data before we can continue?
> Why is FATAL inside the background worker not propagated at the same
> level by the launcher process?
> If a PANIC in the background worker can kill other backends and
> restart the server, then ideally a FATAL in the background worker
> should also be treated at the same level by the client backend.
It was initially like that, but then I realized it was silly. If the
background worker hits some error that forces its session to
terminate, there is no need to terminate the user's session too - and
in fact doing so is really annoying, as I rapidly found out while
experimenting with this. Generally a FATAL is something we do because
backend-local state is corrupted to a degree that makes it impractical
to continue, but the fact that that other backend is messed up does
not mean our backend is messed up too.
> Any error ("unable to map dynamic shared memory segment") before
> pq_redirect_to_shm_mq() will not reach launcher. Launcher client will
> get "ERROR: lost connection to worker process with PID 4020".
>
> I think it is very difficult for the user to know why such an error
> has occurred and what he can do to proceed. I am not sure if there is
> any sensible way to report such an error, but OTOH it seems we should
> provide the client with some information about what has happened.
I don't think this is really a fixable problem. There's no way to
communicate an error that happens before you establish communications.
The same problem exists for user connections, but it's not serious in
practice because it's rare. I expect the same to be true here.
> postgres=# select * from pg_background_result(4672) as (result TEXT);
> WARNING: unknown message type: G (6 bytes)
> ERROR: there is no client connection
> CONTEXT: COPY t1, line 1: ""
>
> Something similar to what is returned for transaction statements
> ("transaction control statements are not allowed in pg_background")
> would be better.
Fixed in the attached version.
> If you have to discard the results of all statements other than the
> last, then why allow multiple statements in the first place?
You can run statements with side effects, or you can run multiple
utility commands.
> In a case like the one below, how will the user identify whether the
> result is for the first statement or the second?
By reading the documentation that I will write.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachment | Content-Type | Size |
---|---|---|
0006-pg_background-Run-commands-in-a-background-worker-an.patch | text/x-patch | 31.1 KB |