From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Riaan Stander <rstander(at)exa(dot)co(dot)za>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Bulk persistence strategy
Date: 2017-05-21 22:37:14
Message-ID: 30684.1495406234@sss.pgh.pa.us
Lists: pgsql-performance
Riaan Stander <rstander(at)exa(dot)co(dot)za> writes:
> The intended use is use-once. The reason is that the statements might
> differ per call, especially when we start doing updates. The ideal would
> be to just issue the sql statements, but I was trying to cut down on
> network calls. To batch them together and get output from one query as
> input for the others (declare variables), I have to wrap them in a
> function in Postgres. Or am I missing something? In SQL Server TSQL I
> could declare variables in any statement as required.
Hm, well, feeding data forward to the next query without a network
round trip is a valid concern.
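To illustrate the idea (just a sketch, with invented table and column
names), a single plpgsql function can capture the result of one statement
with RETURNING ... INTO and feed it to the next, so the whole batch runs
in one round trip:

    CREATE FUNCTION create_order(p_customer int, p_product int, p_qty int)
    RETURNS int LANGUAGE plpgsql AS $$
    DECLARE
        v_order_id int;
    BEGIN
        -- first statement produces a value ...
        INSERT INTO orders (customer_id)
        VALUES (p_customer)
        RETURNING order_id INTO v_order_id;

        -- ... which the second statement consumes, no extra round trip
        INSERT INTO order_lines (order_id, product_id, qty)
        VALUES (v_order_id, p_product, p_qty);

        RETURN v_order_id;
    END;
    $$;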
How stylized are these commands? Have you considered pushing the
generation logic into the function, so that you just have one (or
a few) persistent functions, and the variability slack is taken
up through EXECUTE'd strings? That'd likely be significantly
more efficient than one-use functions. Even disregarding the
pg_proc update traffic, plpgsql isn't going to shine in that usage
because it's optimized for repeated execution of functions.
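As a rough sketch of that shape (table and column names invented), one
persistent function can take the variable parts as arguments and build
the statement text at run time:

    CREATE FUNCTION set_payload(p_table text, p_id int, p_payload jsonb)
    RETURNS void LANGUAGE plpgsql AS $$
    BEGIN
        -- the statement text varies per call, but the function is created once;
        -- format() with %I quotes the identifier, USING supplies the values
        EXECUTE format('UPDATE %I SET payload = $1 WHERE id = $2', p_table)
        USING p_payload, p_id;
    END;
    $$;

The per-call variability lives entirely in the EXECUTE'd string, so
nothing has to be created or dropped per request.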
regards, tom lane