Re: Bulk persistence strategy

From: Riaan Stander <rstander(at)exa(dot)co(dot)za>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Bulk persistence strategy
Date: 2017-05-21 19:29:53
Message-ID: 0e47ae52-e80a-fd8e-021c-7c8cc0495b26@exa.co.za
Lists: pgsql-performance


> Riaan Stander <rstander(at)exa(dot)co(dot)za> writes:
>> I've come up with generating functions on the go, but I'm concerned about
>> the performance impact of this. I first wanted to use an anonymous code
>> block, but then I cannot do parameter binding from npgsql.
>> ...
>> Is there a better way I'm missing and is "temp" function creation in
>> Postgres a big performance concern, especially if a server is under load?
> The function itself is only one pg_proc row, but if you're expecting
> to do this thousands of times a minute you might have to adjust autovacuum
> settings to avoid bad bloat in pg_proc.
>
> If you're intending that these functions be use-once, it's fairly unclear
> to me why you bother, as opposed to just issuing the underlying SQL
> statements.
>
> regards, tom lane

The intended use is use-once. The reason is that the statements might
differ per call, especially once we start doing updates. The ideal would
be to just issue the SQL statements directly, but I was trying to cut
down on network round trips. To batch them together and to feed the
output of one query into the others (i.e. declare variables), I have to
wrap them in a function in Postgres. Or am I missing something? In SQL
Server T-SQL I could declare variables in any batch as required.
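For illustration, a minimal sketch of what such a generated, use-once
function might look like (the table, column, and parameter names here are
hypothetical, not from the actual application; the function is created in
pg_temp so it disappears with the session, though it still adds a pg_proc
row per Tom's note above):

    -- Illustrative only: a throwaway function generated per call so that an
    -- intermediate result (the new order id) can be held in a variable and
    -- reused by a second statement, all in one network round trip.
    CREATE FUNCTION pg_temp.save_order(p_customer_id bigint, p_total numeric)
    RETURNS bigint
    LANGUAGE plpgsql AS
    $$
    DECLARE
        v_order_id bigint;
    BEGIN
        INSERT INTO orders (customer_id, total)
        VALUES (p_customer_id, p_total)
        RETURNING id INTO v_order_id;

        INSERT INTO order_audit (order_id, note)
        VALUES (v_order_id, 'created');

        RETURN v_order_id;
    END;
    $$;

    -- Called with parameters bound from Npgsql, e.g.:
    SELECT pg_temp.save_order(@customer_id, @total);

This is roughly the pattern I mean when I say "wrap them in a function":
the DECLARE section stands in for what T-SQL lets me do inline in a batch.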
