Re: Better performance no-throw conversion?

From: Michael Lewis <mlewis(at)entrata(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "ldh(at)laurent-hasson(dot)com" <ldh(at)laurent-hasson(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Better performance no-throw conversion?
Date: 2021-09-08 17:39:47
Message-ID: CAHOFxGo1NgadCj19irBQJfb_0D5MidMTvVkp6TfX6JWxHEzhRA@mail.gmail.com
Lists: pgsql-performance

On Wed, Sep 8, 2021 at 11:33 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:

> "ldh(at)laurent-hasson(dot)com" <ldh(at)laurent-hasson(dot)com> writes:
> > Some databases such as SQLServer (try_cast) or BigQuery (safe.cast)
> > offer no-throw conversion.
> > ...
> > I couldn't find a reference to such capabilities in Postgres and
> > wondered if I missed it, and if not, is there any plan to add such a
> > feature?
>
> There is not anybody working on that AFAIK. It seems like it'd have
> to be done on a case-by-case basis, which makes it awfully tedious.
>

Do you just mean a separate function for each data type? I use similar
functions (without a default value though) to ensure that values extracted
from jsonb keys can be used as needed. Sanitizing the data on input is a
long-term goal, but not possible immediately.
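
Something like this is what I have in mind (a minimal sketch, with an
illustrative name rather than my actual function):

CREATE OR REPLACE FUNCTION try_cast_int(p_text text)
RETURNS integer
LANGUAGE plpgsql
IMMUTABLE
AS $$
BEGIN
    -- Attempt the ordinary cast.
    RETURN p_text::integer;
EXCEPTION WHEN others THEN
    -- Swallow the cast failure and hand back NULL instead of raising.
    RETURN NULL;
END;
$$;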

Is there any documentation on the overhead of exception blocks? That is,
if such a cast function is used on a dataset of 1 million rows, what
overhead does the exception block incur? Is the cost paid only when an
exception is actually raised, or on every row?
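
If it helps frame the question, here is the sort of rough comparison I
could run (hypothetical, assuming the try_cast_int sketch above and
\timing enabled in psql):

-- every value casts cleanly, so the handler never fires
SELECT count(try_cast_int(g::text))
FROM generate_series(1, 1000000) AS g;

-- the cast fails and the handler fires on every row
SELECT count(try_cast_int('x' || g::text))
FROM generate_series(1, 1000000) AS g;

Comparing those two, and both against a version of the function without
the EXCEPTION clause, would separate any per-call cost from the cost of
actually raising and catching an error.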
