Re: Cryptohash OpenSSL error queue in FIPS enabled builds

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Daniel Gustafsson <daniel(at)yesql(dot)se>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Cryptohash OpenSSL error queue in FIPS enabled builds
Date: 2022-04-22 17:01:51
Message-ID: 3704258.1650646911@sss.pgh.pa.us
Lists: pgsql-hackers

Daniel Gustafsson <daniel(at)yesql(dot)se> writes:
> It turns out that OpenSSL places two errors in the queue for this operation,
> and we only consume one without clearing the queue in between, so we grab an
> error from the previous run.

Ugh.

> Consuming all (both) errors and creating a concatenated string seems overkill
> as it would alter the API from a const error string to something that needs
> freeing etc (also, very few OpenSSL consumers actually drain the queue, OpenSSL
> themselves don't). Skimming the OpenSSL code I was unable to find another
> example of two errors being generated. The attached calls ERR_clear_error() as
> we do in libpq, in order to avoid consuming earlier errors.

This seems quite messy. How would clearing the queue *before* creating
the object improve matters? It seems like that solution means you're
leaving an extra error in the queue to break unrelated code. Wouldn't
it be better to clear after grabbing the error? (Or maybe do both.)
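
To make the failure mode concrete, here is a plain-C mock of the queue behavior (hypothetical err_* helpers standing in for ERR_get_error()/ERR_clear_error(); this is an illustrative sketch, not OpenSSL's real API): a call that queues two errors leaves a stale entry behind, so the next failure reports the wrong error unless we clear on both sides.

```c
#include <string.h>

/* Mock of OpenSSL's per-thread error queue: a small FIFO of error codes. */
#define QUEUE_MAX 8
static unsigned long queue[QUEUE_MAX];
static int queue_len = 0;

static void err_push(unsigned long e)
{
    if (queue_len < QUEUE_MAX)
        queue[queue_len++] = e;
}

/* Like ERR_get_error(): pop the oldest queued error, or 0 if none. */
static unsigned long err_get(void)
{
    unsigned long e;

    if (queue_len == 0)
        return 0;
    e = queue[0];
    memmove(queue, queue + 1, (size_t) (--queue_len) * sizeof(queue[0]));
    return e;
}

/* Like ERR_clear_error(): empty the queue. */
static void err_clear(void)
{
    queue_len = 0;
}

/* A failing create call that, like FIPS-mode OpenSSL, queues TWO errors. */
static void failing_create(void)
{
    err_push(0xAAAA);           /* the relevant error, e.g. "disabled in FIPS" */
    err_push(0xBBBB);           /* a secondary, less useful error */
}

/* Buggy pattern: consume one error per failure, never clear the queue.
 * The second failure reports the STALE 0xBBBB left over from the first. */
static unsigned long buggy_second_error(void)
{
    err_clear();                /* clean slate for the demonstration only */
    failing_create();
    (void) err_get();           /* first failure: 0xAAAA, correct */
    failing_create();
    return err_get();           /* second failure: stale 0xBBBB, wrong */
}

/* Fixed pattern: clear before the call and again after grabbing the error,
 * so we always report the oldest error from THIS failure. */
static unsigned long fixed_error(void)
{
    unsigned long e;

    err_clear();                /* drop anything stale before the call */
    failing_create();
    e = err_get();              /* the relevant error */
    err_clear();                /* drop the extra queued entry */
    return e;
}
```

With the fixed pattern, fixed_error() returns 0xAAAA no matter how many failures preceded it, while the buggy pattern returns 0xBBBB on the second failure.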

Also, a comment seems advisable.

regards, tom lane
