Christian Ramseyer <rc(at)networkz(dot)ch> writes:
> Can I somehow influence the client:UTF8 -> server:LATIN1 character set
> conversion so that instead of failing, it inserts a replacement
> character, inserts the UTF-8 bytes as a hex string, drops the
> character, or something like that?
There's nothing built-in for that, but it seems like it wouldn't be
hard to modify the code if you wanted a quick hack to do this.
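
To illustrate the shape of that hack, here's a minimal standalone
sketch: decode each UTF-8 sequence and substitute '?' for anything
that won't fit in Latin-1, rather than throwing an error. This is
not the backend's actual converter (the real UTF8/LATIN1 code lives
under src/backend/utils/mb/); it also skips overlong-sequence checks,
and the names are invented.

#include <stddef.h>

#define SUBST '?'               /* what to emit for unconvertible input */

/*
 * Lossy UTF-8 -> Latin-1 conversion.  Code points above U+00FF, and
 * invalid byte sequences, become SUBST instead of raising an error.
 * dest must have room for len bytes (output never exceeds input).
 * Returns the number of bytes written.
 */
size_t
utf8_to_latin1_lossy(const unsigned char *src, size_t len,
                     unsigned char *dest)
{
    size_t      out = 0;

    while (len > 0)
    {
        unsigned int cp;
        size_t      seq;

        if (src[0] < 0x80)
        {
            cp = src[0];        /* plain ASCII */
            seq = 1;
        }
        else if ((src[0] & 0xe0) == 0xc0 && len >= 2 &&
                 (src[1] & 0xc0) == 0x80)
        {
            /* two-byte sequence: U+0080 .. U+07FF */
            cp = ((src[0] & 0x1fu) << 6) | (src[1] & 0x3fu);
            seq = 2;
        }
        else if ((src[0] & 0xf0) == 0xe0 && len >= 3 &&
                 (src[1] & 0xc0) == 0x80 && (src[2] & 0xc0) == 0x80)
        {
            cp = 0x100;         /* >= U+0800, never fits in Latin-1 */
            seq = 3;
        }
        else if ((src[0] & 0xf8) == 0xf0 && len >= 4 &&
                 (src[1] & 0xc0) == 0x80 && (src[2] & 0xc0) == 0x80 &&
                 (src[3] & 0xc0) == 0x80)
        {
            cp = 0x100;         /* >= U+10000, ditto */
            seq = 4;
        }
        else
        {
            cp = 0x100;         /* invalid sequence: eat one byte */
            seq = 1;
        }

        dest[out++] = (cp <= 0xff) ? (unsigned char) cp : SUBST;
        src += seq;
        len -= seq;
    }
    return out;
}

Note that the output here can never be longer than the input, which
keeps the buffer handling trivial; a variant that inserts the hex
bytes as a string instead would expand the text and need more care.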
In general, the system nominally supports multiple conversion functions
per encoding pair (that's what CREATE CONVERSION is for), so you could
imagine installing an alternate conversion that doesn't throw errors.
The problem is that it's quite difficult to get the system to actually
*use* a non-default conversion for anything really significant, like,
say, client I/O. I don't know that anyone's thought hard about how to
improve that.
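
For concreteness, wiring the sketch above up as a conversion would
look roughly like this. The argument convention shown is the
historical one for conversion procs, and it has changed across
releases, so check the CREATE CONVERSION docs for your version;
utf8_to_latin1_sub is a made-up name.

/* Hypothetical glue exposing the lossy converter as a conversion proc. */
#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

extern size_t utf8_to_latin1_lossy(const unsigned char *src, size_t len,
                                   unsigned char *dest);

PG_FUNCTION_INFO_V1(utf8_to_latin1_sub);

/*
 * Arguments (older releases): source encoding id, dest encoding id,
 * source string, dest buffer, source length.
 */
Datum
utf8_to_latin1_sub(PG_FUNCTION_ARGS)
{
    unsigned char *src = (unsigned char *) PG_GETARG_CSTRING(2);
    unsigned char *dest = (unsigned char *) PG_GETARG_CSTRING(3);
    int         len = PG_GETARG_INT32(4);
    size_t      out = utf8_to_latin1_lossy(src, (size_t) len, dest);

    dest[out] = '\0';
    PG_RETURN_VOID();
}

/*
 * Registration would look about like this (again, version-dependent):
 *
 *   CREATE FUNCTION utf8_to_latin1_sub(integer, integer, cstring,
 *                                      internal, integer)
 *       RETURNS void AS 'MODULE_PATHNAME' LANGUAGE C STRICT;
 *
 *   CREATE CONVERSION utf8_to_latin1_sub
 *       FOR 'UTF8' TO 'LATIN1' FROM utf8_to_latin1_sub;
 */

Registering it is the easy part, though; per the above, the server
will still reach for the default conversion during client I/O.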
regards, tom lane