From: Ivan Trofimov <i(dot)trofimow(at)yandex(dot)ru>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: libpq: PQfnumber overload for not null-terminated strings
Date: 2024-02-26 23:31:02
Message-ID: 6be93adc-1637-cd02-4cd7-c8cde4f20ad1@yandex.ru
Lists: pgsql-hackers
>> Right now as a library writer in a higher-level language I'm forced to
>> either
>> * Sacrifice performance to ensure 'column_name' is null-terminated
>> (that's what some bindings in Rust do)
>
> I'd go with that. You would have a very hard time convincing me that
> the per-query overhead
I see now that I failed to express myself clearly: it's not a per-query
overhead, but rather a per-result-field one.
Given code like this (in pseudocode):

    result = ExecuteQuery(some_query)
    for (row in result):
        a = row["some_column_name"]
        b = row["some_other_column_name"]
        ...

the field-name string must be null-terminated for every field accessed.
There certainly are ways to write the same thing more efficiently and
avoid repeatedly calling PQfnumber altogether, but as a library writer
I can't control that.
In my quickly-hacked-together test, just null-terminating a user-provided
string takes ~14% of total CPU time (PQfnumber itself takes ~30%,
but oh well); please see the attached code and flamegraph.
Attachments:
    pqfnumber_bench.svg (image/svg+xml, 197.3 KB)
    pqfnumber_bench.cpp (text/x-c++src, 2.5 KB)