From: Matthias Apitz <guru(at)unixarea(dot)de>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: type SERIAL in C host-struct
Date: 2019-11-13 06:26:44
Message-ID: 20191113062644.GA2989@c720-r342378
Lists: pgsql-general
On Saturday, November 09, 2019 at 07:45:31 PM +0100, Matthias Apitz wrote:
> Thanks for the feedback. Client and server both run on the same Linux
> host. I also found the debug log of ESQL/C, i.e. how to enable it with
>
> ECPGdebug(int on, FILE *stream);
>
> The resulting log is really fine for debugging our code:
>
> ...
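For reference, switching that trace on from the .pgc code is just (a minimal
sketch; the stream setup is mine, the log path matches the file below):

    /* somewhere early in main(), before the first EXEC SQL statement;
     * needs <stdio.h> for FILE and fopen() */
    FILE *dbg = fopen("/tmp/ecpg.log", "a");
    if (dbg != NULL)
        ECPGdebug(1, dbg);   /* 1 = on, 0 = off */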
I added timestamps to the logging; /tmp/ecpg.log now looks like this:
...
[18328] [12.11.2019 18:38:44:637]: ecpg_execute on line 120: query: insert into mytypes values ( $1 , $2 , $3 , $4 , $5 ); with 5 parameter(s) on connection newsisis
[18328] [12.11.2019 18:38:46:881]: ecpg_execute on line 120: using PQexecParams
[18328] [12.11.2019 18:38:46:881]: ecpg_free_params on line 120: parameter 1 = 99
[18328] [12.11.2019 18:38:46:881]: ecpg_free_params on line 120: parameter 2 =
[18328] [12.11.2019 18:38:46:881]: ecpg_free_params on line 120: parameter 3 = 2000-01-01
[18328] [12.11.2019 18:38:46:881]: ecpg_free_params on line 120: parameter 4 =
[18328] [12.11.2019 18:38:46:881]: ecpg_free_params on line 120: parameter 5 = \x6c696e6520....
....
[18328] [12.11.2019 18:38:47:084]: ecpg_process_output on line 120: OK: INSERT 0 1
[18328] [12.11.2019 18:38:47:084]: ECPGtrans on line 126: action "commit"; connection "newsisis"
[18328] [12.11.2019 18:38:47:260]: deallocate_one on line 0: name s_statename
[18328] [12.11.2019 18:38:47:263]: ecpg_finish: connection newsisis closed
The above shows an INSERT of ~36 MByte into a 'bytea' column (yes, we
have such large XML data for billing/accounting processes in the
database). It takes ~0.2 sec to insert such a monster row.
On INSERT/UPDATE the column data (XML) comes down from the application
layer as a 'char *' pointer. I know from strlen(src) how long it is; I double
its length and add 3 (for the "\x" in front and the '\0' at the end), malloc
the space for the host variable for INSERT/UPDATE, and translate the
octets to their hex representation.
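A minimal sketch of that encoding step (the function name encode_bytea_hex
is mine, not from our code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* turn a NUL-terminated buffer into the "\x..." hex form for bytea */
    static char *encode_bytea_hex(const char *src)
    {
        size_t len = strlen(src);
        char *dst = malloc(2 * len + 3);  /* "\x" + 2 chars per octet + '\0' */
        char *p;
        size_t i;

        if (dst == NULL)
            return NULL;
        p = dst;
        *p++ = '\\';
        *p++ = 'x';
        for (i = 0; i < len; i++)
            p += sprintf(p, "%02x", (unsigned char) src[i]);
        *p = '\0';
        return dst;
    }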
When FETCHing such a column I do not know the resulting length of the
bytea data in advance for doing the malloc(). I could, before any FETCH, run a
SELECT octet_length(myblob) FROM mytypes;
but I do not know how expensive this would be.
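In ESQL/C this preflight plus the malloc() would look roughly like this (a
sketch only; the host variable names are mine):

    EXEC SQL BEGIN DECLARE SECTION;
    long blob_len;
    char *blob_buf;
    EXEC SQL END DECLARE SECTION;

    /* ask the server for the stored size of the column first ... */
    EXEC SQL SELECT octet_length(myblob) INTO :blob_len FROM mytypes;

    /* ... then size the buffer: 2 hex chars per octet plus "\x" and '\0' */
    blob_buf = malloc(2 * (size_t) blob_len + 3);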
Any other idea how to predict the needed space for the host variable on
FETCH?
matthias
--
Matthias Apitz, ✉ guru(at)unixarea(dot)de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
"Glaube wenig, hinterfrage alles, denke selbst: Wie man Manipulationen durchschaut"
"Believe little, scrutinise all, think by your own: How see through manipulations"
ISBN-10: 386489218X