Hello,
 
a 'bulk loading' routine using temporary tables suddenly started failing during tests for no obvious reason. At some point one of the updates on the temporary table fails with the error message "ERROR:  could not open file "base/...".
 
- The JDBC execute() method is used to run the statements.
- Autocommit mode is used because of the potentially large load size.
- At the beginning of a load script, another script drops all temporary tables, using a SELECT on pg_tables to get the table names.
- Then 'normal' temporary tables (without 'ON COMMIT DROP') with fixed names are created and filled via INSERT, followed by several UPDATEs. One of those updates fails. (A minimal sketch of this flow follows below.)
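For illustration, here is a minimal sketch of that flow. Connection settings and the table/column names (staging, source_data) are placeholders rather than the real ones, and the exact pg_tables filter used by the drop script is only a guess:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class BulkLoadSketch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret")) {
            con.setAutoCommit(true); // autocommit because of the load size

            try (Statement st = con.createStatement()) {
                // Drop script: collect temporary table names via pg_tables,
                // then drop them (the WHERE clause here is an assumption).
                List<String> temps = new ArrayList<>();
                try (ResultSet rs = st.executeQuery(
                        "SELECT schemaname, tablename FROM pg_tables "
                        + "WHERE schemaname LIKE 'pg_temp%'")) {
                    while (rs.next()) {
                        temps.add(rs.getString(1) + "." + rs.getString(2));
                    }
                }
                for (String t : temps) {
                    st.execute("DROP TABLE IF EXISTS " + t);
                }

                // 'Normal' temp table with a fixed name, no ON COMMIT DROP.
                st.execute("CREATE TEMP TABLE staging (id int, val text)");
                st.execute("INSERT INTO staging SELECT id, val FROM source_data");

                // Several updates follow; one of them fails with
                // ERROR:  could not open file "base/..."
                st.execute("UPDATE staging SET val = upper(val)");
            }
        }
    }
}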
 
An strace revealed that another process dropped the temporary table in question before the failing update, most likely the drop script. This also happens when everything runs in the same transaction. We additionally tried ON COMMIT DROP as well as DISCARD TEMP and DISCARD PLANS instead of dropping all temporary tables. However, DISCARD TEMP obviously blocks the INSERT when run within a transaction.
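For completeness, the ON COMMIT DROP and DISCARD variants looked roughly like this (again only a sketch with the same placeholder names, given an existing Connection):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

class TempTableVariants {
    // Variant 1: let the temp table disappear at the end of the transaction.
    static void loadWithOnCommitDrop(Connection con) throws SQLException {
        boolean oldAutoCommit = con.getAutoCommit();
        con.setAutoCommit(false); // run the whole load in one transaction
        try (Statement st = con.createStatement()) {
            st.execute("CREATE TEMP TABLE staging (id int, val text) ON COMMIT DROP");
            st.execute("INSERT INTO staging SELECT id, val FROM source_data");
            st.execute("UPDATE staging SET val = upper(val)");
            con.commit(); // the temp table is dropped here
        } finally {
            con.setAutoCommit(oldAutoCommit);
        }
    }

    // Variant 2: clear session temp tables and cached plans
    // instead of dropping tables by name.
    static void discardSessionState(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            st.execute("DISCARD TEMP");
            st.execute("DISCARD PLANS");
        }
    }
}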
 
Removing the drop script and instead dropping, at the end of the load, the temporary tables created at its beginning seems to help. Or am I mistaken?
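That workaround would look roughly like this (sketch, same placeholder names as above):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

class DropAtEndSketch {
    static void load(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            // No generic drop script at the start.
            st.execute("CREATE TEMP TABLE staging (id int, val text)");
            st.execute("INSERT INTO staging SELECT id, val FROM source_data");
            st.execute("UPDATE staging SET val = upper(val)");
            // Drop only what this load created, at the very end.
            st.execute("DROP TABLE IF EXISTS staging");
        }
    }
}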
 
So does every JDBC execute() run in its own thread/process? If so, is there a way to force the statements to run in succession rather than in parallel/concurrently?
 
Could you please shed some light on this?
 
Thank you very much!
 
Best wishes,
 
Peter