From: Yuetsuro Kobayashi <inconvenience(dot)notice(at)gmail(dot)com>
To: David Rowley <dgrowleyml(at)gmail(dot)com>
Cc: PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: Re: BUG #17089: "ERROR: invalid memory alloc request size ..." occurs when updating a fixed number of records
Date: 2021-07-09 04:12:02
Message-ID: CAL2Sk=TbHYSoQp06oT10rB0M10yYZpDNcyvFzyc7XkKun8XJcg@mail.gmail.com
Lists: pgsql-bugs
I still cannot understand David's answer.
When I first encountered this "ERROR: invalid memory alloc request size
1677721600", I found that it was hiding a number of other problems, such as
column size errors and column overflows, and I have already fixed those bugs.
The remaining issue with this "ERROR: invalid memory alloc request size
1677721600" is that it occurs when I use COPY to repeatedly import multiple CSV files.
The 3,211,154th record, "1607476406180,150,7.2130", is correct data, and the
CSV file containing this record can be imported normally on its own. The
record has only three columns (timestamp, ID, and number), and all records
are in the same format.
Furthermore, COPY itself imports this 3,211,154th record into the table correctly.
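For example, importing one of these files by hand succeeds. (In this sketch
the table name and file path are hypothetical; the column names are the ones
the procedure below uses.)
**********************************
-- Hypothetical stand-alone test: the same three-column CSV imports cleanly.
COPY scm_exam.some_temporary_table ( w_jst_timestamp, w_id, w_id_value )
FROM 'C:\data\xxx_13\example.csv' WITH (FORMAT csv, DELIMITER ',');
**********************************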
Below is the code I actually run; the statement that raises "ERROR: invalid
memory alloc request size 1677721600" is marked with a comment.
**********************************
-- 1st argument : full path of the Windows data starting-point directory
-- 2nd argument : database schema name
CREATE PROCEDURE scm_exam.import_related_to_prelit ( p1 text, p2 varchar )
AS $$
DECLARE
    argument_absolute_path ALIAS FOR p1;
    schema_name            ALIAS FOR p2;
    -- (omit definition...)
BEGIN
    FOR i IN 1..number_of_directory_array LOOP
        directory_name_extraction := directory_name_array[i];
        SELECT scm_exam.check_operationid_directory(directory_name_extraction, schema_name)
          INTO search_result;
        IF search_result <> 0 THEN CONTINUE; END IF;
        current_vehicle_directory_path := argument_absolute_path || '\' || directory_name_extraction;
        -- *** extract the csv file names only
        SELECT array_agg(f) INTO csv_file_name_array
          FROM (SELECT f
                  FROM (SELECT pg_ls_dir(current_vehicle_directory_path) AS f) t
                 WHERE f LIKE '%csv') a;
        IF csv_file_name_array IS NULL THEN CONTINUE; END IF;
        number_of_csvfile_array_count := array_length(csv_file_name_array, 1);
        CASE number_of_csvfile_array_count
            WHEN 0    THEN CONTINUE;
            WHEN NULL THEN CONTINUE;
            ELSE
        END CASE;
        -- *** for loop 1 .. csv file count ***
        FOR j IN 1..number_of_csvfile_array_count LOOP
            csv_name_extraction := csv_file_name_array[j];
            OPEN prelit_cursor;
            SELECT COUNT(*) INTO prelit_record_count
              FROM scm_exam.peripheral_information_mtable;
            extracted_csv_file_name := csv_file_name_array[j];
            FOR k IN 1..prelit_record_count LOOP
                -- *** FETCH record from (PRELIT) cursor position
                FETCH prelit_cursor INTO wm_peripheral_inf_id, wm_peripheral_inf_abbreviation_name,
                                         wm_temporary_table_name, wm_permanent_table_name;
                IF wm_peripheral_inf_id IS NULL OR wm_temporary_table_name = '' THEN EXIT; END IF;
                -- compare position
                search_result := POSITION( wm_peripheral_inf_id IN extracted_csv_file_name );
                CASE search_result
                    WHEN 0 THEN
                        CONTINUE;
                    ELSE
                        -- *** Obtain the record count before the COPY, so that the operation ID
                        -- can be set on the newly imported records afterwards.
                        execute_sql := 'SELECT COUNT(*) FROM scm_exam.' || wm_temporary_table_name;
                        EXECUTE execute_sql INTO temp_before_import_record_count;
                        -- copy import
                        current_csv_file_path := current_vehicle_directory_path || '\' || extracted_csv_file_name;
                        exchange_path := '''' || current_csv_file_path || '''';
                        execute_sql := 'COPY scm_exam.' || wm_temporary_table_name ||
                                       '( w_jst_timestamp, w_id, w_id_value ) FROM ' || exchange_path ||
                                       ' DELIMITERS '','' WITH csv';
                        EXECUTE execute_sql;
                        -- *** Obtain the record count after the COPY.
                        execute_sql := 'SELECT COUNT(*) FROM scm_exam.' || wm_temporary_table_name;
                        EXECUTE execute_sql INTO temp_after_import_record_count;
                        FOR ii IN temp_before_import_record_count+1..temp_after_import_record_count LOOP
                            execute_sql := 'SELECT * FROM scm_exam.' || wm_temporary_table_name ||
                                           ' WHERE w_serial_no = ' || ii;
                            EXECUTE execute_sql INTO temporary_rec;
                            IF NOT FOUND THEN
                                RAISE WARNING 'PANIC 3900 : % not found import serial number = %',
                                              wm_temporary_table_name, ii;
                                RETURN;
                            ELSE
                                IF temporary_rec.w_jst_timestamp != 0 THEN
                                    memory_jst_timestamp := temporary_rec.w_jst_timestamp;
                                END IF;
                                temporary_rec.w_operation_id := '''' || directory_name_extraction || '''';
                                -- Execution code that generates
                                -- "ERROR: invalid memory alloc request size 1677721600"
                                execute_sql := concat('UPDATE scm_exam.', wm_temporary_table_name,
                                                      ' SET w_operation_id = ', temporary_rec.w_operation_id,
                                                      ', w_jst_timestamp = ', memory_jst_timestamp,
                                                      ', w_id = ', temporary_rec.w_id,
                                                      ', w_id_value = ', '''' || temporary_rec.w_id_value || '''',
                                                      ' WHERE w_serial_no = ', ii);
                                EXECUTE execute_sql;
                            END IF;
                        END LOOP;
                        EXIT;
                END CASE;
            END LOOP;
            CLOSE prelit_cursor;
        END LOOP;
    END LOOP;
    RETURN;
END;
$$
LANGUAGE plpgsql;
**********************************
The parent function calls this import process each time the directory
changes, and the import runs through four FOR statements.
(1) The first FOR statement extracts the CSV file names in the directory
into an array.
(2) The second FOR statement takes one CSV file name out of the array.
(3) The third FOR statement retrieves the temporary table name registered
in the master table and imports the CSV file into that temporary table with
"COPY".
(4) The fourth FOR statement supplements the first column (timestamp) of
the rows imported by "COPY" and UPDATEs them; this is the dynamic UPDATE
sketched just below.
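For comparison, here is a minimal, untested sketch of how that final UPDATE
could be built with format(), whose %I and %L specifiers quote identifiers
and literals safely (the variable and column names are taken from the
procedure above):
**********************************
-- Sketch only: format() quotes the table name (%I) and each value (%L),
-- so no hand-built quote characters are needed.
execute_sql := format(
    'UPDATE scm_exam.%I SET w_operation_id = %L, w_jst_timestamp = %L, ' ||
    'w_id = %L, w_id_value = %L WHERE w_serial_no = %L',
    wm_temporary_table_name,
    directory_name_extraction,
    memory_jst_timestamp,
    temporary_rec.w_id,
    temporary_rec.w_id_value,
    ii);
EXECUTE execute_sql;
**********************************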
I am new to PostgreSQL, so my inexperience may be why I cannot follow your
answer.
Please explain so that I can understand.
On Thu, Jul 8, 2021 at 22:48 David Rowley <dgrowleyml(at)gmail(dot)com> wrote:
> Please keep the list CC'd
>
> On Thu, 8 Jul 2021 at 19:55, Yuetsuro Kobayashi <inconvenience(dot)notice(at)gmail(dot)com> wrote:
> >
> > I don't understand David's answer.
> > Why does the UPDATE process succeed when I import only the CSV file that
> > contains the 776,558,539th record, the one that causes "ERROR: invalid
> > memory alloc request size 1677721600", and then perform the UPDATE process?
> > 1st directory : xxx_1 (1 - 5,671,588 record)
> > 2nd directory: xxx_2 (1 - 4,917,081 record)
> > ...
> > 13th directory: xxx_13 ( 1 - ... ERROR by 3,211,154th record)
>
> You should look at that record and see if any fields are unusually
> larger than the others. Types like text, bytea and arrays of types
> all have a size limit of 1 Gigabyte in PostgreSQL (internally named
> MaxAllocSize). Going by your original post, in the error message
> you're receiving, it seems that something is asking for 1677721600
> bytes, which is about 1.6GB. Since that's larger than the 1GB limit,
> you're getting an error.
>
> You need to look at your data and statements and figure out what might
> attempt to store a field that size in the database.
>
> David
>
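P.S. For reference, the 1GB per-value limit David mentions is easy to
reproduce directly; a single oversized allocation fails the same way (the
exact size reported may differ by a few header bytes):
**********************************
-- Ask for a single text value of about 1.6GB; this exceeds MaxAllocSize (1GB)
SELECT repeat('x', 1677721600);
-- ERROR:  invalid memory alloc request size ...
**********************************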