From: "Sriram Dandapani" <sdandapani(at)counterpane(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <pgsql-admin(at)postgresql(dot)org>
Subject: Re: out of memory error with large insert
Date: 2006-03-22 01:03:54
Message-ID: 6992E470F12A444BB787B5C937B9D4DF03C48C1A@ca-mail1.cis.local
Lists: pgsql-admin
Some more interesting information.

The insert statement is issued via a JDBC callback to the Postgres
database, because the application requires partial commits (the
equivalent of autonomous transactions).

What I noticed is that when the insert runs through JDBC, the writer
process is very active and consumes a lot of memory. When I attempted
the same insert manually in pgAdmin, the writer process did not even
appear in top's process list.

I wonder if the JDBC callback causes Postgres to allocate memory
differently.
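For reference, the insert loop looks roughly like the following (a minimal sketch, not the actual application code; the connection URL, table name, columns, and batch size are all placeholders). Disabling autocommit and committing every N rows is what produces the "partial commit" behavior described above:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsert {
    public static void main(String[] args) throws Exception {
        // Connection details are hypothetical placeholders.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password");
        conn.setAutoCommit(false);  // we commit manually, in batches

        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO big_table (id, payload) VALUES (?, ?)");

        final int batchSize = 10000;
        final int totalRows = 8000000;
        for (int i = 0; i < totalRows; i++) {
            ps.setInt(1, i);
            ps.setString(2, "row " + i);
            ps.addBatch();
            if ((i + 1) % batchSize == 0) {
                ps.executeBatch();
                // Partial commit: rows loaded so far stay committed even
                // if a later batch fails.
                conn.commit();
            }
        }
        ps.executeBatch();  // flush any remainder
        conn.commit();
        ps.close();
        conn.close();
    }
}
```

Committing per batch rather than per row keeps transaction overhead low while still bounding how much work is lost on a failure.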
-----Original Message-----
From: Tom Lane [mailto:tgl(at)sss(dot)pgh(dot)pa(dot)us]
Sent: Tuesday, March 21, 2006 2:38 PM
To: Sriram Dandapani
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: [ADMIN] out of memory error with large insert
"Sriram Dandapani" <sdandapani(at)counterpane(dot)com> writes:
> On a large transaction involving an insert of 8 million rows, after a
> while Postgres complains of an out of memory error.
If there are foreign-key checks involved, try dropping those constraints
and re-creating them afterwards. Probably faster than retail checks
anyway ...
regards, tom lane
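Tom's suggestion, sketched in SQL (table, column, and constraint names here are hypothetical; the actual schema would differ):

```sql
-- Drop the foreign-key check before the bulk load ...
ALTER TABLE big_table DROP CONSTRAINT big_table_parent_fk;

-- ... run the 8-million-row insert ...

-- ... then re-create the constraint afterwards; the check is then
-- performed once over the whole table instead of once per inserted row.
ALTER TABLE big_table
    ADD CONSTRAINT big_table_parent_fk
    FOREIGN KEY (parent_id) REFERENCES parent_table (id);
```

This avoids both the per-row ("retail") trigger checks and the memory the backend uses to queue deferred after-row trigger events during a large transaction.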