From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Fields, Zachary J(dot) (MU-Student)" <zjfe58(at)mail(dot)missouri(dot)edu>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Seg fault when processing large SPI cursor (PG9.13)
Date: 2013-03-04 16:20:38
Message-ID: CAHyXU0z_OE5zKoASof0rcvTUOH3f3dihtTqt6f=dUdqQ3P7iSA@mail.gmail.com
Lists: pgsql-hackers
On Mon, Mar 4, 2013 at 10:04 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> "Fields, Zachary J. (MU-Student)" <zjfe58(at)mail(dot)missouri(dot)edu> writes:
>> I'm working on PostgreSQL 9.13 (waiting for the admin to push upgrades next week); in the meantime, I was curious whether there are any known bugs regarding large cursor fetches, or if I am to blame.
>> My cursor has 400 million records, and I'm fetching in blocks of 2^17 (approx. 130K). When I fetch the next block after processing the 48,889,856th record, the DB seg faults. It should be noted that I have processed tables with 23 million+ records several times and everything appears to work great.
>
>> I have watched top, and the system memory usage gets up to 97.6% (from approx. 30 million records onward - then sways up and down), but it ultimately crashes when I try to get past the 48,889,856th record. I have tried various block sizes: anything greater than 2^17 crashes at the fetch that would have it surpass 48,889,856 records, 2^16 hits the same sweet spot, and anything less than 2^16 actually crashes slightly earlier (noted in comments in the code below).
>
>> To me, it appears to be an obvious memory leak,
>
> Well, you're leaking the SPITupleTables (you should be doing
> SPI_freetuptable when done with each one), so running out of memory is
> not exactly surprising. I suspect what is happening is that an
> out-of-memory error is getting thrown and recovery from that is messed
> up somehow. Have you tried getting a stack trace from the crash?
>
> I note that you're apparently using C++. C++ in the backend is rather
> dangerous, and one of the main reasons is that C++ error handling
> doesn't play nice with elog/ereport error handling. It's possible to
> make it work safely but it takes a lot of attention and extra code,
> which you don't seem to have here.
Could be C++ is throwing an exception. If you haven't already, try
disabling exception handling completely in the compiler.
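If exceptions can't be disabled, the usual alternative is the "catch at the boundary" pattern Tom alludes to: never let a C++ exception propagate into Postgres C stack frames, and convert it to ereport instead. A hedged sketch, with a hypothetical function name (do_cpp_work is not from the original):

```cpp
extern "C" Datum
my_func(PG_FUNCTION_ARGS)
{
    try
    {
        /* do_cpp_work is a placeholder for the actual C++ logic */
        return do_cpp_work(fcinfo);
    }
    catch (const std::exception &e)
    {
        /* translate the C++ exception into Postgres error handling */
        ereport(ERROR, (errmsg("C++ exception: %s", e.what())));
    }
    catch (...)
    {
        ereport(ERROR, (errmsg("unknown C++ exception")));
    }
    return (Datum) 0;   /* not reached; keeps the compiler quiet */
}
```

The reverse direction also needs care: elog/ereport longjmps past C++ destructors, so Postgres calls that can error out should not be made while C++ objects with nontrivial destructors are live on the stack.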
merlin