Re: handling out of memory conditions when fetching row descriptions

From: "'Isidor Zeuner'" <postgresql(at)quidecco(dot)de>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: handling out of memory conditions when fetching row descriptions
Date: 2012-01-05 21:19:02
Message-ID: 20120105211902.7BFB28130CE@quidecco.de
Lists: pgsql-general

> "'Isidor Zeuner'" <postgresql(at)quidecco(dot)de> writes:
> > using the latest git source code, I found that libpq will let the
> > connection stall when getRowDescriptions breaks on an out of memory
> > condition. I think this should better be handled differently to allow
> > application code to handle such situations gracefully.
>
> The basic assumption in there is that if we wait and retry, eventually
> there will be enough memory.

I think the greatest problem with that approach is that there is no
way (at least none that is documented) for the application to find
out that it should be releasing memory.

> I agree that that's not ideal, since the
> application may not be releasing memory elsewhere. But what you propose
> doesn't seem like an improvement: you're converting a maybe-failure into
> a guaranteed-failure, and one that's much more difficult to recover from
> than an ordinary query error.
>

I think it is an improvement in that it puts the application code
back in control. Given that the application already has some way to
handle connection and query errors, it can do something reasonable
about the situation. Before, the application had no way to find out
that there was a (maybe-)failure at all.
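
For illustration, a minimal sketch of the recovery path I have in
mind, using only the documented libpq API (the connection string and
the query are placeholders, not anything from the patch):

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* placeholder connection string */
    PGconn *conn = PQconnectdb("dbname=test");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* placeholder query standing in for any large result */
    PGresult *res = PQexec(conn, "SELECT * FROM big_table");

    if (PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        /* With an ordinary error result, the application is back in
         * control: it can release its own memory and retry, or give
         * the query up, instead of stalling inside libpq. */
        fprintf(stderr, "query failed: %s", PQresultErrorMessage(res));
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}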

> Also, this patch breaks async operation, in which a failure return from
> getRowDescriptions normally means that we have to wait for more data
> to arrive. The test would really need to be inserted someplace else.
>
> In any case, getRowDescriptions is really an improbable place for an
> out-of-memory to occur: it would be much more likely to happen while
> absorbing the body of a large query result.

My assumption was that there is not much logic for handling such
situations precisely because they are considered improbable.

I am currently using PostgreSQL under memory-constrained conditions,
so I may come back with more such cases if they surface.

> There already is some logic
> in getAnotherTuple for dealing with that case, which I suggest is a
> better model for what to do than "break the connection". But probably
> making things noticeably better here would require going through all
> the code to check for other out-of-memory cases, and developing some
> more uniform method of representing an already-known-failed query
> result. (For instance, it looks like getAnotherTuple might not work
> very well if it fails to get memory for one tuple and then succeeds
> on later ones. We probably ought to have some explicit state that
> says "we are absorbing the remaining data traffic for a query result
> that we already ran out of memory for".)
>

I like this approach. I changed the out-of-memory handling to switch
to a PGASYNC_MEMORY_FULL state, which skips all messages until the
command is complete. The patch is attached.
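
To sketch the idea (a simplified model, not the patch itself -- the
real change lives in libpq's protocol-3 input loop, and only the
state name and the message type bytes below match the actual code):

#include <stdio.h>

/* Simplified model of the new state: once an out-of-memory condition
 * is hit, discard all backend messages until CommandComplete ('C'). */
typedef enum { ASYNC_BUSY, ASYNC_MEMORY_FULL, ASYNC_READY } AsyncState;

static AsyncState handle_message(AsyncState state, char msg_type)
{
    if (state == ASYNC_MEMORY_FULL)
    {
        if (msg_type == 'C')      /* CommandComplete ends the skipping */
            return ASYNC_READY;   /* app then sees an OOM error result */
        return ASYNC_MEMORY_FULL; /* DataRow etc. are discarded */
    }
    /* normal message dispatch would go here */
    return state;
}

int main(void)
{
    AsyncState s = ASYNC_MEMORY_FULL; /* pretend the OOM just happened */
    s = handle_message(s, 'D');       /* DataRow: skipped */
    s = handle_message(s, 'C');       /* CommandComplete: back to ready */
    printf("%s\n", s == ASYNC_READY ? "ready" : "still skipping");
    return 0;
}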

Best regards,

Isidor Zeuner

Attachment: row-description-oom.diff (text/x-patch, 2.4 KB)
