From: "Nigel J(dot) Andrews" <nandrews(at)investsystems(dot)co(dot)uk>
To: Jakub Ouhrabka <jouh8664(at)ss1000(dot)ms(dot)mff(dot)cuni(dot)cz>
Cc: Valerie Schneider DSI/DEV <Valerie(dot)Schneider(at)meteo(dot)fr>, pgsql-general(at)postgresql(dot)org
Subject: Re: Memory usage / concept
Date: 2002-08-05 17:04:28
Message-ID: Pine.LNX.4.21.0208051753540.3235-100000@ponder.fairway2k.co.uk
Lists: pgsql-general
On Mon, 5 Aug 2002, Jakub Ouhrabka wrote:
>
> > >i think it's your client (psql) what's actually crashing. it's attempting
> >
> > Yes it is. But without using a cursor (and even if this kind of query isn't
> > recommended) is there any solution to limit the use of the memory by PG itself
> > (as for Oracle) ?
>
What sort of error does Oracle report when you try to retrieve several GB of
data? Is this in its 'command line' interface?
Basically what I'm asking about is what the client/server protocol does.
Presuming that clients don't crash but instead abort the query, then presumably
the client library is doing the abort when it fails to allocate a chunk of
storage during the receive. I must admit to not knowing what libpq does, but it
certainly doesn't trap this situation and return an error to the caller. Why
not? Other than that the caller should be implementing sufficient intelligence
in a real application to avoid the situation.
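The "sufficient intelligence" above usually means the cursor approach mentioned
earlier in the thread: declare a server-side cursor and FETCH in batches, so the
client never buffers the whole multi-GB result at once. Here is a minimal libpq
sketch of that pattern; the connection string, table name `big_table`, cursor
name, and batch size of 1000 are all placeholder assumptions, not anything from
the original discussion.

```c
/* Sketch: batched retrieval via a server-side cursor, so client memory
 * stays bounded regardless of result-set size.
 * Assumptions: a reachable database "test" and a table "big_table". */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Cursors only exist inside a transaction block. */
    PGresult *res = PQexec(conn, "BEGIN");
    PQclear(res);

    res = PQexec(conn, "DECLARE big_cur CURSOR FOR SELECT * FROM big_table");
    if (PQresultStatus(res) != PGRES_COMMAND_OK) {
        fprintf(stderr, "DECLARE failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }
    PQclear(res);

    for (;;) {
        /* Each FETCH materializes at most 1000 rows client-side. */
        res = PQexec(conn, "FETCH 1000 FROM big_cur");
        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            /* An allocation failure during receive surfaces here as an
             * error result rather than crashing the client outright. */
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        int n = PQntuples(res);
        if (n == 0) {           /* cursor exhausted */
            PQclear(res);
            break;
        }
        for (int i = 0; i < n; i++)
            printf("%s\n", PQgetvalue(res, i, 0));
        PQclear(res);
    }

    res = PQexec(conn, "CLOSE big_cur");
    PQclear(res);
    res = PQexec(conn, "COMMIT");
    PQclear(res);
    PQfinish(conn);
    return 0;
}
```

The key design point is that memory use is proportional to the FETCH batch
size, not to the total result, which is what the original poster was asking
Oracle-style behaviour for.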
--
Nigel J. Andrews
Director
---
Logictree Systems Limited
Computer Consultants