From: Gavin Sherry <swm(at)alcove(dot)com(dot)au>
To: Dustin Sallings <dustin(at)spy(dot)net>
Cc: TTK Ciar <ttk2(at)hardpoint(dot)ciar(dot)org>, pgsql-performance(at)postgresql(dot)org
Subject: Re: psql large RSS (1.6GB)
Date: 2004-11-01 13:45:07
Message-ID: Pine.LNX.4.58.0411020042050.5412@linuxworld.com.au
Lists: pgsql-performance

On Sat, 30 Oct 2004, Dustin Sallings wrote:

> > If the solution is to just write a little client that uses perl
> > DBI to fetch rows one at a time and write them out, that's doable,
> > but it would be nice if psql could be made to "just work" without
> > the monster RSS.
>
> It wouldn't make a difference unless that driver implements the
> underlying protocol on its own.

Even though we can tell people to make use of cursors, it seems that
memory usage for large result sets should be addressed. A quick search of
the archives does not reveal any discussion about having libpq spill to
disk if a result set reaches some threshold. Has this been canvassed in
the past?
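
For reference, the cursor-based workaround looks like the following (a
minimal sketch; the table and cursor names are hypothetical). Because the
server holds the result and hands it over in batches, libpq only ever
buffers one FETCH worth of rows at a time:

```sql
-- Cursors must be used inside a transaction block.
BEGIN;

-- Declare a cursor over the large query instead of running it directly.
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;

-- Fetch in manageable batches; repeat until FETCH returns zero rows.
FETCH 1000 FROM big_cur;
FETCH 1000 FROM big_cur;

CLOSE big_cur;
COMMIT;
```

The client's peak memory is then bounded by the batch size rather than the
full result set, at the cost of extra round trips and an open transaction
for the duration of the scan.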
Thanks,
Gavin