From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "David M(dot) Kaplan" <david(dot)kaplan(at)ird(dot)fr>
Cc: Ryan Kelly <rpkelly22(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: problem with lost connection while running long PL/R query
Date: 2013-05-16 15:40:22
Message-ID: 15339.1368718822@sss.pgh.pa.us
Lists: pgsql-general

"David M. Kaplan" <david(dot)kaplan(at)ird(dot)fr> writes:
> Thanks for the help. You have definitely identified the problem, but I
> am still looking for a solution that works for me. I tried setting
> vm.overcommit_memory=2, but this just made the query crash quicker than
> before, though without killing the entire connection to the database. I
> imagine that this means that I really am trying to use more memory than
> the system can handle?
> I am wondering if there is a way to tell postgresql to flush a set of
> table lines out to disk so that the memory they are using can be
> liberated.
Assuming you don't have work_mem set to something unreasonably large,
it seems likely that the excessive memory consumption is inside your
PL/R function, and not the fault of Postgres per se. You might try
asking in some R-related forums about how to reduce the code's memory
usage.
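
As one illustration of trimming the R-side footprint: if the function currently pulls the whole result set at once with pg.spi.exec(), reading it through an SPI cursor in fixed-size batches keeps only one chunk in R's memory at a time. A rough sketch, with placeholder table/column names ("big_table", "x") and an arbitrary batch size, assuming PL/R's pg.spi.prepare / pg.spi.cursor_* helpers:

    CREATE OR REPLACE FUNCTION big_summary() RETURNS float8 AS $$
        # Placeholder query; fetch 10000 rows per batch instead of the whole
        # table, so R's working set stays roughly one chunk in size.
        plan  <- pg.spi.prepare("SELECT x FROM big_table")
        curs  <- pg.spi.cursor_open("big_summary_curs", plan)
        total <- 0
        repeat {
            chunk <- pg.spi.cursor_fetch(curs, TRUE, as.integer(10000))
            if (is.null(chunk) || nrow(chunk) == 0) break
            total <- total + sum(chunk$x)   # do the per-chunk work here
            rm(chunk); gc()                 # release the chunk before the next fetch
        }
        pg.spi.cursor_close(curs)
        total
    $$ LANGUAGE plr;

Whether this helps depends on whether the analysis can actually be done chunk by chunk; if the algorithm needs the full data set at once, the work has to be restructured on the R side.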
Also, if by "crash" this time you meant you got an "out of memory" error
from Postgres, there should be a memory map in the postmaster log
showing all the memory consumption Postgres itself is aware of. If that
doesn't add up to a lot, it would be pretty solid proof that the problem
is inside R. If there are any memory contexts that seem to have bloated
unreasonably, knowing which one(s) would be helpful information.
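
To locate that log, these settings can be checked from a psql session (a hypothetical example; actual locations vary by installation):

    SHOW logging_collector;   -- if on, server output is captured to log files
    SHOW log_directory;       -- relative paths are under the data directory
    SHOW data_directory;      -- base location of the cluster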
regards, tom lane