From: Andrew Nosenko <awn(at)bcs(dot)zp(dot)ua>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Memory Usage
Date: 2000-12-11 17:12:11
Message-ID: 20001211191211.A21102@bcs.zp.ua
Lists: pgsql-general
Tom Lane wrote:
: Nathan Barnett <nbarnett(at)centuries(dot)com> writes:
: > UPDATE pages SET createdtime = NOW();
:
: > Is there a reason why this would take up all of the memory??
:
: The now() function invocation leaks memory ... only a dozen or so bytes
: per invocation, but that adds up over millions of rows :-(. In 7.0.*
: the memory isn't recovered until end of statement. 7.1 fixes this by
: recovering temporary memory after each tuple.
As far as I can see, it is not that simple :-(
For UPDATE -- maybe, but not for SELECT.
While a SELECT is executing, Postgres (7.0.3) allocates as much memory as
is needed to store the full result set of the query. For
select * from some_big_table;
this can be several times larger than the physical memory + swap that
exist. :-(
In the general case I cannot prevent users from running these (and
similar) queries.
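For the queries that I issue myself I can at least fetch through a cursor
in chunks, which seems to keep memory use bounded (a sketch only; the
cursor name big_cur is just an example):

    begin;
    -- declare a cursor instead of selecting everything at once
    declare big_cur cursor for select * from some_big_table;
    -- fetch in chunks; repeat until no rows are returned
    fetch 1000 from big_cur;
    close big_cur;
    commit;

But this does not help for arbitrary queries sent by users.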
Question: Can I tell the postmaster (or a postgres backend) not to use
more than some amount of memory (on a per-backend basis or for all running
backends in total -- it makes no difference), and, when that limit is
exceeded, to switch to using temporary files, or simply to roll back the
transaction and close the connection if using temporary files is
impossible? (Yes, I mean that bringing down one postgres process is
cheaper than bringing down or hanging up the whole machine.)
Any ideas/workarounds?
--
Andrew W. Nosenko (awn(at)bcs(dot)zp(dot)ua)