From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | Justin Foster <jfoster(at)corder-eng(dot)com> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: Memory Leak |
Date: | 2000-11-05 04:09:00 |
Message-ID: | 29749.973397340@sss.pgh.pa.us |
Lists: | pgsql-general |
Justin Foster <jfoster(at)corder-eng(dot)com> writes:
> I am running a test which performs 1000 transactions of 1000 updates
> of a single column in a single table, or (1 transaction = 1000 updates)
> * 1000. I have no indices for any of the columns and the table has 3
> columns and 200 records. I do a VACUUM ANALYZE after every
> transaction. A single transaction takes about 3-6 seconds.
> It appears that RAM decreases at about 10 to 100K a second until it is
> all gone.
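[A minimal sketch of the workload described above might look like the following. The table and column names are assumptions for illustration only; the actual schema was not posted.]

```sql
-- Hypothetical schema: 3 columns, no indexes, 200 rows (names assumed).
CREATE TABLE test_tbl (id integer, val integer, note text);

-- One transaction = 1000 updates of a single column,
-- and the whole transaction is repeated 1000 times.
BEGIN;
UPDATE test_tbl SET val = val + 1 WHERE id = 1;  -- repeated 1000 times inside the transaction
COMMIT;

-- After every transaction:
VACUUM ANALYZE test_tbl;
```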
When you say "RAM decreases", do you mean that the process size of the
backend is growing?
We have some known problems with memory leakage during a query
(hopefully 7.1 will solve this), but I'm not aware of any problems
that would cause leakage that accumulates across queries --- at least
not for such a simple case as you describe. Normally, all memory used
during a query is freed at query end, so the test you describe ought
to run in a static backend process size.
Could we see the exact query/queries you are running, and the full
definition of the table?
regards, tom lane