From: Denis Perchine <dyp(at)perchine(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Postgres eats up memory when using cursors
Date: 2001-03-01 15:40:45
Message-ID: 01030121404505.00608@dyp.perchine.com
Lists: pgsql-general
On Thursday 01 March 2001 21:33, Tom Lane wrote:
> Denis Perchine <dyp(at)perchine(dot)com> writes:
> > I declare a cursor on the table of approx. 1 million rows.
> > And start fetching data by 1000 rows at each fetch.
> > Data processing can take quite a long time (3-4 days).
> > Theoretically the postgres process should remain the same in size.
> > But it grows... By the end of the 3rd day it became 256 MB large!!!!
>
> Query details please? You can't expect any results from such a
> vague report.
:-)))
That's right.
declare senders_c cursor for select email, first_name, last_name from senders
order by email;
fetch 1000 from senders_c;
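The client-side loop around that FETCH looks roughly like this (a minimal Python sketch; `fetch_batch` is a hypothetical stand-in for issuing "FETCH 1000 FROM senders_c" over a live connection, used here so the batching logic is visible on its own):

```python
def fetch_batch(rows, pos, batch_size=1000):
    # Stand-in for "FETCH batch_size FROM senders_c": return the next
    # slice of rows and the advanced cursor position.
    batch = rows[pos:pos + batch_size]
    return batch, pos + batch_size

def process_all(rows, batch_size=1000):
    # Walk the cursor to exhaustion; only one batch is held at a time,
    # so client-side memory should stay flat regardless of row count.
    pos = 0
    processed = 0
    while True:
        batch, pos = fetch_batch(rows, pos, batch_size)
        if not batch:
            break  # cursor exhausted: FETCH returned zero rows
        processed += len(batch)  # per-row work goes here
    return processed
```

The point of the sketch is that nothing on the client accumulates across batches, which is why the growth has to be happening in the backend process.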
db=# explain declare senders_c cursor for select email, first_name, last_name
from senders order by email;
NOTICE: QUERY PLAN:
Index Scan using senders_email_key on senders (cost=0.00..197005.37
rows=928696 width=36)
db=# \d senders
Table "senders"
Attribute | Type | Modifier
------------+-----------+----------
email | text |
first_name | text |
last_name | text |
stamp | timestamp |
Index: senders_email_key
db=# \d senders_email_key
Index "senders_email_key"
Attribute | Type
-----------+------
email | text
unique btree
That's all. I could not imagine anything simpler...
--
Sincerely Yours,
Denis Perchine
----------------------------------
E-Mail: dyp(at)perchine(dot)com
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
----------------------------------