From: Bruce Guenter <bruceg(at)em(dot)ca>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Expectations of MEM requirements for a DB with large tables.
Date: 2000-11-06 05:34:19
Message-ID: 20001105233419.A10018@em.ca
Lists: pgsql-general
On Sun, Nov 05, 2000 at 09:17:52PM -0800, Michael Miyabara-McCaskey wrote:
> Anyway, I crashed my system the other day when I did a "select *" from one
> of my large tables (about 5.5gb in size). Now this is not something that
> will normally happen, as I would normally have some criteria to reduce the
> output size, but it got me thinking...
>
> Does anyone know what the ratio of data output size (say from a select) to
> the amount of RAM used is?
You are really asking two questions: how much memory the back end
takes to execute that query, and how much memory the front end
(psql, I assume) takes to receive the result.
To answer the first: the back ends allocate a fixed pool of buffers when
they start up, and never use more RAM than is in that pool. If they
need more temporary space (e.g., for sorting), they create temporary
files as necessary.
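For reference, both of those limits are tunable at startup. A sketch, assuming the option names of PostgreSQL 7.x (the values here are purely illustrative, not recommendations):

```
# -B sets the shared buffer pool (in 8kB pages); -S sets per-sort
# memory in kB before a sort spills to temporary files.
postmaster -B 1024 -o '-S 4096'
```

So with these illustrative numbers, the buffer pool would be capped at 8MB, and any sort needing more than 4MB would go to disk rather than RAM.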
To answer the second: with a plain "SELECT *", psql buffers the
entire result set in RAM before printing anything out. If you have
more than a trivial number of rows to fetch from the database (and
5.5GB is certainly more than trivial), use a cursor and fetch only a few
hundred rows at a time.
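A minimal sketch of the cursor approach from psql (the table and cursor names are made up for illustration):

```sql
BEGIN;                                -- cursors must live inside a transaction
DECLARE bigtab_cur CURSOR FOR SELECT * FROM bigtab;
FETCH 500 FROM bigtab_cur;            -- pull 500 rows at a time
FETCH 500 FROM bigtab_cur;            -- repeat until FETCH returns no rows
CLOSE bigtab_cur;
COMMIT;
```

This way only a few hundred rows are ever held in psql's memory at once, regardless of the total size of the table.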
--
Bruce Guenter <bruceg(at)em(dot)ca> http://em.ca/~bruceg/