From: John R Pierce <pierce(at)hogranch(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Is there any method to limit resource usage in PG?
Date: 2013-08-28 04:13:00
Message-ID: 521D78CC.50505@hogranch.com
Lists: pgsql-general
On 8/27/2013 6:49 PM, 高健 wrote:
> For a query and insert action,
> firstly, the data is pulled into the private memory of the backend
> process which is servicing the client.
if you're returning a single result of 3 million records, yes, you're
going to need memory to store that entire result set before you can do
anything with it.
again, if you're just writing this data into another table, why not do
it all in SQL?

    INSERT INTO newtable (field1, field2, field3, ...)
        SELECT <your complex 3 million row query here>;

that will do the whole thing without having to move any data into client
space. this will be faster and more memory efficient.
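
as a rough illustration of the shape of that (table and column names here
are made up, not from your schema):

    -- everything happens inside the server; no rows travel to the client
    INSERT INTO order_summary (customer_id, order_count, total_amount)
        SELECT customer_id, count(*), sum(amount)
        FROM orders
        WHERE order_date >= '2013-01-01'
        GROUP BY customer_id;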
now, if your Java client HAS to process the data it's selecting and do
complex stuff with it that you just don't think SQL can/should do, then
you'll need to use a CURSOR.
    DECLARE cursorname CURSOR FOR SELECT <your messy query here>;

then loop on...

    FETCH 100 FROM cursorname;

to return the next 100 rows of this cursor. once you're done with it,

    CLOSE cursorname;

will release the cursor.
I suggest you read the specifics of these cursor commands to fully
understand them.
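
one detail worth knowing: a plain cursor only lives inside a transaction
block (unless you DECLARE it WITH HOLD), so the whole sequence needs to be
wrapped in BEGIN/COMMIT. a rough sketch, with a made-up cursor name:

    BEGIN;
    DECLARE big_rows CURSOR FOR SELECT <your messy query here>;
    FETCH 100 FROM big_rows;    -- repeat until FETCH returns no rows
    CLOSE big_rows;
    COMMIT;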
--
john r pierce 37N 122W
somewhere on the middle of the left coast