From: Matthew Churcher <matthew(dot)churcher(at)realvnc(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Cost of initiating cursors
Date: 2013-04-11 14:40:32
Message-ID: 799630713.60765.1365691232899.JavaMail.root@realvnc.com
Lists: pgsql-general
Hi All,
I have a question regarding the cost of initiating a cursor (a PL/pgSQL FOR loop) over a large number of rows (100,000+) when few or none of those rows are actually retrieved.
For example:
FOR curr_foo IN
    SELECT foo FROM bar
    WHERE wibble
    ORDER BY wobble
LOOP
    EXIT;  -- always break out of the loop immediately
END LOOP;
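
For completeness, here is a self-contained sketch of the pattern (the schema is purely illustrative; bar, foo, wibble and wobble are just the placeholders from the snippet above, assumed here to be a boolean flag and a sortable integer):

-- Illustrative schema and data (200,000 rows, in the spirit of the
-- 100,000+ mentioned above):
CREATE TABLE bar (foo integer, wibble boolean, wobble integer);
CREATE INDEX bar_wobble_idx ON bar (wobble);
INSERT INTO bar
    SELECT i, (i % 10 = 0), (random() * 1000000)::integer
    FROM generate_series(1, 200000) AS i;

-- The loop above, wrapped in a DO block so it can be run directly:
DO $$
DECLARE
    curr_foo record;
BEGIN
    FOR curr_foo IN
        SELECT foo FROM bar
        WHERE wibble
        ORDER BY wobble
    LOOP
        EXIT;  -- break out immediately; the rows are never used
    END LOOP;
END;
$$;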
For some reason this is hugely expensive and slow regardless of the execution plan selected and the indexes available. The WHERE clause, and particularly the ORDER BY clause, appear to account for most of the cost despite appropriate indexes being in place.
It's the combination of the following behaviours that I find particularly perplexing:
1. Removing the WHERE and ORDER BY clauses results in a very fast query.
2. Adding a LIMIT clause also results in a very fast query.
This is perplexing because I don't see why ORDER BY should affect the cost of opening the cursor when indexes are in place; and, given that it does, why would LIMIT reduce the cost of the ORDER BY, since PostgreSQL would still need to sort all of the candidate records? This all assumes the cursor isn't actually retrieving all the rows, which is my understanding of how it should work. The configuration parameter 'cursor_tuple_fraction' has no observable effect.
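
For reference, this is how the plans can be compared (a sketch using the illustrative schema above; EXPLAIN DECLARE shows the plan a cursor would be given, and cursor_tuple_fraction is the planner's estimate of the fraction of a cursor's rows that will be fetched):

-- Plan for the bare query: with ORDER BY and no LIMIT, the planner may
-- choose a scan plus an explicit Sort, and a Sort node must consume its
-- entire input before it can emit even the first row:
EXPLAIN SELECT foo FROM bar WHERE wibble ORDER BY wobble;

-- Plan with LIMIT: the planner now knows only one row is wanted, so a
-- fast-start plan (e.g. an index scan on wobble) may win instead:
EXPLAIN SELECT foo FROM bar WHERE wibble ORDER BY wobble LIMIT 1;

-- Plan as a cursor sees it; lowering cursor_tuple_fraction should push
-- the planner towards fast-start plans here too:
BEGIN;
SET LOCAL cursor_tuple_fraction = 0.01;
EXPLAIN DECLARE c CURSOR FOR
    SELECT foo FROM bar WHERE wibble ORDER BY wobble;
ROLLBACK;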
This is being seen on PostgreSQL 9.1 (Ubuntu x64), on a server with fast disks and a large amount of memory. Basic memory tuning has also been performed.
Thanks in advance, I appreciate any insights.
Kind regards,
Matthew Churcher