From: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>
To: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Joe Conway <mail(at)joeconway(dot)com>
Subject: Re: Faster methods for getting SPI results
Date: 2017-03-02 16:03:44
Message-ID: 7249067c-82a9-5807-abed-ffd95c812d7c@2ndquadrant.com
Lists: pgsql-hackers
On 12/20/16 23:14, Jim Nasby wrote:
> I've been looking at the performance of SPI calls within plpython.
> There's a roughly 1.5x difference from equivalent python code just in
> pulling data out of the SPI tuplestore. Some of that is due to an
> inefficiency in how plpython is creating result dictionaries, but fixing
> that is ultimately a dead-end: if you're dealing with a lot of results
> in python, you want a tuple of arrays, not an array of tuples.
There is nothing that requires us to materialize the results into an
actual list of actual rows. We could wrap the SPI_tuptable into a
Python object and implement __getitem__ or __iter__ to emulate sequence
or mapping access.
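A minimal sketch of that idea in pure Python (the class name `SPIResult` and the way rows reach the wrapper are illustrative assumptions; in plpython the wrapper would be implemented in C around the actual SPI_tuptable): the wrapper holds the raw rows and only builds a per-row dictionary when a row is actually accessed, instead of materializing a full list of dicts up front.

```python
# Hypothetical illustration, not plpython's real implementation: a lazy
# sequence wrapper over query results. Rows are converted to dicts only
# on access via __getitem__ / __iter__.
class SPIResult:
    def __init__(self, rows, colnames):
        self._rows = rows          # stand-in for the underlying SPI_tuptable
        self._colnames = colnames  # column names, in order

    def __len__(self):
        return len(self._rows)

    def __getitem__(self, i):
        # Build the row dictionary lazily, only for the row requested.
        return dict(zip(self._colnames, self._rows[i]))

    def __iter__(self):
        # Yield rows one at a time; nothing is materialized in advance.
        for i in range(len(self._rows)):
            yield self[i]

result = SPIResult([(1, 'a'), (2, 'b')], ['id', 'val'])
print(result[0])                   # {'id': 1, 'val': 'a'}
print([r['val'] for r in result])  # ['a', 'b']
```

Callers still see something that behaves like a list of row dicts, but the conversion cost is paid per access, and columnar extraction (e.g. pulling one column out of many rows) avoids building dicts for fields that are never read.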
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services