From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Seref Arikan <serefarikan(at)kurumsalteknoloji(dot)com>
Cc: PG-General Mailing List <pgsql-general(at)postgresql(dot)org>
Subject: Re: Suggestions for the best strategy to emulate returning multiple sets of results
Date: 2012-10-10 13:55:16
Message-ID: CAHyXU0z3DUMwfSPOybUKCph+iQPLb8PQjWeSAo4fcy5pj2fgkA@mail.gmail.com
Lists: pgsql-general
On Wed, Oct 10, 2012 at 8:27 AM, Seref Arikan
<serefarikan(at)kurumsalteknoloji(dot)com> wrote:
> Hi Merlin,
> Thanks for the response. At the moment, the main function creates two
> temp tables that drop on commit, and Python functions fill them. Not too
> bad, but I'd like to push these temp tables into RAM, which is a bit
> tricky since PostgreSQL has no direct way of doing that (a topic that has
> been discussed on this mailing list in the past).
>
> The global variable idea is interesting, though. I had not encountered it
> before; is it the global dictionary SD/GD mentioned here:
> http://www.postgresql.org/docs/9.0/static/plpython-sharing.html ?
> It may help perform the expensive transformations once and reuse the
> results.
Yeah. Maybe, though you might find that the overhead of temp tables is
already pretty low: they are mostly RAM-based in typical usage since they
aren't synced to disk. I actually find that the greatest overhead in using
them is creation and dropping, so for very low-latency transactions I use
an unlogged permanent table with the value returned by txid_current() as
the leading field in the key.
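
A minimal sketch of that pattern (the table and column names here are
illustrative, not from the thread):

    -- unlogged work table keyed by the writing transaction's id
    CREATE UNLOGGED TABLE work_results (
        txid    bigint,  -- value of txid_current() for the writer
        seq     int,     -- position within this transaction's result set
        payload text,
        PRIMARY KEY (txid, seq)
    );

    -- the writer tags its rows with txid_current()
    INSERT INTO work_results (txid, seq, payload)
    SELECT txid_current(), g, 'row ' || g
    FROM generate_series(1, 3) g;

    -- a reader in the same transaction fetches only its own rows
    SELECT payload FROM work_results
    WHERE txid = txid_current()
    ORDER BY seq;

    -- since the table is permanent, clean up when done
    DELETE FROM work_results WHERE txid = txid_current();

Because the table is unlogged it skips WAL, so writes stay cheap, and
there is no per-transaction CREATE/DROP cost.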
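
As for the GD idea from the linked page, caching an expensive
transformation in PL/Python might look roughly like this (the function
name and the stand-in transformation are made up for illustration):

    CREATE OR REPLACE FUNCTION get_transformed(key text) RETURNS text AS $$
        # GD is shared by all PL/Python functions in this session,
        # so the cache survives across calls
        if 'xform_cache' not in GD:
            GD['xform_cache'] = {}
        cache = GD['xform_cache']
        if key not in cache:
            # stand-in for the expensive transformation
            cache[key] = key.upper()
        return cache[key]
    $$ LANGUAGE plpythonu;

Note that GD is per-session and per-interpreter, so each backend keeps
its own cache.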
merlin