From: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)oss(dot)ntt(dot)co(dot)jp>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Marko Kreen <markokr(at)gmail(dot)com>, greg(at)2ndquadrant(dot)com, pgsql-hackers(at)postgresql(dot)org, mmoncure(at)gmail(dot)com, shigeru(dot)hanada(at)gmail(dot)com
Subject: Re: Speed dblink using alternate libpq tuple storage
Date: 2012-04-04 17:28:41
Message-ID: CAM103DtrZUskPo7Au3PDsSer7SQ2F8VGx=0DgQfFUdSzo5ckew@mail.gmail.com
Lists: pgsql-hackers
Hello, this is the new version of the dblink patch.
- Calling dblink_is_busy prevents the row processor from being used (a sketch of the idea follows the list below).
- Some PGresult leaks fixed.
- Rebased to current head.
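
For readers following along, here is a minimal sketch of why calling dblink_is_busy disables the row-processor fast path. This is not code from the attached patch; the struct, field, and function names are hypothetical, and only PQconsumeInput() and PQisBusy() are real libpq calls.

/*
 * Minimal sketch (not from the attached patch): once dblink_is_busy() has
 * been called, the result may be fetched long after the originating call,
 * when the tuple descriptor needed by the row processor is no longer at
 * hand, so the connection is flagged to use the traditional PQgetResult()
 * path instead.
 */
#include <stdbool.h>
#include "libpq-fe.h"

typedef struct remoteConnSketch
{
	PGconn	   *conn;
	bool		use_rowproc;	/* cleared once dblink_is_busy() is used */
} remoteConnSketch;

static int
sketch_is_busy(remoteConnSketch *rconn)
{
	/* Result will be fetched later via dblink_get_result(); give up on
	 * the row processor for this connection. */
	rconn->use_rowproc = false;

	if (!PQconsumeInput(rconn->conn))
		return 1;				/* treat connection trouble as "busy" */
	return PQisBusy(rconn->conn);
}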
> A hack on top of that hack would be to collect the data into a
> tuplestore that contains all text columns, and then convert to the
> correct rowtype during dblink_get_result, but that seems rather ugly
> and not terribly high-performance.
>
> What I'm currently thinking we should do is just use the old method
> for async queries, and only optimize the synchronous case.
Ok, I agree with you except for the performance issue. I have given up on
using the row processor for async queries when dblink_is_busy has been called.
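
To put the agreed split into code form, a rough sketch follows; it is not the patch itself. PQsetRowProcessor(), PQrowProcessor, and PGdataValue are the API proposed in this patch series, not part of released libpq, and their declarations here are assumptions; the storeInfo type and tuplestore handling are likewise hypothetical stand-ins for the dblink internals. Only the synchronous case installs the callback, because only there is the result's rowtype known before the query is sent.

/*
 * Rough sketch of the synchronous-only optimization.  PQsetRowProcessor(),
 * PQrowProcessor and PGdataValue are the API proposed in this patch series,
 * NOT released libpq; their declarations here are assumptions.
 */
#include <stddef.h>
#include "libpq-fe.h"

typedef struct
{
	int			len;			/* -1 means SQL NULL (assumed convention) */
	const char *value;
} PGdataValue;

typedef int (*PQrowProcessor) (PGresult *res, const PGdataValue *columns,
							   const char **errmsgp, void *param);
extern void PQsetRowProcessor(PGconn *conn, PQrowProcessor func, void *param);

typedef struct
{
	void	   *tupstore;		/* stand-in for dblink's Tuplestorestate */
	int			nfields;
} storeInfoSketch;

/* Called once per incoming row; stuff it straight into the tuplestore. */
static int
store_one_row(PGresult *res, const PGdataValue *columns,
			  const char **errmsgp, void *param)
{
	storeInfoSketch *sinfo = (storeInfoSketch *) param;

	/* ... convert columns[0 .. sinfo->nfields - 1] and append to
	 * sinfo->tupstore (tuplestore_putvalues() in the real backend) ... */
	(void) res;
	(void) columns;
	(void) errmsgp;
	(void) sinfo;
	return 1;					/* nonzero = row accepted */
}

/*
 * Synchronous dblink() case: the target rowtype is known before the query
 * is sent, so rows can be streamed; async dblink_send_query() results are
 * still collected with plain PQgetResult() and materialized later.
 */
static void
sync_query_sketch(PGconn *conn, storeInfoSketch *sinfo, const char *sql)
{
	PGresult   *res;

	PQsetRowProcessor(conn, store_one_row, sinfo);
	res = PQexec(conn, sql);	/* data rows arrive via store_one_row() */
	PQclear(res);				/* header-only result under the proposal */
	PQsetRowProcessor(conn, NULL, NULL);	/* assumed to restore default */
}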
> I thought for awhile that this might represent a generic deficiency
> in the whole concept of a row processor, but probably it's mostly
> down to dblink's rather bizarre API. It would be unusual I think for
> people to want a row processor that couldn't know what to do until
> after the entire query result is received.
I hope so. Thank you.
regards,
--
Kyotaro Horiguchi
NTT Open Source Software Center
Attachment: dblink_rowproc_20120405.patch (application/octet-stream, 28.9 KB)