From: Corey Huinker <corey(dot)huinker(at)gmail(dot)com>
To: Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, Andres Freund <andres(at)anarazel(dot)de>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>, mkellycs(at)gmail(dot)com, Ashutosh Bapat <ashutosh(dot)bapat(at)enterprisedb(dot)com>
Subject: Re: [POC] FETCH limited by bytes.
Date: 2015-12-24 23:31:42
Message-ID: CADkLM=fh+ZUEykcCDu8P0PPrOyYwLEp5OBRjKCe5O7swqDF65w@mail.gmail.com
Lists: pgsql-hackers
On Wed, Dec 23, 2015 at 3:08 PM, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com> wrote:
> On 12/23/15 12:15 PM, Corey Huinker wrote:
>
>> That's fair. I'm still at a loss for how to show that the fetch size was
>> respected by the remote server, suggestions welcome!
>>
>
> A combination of repeat() and generate_series()?
>
> I'm guessing it's not that obvious and that I'm missing something; can you
> elaborate?
> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
>
I'll try. So the basic test of whether the FDW respected the fetch limit is
this:
1. Create a foreign server using postgres_fdw, and a foreign table on it.
2. Run a query against that table. It works. Great.
3. Alter the server, setting the fetch size option to 101 (or any number
different from the default of 100).
4. Run the same query against the table. The server side should show that the
result set was fetched in 101-row chunks[1].
5. Alter the table, setting the fetch size option to 102 (or any number
different from 100 and from the one picked in step 3).
6. Run the same query against the table. The server side should show that the
result set was fetched in 102-row chunks[1].
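The steps above might be sketched roughly as follows. This is only an
illustration, assuming the option is named fetch_size as in the patch under
discussion; the server, table, and column names are made up:

```sql
-- 1. Foreign server and table (loopback connection; names are illustrative).
CREATE EXTENSION postgres_fdw;
CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'postgres');
CREATE USER MAPPING FOR CURRENT_USER SERVER loopback;
CREATE FOREIGN TABLE ft (a int) SERVER loopback OPTIONS (table_name 't');

-- 2. Baseline query: the default fetch size (100) applies.
SELECT count(*) FROM ft;

-- 3/4. Server-level option: fetches should now arrive in 101-row chunks.
ALTER SERVER loopback OPTIONS (ADD fetch_size '101');
SELECT count(*) FROM ft;

-- 5/6. Table-level option overrides the server-level one: 102-row chunks.
ALTER FOREIGN TABLE ft OPTIONS (ADD fetch_size '102');
SELECT count(*) FROM ft;
```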
The parts marked [1] are the problem, because the way I know it works is by
looking at the query console on the remote redshift cluster, where the query
column reads "FETCH 101 in c1" or some such rather than the query text.
That's great: *I* know it works, but I don't know how to capture that
information from a vanilla postgres server, and I don't know whether we can do
the regression test over a loopback connection, or whether we'd need to set up
a second pg instance as scaffolding for the regression test.