I had these same issues with the PeerDirect version also.
"Hannu Krosing" <hannu(at)tm(dot)ee> wrote in message
news:1062693009(dot)6174(dot)21(dot)camel(at)fuji(dot)krosing(dot)net(dot)(dot)(dot)
> Relaxin kirjutas N, 04.09.2003 kell 17:35:
> > So after you did that, were you able to position to ANY record within
> > the resultset?
> >
> > Ex. Position 100,000; then to Position 5; then to position 50,000,
> > etc...
>
> not in the case of:
> time psql test100k -c 'select * from test' > /dev/null
> as the whole result would be written to /dev/null (i.e. discarded)
>
> Yes in case of python: after doing
>
> res = con.query('select * from test') # 3 sec - perform query
> list = res.getresult() # 1 sec - construct list of tuples
>
> the whole 128k records are in a Python list,
> so that I can immediately access any record by Python list syntax,
> i.e. list[5], list[50000] etc.
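
For what it's worth, that kind of client-side positioning is easy to check once
the rows are in a Python list: getresult() builds the whole list, and jumping to
any position afterwards is plain list indexing. A minimal sketch, assuming the
same PyGreSQL "pg" module, the database/table names from the session below, and
enough rows for the indexes used:

import pg, time

con = pg.connect('test100k')                 # database name from the quoted example
rows = con.query('select * from test').getresult()

t1 = time.time()
sample = rows[100000], rows[5], rows[50000]  # arbitrary positions, any order
t2 = time.time()
print len(rows), 'rows; three positioned lookups took', t2 - t1, 'seconds'
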
>
> > If you are able to do that and have your positioned row available to you
> > immediately, then I'll believe that it's the ODBC driver.
>
> It can also be the Cygwin port, which is known to have several problems,
> and if you run both your client and server on the same machine, then it
> can also be an interaction of the two processes (cygwin/pgsql server and
> native win32 ODBC client) not playing together very well.
>
> > "Hannu Krosing" <hannu(at)tm(dot)ee> wrote in message
> > news:1062673303(dot)5200(dot)135(dot)camel(at)fuji(dot)krosing(dot)net(dot)(dot)(dot)
> > > Relaxin kirjutas N, 04.09.2003 kell 03:28:
> > > > I have a table with 102,384 records in it, each record is 934 bytes.
> > >
> > > I created a test database on my Linux (RH9) laptop with a 30GB/4200RPM
> > > IDE drive and a P3-1133MHz, 768MB, populated it with 128000 rows of
> > > 930 bytes each and did
> > >
> > > [hannu(at)fuji hannu]$ time psql test100k -c 'select * from test' >
> > > /dev/null
> > >
> > > real 0m3.970s
> > > user 0m0.980s
> > > sys 0m0.570s
> > >
> > > so it seems definitely not a problem with postgres as such, but
> > > perhaps with Cygwin and/or the ODBC driver
> > >
> > > I also ran the same query using the "standard" pg adapter:
> > >
> > > >>> import pg, time
> > > >>>
> > > >>> con = pg.connect('test100k')
> > > >>>
> > > >>> def getall():
> > > ... t1 = time.time()
> > > ... res = con.query('select * from test')
> > > ... t2 = time.time()
> > > ... list = res.getresult()
> > > ... t3 = time.time()
> > > ... print t2 - t1, t3-t2
> > > ...
> > > >>> getall()
> > > 3.27637195587 1.10105705261
> > > >>> getall()
> > > 3.07413101196 0.996125936508
> > > >>> getall()
> > > 3.03377199173 1.07322502136
> > >
> > > which gave similar results
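
To answer the positioning question without pulling all 128k rows to the client
first, the same thing can also be done server-side with a scrollable cursor, so
only the requested row crosses the connection. A minimal sketch, again assuming
the pg module and test table above and a server/driver combination that supports
scrollable cursors (FETCH ABSOLUTE n returns the n-th row of the result set):

import pg

con = pg.connect('test100k')
con.query('begin')
con.query('declare c scroll cursor for select * from test')

for pos in (100000, 5, 50000):           # any position, any order
    row = con.query('fetch absolute %d from c' % pos).getresult()
    print pos, row[0][:3]                # first few fields of that row

con.query('close c')
con.query('commit')
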
> -------------------
> Hannu
>
>