Re: Data grid: fetching/scrolling data on user demand

From: Dave Page <dpage(at)pgadmin(dot)org>
To: Tomek <tomek(at)apostata(dot)org>
Cc: pgAdmin Support <pgadmin-support(at)postgresql(dot)org>
Subject: Re: Data grid: fetching/scrolling data on user demand
Date: 2017-10-17 11:01:59
Message-ID: CA+OCxow-xtvPeF7i_6Bbkur=h3ore3zF9fQALP+PkZN8_v33PQ@mail.gmail.com
Lists: pgadmin-support

On Tue, Oct 17, 2017 at 11:35 AM, Tomek <tomek(at)apostata(dot)org> wrote:

> Hi,
>
> >> It is not exactly true... In v3 the query is executed, fetched, and all
> >> rows are displayed,
> >
> > No they're not, though they are all transferred to the client, which is
> > why it's slower.
>
> They are not what?

The handling of rows in pgAdmin 3 is not as you described.

> What is slower is the "display" part in both versions. You have the data
> from the server and then you push it to the display.
> I've done a quick test - a table with 650000 rows / 45 columns, query
> SELECT * FROM table LIMIT 100000.
> With the default ON_DEMAND_RECORD_COUNT it takes around 5 seconds; with
> ON_DEMAND_RECORD_COUNT = 100000 it takes 25 seconds...
> That is 20 seconds spent only on displaying...
>

So? No human can read that quickly.

>
> >> For me this idea of "load on demand" (which in reality is "display on
> >> demand") is pointless. It is done only because the main lag of v4 comes
> >> from the interface. I don't see any other purpose for it... If you know
> >> (and you do) that v4 can't handle big results, add pagination like every
> >> other webapp...
> >
> > We did that in the first beta, and users overwhelmingly said they didn't
> > like or want pagination.
> >
> > What we have now gives users the interface they want, and presents the
> > data to them quickly - far more quickly than pgAdmin 3 ever did when
> > working with larger resultsets.
> >
> > If that's pointless for you, then that's fine, but other users
> > appreciate the speed and responsiveness.
>
> I don't know of any users (we are the users) who are happy that selecting
> 10000 rows requires dragging the scrollbar five times to see the 5001st
> record...
>
> By saying pointless I meant that if I want 10000 rows I should get 10000
> rows; if I want to limit my data I'll use LIMIT. But if the UI can't handle
> big results, just give me the easiest/fastest way to get to my data.
>

Then increase ON_DEMAND_RECORD_COUNT to a higher value if that suits the
way you work. Very few people scroll as you suggest - if you know you want
to see the 5001st record, it's common to use LIMIT/OFFSET. If you don't
know, then you're almost certainly going to scroll page by page or similar,
reading the results as you go - in which case, the batch loading will speed
things up for you as you'll have a much quicker "time to view first page".
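
For example, a local override might look like the sketch below. This
assumes the usual pgAdmin 4 layout, where settings from config.py can be
overridden in a config_local.py placed in the same directory; the default
value and the file location may differ between versions, so check your
installation.

    # config_local.py -- assumed to sit alongside pgAdmin 4's config.py
    # (a sketch; verify the default and path for your installed version)

    # Number of rows the query tool fetches per batch for the data grid.
    # A larger batch means fewer fetches while scrolling, at the cost of
    # a slower "time to view first page".
    ON_DEMAND_RECORD_COUNT = 10000

Restart the pgAdmin 4 server process after changing the file so the new
value takes effect.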

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
