Re: pgadmin4 slow with large tables compared to pgadmin3

From: Dave Page <dpage(at)pgadmin(dot)org>
To: Akshay Joshi <akshay(dot)joshi(at)enterprisedb(dot)com>
Cc: Colin Beckingham <colbec(at)kingston(dot)net>, pgadmin-hackers <pgadmin-hackers(at)postgresql(dot)org>
Subject: Re: pgadmin4 slow with large tables compared to pgadmin3
Date: 2016-06-14 09:10:52
Message-ID: CA+OCxowzUrfR_7gQy5j=DOcSke3oP+JkOCc20a7gQa=y9izaLQ@mail.gmail.com
Lists: pgadmin-hackers

Hi

2016-06-14 8:17 GMT+01:00 Akshay Joshi <akshay(dot)joshi(at)enterprisedb(dot)com>:
> Hi Dave
>
> 2016-06-13 21:47 GMT+05:30 Dave Page <dpage(at)pgadmin(dot)org>:
>>
>> On Mon, Jun 13, 2016 at 5:01 PM, Colin Beckingham <colbec(at)kingston(dot)net>
>> wrote:
>> > I have the latest fully patched pgadmin4. Runs fine on openSUSE Leap
>> > 42.1
>> > using browser Firefox.
>> > When I load a full large table such as the words table (140K records)
>> > from
>> > wordnet, pgadmin3 takes about 2 seconds to display.
>> > On pgadmin4 I wait about 30+ seconds and click through about 5 reminders
>> > that "a script has stopped working, do you want to continue."
>> > Eventually the table loads so this is not a bug report, more a question
>> > about how to streamline access to large tables. I am quite aware that it
>> > would run much faster by running a query with criteria asking for a
>> > subset
>> > of the table records, but just wondering if this is to be standard in
>> > pgadmin4. I can also disable the warnings, but this will prevent me from
>> > seeing issues with other scripts.
>>
>> Hmm, I tested this with a simple query, and got the crash below :-o.
>> Akshay, can you investigate please?
>
>
> I have tested the same query (SELECT * FROM pg_description a,
> pg_description b) and it crashes with the error message below:
>
> RuntimeError: maximum recursion depth exceeded
> Fatal Python error: Cannot recover from stack overflow.

Yeah, same as me.

> According to our logic, we poll the psycopg2 connection and check its
> status; if it is busy (reading/writing), we call the same function
> recursively. So for a long-running query, Python throws this error. I
> searched for a way to raise the limit and found sys.setrecursionlimit(),
> but we don't know what limit to set, and raising it is not recommended.

Calling anything recursively like that is doomed to failure.
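A loop-based wait avoids the stack overflow entirely. The sketch below is illustrative only, not pgAdmin's actual code: a stand-in connection object simulates psycopg2's POLL_READ/POLL_OK states so the pattern is runnable without a server. Real code would call select.select() on conn.fileno() between polls instead of spinning.

```python
# Illustrative sketch (not pgAdmin's code): wait for a busy connection with
# a loop instead of recursion, so a long-running query cannot exhaust the
# Python stack. POLL_* values and FakeConnection are stand-ins for
# psycopg2.extensions.POLL_OK / POLL_READ / POLL_WRITE and a real connection.
POLL_OK, POLL_READ, POLL_WRITE = 0, 1, 2

class FakeConnection:
    """Reports 'busy' many times before becoming ready."""
    def __init__(self, busy_polls):
        self.busy_polls = busy_polls

    def poll(self):
        if self.busy_polls > 0:
            self.busy_polls -= 1
            return POLL_READ
        return POLL_OK

def wait_ready(conn):
    # Iterative wait: 100,000 busy polls cannot overflow the stack, whereas
    # a recursive version would die near Python's default ~1,000-frame limit.
    while True:
        state = conn.poll()
        if state == POLL_OK:
            return
        elif state in (POLL_READ, POLL_WRITE):
            continue  # real code: select.select() on conn.fileno() here
        else:
            raise RuntimeError("unexpected poll state: %r" % state)

wait_ready(FakeConnection(busy_polls=100_000))
print("ready")
```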

> Then I tried it with a blocking call and got the error below:
> Error message: out of memory for query result
>
> We need to change our recursion logic, and I'll have to figure out a
> solution for the out-of-memory issue.

Agreed. Thanks.
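For the out-of-memory failure, one common approach, sketched here under the assumption that streaming results is acceptable, is to fetch rows in fixed-size batches rather than materialising the whole result set client-side. In psycopg2 the real equivalents would be cursor.fetchmany() or a named (server-side) cursor; fake_fetchmany below is a hypothetical stand-in so the pattern runs without a database.

```python
# Illustrative sketch (not pgAdmin's code): stream a large result set in
# fixed-size batches instead of one fetchall(), so the client never holds
# the entire result in memory. fetch_batch stands in for cursor.fetchmany().
def stream_rows(fetch_batch, batch_size=2000):
    """Yield rows batch by batch until the source is exhausted."""
    while True:
        batch = fetch_batch(batch_size)
        if not batch:
            return
        for row in batch:
            yield row

# Simulate a 10,000-row result set served in batches.
data = list(range(10_000))

def fake_fetchmany(n, _pos=[0]):
    # Tuple assignment reads the old position before advancing it.
    start, _pos[0] = _pos[0], min(_pos[0] + n, len(data))
    return data[start:_pos[0]]

rows = list(stream_rows(fake_fetchmany))
print(len(rows))  # 10000
```

With a real psycopg2 connection, passing a name to conn.cursor() creates a server-side cursor, which keeps the result on the server and transfers it in chunks.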

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
