Re: [pgAdmin4][patch][runtime]: RM#2829, RM#2491 - pgAdmin4 crashes while saving CSV data from Query tool

From: Dave Page <dpage(at)pgadmin(dot)org>
To: Neel Patel <neel(dot)patel(at)enterprisedb(dot)com>
Cc: pgadmin-hackers <pgadmin-hackers(at)postgresql(dot)org>
Subject: Re: [pgAdmin4][patch][runtime]: RM#2829, RM#2491 - pgAdmin4 crashes while saving CSV data from Query tool
Date: 2017-11-20 14:41:36
Message-ID: CA+OCxoxM6T3m9zQ=Q+D6G0ALdMhYRFgGbj4dBjBWD-c4HUXtkA@mail.gmail.com
Lists: pgadmin-hackers

Hi

On Mon, Nov 20, 2017 at 7:06 AM, Neel Patel <neel(dot)patel(at)enterprisedb(dot)com>
wrote:

> Hi Dave,
>
> Ignore above debug patch.
>
> Can you please test this patch? From the hang-screen perspective, I have
> added the number of bytes downloaded so far to the message, as a browser
> does, so that the user can tell the download is in progress even when no
> total size is available.
>
> Let me know if you still see the hang or any other issue.
>

Oddly - that seems to be working. I'll commit it, but let's ensure Fahar
gives it a decent amount of testing.

Thanks!
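
For reference, a minimal sketch of the kind of change described above
(reporting the bytes received so far when the total size is unknown) might
look roughly like this. It assumes Qt's QNetworkReply::downloadProgress
signal; 'reply' and 'progressDialog' are illustrative names, not the actual
pgAdmin4 code:

    // Illustrative sketch only - not the actual pgAdmin4 patch.
    // 'reply' is the QNetworkReply* for the CSV download and
    // 'progressDialog' is a hypothetical QProgressDialog* member.
    connect(reply, &QNetworkReply::downloadProgress, this,
            [this](qint64 bytesReceived, qint64 bytesTotal) {
        if (bytesTotal <= 0) {
            // Total size unknown (no Content-Length from the server):
            // show how many bytes have arrived so far, like a browser does.
            progressDialog->setLabelText(
                QString("Downloading... %1 KB received").arg(bytesReceived / 1024));
        } else {
            // Total size known: show a normal percentage (default range 0..100).
            progressDialog->setValue(static_cast<int>((bytesReceived * 100) / bytesTotal));
        }
    });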

>
> Thanks,
> Neel Patel
>
> On Fri, Nov 17, 2017 at 6:44 PM, Neel Patel <neel(dot)patel(at)enterprisedb(dot)com>
> wrote:
>
>> Hi Dave,
>>
>> I have tried with 350K rows using the query you provided, but I am not
>> able to reproduce the hang issue.
>> I have put some debug statements in the Qt signal handlers. Can you please
>> apply this debug patch and send me the console output, if possible?
>>
>> Thanks,
>> Neel Patel
>>
>> On Fri, Nov 17, 2017 at 3:13 PM, Neel Patel <neel(dot)patel(at)enterprisedb(dot)com>
>> wrote:
>>
>>> Hi Dave,
>>>
>>> On Fri, Nov 17, 2017 at 3:09 PM, Dave Page <dpage(at)pgadmin(dot)org> wrote:
>>>
>>>>
>>>>
>>>> On Fri, Nov 17, 2017 at 9:36 AM, Neel Patel <
>>>> neel(dot)patel(at)enterprisedb(dot)com> wrote:
>>>>
>>>>> Hi Dave,
>>>>>
>>>>> On Fri, Nov 17, 2017 at 2:42 PM, Dave Page <dpage(at)pgadmin(dot)org> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Nov 17, 2017 at 8:45 AM, Neel Patel <
>>>>>> neel(dot)patel(at)enterprisedb(dot)com> wrote:
>>>>>>
>>>>>>> Hi Dave,
>>>>>>>
>>>>>>> On Thu, Nov 16, 2017 at 8:13 PM, Dave Page <dpage(at)pgadmin(dot)org>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi
>>>>>>>>
>>>>>>>> On Thu, Nov 16, 2017 at 1:47 PM, Neel Patel <
>>>>>>>> neel(dot)patel(at)enterprisedb(dot)com> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> Adding information.
>>>>>>>>>
>>>>>>>>> With this patch, RM#2715 should also be resolved.
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Neel Patel
>>>>>>>>>
>>>>>>>>> On Thu, Nov 16, 2017 at 7:01 PM, Neel Patel <
>>>>>>>>> neel(dot)patel(at)enterprisedb(dot)com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> I am able to reproduce the crash while downloading and saving data
>>>>>>>>>> to a CSV file from the Query Tool.
>>>>>>>>>>
>>>>>>>>>> Please find attached an updated patch with the changes below, made
>>>>>>>>>> after reading the Qt documentation.
>>>>>>>>>>
>>>>>>>>>> - Added handling for the "readyRead" signal. As per the Qt
>>>>>>>>>> documentation, this signal is emitted whenever data is available to read
>>>>>>>>>> from the I/O channel, which suits transferring large amounts of data
>>>>>>>>>> between server and client.
>>>>>>>>>> - The readyRead and downloadProgress signals fire very frequently,
>>>>>>>>>> so we should not do heavy work inside those slots; for a large download
>>>>>>>>>> the frequent calls could crash the application or drop data meant to be
>>>>>>>>>> written to the file, so the unnecessary logic has been removed from the
>>>>>>>>>> slot.
>>>>>>>>>> - Fixed the crash that occurred when opening the QIODevice with a
>>>>>>>>>> NULL handle.
>>>>>>>>>>
>>>>>>>>>> With the above changes, I have tested with the same data as before
>>>>>>>>>> and it works as expected without crashing the application.
>>>>>>>>>>
>>>>>>>>>> Please review it and let me know your comments.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>> My first test was on Mac using Qt 5.8 with WebKit, and when I
>>>>>>>> attempted to download the results of a 300K row query, it basically hung
>>>>>>>> showing the Downloading File progress indicator. It let me cancel it and
>>>>>>>> carried on working, but hung again the next time I tried the CSV download.
>>>>>>>> Each time it seemed to download some data - from the same query I've seen
>>>>>>>> 2.8MB, 5.1MB and 1.5MB.
>>>>>>>>
>>>>>>>
>>>>>>> We are using the "*downloadProgress*" signal to update the progress
>>>>>>> bar. Here we receive "readData" (the bytes read so far), but "totalData"
>>>>>>> comes through as -1 because Qt does not know the total size of the file we
>>>>>>> are getting from the server. By contrast, when we open the pgAdmin4
>>>>>>> website in another tab and download a binary or the source, "totalData"
>>>>>>> has a known size and we display the progress bar correctly.
>>>>>>>
>>>>>>> Because Qt sends the "downloadProgress" signal with an unknown
>>>>>>> totalData size of -1, we update the progress bar to 100%, so from the
>>>>>>> user's point of view it looks like a hang, but it is still downloading and
>>>>>>> writing to the file in the background. If we apply the same patch on
>>>>>>> Windows, it displays a running progress bar, so the user can tell that
>>>>>>> something is happening on the back end.
>>>>>>>
>>>>>>> I think this is the same as a browser - when downloading a big file from
>>>>>>> the server, it keeps downloading without displaying the total size. What
>>>>>>> should we do here? Should we create a custom progress bar dialog with a
>>>>>>> moving indicator, or something similar, so the user can tell that activity
>>>>>>> is happening at the back end? Thoughts?
>>>>>>>
>>>>>>
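
Regarding the question about a custom progress indicator when the total size
is unknown: one conventional Qt option (noted here as an illustrative aside,
not something from the patch) is an indeterminate progress bar. Setting the
dialog's range to (0, 0) makes the bar animate continuously with no
percentage, which signals ongoing activity without claiming 100%:

    // Illustrative sketch: inside the downloadProgress handler.
    if (bytesTotal < 0) {
        // Unknown total: animated, indeterminate ("busy") bar.
        progressDialog->setRange(0, 0);
    } else {
        progressDialog->setRange(0, 100);
        progressDialog->setValue(static_cast<int>((bytesReceived * 100) / bytesTotal));
    }
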
>>>>>> It is misleading, but it *is* also hanging. 10+ minutes to download a
>>>>>> few megs of data from a server on the same machine is not just a misleading
>>>>>> progress indicator :-)
>>>>>>
>>>>> Can you please share a table definition, along with the number of rows and
>>>>> the approximate table size? I have tried but am not able to reproduce the
>>>>> issue.
>>>>>
>>>>
>>>> It was simply (on PG 9.4):
>>>>
>>>> SELECT * FROM pg_class c, pg_attribute a WHERE c.oid = a.attrelid
>>>> UNION ALL
>>>> SELECT * FROM pg_class c, pg_attribute a WHERE c.oid = a.attrelid
>>>> UNION ALL
>>>> SELECT * FROM pg_class c, pg_attribute a WHERE c.oid = a.attrelid
>>>> UNION ALL
>>>> ...
>>>>
>>>> Unioned enough times to get a shade under 300K rows.
>>>>
>>>
>>> Thanks. I will check and keep you updated.
>>>
>>>>
>>>> --
>>>> Dave Page
>>>> Blog: http://pgsnake.blogspot.com
>>>> Twitter: @pgsnake
>>>>
>>>> EnterpriseDB UK: http://www.enterprisedb.com
>>>> The Enterprise PostgreSQL Company
>>>>
>>>
>>>
>>
>

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
