Re: PG Admin 4.28 is crashing while viewing 10M row data

From: Sachin Kumar <sachinkumaras(at)gmail(dot)com>
To: dbatoCloud Solution <dbatocloud17(at)gmail(dot)com>
Cc: "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>, "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: PG Admin 4.28 is crashing while viewing 10M row data
Date: 2020-11-24 10:37:32
Message-ID: CALg-PKAgMtMHJSBFmHWN2gRYJ_oQctmwN7fg6MkqWpeQ=immxg@mail.gmail.com
Lists: pgsql-admin

Hi Ashok,

It's an old table, and the index on it was already created.

PostgreSQL database versions 12 and 13, one on each server.

All settings are at their default configuration.
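
If it helps, a query like the one below should show the effective values for the parameters you listed (pg_settings is the standard catalog view; since nothing has been tuned, I expect these to all be the built-in defaults):

  SELECT name, setting, unit
  FROM pg_settings
  WHERE name IN ('shared_buffers', 'effective_cache_size',
                 'checkpoint_completion_target', 'default_statistics_target',
                 'work_mem', 'maintenance_work_mem');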

Server configuration:

Operating system: Windows (Windows Server 2016 Datacenter)
Size: 2 vCPUs, 8 GiB memory
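
On the batch-by-batch idea, is something like the sketch below what you have in mind? (The _part1 / _part2 file names are just placeholders for the original CSV split into smaller pieces; each COPY commits on its own, so loading the parts one at a time already gives intermediate commits.)

  COPY hk_card_master_test
  FROM 'C:/inetpub/wwwroot/cards/media/static/rawsheet_001-000_part1.csv'
  WITH (FORMAT csv, HEADER true, DELIMITER E'\t', ENCODING 'ISO_8859_5');

  -- repeat for _part2, _part3, ... then refresh statistics and reclaim space:
  VACUUM (ANALYZE) hk_card_master_test;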

On Mon, Nov 23, 2020 at 8:30 PM dbatoCloud Solution <dbatocloud17(at)gmail(dot)com>
wrote:

> Hi Sachin,
> Sorry for the delayed response.
>
> Why don't you run it batch by batch?
>
> Could you please confirm the parameter values below? Since you are doing a
> bulk load, I need these details before I can recommend whether a vacuum is
> needed.
> hk_card_master_test: was it created newly, or is it an old table where the
> index already existed? Please confirm.
> PostgreSQL database version?
> shared_buffers = ? (default: 32MB)
> effective_cache_size = ? (default: 128MB)
> checkpoint_segments = ? (default: 3)
> checkpoint_completion_target = ? (default: 0.5)
> default_statistics_target = ? (default: 100)
> work_mem = ? (default: 1MB)
> maintenance_work_mem = ? (default: 16MB)
>
>
>
>
> Thanks & Best Wishes,
>
> Ashok
>
>
>
>
> ---------------------------------------------------------------------------------------------------------------------
>
> Ashokkumar Mani *(OCP12c/11g/10g/9i, AWS SAA, M103)*
>
> *Dubai** , UAE* *| BLR , INDIA*
>
> M: *+971 54 723 0075 *| *+91 90086 70302 | *WApp :* +91 81975 99922*
>
> W: https://dbatocloudsolution.blogspot.in/
> <http://dbatocloudsolution.blogspot.in/> | E: dbatocloud17(at)gmail(dot)com
>
>
> On Mon, Nov 23, 2020 at 10:36 AM Sachin Kumar <sachinkumaras(at)gmail(dot)com>
> wrote:
>
>> Hi Ashok,
>>
>> Please find the command I am using to load 10 million rows into a table.
>>
>> *copy command to load records*
>> copy hk_card_master_test from
>> 'C:/inetpub/wwwroot/cards/media/static/rawsheet_001-000.csv' with delimiter
>> E'\t' csv header encoding 'ISO_8859_5'
>>
>> I am not using the VACUUM command during the load; no, I am not
>> vacuuming this large table.
>> I am using the ANALYZE command for other queries.
>>
>> I am not familiar with what the VACUUM command does. Please help me
>> understand VACUUM if you can.
>>
>>
>> On Mon, Nov 23, 2020 at 10:23 AM dbatoCloud Solution <
>> dbatocloud17(at)gmail(dot)com> wrote:
>>
>>> Hi Sachin,
>>>
>>> Let me know how you issue commits during the load. Did you try using the
>>> VACUUM and ANALYZE commands during the load? Are you frequently vacuuming
>>> this large table?
>>>
>>> If PostgreSQL is running on Linux, then run it batch by batch with
>>> intermediate commits.
>>>
>>>
>>>
>>> Thanks & Best Wishes,
>>>
>>> Ashok
>>>
>>>
>>>
>>>
>>> ---------------------------------------------------------------------------------------------------------------------
>>>
>>> Ashokkumar Mani *(OCP12c/11g/10g/9i, AWS SAA, M103)*
>>>
>>> *Dubai** , UAE* *| BLR , INDIA*
>>>
>>> M: *+971 54 723 0075 *| *+91 90086 70302 | *WApp :* +91 81975 99922*
>>>
>>> W: https://dbatocloudsolution.blogspot.in/
>>> <http://dbatocloudsolution.blogspot.in/> | E: dbatocloud17(at)gmail(dot)com
>>>
>>>
>>> On Sun, Nov 22, 2020 at 10:25 PM David G. Johnston <
>>> david(dot)g(dot)johnston(at)gmail(dot)com> wrote:
>>>
>>>>
>>>> On Sunday, November 22, 2020, Sachin Kumar <sachinkumaras(at)gmail(dot)com>
>>>> wrote:
>>>>
>>>>> Hi Expert,
>>>>> I am loading 10 million records into a table using the COPY command. The
>>>>> table is loaded within a minute, but when I try to view all the rows,
>>>>> pgAdmin 4.28 (which I am using in Chrome) crashes. I tried it 4 times and
>>>>> it crashed 3 times. After viewing these 10M rows, the other queries run
>>>>> very slowly.
>>>>> Is there any way to clear the buffer in pgAdmin, or a process through
>>>>> which queries in pgAdmin run at their normal speed?
>>>>> Please guide me on how to fix this issue.
>>>>>
>>>>>
>>>> Stop and restart the program?
>>>>
>>>> What do you hope to do with 10M full records displayed on screen (even
>>>> if pagination is involved...)?
>>>>
>>>> David J.
>>>>
>>>>
>>>
>>
>> --
>>
>> Best Regards,
>> Sachin Kumar
>>
>

--

Best Regards,
Sachin Kumar
