Re: PG Admin 4.28 is crashing while viewing 10M row data

From: Sachin Kumar <sachinkumaras(at)gmail(dot)com>
To: dbatoCloud Solution <dbatocloud17(at)gmail(dot)com>
Cc: "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>, "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: PG Admin 4.28 is crashing while viewing 10M row data
Date: 2020-11-26 05:30:57
Message-ID: CALg-PKAmccgdfg20Fn2-oTL8S9cqvrRN1GUgd-5AasM+DHE+Lg@mail.gmail.com
Lists: pgsql-admin

Hi Team,

After making all these changes, the DB service stopped and would not start
when I tried to start it manually. I had to roll back all the changes and
reinstall pgAdmin, which wasted a lot of the team's time fixing these issues.

Error message: could not connect to server: Connection refused
(0x0000274D/10061) Is the server running on host "localhost" (127.0.0.1)
and accepting TCP/IP connections on port 5432?

Kindly share a tried and tested solution.

On Wed, Nov 25, 2020 at 7:22 AM dbatoCloud Solution <dbatocloud17(at)gmail(dot)com>
wrote:

> Hi Sachin - try the steps below and let me know.
>
> Step 1: Configure the parameters below before the bulk insert, then
> restart the server (a sketch of applying them follows after the list).
>
>             shared_buffers = 2048MB (default: 128MB) - about 25% of
> physical memory, i.e. 2GB on your 8GB system.
>             effective_cache_size = 4096MB (default in v12: 4GB) - set this to
> roughly the memory available for caching, so that index scans are preferred
> over sequential scans.
>             checkpoint_segments = 32 (default: 3) - only in 9.4 and earlier;
> it was removed in 9.5, and in version 12 the equivalent knob is max_wal_size.
>             max_wal_size = 32 (default: 1GB) - increase the value for a heavy
> write load (give it an explicit unit, otherwise it is read as MB), but FYI a
> larger value will lengthen recovery time if the system crashes.
>             checkpoint_completion_target = 0.9 (default: 0.5) - spread each
> checkpoint over 90% of the time between checkpoints instead of 50%.
>             default_statistics_target = 1000 (default: 100) - the amount of
> detail ANALYZE gathers for the planner's statistics.
>             work_mem = 200MB (default: 4MB)
>             maintenance_work_mem = 1GB (default: 64MB) - this is used by
> VACUUM and CREATE INDEX; since yours is an old table that already has an
> index, I would recommend setting this parameter higher.
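>
> A minimal sketch of applying the above with ALTER SYSTEM (the values are only
> suggestions for an 8GB host, and the max_wal_size figure here is just an
> example - adjust to your workload), followed by a restart of the service:
>
>     ALTER SYSTEM SET shared_buffers = '2048MB';
>     ALTER SYSTEM SET effective_cache_size = '4096MB';
>     ALTER SYSTEM SET max_wal_size = '4GB';               -- example value only
>     ALTER SYSTEM SET checkpoint_completion_target = 0.9;
>     ALTER SYSTEM SET default_statistics_target = 1000;
>     ALTER SYSTEM SET work_mem = '200MB';
>     ALTER SYSTEM SET maintenance_work_mem = '1GB';
>     -- shared_buffers only takes effect after a full restart of the
>     -- PostgreSQL Windows service (the service name varies by install).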
>
> Step 2: Check whether autovacuum is enabled:
>
> select name, setting from pg_settings where name = 'autovacuum' ;
>
> If the output says "on" then there is no need to vacuum the table yourself;
> otherwise run the command below (a query to check the table's last vacuum
> times follows it).
>
> VACUUM FULL hk_card_master_test ;
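>
> If it helps, this is a quick way to see when that table was last vacuumed or
> analyzed, manually or by autovacuum (table name taken from this thread):
>
>     SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
>     FROM pg_stat_user_tables
>     WHERE relname = 'hk_card_master_test';
>
> Note that VACUUM FULL rewrites the whole table under an exclusive lock, so if
> autovacuum has been keeping up it may not be needed at all.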
>
>
>
>
>
> Thanks & Best Wishes,
>
> Ashok
>
>
>
>
> ---------------------------------------------------------------------------------------------------------------------
>
> Ashokkumar Mani (OCP12c/11g/10g/9i, AWS SAA, M103)
>
> Dubai, UAE | BLR, INDIA
>
> M: +971 54 723 0075 | +91 90086 70302 | WApp: +91 81975 99922
>
> W: https://dbatocloudsolution.blogspot.in/ | E: dbatocloud17(at)gmail(dot)com
>
>
> On Tue, Nov 24, 2020 at 4:08 PM Sachin Kumar <sachinkumaras(at)gmail(dot)com>
> wrote:
>
>> Hi Ashok,
>>
>> It's an old table where an index was already created.
>>
>> PostgreSQL database versions 12 and 13, one on each server.
>>
>> All settings are at their default configuration.
>>
>> Server configuration:
>>
>> Operating system: Windows (Windows Server 2016 Datacenter)
>> Size: 2 vCPUs, 8 GiB memory
>>
>> On Mon, Nov 23, 2020 at 8:30 PM dbatoCloud Solution <
>> dbatocloud17(at)gmail(dot)com> wrote:
>>
>>> Hi Sachin,
>>> Sorry for the delayed response.
>>>
>>> Why don't you run it batch by batch?
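>>>
>>> A rough sketch of what I mean, reusing your COPY options (the pre-split part
>>> files and their names are only placeholders - split the CSV however you
>>> like). With autocommit, each COPY statement then commits on its own:
>>>
>>>     COPY hk_card_master_test
>>>     FROM 'C:/inetpub/wwwroot/cards/media/static/rawsheet_001-000_part1.csv'
>>>     WITH (FORMAT csv, DELIMITER E'\t', HEADER, ENCODING 'ISO_8859_5');
>>>
>>>     COPY hk_card_master_test
>>>     FROM 'C:/inetpub/wwwroot/cards/media/static/rawsheet_001-000_part2.csv'
>>>     WITH (FORMAT csv, DELIMITER E'\t', HEADER, ENCODING 'ISO_8859_5');
>>>
>>>     -- ...and so on for the remaining part files.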
>>>
>>> Could you please confirm the parameter values below? Since you are doing a
>>> bulk load, I need these details before I can recommend settings and decide
>>> whether to suggest a vacuum (a single query to pull them all at once is
>>> sketched after the list).
>>> hk_card_master_test > was it newly created, or is it an old
>>> table on which an index already existed? Please confirm.
>>> PostgreSQL database version?
>>> shared_buffers = ? (default: 128MB)
>>> effective_cache_size = ? (default: 4GB)
>>> checkpoint_segments = ? (default: 3; removed in 9.5 in favour of max_wal_size)
>>> checkpoint_completion_target = ? (default: 0.5)
>>> default_statistics_target = ? (default: 100)
>>> work_mem = ? (default: 4MB)
>>> maintenance_work_mem = ? (default: 64MB)
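>>>
>>> A single query along these lines (just a sketch) should return the current
>>> values in one go:
>>>
>>>     SELECT name, setting, unit
>>>     FROM pg_settings
>>>     WHERE name IN ('shared_buffers', 'effective_cache_size',
>>>                    'checkpoint_completion_target', 'default_statistics_target',
>>>                    'work_mem', 'maintenance_work_mem', 'max_wal_size');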
>>>
>>>
>>>
>>>
>>> Thanks & Best Wishes,
>>>
>>> Ashok
>>>
>>>
>>>
>>>
>>> ---------------------------------------------------------------------------------------------------------------------
>>>
>>> Ashokkumar Mani (OCP12c/11g/10g/9i, AWS SAA, M103)
>>>
>>> Dubai, UAE | BLR, INDIA
>>>
>>> M: +971 54 723 0075 | +91 90086 70302 | WApp: +91 81975 99922
>>>
>>> W: https://dbatocloudsolution.blogspot.in/ | E: dbatocloud17(at)gmail(dot)com
>>>
>>>
>>> On Mon, Nov 23, 2020 at 10:36 AM Sachin Kumar <sachinkumaras(at)gmail(dot)com>
>>> wrote:
>>>
>>>> Hi Ashok,
>>>>
>>>> Please find below the command I am using to upload 10 million rows into a
>>>> table.
>>>>
>>>> COPY command used to load the records:
>>>> copy hk_card_master_test from
>>>> 'C:/inetpub/wwwroot/cards/media/static/rawsheet_001-000.csv' with delimiter
>>>> E'\t' csv header encoding 'ISO_8859_5'
>>>>
>>>> I am not using the VACUUM command during the load; no, I am not
>>>> vacuuming this large table.
>>>> I am using the ANALYZE command for other queries.
>>>>
>>>> I am not aware of what the VACUUM command does. Please help me
>>>> understand the VACUUM command if you can.
>>>>
>>>>
>>>> On Mon, Nov 23, 2020 at 10:23 AM dbatoCloud Solution <
>>>> dbatocloud17(at)gmail(dot)com> wrote:
>>>>
>>>>> Hi Sachin,
>>>>>
>>>>> Let me know how you issued commits during the update. Did you try using
>>>>> the VACUUM and ANALYZE commands during the update? Are you frequently
>>>>> vacuuming this large table?
>>>>>
>>>>> If PostgreSQL is running on Linux, then run it batch by batch with
>>>>> intermediate commits.
>>>>>
>>>>>
>>>>>
>>>>> Thanks & Best Wishes,
>>>>>
>>>>> Ashok
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> ---------------------------------------------------------------------------------------------------------------------
>>>>>
>>>>> Ashokkumar Mani (OCP12c/11g/10g/9i, AWS SAA, M103)
>>>>>
>>>>> Dubai, UAE | BLR, INDIA
>>>>>
>>>>> M: +971 54 723 0075 | +91 90086 70302 | WApp: +91 81975 99922
>>>>>
>>>>> W: https://dbatocloudsolution.blogspot.in/ | E: dbatocloud17(at)gmail(dot)com
>>>>>
>>>>>
>>>>> On Sun, Nov 22, 2020 at 10:25 PM David G. Johnston <
>>>>> david(dot)g(dot)johnston(at)gmail(dot)com> wrote:
>>>>>
>>>>>>
>>>>>> On Sunday, November 22, 2020, Sachin Kumar <sachinkumaras(at)gmail(dot)com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Expert,
>>>>>>> I am loading 10 million records into a table using the COPY command; the
>>>>>>> load finishes in a minute, but when I try to view all the rows, pgAdmin
>>>>>>> 4.28 (which I am using in Chrome) crashes. I tried it 4 times and it
>>>>>>> crashed 3 times. After viewing these 10M rows, other queries run very
>>>>>>> slowly.
>>>>>>> Is there any way to clear the buffer in pgAdmin, or some way to make
>>>>>>> queries in pgAdmin run at their normal speed again?
>>>>>>> Please guide me on how to fix this issue.
>>>>>>>
>>>>>>>
>>>>>> Stop and restart the program?
>>>>>>
>>>>>> What do you hope to do with 10M full records displayed on screen
>>>>>> (even if pagination is involved...)?
>>>>>>
>>>>>> David J.
>>>>>>
>>>>>>
>>>>>
>>>>
>>>> --
>>>>
>>>> Best Regards,
>>>> Sachin Kumar
>>>>
>>>
>>
>> --
>>
>> Best Regards,
>> Sachin Kumar
>>
>

--

Best Regards,
Sachin Kumar
