Re: Performance Optimisation - Identifying the correct DB

From: "Ravi Krishna" <srkrishna(at)myself(dot)com>
To: "Renjith Gk" <renjithgk(at)gmail(dot)com>, pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: Performance Optimisation - Identifying the correct DB
Date: 2019-04-23 15:54:37
Message-ID: emcfc2fff3-8022-4d73-95be-f020f9a13171@ravis-macbook-pro.local
Lists: pgsql-admin

>
>What is the optimal execution time for reading 200K records in
>Postgres? We had issues reading ~200K records in Cassandra, which
>timed out.
>
>Any ideal solution or recommendation for Postgres?

Cassandra is highly optimized for key-based reads only.

Without knowing your application, it is hard to predict how long it
will take to read 200K records.

Does the query that fetches the 200K records use an index? (See the
EXPLAIN sketch below.)
Does it fetch a small or a large portion of the total rows? If the
latter, PG will most likely do a sequential scan of the table.
How wide is the table?
What is the timeout setting?
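
As a rough sketch, assuming a hypothetical table "t" filtered on an
indexed column "id", EXPLAIN ANALYZE shows whether the planner picks an
index scan or a sequential scan and how long the fetch really takes;
statement_timeout is the setting that most often produces the kind of
timeout you describe:

    -- hypothetical names; substitute your own table and column
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM t WHERE id BETWEEN 1 AND 200000;

    -- check and, for this session only, raise the timeout
    SHOW statement_timeout;
    SET statement_timeout = '5min';

If the plan shows a Seq Scan over most of the table, an index on the
filter column or a narrower SELECT list is usually the first thing to
try.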
