Re: 600 million rows of data. Bad hardware or need partitioning?

From: Arya F <arya6000(at)gmail(dot)com>
To: Justin Pryzby <pryzby(at)telsasoft(dot)com>
Cc: Michael Lewis <mlewis(at)entrata(dot)com>, Pgsql Performance <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: 600 million rows of data. Bad hardware or need partitioning?
Date: 2020-05-06 00:31:29
Message-ID: CAFoK1ax7FnPx5vxQnMYysxLaC6BXBJRaEqpN6ZJDQQPCzBvNmA@mail.gmail.com
Lists: pgsql-performance

On Mon, May 4, 2020 at 5:21 AM Justin Pryzby <pryzby(at)telsasoft(dot)com> wrote:

> I mentioned in February and March that you should plan to set shared_buffers
> to fit the indexes currently being updated.
>

The following query gives me:

select pg_size_pretty (pg_indexes_size('test_table'));
pg_size_pretty
----------------
5216 MB
(1 row)

So right now the indexes on that table take about 5.2 GB. If a machine
has 512 GB of RAM and SSDs, is it safe to assume I could get the update
that currently takes 1.5 minutes down to under 5 seconds, even with 600
million rows of data and without partitioning?
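
As a sketch of what Justin suggested, the setting lives in
postgresql.conf and can be compared against the index size above. The
16GB figure below is purely illustrative (not from this thread); the
point is only that it comfortably exceeds the ~5.2 GB of indexes:

# postgresql.conf -- hypothetical value for a machine with ample RAM,
# sized so the indexes being updated fit entirely in shared_buffers
shared_buffers = '16GB'

-- after a restart, verify the setting and compare with the index size
SHOW shared_buffers;
SELECT pg_size_pretty(pg_indexes_size('test_table'));

Whether that alone closes the gap from 1.5 minutes to 5 seconds is a
separate question, since the update also pays for WAL and heap I/O, not
just index maintenance.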
