Re: 600 million rows of data. Bad hardware or need partitioning?

From: Arya F <arya6000(at)gmail(dot)com>
To: Justin Pryzby <pryzby(at)telsasoft(dot)com>
Cc: Michael Lewis <mlewis(at)entrata(dot)com>, Pgsql Performance <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: 600 million rows of data. Bad hardware or need partitioning?
Date: 2020-05-10 04:10:20
Message-ID: CAFoK1ayPoLXGDHsco4=fr0podW48eg43LdWppOvb-bE==T4DRw@mail.gmail.com
Lists: pgsql-performance

On Tue, May 5, 2020 at 9:37 PM Justin Pryzby <pryzby(at)telsasoft(dot)com> wrote:
>
> On Tue, May 05, 2020 at 08:31:29PM -0400, Arya F wrote:
> > On Mon, May 4, 2020 at 5:21 AM Justin Pryzby <pryzby(at)telsasoft(dot)com> wrote:
> >
> > > I mentioned in February and March that you should plan to set shared_buffers
> > > to fit the indexes currently being updated.
> >
> > The following command gives me
> >
> > select pg_size_pretty (pg_indexes_size('test_table'));
> > pg_size_pretty > 5216 MB
> >
> > So right now, the indexes on that table are taking about 5.2 GB, if a
> > machine has 512 GB of RAM and SSDs, is it safe to assume I can achieve
> > the same update that takes 1.5 minutes in less than 5 seconds while
> > having 600 million rows of data without partitioning?
>
> I am not prepared to guarantee server performance.
>
> But, to my knowledge, you haven't configured shared_buffers at all, which I
> think might be the single most important thing to configure for loading speed
> (with indexes).
>

Just wanted to give an update. I tried this on a VPS with 8 GB of RAM and
SSDs, and the same query now takes 1.2 seconds! What a huge difference,
and that's without making any changes to the postgresql.conf file. Very
impressive.
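For anyone following the thread, here is a minimal sketch of the sizing approach Justin suggested: measure the indexes on the table being updated, then set shared_buffers large enough to hold them. The '6GB' value below is an assumption chosen to cover the ~5.2 GB of indexes reported earlier in this thread, not a setting anyone actually applied.

```sql
-- Check how much space the indexes on the table occupy
-- (reported as about 5216 MB earlier in this thread):
SELECT pg_size_pretty(pg_indexes_size('test_table'));

-- Size shared_buffers to fit them. '6GB' is an assumed value with
-- headroom over the ~5.2 GB of indexes; adjust for your own workload.
-- Changing shared_buffers requires a server restart to take effect.
ALTER SYSTEM SET shared_buffers = '6GB';
```

ALTER SYSTEM writes the setting to postgresql.auto.conf; editing postgresql.conf directly works just as well.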
