Re: Writing 1100 rows per second

From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Arya F <arya6000(at)gmail(dot)com>
Cc: Justin Pryzby <pryzby(at)telsasoft(dot)com>, Pgsql Performance <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: Writing 1100 rows per second
Date: 2020-02-09 19:30:32
Message-ID: CAMkU=1zJ3dMLfvNYerY6iBmJcTKKcPu5GR2BZxhYpd_BdMP6Yg@mail.gmail.com
Lists: pgsql-performance

On Wed, Feb 5, 2020 at 12:25 PM Arya F <arya6000(at)gmail(dot)com> wrote:

> If I run the database on a server that has enough RAM to load all the
> indexes and tables into RAM, and it then updates the index on the HDD
> every x seconds, would that work to increase performance dramatically?
>

Perhaps, but probably not dramatically. If x seconds (the checkpoint
interval) is not long enough for the entire index to have been dirtied,
then my finding is that writing half of a file's pages (randomly
interspersed), even in block order, still has the horrid performance of a
long sequence of random writes, not the much better performance of a
handful of sequential writes. This probably depends strongly on your RAID
controller, OS version, and such, so you should try it for yourself on
your own hardware.
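
As a rough way to try that comparison on your own hardware, a sketch along
these lines (not from the original mail; the file name, size, and block
count are illustrative and should be adjusted so the file exceeds RAM)
writes every other 8 kB block of a file in block order, then writes the
same amount of data as one contiguous run, and times both:

import os, time

BLOCK = 8192                 # PostgreSQL's default page size
NBLOCKS = 200_000            # ~1.6 GB file; raise this to exceed RAM/cache
PATH = "testfile.dat"        # hypothetical scratch file

def prepare():
    with open(PATH, "wb") as f:
        f.truncate(BLOCK * NBLOCKS)

def timed_write(block_numbers):
    buf = os.urandom(BLOCK)
    fd = os.open(PATH, os.O_WRONLY)
    start = time.time()
    for n in block_numbers:
        os.pwrite(fd, buf, n * BLOCK)
    os.fsync(fd)             # include the flush, as a checkpoint would
    os.close(fd)
    return time.time() - start

prepare()
half = NBLOCKS // 2
interspersed = range(0, NBLOCKS, 2)   # every other block, in block order
contiguous = range(0, half)           # same number of blocks, back to back
print("interspersed:", timed_write(interspersed), "s")
print("contiguous:  ", timed_write(contiguous), "s")

For reference, the "x seconds" above corresponds to PostgreSQL's
checkpoint_timeout setting (checkpoints can also be triggered earlier by
max_wal_size).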

Cheers,

Jeff
