From: Arya F <arya6000(at)gmail(dot)com>
To: Justin Pryzby <pryzby(at)telsasoft(dot)com>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Writing 1100 rows per second
Date: 2020-02-05 17:25:30
Message-ID: CAFoK1az5D3SJqHd6oH8zmPFTGvgi-qVhjjs2fEhx5WL4ozF4SQ@mail.gmail.com
Lists: pgsql-performance
If I run the database on a server that has enough RAM to hold all the
indexes and tables in memory, and it then updated the indexes on the HDD
every x seconds, would that work to increase performance dramatically?
On Wed, Feb 5, 2020, 12:15 PM Justin Pryzby <pryzby(at)telsasoft(dot)com> wrote:
> On Wed, Feb 05, 2020 at 12:03:52PM -0500, Arya F wrote:
> > I'm looking to write about 1100 rows per second to tables of up to 100
> > million rows. I'm trying to come up with a design where I can do all the
> > writes to a database with no indexes, because with indexes the write
> > performance slows down dramatically after the table grows beyond 30
> > million rows.
> >
> > I was thinking of having a server dedicated to all the writes and
> > another server for reads that has indexes, using logical replication to
> > keep the read-only server up to date.
>
> Wouldn't the read-only server still have bad performance for all the
> writes being replicated to it?
>
> > Would that work? Or any recommendations on how I can achieve good
> > performance for a lot of writes?
>
> Can you use partitioning so the updates mostly affect only one table at
> a time, and its indices are of reasonable size, such that they fit
> easily in shared_buffers?
>
> BRIN indices may help for some of your indices, but likely not for all.
>
> Justin
>
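Justin's partitioning and BRIN suggestions could be sketched roughly as
follows. This is a minimal illustration only, assuming a hypothetical
append-mostly `events` table range-partitioned by timestamp so that
inserts land in one active partition whose indexes stay small; the table
and column names are invented for the example:

```sql
-- Hypothetical range-partitioned table: inserts target the current
-- month's partition, so only that partition's indexes are hot and
-- can stay within shared_buffers.
CREATE TABLE events (
    id         bigserial,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2020_02 PARTITION OF events
    FOR VALUES FROM ('2020-02-01') TO ('2020-03-01');

-- A BRIN index on the timestamp stays tiny for append-only data,
-- though, per Justin's caveat, it won't replace b-tree indexes on
-- columns with no physical correlation to insertion order.
CREATE INDEX events_2020_02_created_brin
    ON events_2020_02 USING brin (created_at);
```

New partitions would be created ahead of time (manually or by a cron
job), and old ones can be detached or dropped cheaply.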
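The write/read split proposed earlier could be set up with logical
replication along these lines. A rough sketch with made-up object and
connection names; note Justin's point that the subscriber still pays the
index-maintenance cost for every replicated write:

```sql
-- On the write-optimized primary (few or no secondary indexes):
CREATE PUBLICATION writes_pub FOR TABLE events;

-- On the read server (same schema, plus whatever indexes reads need):
CREATE SUBSCRIPTION reads_sub
    CONNECTION 'host=write-server dbname=app user=repl'
    PUBLICATION writes_pub;
```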