From: Justin Pryzby <pryzby(at)telsasoft(dot)com>
To: Arya F <arya6000(at)gmail(dot)com>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Writing 1100 rows per second
Date: 2020-02-05 17:15:49
Message-ID: 20200205171548.GB403@telsasoft.com
Lists: pgsql-performance
On Wed, Feb 05, 2020 at 12:03:52PM -0500, Arya F wrote:
> I'm looking to write about 1100 rows per second to tables up to 100 million
> rows. I'm trying to come up with a design that I can do all the writes to a
> database with no indexes. When having indexes the write performance slows
> down dramatically after the table gets bigger than 30 million rows.
>
> I was thinking of having a server dedicated for all the writes and have
> another server for reads that has indexes and use logical replication to
> update the read only server.
Wouldn't the read-only server still have bad performance for all the writes
being replicated to it?
> Would that work? Or any recommendations how I can achieve good performance
> for a lot of writes?
Can you use partitioning so the updates mostly affect only one partition at a
time, and its indices are of reasonable size, such that they can fit easily in
shared_buffers?
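As a minimal sketch of what that could look like, assuming a hypothetical
append-mostly table keyed by timestamp (table and column names are
illustrative, not from the original thread):

```sql
-- Range-partition by month so inserts land in the current partition,
-- whose per-partition indexes stay small enough to fit in shared_buffers.
CREATE TABLE readings (
    id          bigint GENERATED ALWAYS AS IDENTITY,
    recorded_at timestamptz NOT NULL,
    value       double precision
) PARTITION BY RANGE (recorded_at);

CREATE TABLE readings_2020_02 PARTITION OF readings
    FOR VALUES FROM ('2020-02-01') TO ('2020-03-01');

-- An index on the partitioned parent cascades to each partition (PG 11+),
-- so each partition carries its own small index.
CREATE INDEX ON readings (recorded_at);
```

New partitions would be created ahead of time (by cron or an extension such
as pg_partman), so writes only ever touch the newest, smallest indexes.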
BRIN indices may help for some, but likely not for all, of your indices.
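For illustration, a BRIN index on a hypothetical append-only table stays tiny
regardless of row count, but it only pays off when the indexed column's values
correlate with the physical row order (as a monotonically increasing timestamp
typically does):

```sql
-- Hypothetical append-only table; names are illustrative.
CREATE TABLE sensor_log (
    logged_at timestamptz NOT NULL,
    payload   jsonb
);

-- A BRIN index stores one summary per range of table blocks, so it is
-- orders of magnitude smaller than a btree and cheap to maintain on insert.
CREATE INDEX sensor_log_brin_idx ON sensor_log USING brin (logged_at);
```

It would not help columns whose values are scattered randomly across the
table, which is why it likely cannot replace all of the indices here.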
Justin