From: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
To: Arya F <arya6000(at)gmail(dot)com>, pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Writing 1100 rows per second
Date: 2020-02-05 17:12:34
Message-ID: c5366307163012ac07b57aa412930c9d38a36006.camel@cybertec.at
Lists: pgsql-performance
On Wed, 2020-02-05 at 12:03 -0500, Arya F wrote:
> I'm looking to write about 1100 rows per second to tables up to 100 million rows. I'm trying to
> come up with a design where I can do all the writes to a database with no indexes. With
> indexes, write performance slows down dramatically after the table gets bigger than 30 million rows.
>
> I was thinking of having a server dedicated for all the writes and have another server for reads
> that has indexes and use logical replication to update the read only server.
>
> Would that work? Or any recommendations how I can achieve good performance for a lot of writes?
Logical replication wouldn't make a difference, because with many indexes, replay of the
inserts would be slow as well, and replication would lag more and more.
No matter what you do, there will be no magic way to have your tables indexed and
have fast inserts at the same time.
One idea I can come up with is a table that is partitioned by a column that appears
in a selective search condition, but has no indexes, so that you always get
away with a sequential scan of a single partition.
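To illustrate the idea (only a sketch; the table, column names and the choice of hash
partitioning are made up for the example, and declarative hash partitioning needs
PostgreSQL 11 or later):

    CREATE TABLE measurements (
        device_id   bigint      NOT NULL,
        recorded_at timestamptz NOT NULL,
        payload     jsonb
    ) PARTITION BY HASH (device_id);

    CREATE TABLE measurements_p0 PARTITION OF measurements
        FOR VALUES WITH (MODULUS 32, REMAINDER 0);
    CREATE TABLE measurements_p1 PARTITION OF measurements
        FOR VALUES WITH (MODULUS 32, REMAINDER 1);
    -- ... and so on up to REMAINDER 31; no indexes on any partition

    -- A filter on the partitioning column lets partition pruning limit the
    -- sequential scan to a single partition (check the plan with EXPLAIN):
    SELECT * FROM measurements WHERE device_id = 4711;

If each partition stays reasonably small, scanning one of them sequentially can be
acceptable, and the inserts stay fast because there are no indexes to maintain.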
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com