From: Jean Baro <jfbaro(at)gmail(dot)com>
To: MichaelDBA <MichaelDBA(at)sqlexec(dot)com>
Cc: Rick Otten <rottenwindfish(at)gmail(dot)com>, pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: High concurrency same row (inventory)
Date: 2019-07-29 18:09:08
Message-ID: CA+fQeem1JjONE4Czj27QGwC09FWwsdpoZsp4JY2POd3iA5AFWA@mail.gmail.com
Lists: pgsql-performance
Thanks guys! This is really good and useful information! :)
During the day we can see some exceptions coming from Postgres (always when
the load is at its highest), only on the MAIN UPDATE:
- "current transaction is aborted, commands ignored until end of transaction
block"
- "deadlock detected"
Thanks
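A note on the two errors above: "current transaction is aborted" is a follow-on symptom, raised when the application keeps sending statements after an earlier error instead of issuing ROLLBACK first. The "deadlock detected" error on hot rows typically comes from two transactions taking the same row locks in different orders; locking rows in a consistent order is one common fix. A minimal sketch, assuming a hypothetical `inventory` table:

```sql
-- Take the row locks in a deterministic order before updating, so two
-- concurrent transactions can never wait on each other in a cycle.
BEGIN;
SELECT product_id
  FROM inventory
 WHERE product_id IN (101, 205, 310)
 ORDER BY product_id          -- consistent lock order prevents deadlock
   FOR UPDATE;

UPDATE inventory
   SET stock = stock - 1
 WHERE product_id IN (101, 205, 310);
COMMIT;
```

Even with ordered locking, applications under high contention usually also retry a transaction once on SQLSTATE 40P01 (deadlock_detected) rather than surfacing the error.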
On Mon, Jul 29, 2019 at 9:55 AM MichaelDBA <MichaelDBA(at)sqlexec(dot)com> wrote:
> Does pg_stat_user_tables validate that the major updates are indeed "hot
> updates"? Otherwise, you may be experiencing bloat problems if autovacuum
> is not set aggressively. Did you change default parameters for
> autovacuum? You should. They are set very conservatively right out of the
> box. Also, I wouldn't increase work_mem too much unless you are
> experiencing query spill over to disk. Turn on "log_temp_files" (=0) and
> monitor if you have this spillover. If not, don't mess with work_mem.
> Also, why isn't effective_cache_size set closer to 80-90% of memory instead
> of 50%? Are there other servers on the same host as postgres? As the
> other person mentioned, tune checkpoints so that they do not happen too
> often. Turn on "log_checkpoints" to get more info.
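Whether the big UPDATE qualifies as HOT can be read straight out of pg_stat_user_tables; a sketch, with a hypothetical table name:

```sql
-- Ratio of HOT updates to total updates. A low ratio suggests the
-- update touches an indexed column, or pages have no free space
-- (lowering the table's fillfactor can help with the latter).
SELECT relname,
       n_tup_upd,
       n_tup_hot_upd,
       round(100.0 * n_tup_hot_upd / NULLIF(n_tup_upd, 0), 1) AS hot_pct
  FROM pg_stat_user_tables
 WHERE relname = 'inventory';
```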
>
> Regards,
> Michael Vitale
>
> Rick Otten wrote on 7/29/2019 8:35 AM:
>
>
> On Mon, Jul 29, 2019 at 2:16 AM Jean Baro <jfbaro(at)gmail(dot)com> wrote:
>
>>
>> We have a new Inventory system running on its own database (PG 10 AWS
>> RDS.m5.2xlarge 1TB SSD EBS - Multizone). The DB effective size is less than
>> 10GB at the moment. We provided 1TB to get more IOPS from EBS.
>>
>> As we don't have a lot of different products in our catalogue it's quite
>> common (especially when a particular product is on sale) to have a high
>> rate of concurrent updates against the same row. There is also a frequent
>> (every 30 minutes) update to all items which changed their current
>> stock/Inventory coming from the warehouses (SAP), the latter is a batch
>> process. We have just installed this system for a new tenant (one of the
>> smallest ones) and although it's running great so far, we believe this
>> solution would not scale as we roll out this system to new (and bigger)
>> tenants. Currently there are up to 1,500 transactions per second (mostly
>> SELECTs and 1 particular UPDATE which I believe is the one being
>> aborted/deadlocked at times) in this inventory database.
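For reference, the kind of single-statement decrement that contends on one hot inventory row might look like the following; whether this matches the system's actual UPDATE is an assumption, and the names are hypothetical:

```sql
-- Atomic decrement in one statement: the row lock is held only for the
-- duration of this statement, not across application round trips, which
-- keeps lock hold time minimal under high same-row concurrency.
UPDATE inventory
   SET stock = stock - 1
 WHERE product_id = 101
   AND stock > 0          -- never oversell; 0 rows updated means sold out
RETURNING stock;
```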
>>
>> I am not a DBA, but as the DBAs (most of them old-school Oracle DBAs who
>> are not happy with the move to Postgres) are considering ditching
>> PostgreSQL without any prior tuning, I would like to understand the
>> possibilities.
>>
>> Considering this is a highly concurrent (same-row) system, I thought I
>> would suggest:
>>
>>
>>
> Another thing which you might want to investigate is your checkpoint
> tunables. My hunch is with that many writes, the defaults are probably not
> going to be ideal.
> Consider the WAL tunables documentation:
> https://www.postgresql.org/docs/10/wal-configuration.html
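As a starting point, the checkpoint-related settings most often tuned for write-heavy loads look like this; the values below are illustrative assumptions, not recommendations for this particular system:

```sql
-- postgresql.conf fragment (illustrative values, PG 10)
checkpoint_timeout = 15min          -- default 5min; fewer, larger checkpoints
max_wal_size = 8GB                  -- default 1GB; avoid WAL-triggered checkpoints firing early
checkpoint_completion_target = 0.9  -- spread checkpoint I/O across the interval
log_checkpoints = on                -- log each checkpoint to verify timing and cause
```

With log_checkpoints on, checkpoints reported as "xlog"-triggered rather than "time"-triggered are the signal that max_wal_size is too small for the write rate.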
>
>
>
>