From: Rob Sargent <robjsargent(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: high transaction rate
Date: 2016-12-07 23:54:23
Message-ID: 2dbcba50-27e0-ca4e-b721-44ed0c5c0dbb@gmail.com
Lists: pgsql-general

On 12/07/2016 03:32 PM, John R Pierce wrote:
> On 12/7/2016 2:23 PM, Rob Sargent wrote:
>> How does your reply change, if at all, if:
>> - Fields are not indexed
>> - 5000 hot records per 100K records (millions of records total)
>> - A dozen machines writing 1 update per 10 seconds in aggregate
>> (i.e., each machine writes once every 2 mins)
>> - - each to a different "5000"
>> or (two modes of operation)
>> - - each to the same "5000"
>>
>> My guess is this would be slow enough even in the second mode? Or at
>> this rate and style should I care?
>> Sorry for taking this away from the OP's point.
>
> that's 1 update of 5000 records every 2 minutes from each of 12 client
> hosts? that's still a fair number of tuples per second (12 x 5000
> rows / 120 s = 500 tuples/second), and in a table with millions of
> records, vacuum will have a lot more to go through.
>
> 9.6 has some potentially significant enhancements in how autovacuum
> operates with respect to incrementally freezing blocks.
>
>
> If you think your update patterns can take advantage of HOT, it's a
> good idea to set the fillfactor of the table prior to populating it,
> maybe to 50%? This will make the initial table twice as large, but
> provides free space in every block for these in-block HOT operations
> (see the first sketch below).
>
> For a table that large, you'll definitely need to crank up the
> aggressiveness of autovacuum if you hope to keep up with that number
> of stale tuples distributed among millions of records (see the second
> sketch below).
>
>
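
A minimal sketch of the fillfactor suggestion above, in SQL. The table
name and columns are hypothetical placeholders, not from the thread:

CREATE TABLE hot_batch (
    batch_id   integer NOT NULL,
    payload    text,
    updated_at timestamptz DEFAULT now()
) WITH (fillfactor = 50);  -- pack pages only half full, leaving ~50%
                           -- free space in each block for HOT updates

-- An existing table can be changed too, but the setting only affects
-- newly written pages; a rewrite (e.g. VACUUM FULL) is needed to
-- respread the rows that are already stored:
ALTER TABLE hot_batch SET (fillfactor = 50);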
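
And a sketch of the per-table autovacuum tuning for the last point; the
values are illustrative guesses that would need testing against the
real workload:

-- Trigger a vacuum after ~1% of the table changes instead of the
-- default 20%, and drop the cost-based sleep so vacuum can keep up
-- with the steady stream of dead tuples:
ALTER TABLE hot_batch SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_cost_delay   = 0
);
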
Much appreciated - endOfOffTopic :)
