From: "MauMau" <maumau307(at)gmail(dot)com>
To: "Magnus Hagander" <magnus(at)hagander(dot)net>
Cc: "PostgreSQL-development" <pgsql-hackers(at)postgresql(dot)org>, "Bruce Momjian" <bruce(at)momjian(dot)us>, "Josh Berkus" <josh(at)agliodbs(dot)com>
Subject: Re: Auto-tuning work_mem and maintenance_work_mem
Date: 2013-10-15 12:41:18
Message-ID: F62D9BD026D54AAB8CD03226FECB1139@maumau
Lists: pgsql-hackers
From: "Magnus Hagander" <magnus(at)hagander(dot)net>
> On Oct 12, 2013 2:13 AM, "MauMau" <maumau307(at)gmail(dot)com> wrote:
>> I'm not sure if many use XA features, but I saw the questions and answers
>> a few times, IIRC. In the trouble situation, PostgreSQL outputs an
>> intuitive message like "increase max_prepared_transactions", so many users
>> might have been able to change the setting and solve the problem
>> themselves without asking for help, while feeling stress like "Why do I
>> have to set this?" For example, max_prepared_transactions is called a
>> "hideous creature" on the following page:
>>
>> https://community.jboss.org/wiki/InstallPostgreSQLOnFedora?_sscc=t
>
> Anybody who follows that page is screwed anyway. I notice they recommend
> running regular VACUUM FULL across the whole database, so it's obvious they
> know nothing about postgresql. There's nothing we can do about what people
> write on random pages around the Internet.
Regular VACUUM FULL is certainly overkill. Apart from that, having to set
max_prepared_transactions seems to make PostgreSQL difficult for people with
that level of knowledge, doesn't it? I wonder if there are other major
DBMSs which require parameter configuration and a server restart to use
distributed transactions.
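For illustration, this is roughly what such users run into (a hypothetical session; the ERROR and HINT texts are the actual server messages, the transaction name is made up):

```sql
-- With the default max_prepared_transactions = 0:
BEGIN;
PREPARE TRANSACTION 'xa_demo';
-- ERROR:  prepared transactions are disabled
-- HINT:  Set max_prepared_transactions to a nonzero value.

-- The fix requires editing postgresql.conf and restarting the server
-- (a reload is not enough, because it allocates shared memory):
-- max_prepared_transactions = 100    # e.g. match max_connections
```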
>
>> According to the below page, the amount of memory consumed for this is
>> "(770 + 270 * max_locks_per_transaction) * max_prepared_transactions".
>> With the default settings of max_connections=100 and
>> max_locks_per_transaction=64, this is only about 1.8MB. So the overhead
>> is negligible.
>
> You are assuming memory is the only overhead. I don't think it is.
Having had a quick look at the source code, just setting
max_prepared_transactions to a non-zero value seems to add almost no
processing overhead.
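A quick back-of-the-envelope check of that formula (assuming max_prepared_transactions is set to match max_connections = 100):

```python
# Shared memory consumed by prepared-transaction state, per the formula
# quoted above: (770 + 270 * max_locks_per_transaction) bytes per slot.
def prepared_xact_shmem(max_prepared_transactions, max_locks_per_transaction=64):
    return (770 + 270 * max_locks_per_transaction) * max_prepared_transactions

total = prepared_xact_shmem(100)  # max_prepared_transactions = 100
print(total)                      # 1805000 bytes, i.e. about 1.8 MB
```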
>> If the goal is to make PostgreSQL more friendly and run smoothly without
>> frustration from the start, rather than perfect tuning, I think
>> max_prepared_transactions=max_connections is an easy and good item. If the
>> goal is limited to auto-tuning memory sizes, this improvement can be
>> treated separately.
>
> Frankly, I think we'd help 1000 times more users if we enabled a few WAL
> senders by default and bumped the wal level, mainly so they could run
> one-off base backups. That's used by orders of magnitude more users than XA.
Agreed. Non-zero max_wal_senders and wal_level of at least 'archive' by
default would benefit more users. Likewise, a non-zero
max_prepared_transactions would improve the impression of PostgreSQL (for a
limited number of users, though), and it wouldn't do any harm.
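Concretely, the defaults being discussed would look something like the
following in postgresql.conf (illustrative values only, not an agreed
proposal):

```
# Enable one-off base backups out of the box:
wal_level = archive              # instead of the current default 'minimal'
max_wal_senders = 3              # instead of 0; allows pg_basebackup

# Enable XA / two-phase commit out of the box:
max_prepared_transactions = 100  # e.g. equal to max_connections
```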
Regards
MauMau