From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Greg Smith <gsmith(at)gregsmith(dot)com>
Cc: Richard Huxton <dev(at)archonet(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Simple postgresql.conf wizard
Date: 2008-11-18 17:39:23
Message-ID: 4922FDCB.5050002@agliodbs.com
Lists: pgsql-hackers
Greg,
> To give you an idea how overdiscussed this general topic is, I just sent
> a message to Josh suggesting we might put database size into tiers and
> set some parameters based on that. Guess what? That was his idea the
> last time around, I subconsciously regurgitated it:
> http://archives.postgresql.org/pgsql-performance/2007-06/msg00602.php
Heh, no wonder it sounded good.
However, after a year more of experience, I'd suggest that we solicit the
specific use-case type from the user rather than determining it strictly
from database size. The specific elements of a "DW" use-case aren't
necessarily tied to size. They are:
* data arrives in large batches rather than as individual rows
* small numbers of users
* large complex queries
For example, right now I'm refactoring a database which is only 15GB
but definitely shows DW behavior, so we want to keep max_connections
under 20 and turn autovacuum off.
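To make that concrete, here's a quick Python sketch of those two overrides;
the names and structure are mine, not anything the wizard actually emits:

    # Sketch only: the two non-default settings mentioned above for a small
    # but DW-shaped database, rendered as postgresql.conf lines.
    dw_overrides = {
        "max_connections": "20",   # keep max_connections under 20
        "autovacuum": "off",       # batch loads get manual VACUUM/ANALYZE instead
    }
    for name, value in dw_overrides.items():
        print(f"{name} = {value}")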
So I think we should ask the user what kind of DB they have (*with* docs
that explain what the types mean) and fall back to deciding by size if
that info is not supplied.
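A rough Python sketch of that decision order (the type labels and the size
threshold below are placeholders of mine, not anything we've agreed on):

    def choose_workload(user_type=None, db_size_gb=None):
        """Prefer an explicitly chosen workload type; fall back to size."""
        known_types = {"dw", "mixed"}      # hypothetical label set
        if user_type in known_types:
            return user_type               # a documented type from the user wins
        if db_size_gb is not None and db_size_gb >= 100:
            return "dw"                    # placeholder size tier, not a settled number
        return "mixed"                     # nothing better to go on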
Regarding the level of default_statistics_target, it sounds like people
agree that it ought to be raised for the DW use-case, but disagree how
much. If that's the case, what if we compromise at 50 for "mixed" and
100 for DW? That should allay people's fears, and we can tinker with it
when we have more data.
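In wizard terms that compromise might look something like this (only the
50 and 100 come from this proposal; the rest of the structure is just
illustration, with 10 being the stock default today):

    DEFAULT_STATISTICS_TARGET = {
        "mixed": 50,
        "dw": 100,
    }

    def stats_target(workload):
        # Fall back to the built-in default for workloads we don't special-case.
        return DEFAULT_STATISTICS_TARGET.get(workload, 10)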
--Josh