From: Richard Huxton <dev(at)archonet(dot)com>
To: Rick Gigger <rick(at)alpinenetworking(dot)com>, "Ed L(dot)" <pgsql(at)bluepolka(dot)net>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Martijn van Oosterhout <kleptog(at)svana(dot)org>, pgsql-general(at)postgresql(dot)org
Subject: Re: DB cache size strategies
Date: 2004-02-11 22:49:37
Message-ID: 200402112249.37046.dev@archonet.com
Lists: pgsql-general
On Wednesday 11 February 2004 21:40, Rick Gigger wrote:
> Has anyone discussed adding to postgres the ability to figure this out
> on its own? Couldn't it gather some statistics about the kind of
> resources that it is actually using and adjust accordingly? You could
> give it a maximum amount to use for the shared buffers, but if that was so
> high that it degraded performance, postgres could just cut back on what it
> actually used.
>
> Is this even feasible? Correct me if I am wrong, but it seems that most
> other databases work this way.
>
> It would make installing a nicely tuned postgres a much more turn-key
> operation.
What if making the DB run faster makes everything else run slower? How would it
know whether 0.1 sec of I/O time was simply the disk at work, or whether another
process was contending for disk access?
Then again, maybe interactive speed isn't important, but your bulk update is.
Or, perhaps your report can wait, but a realtime response is vital.
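
For what it's worth, you can already get a rough idea of how well the current
shared_buffers setting is coping by watching the stats collector's block
counters (assuming the stats collector and block-level stats are switched on
in postgresql.conf). A minimal sketch, just as an illustration:

  -- Per-database buffer hit ratio from the stats collector.
  -- Needs block-level statistics enabled; counters accumulate since startup
  -- or the last stats reset, so watch them under your normal workload.
  SELECT datname,
         blks_hit,
         blks_read,
         round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3)
           AS buffer_hit_ratio
  FROM pg_stat_database;

A persistently low hit ratio suggests the working set doesn't fit in cache, but
it still tells you nothing about what the rest of the machine wanted that memory
for - which is exactly the problem with trying to automate the decision.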
--
Richard Huxton
Archonet Ltd