From: | "Steinar H(dot) Gunderson" <sgunderson(at)bigfoot(dot)com> |
---|---|
To: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: Large table performance |
Date: | 2007-01-13 01:33:42 |
Message-ID: | 20070113013342.GB3133@uio.no |
Lists: pgsql-performance
On Fri, Jan 12, 2007 at 07:40:25PM -0500, Dave Cramer wrote:
> 5000 is pretty low, you need at least 1/4 of memory for an 8.1.x or
> newer server.
Is this the new "common wisdom"? It looks like at some point someone here
said "oh, and it looks like you're better off using large values here for
8.1.x and newer", and now everybody seems to repeat it as if it had always
been well known.
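
For concreteness, a minimal sketch of what the "1/4 of memory" advice would
look like on a hypothetical 4 GB box; note that 8.1.x still takes
shared_buffers as a count of 8 kB buffers rather than a memory unit:

  # postgresql.conf -- hypothetical machine with 4 GB of RAM
  # 1/4 of RAM = 1 GB = 131072 buffers of 8 kB each
  shared_buffers = 131072     # 8.1.x: number of 8 kB buffers
  # on 8.2 and later the same value can be written with a unit:
  # shared_buffers = 1GB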
Are there any real benchmarks out there that we can point to? And if you set
shared_buffers to half of the available memory, won't the kernel cache end up
duplicating more or less exactly the same data? (At least that's what people
used to say around here, but I guess the kernel cache adapts to the fact that
Postgres won't ask for the most common data, i.e. the pages already held in
its shared buffer cache.)
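
One way to at least inspect the Postgres side of that question (a sketch,
assuming the contrib/pg_buffercache module is installed in the database you
connect to) is to see which relations currently occupy the shared buffer
cache:

  -- top relations by number of 8 kB buffers held in shared_buffers
  SELECT c.relname, count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = c.relfilenode
  GROUP BY c.relname
  ORDER BY buffers DESC
  LIMIT 10;

What the kernel's page cache holds on top of that is much harder to see from
inside Postgres, which is exactly why some hard numbers would be nice.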
/* Steinar */
--
Homepage: http://www.sesse.net/