From: Joshua Berkus <josh(at)agliodbs(dot)com>
To: Peter van Hardenberg <pvh(at)pvh(dot)ca>
Cc: postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Determining working set size
Date: 2012-03-27 19:58:22
Message-ID: 1125348837.158829.1332878302273.JavaMail.root@mail-1.01.com
Lists: pgsql-performance
Peter,
Check out pg_fincore. It's still somewhat risky on a production server, but it does an excellent job of measuring which pages are resident in the OS cache on Linux.
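A minimal sketch of what that looks like, assuming the pgfincore extension is installed and you have superuser rights; "my_table" is a placeholder, and the exact output columns vary by pgfincore version:

```sql
-- Requires the pgfincore extension (Linux only).
CREATE EXTENSION pgfincore;

-- Fraction of a table's on-disk pages currently resident in the
-- OS page cache, per 1 GB segment.
SELECT relpath,
       rel_os_pages,
       pages_mem,
       round(100.0 * pages_mem / nullif(rel_os_pages, 0), 1) AS pct_cached
FROM pgfincore('my_table');
```

Running this periodically and watching pct_cached gives a rough picture of how much of the relation the kernel is actually keeping hot.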
----- Original Message -----
> Baron Schwartz's recent post [1] on working set size got me
> thinking.
> I'm well aware of how I can tell when my database's working set
> exceeds available memory (cache hit rate plummets, performance
> collapses), but it's less clear how I could predict when this might
> occur.
>
> Baron's proposed method for defining working set size is interesting.
> Quoth:
>
> > Quantifying the working set size is probably best done as a
> > percentile over time.
> > We can define the 1-hour 99th percentile working set size as the
> > portion of the data
> > to which 99% of the accesses are made over an hour, for example.
>
> I'm not sure whether it would be possible to calculate that today in
> Postgres. Does anyone have any advice?
>
> Best regards,
> Peter
>
> [1]:
> http://www.fusionio.com/blog/will-fusionio-make-my-database-faster-percona-guest-blog/
>
> --
> Peter van Hardenberg
> San Francisco, California
> "Everything was beautiful, and nothing hurt." -- Kurt Vonnegut
>
> --
> Sent via pgsql-performance mailing list
> (pgsql-performance(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
>
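Baron's percentile definition can be expressed directly in SQL. This is a sketch only: Postgres does not collect per-block access logs out of the box, so the access_log(block_id, accessed_at) table below is a hypothetical trace you would have to gather yourself, and the 8192-byte page size is the default build setting:

```sql
-- 1-hour 99th-percentile working set: the smallest set of hottest
-- blocks that covers 99% of all accesses in the last hour.
WITH hits AS (
    SELECT block_id, count(*) AS n
    FROM access_log                       -- hypothetical trace table
    WHERE accessed_at > now() - interval '1 hour'
    GROUP BY block_id
), ranked AS (
    SELECT n,
           sum(n) OVER (ORDER BY n DESC) AS running,  -- cumulative hits
           sum(n) OVER ()                AS total
    FROM hits
)
SELECT count(*) * 8192 AS working_set_bytes  -- assumes 8 kB pages
FROM ranked
WHERE running - n < total * 0.99;            -- blocks up to the 99% mark
```

The window functions rank blocks from hottest to coldest and keep just enough of them to account for 99% of accesses; the count times the page size is the working set estimate.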