From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: "Jignesh K(dot) Shah" <J(dot)K(dot)Shah(at)Sun(dot)COM>
Cc: Greg Smith <gsmith(at)gregsmith(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Benchmark Data requested
Date: 2008-02-05 09:08:14
Message-ID: 1202202494.4252.631.camel@ebony.site
Lists: pgsql-performance
On Mon, 2008-02-04 at 17:55 -0500, Jignesh K. Shah wrote:
> Doing it at low scales is not attractive.
>
> Commercial databases are publishing at scale factor 1000 (about 1TB)
> to 10000 (10TB), with one in 30TB space. So ideally, right now tuning
> should start at the 1000 scale factor.
I don't understand this. Sun is currently publishing results at 100GB,
300GB, etc. Why would we ignore those and go straight for much higher
numbers, especially when you explain why we wouldn't be able to reach
them? There isn't any currently valid result above 10TB.
If anybody is going to run tests in response to my request, then *any*
scale factor is interesting, on any hardware. If that means Scale Factor
1, 3, 10 or 30, then that's fine by me.
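
For anyone wanting to try, here's a minimal sketch of generating and
loading a small data set. It assumes the stock TPC-H dbgen kit and an
existing "tpch" database with the TPC-H schema already created (the
database name and file paths are illustrative):

    # Scale factor 1 produces roughly 1GB of raw data files
    ./dbgen -s 1
    # dbgen writes pipe-delimited .tbl files with a trailing '|';
    # strip it so COPY sees the right number of columns
    for f in *.tbl; do sed 's/|$//' "$f" > "$f.clean"; done
    # load one table via psql's client-side copy
    psql tpch -c "\copy lineitem from 'lineitem.tbl.clean' with delimiter '|'"

Swap -s 1 for -s 10 or -s 30 to generate data for the larger scale
factors.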
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com