From: Greg Smith <greg(at)2ndquadrant(dot)com>
To: "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, Josh Berkus <josh(at)agliodbs(dot)com>, Nathan Boley <npboley(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Thoughts on statistics for continuously advancing columns
Date: 2009-12-30 16:46:49
Message-ID: 4B3B83F9.6070308@2ndquadrant.com
Lists: pgsql-hackers
Joshua D. Drake wrote:
> We normally don't notice because most data sets won't incur a penalty. We have a customer
> who has a single table that is over 1TB in size... We notice. Granted, that is the extreme,
> but it would only take a quarter of that size (which is common) to start seeing issues.
>
Right, and the only thing that makes this case less painful is that you
don't really need the stats to be updated as often in situations
with that much data. If, say, your stats say there are 2B rows in the
table but there are actually 2.5B, that's a big absolute error, but it's
unlikely to change the types of plans you get. Once there are millions
of distinct values, it takes a big change for plans to shift, etc.
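To see why, here is a toy sketch of the idea (this uses made-up cost
formulas and constants, not PostgreSQL's actual planner math): when the
row-count estimate is off by a constant factor, the costs of the
competing plans scale together, so the cheaper plan usually stays the
cheaper plan.

```python
# Toy cost model -- hypothetical constants, NOT PostgreSQL's real
# planner formulas -- showing that a 25% error in the row count
# (2B estimated vs 2.5B actual) scales both plan costs proportionally
# and so does not flip which plan wins.

def seq_scan_cost(reltuples, rows_per_page=100, cpu_tuple_cost=0.01):
    # Read every page sequentially, then examine every tuple.
    pages = reltuples / rows_per_page
    return pages * 1.0 + reltuples * cpu_tuple_cost  # seq_page_cost = 1.0

def index_scan_cost(reltuples, ndistinct, random_page_cost=4.0):
    # Equality lookup on a column with ndistinct values:
    # selectivity is 1/ndistinct, one random page fetch per match.
    matching = reltuples / ndistinct
    return matching * random_page_cost

ndistinct = 5_000_000  # "millions of distinct values"
for reltuples in (2_000_000_000, 2_500_000_000):  # stale stats vs reality
    seq = seq_scan_cost(reltuples)
    idx = index_scan_cost(reltuples, ndistinct)
    winner = "index scan" if idx < seq else "seq scan"
    print(f"{reltuples:>13,} rows: seq={seq:,.0f} idx={idx:,.0f} -> {winner}")
```

Both row counts produce the same winner; it would take a change large
enough to move the estimate across the plans' cost crossover point, not
just a proportional drift, to shift the plan.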
--
Greg Smith 2ndQuadrant Baltimore, MD
PostgreSQL Training, Services and Support
greg(at)2ndQuadrant(dot)com www.2ndQuadrant.com