From: Josh Berkus <josh(at)agliodbs(dot)com>
To: pgsql-perform <pgsql-performance(at)postgresql(dot)org>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: [HACKERS] Bad n_distinct estimation; hacks suggested?
Date: 2005-04-25 19:18:26
Message-ID: 200504251218.27072.josh@agliodbs.com
Lists: pgsql-hackers pgsql-performance
Guys,
> While it's not possible to get accurate estimates from a fixed size sample,
> I think it would be possible from a small but scalable sample: say, 0.1% of
> all data pages on large tables, up to the limit of maintenance_work_mem.
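The quoted proposal can be sketched in a few lines. This is a hypothetical illustration, not PostgreSQL's actual memory accounting: it assumes the default 8 KB block size and treats the cap as simply "how many pages fit in maintenance_work_mem".

```python
PAGE_SIZE = 8192  # default PostgreSQL block size, assumed here

def sample_pages(total_pages: int,
                 maintenance_work_mem_kb: int,
                 fraction: float = 0.001) -> int:
    """Pages to sample: 0.1% of the table, capped by how many
    pages fit in maintenance_work_mem (illustrative accounting)."""
    target = max(1, int(total_pages * fraction))
    mem_limit_pages = (maintenance_work_mem_kb * 1024) // PAGE_SIZE
    return min(target, mem_limit_pages)

# A 10-million-page table with 64 MB of maintenance_work_mem:
# 0.1% would be 10,000 pages, but only 8,192 pages fit in memory.
print(sample_pages(10_000_000, 65_536))  # -> 8192
# A small table is unaffected by the cap:
print(sample_pages(100_000, 65_536))     # -> 100
```

The point of the design is that the sample grows with the table (so large tables get enough pages for a usable estimate) while the memory cap keeps ANALYZE-style scans bounded.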
BTW, when I say "accurate estimates" here, I'm talking about "accurate enough
for planner purposes", which in my experience is a range between 0.2x and 5x.
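That tolerance can be stated as a simple predicate. A minimal sketch, assuming the 0.2x–5x range above is applied as a ratio of estimated to actual n_distinct (the function name and interface are illustrative, not anything in PostgreSQL):

```python
def planner_accurate(estimate: float, actual: float) -> bool:
    """True when the estimate is within 0.2x to 5x of the actual
    n_distinct -- 'accurate enough for planner purposes' (assumed)."""
    if actual <= 0:
        raise ValueError("actual n_distinct must be positive")
    ratio = estimate / actual
    return 0.2 <= ratio <= 5.0

print(planner_accurate(500, 1000))  # 0.5x  -> True
print(planner_accurate(10, 1000))   # 0.01x -> False
```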
--
--Josh
Josh Berkus
Aglio Database Solutions
San Francisco
 | From | Date | Subject
---|---|---|---
Next Message | Hans-Jürgen Schönig | 2005-04-25 19:22:21 | Re: Constant WAL replay
Previous Message | Josh Berkus | 2005-04-25 19:13:18 | Re: [HACKERS] Bad n_distinct estimation; hacks suggested?