From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Kouhei Kaigai <kaigai(at)ak(dot)jp(dot)nec(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: DBT-3 with SF=20 got failed
Date: 2015-09-24 17:42:59
Message-ID: CA+TgmoZ3rsBJNUq6MfaD-bHecH0t1v-CB5++1HwBOpvVSNF2KQ@mail.gmail.com
Lists: pgsql-hackers
On Thu, Sep 24, 2015 at 12:40 PM, Tomas Vondra
<tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
> There are two machines - one with 32GB of RAM and work_mem=2GB, the other
> one with 256GB of RAM and work_mem=16GB. The machines are hosting about the
> same data, just scaled accordingly (~8x more data on the large machine).
>
> Let's assume there's a significant over-estimate - we expect to get about
> 10x the actual number of tuples, and the hash table is expected to almost
> exactly fill work_mem. Using the 1:3 ratio (as in the query at the beginning
> of this thread) we'll use ~512MB and ~4GB for the buckets, and the rest is
> for entries.
>
> Thanks to the 10x over-estimate, ~64MB and 512MB would be enough for the
> buckets, so we're wasting ~448MB (13% of RAM) on the small machine and
> ~3.5GB (~1.3%) on the large machine.
>
> How does it make any sense to address the 1.3% and not the 13%?
One of us is confused, because from here it seems like 448MB is 1.3%
of 32GB, not 13%.
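
Spelling out the arithmetic on the figures you quoted:

    448 MB / 32 GB   =  448 / 32768   ≈ 1.4%
    3.5 GB / 256 GB  = 3584 / 262144  ≈ 1.4%

So the wasted fraction of RAM looks about the same on both machines.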
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company