From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Size of Path nodes
Date: 2015-12-05 17:19:53
Message-ID: CA+TgmoatfVix5_ZuyAYB6YbcuYzyYyaALUDXaYJHPCVxiEG0Qw@mail.gmail.com
Lists: pgsql-hackers
On Fri, Dec 4, 2015 at 4:00 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> If it's really true that the extra byte I added there has doubled
> planner memory use, then that's definitely cause for concern.
> However, I am skeptical that's really what has happened here. Not
> every path has crossed a power-of-two threshold, and paths are not the
> only things the planner allocates. What's a reasonable way to assess
> the effect of this on planner memory use in general?
So I did a really crude test of this. I put
MemoryContextStats(MessageContext) - which seems to be where the
planner garbage is going - at the end of the message-processing loop,
and then ran the regression tests with and without parallel_aware in
the Path structure. Then I ran a little grep and awk magic over the
postmaster logs and compared the sizes of contexts. For reasons I
haven't tracked down, the number of instrumentation lines I got with
and without the flag differed. But the overall pattern seems pretty
clear. In the results below, the "without" number is the number of
times MessageContext had allocated the specified amount of storage
space without the parallel_aware flag; the "with" number is the number
of times it had allocated the specified amount of storage with the
parallel_aware flag.
size 8192 without 7589 with 7605
size 16384 without 6074 with 6078
size 16448 without 1 with 1
size 24576 without 26 with 27
size 24640 without 75 with 68
size 26448 without 3 with 3
size 32768 without 1747 with 1760
size 36512 without 0 with 1
size 42832 without 1 with 1
size 57344 without 7 with 9
size 57520 without 151 with 152
size 65536 without 1319 with 1349
size 66448 without 1 with 1
size 73728 without 4 with 5
size 73792 without 1 with 1
size 73904 without 2 with 2
size 116448 without 4 with 4
size 122880 without 4 with 4
size 131072 without 631 with 638
size 139264 without 12 with 12
size 216512 without 4 with 4
size 262144 without 496 with 504
size 270336 without 5 with 5
size 316448 without 1 with 1
size 516448 without 2 with 2
size 524288 without 73 with 74
size 532480 without 1 with 1
size 816512 without 1 with 1
size 1048576 without 19 with 19
size 1216448 without 1 with 0
size 2097152 without 4 with 5
queries_with=18337 queries_without=18259 total_with=612886960
total_without=605744096
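(For reference, the aggregation step could be sketched roughly like this - a Python stand-in for the grep/awk magic, with a hypothetical log-line format standing in for the actual MemoryContextStats output:)

```python
import re
from collections import Counter

# Hypothetical log-line format: MemoryContextStats reports each context's
# total allocated size on a line like the samples below. The real output
# format may differ; this is only an illustration of the tallying.
PATTERN = re.compile(r"MessageContext: (\d+) total in \d+ blocks")

def tally(log_lines):
    """Count how many message-processing loops ended with each total
    MessageContext size."""
    sizes = Counter()
    for line in log_lines:
        m = PATTERN.search(line)
        if m:
            sizes[int(m.group(1))] += 1
    return sizes

sample = [
    "MessageContext: 8192 total in 1 blocks; 2712 free; 5480 used",
    "MessageContext: 16384 total in 2 blocks; 4296 free; 12088 used",
    "MessageContext: 8192 total in 1 blocks; 1024 free; 7168 used",
]
counts = tally(sample)
# counts[8192] == 2, counts[16384] == 1
```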
What I think this is showing is that making Path bigger occasionally
pushes palloc over a boundary so that it allocates another chunk, but
most of the time it doesn't. Also, it suggests to me that if we're
concerned about keeping memory utilization tight on these kinds of
queries, we could think about changing palloc's allocation pattern.
For example, if we did 8k, 16k, 32k, 64k, 64k, 64k, 64k, 128k, 128k,
128k, 128k, 256k, 256k, 256k, 256k ... a lot of these queries would
consume less memory.
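(The difference between the two growth patterns can be seen with a quick simulation - a sketch only; the 64k plateau point and repeat count of four are illustrative assumptions from the sequence above, not a concrete proposal:)

```python
def blocks_doubling(demand, init=8192):
    """Pure doubling: each new block is twice the previous one
    (8k, 16k, 32k, 64k, 128k, ...). Returns the block sizes
    allocated to satisfy `demand` bytes."""
    total, size, sizes = 0, init, []
    while total < demand:
        sizes.append(size)
        total += size
        size *= 2
    return sizes

def blocks_proposed(demand, init=8192, plateau=65536, repeats=4):
    """Slower growth: double up to the plateau, then allocate each
    size `repeats` times before doubling again
    (8k, 16k, 32k, 64k, 64k, 64k, 64k, 128k, 128k, ...)."""
    total, size, count, sizes = 0, init, 0, []
    while total < demand:
        sizes.append(size)
        total += size
        if size < plateau:
            size *= 2
        else:
            count += 1
            if count == repeats:
                size *= 2
                count = 0
    return sizes

# A query needing ~130k of planner garbage: doubling jumps straight to a
# 128k block, while the slower pattern adds another 64k block instead.
# sum(blocks_doubling(133120)) == 253952
# sum(blocks_proposed(133120)) == 188416
```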
If there's a strong feeling that I should find a way to make this
testing more rigorous, I'm willing to do so, but I suspect that we're
not going to find anything very exciting here. A more likely angle of
investigation here is to try to figure out what a worst case for
enlarging the Path structure might look like, and test that. I don't
have a brilliant idea there right at the moment, but I'll mull it
over.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company