From: Thomas Kellerer <spam_eater(at)gmx(dot)net>
To: pgsql-general(at)postgresql(dot)org
Subject: How does Postgres estimate the memory needed for sorting/aggregating
Date: 2017-01-25 20:59:25
Message-ID: o6b3j9$evt$1@blaine.gmane.org
Lists: pgsql-general
There was a question on dba.stackexchange recently:
http://dba.stackexchange.com/a/162117/1822
That question (and the answer) deals with the performance difference of a query caused by the _declared_ length of a VARCHAR column in SQL Server (everything else being equal, especially the actual data length).
For the curious: it does make a (big) difference in performance whether you declare varchar(100) or varchar(2000) in SQL Server - something that really surprised me.
The difference in SQL Server seems to be caused by its optimizer, which uses the _declared_ length of a column to estimate the memory needed for the aggregation (or sorting).
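To illustrate what I mean, here is roughly the kind of setup (my own sketch, not the exact test from the answer; table and column names are made up):

CREATE TABLE t_short (id integer, val varchar(100));
CREATE TABLE t_long  (id integer, val varchar(2000));

-- fill both tables with identical data, then run e.g.:
SELECT val, count(*)
FROM t_short   -- resp. t_long
GROUP BY val;

With identical data the two queries perform very differently, apparently only because SQL Server sizes the memory for the aggregate based on the declared length.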
Now, we all know that in Postgres there is no performance difference whatsoever for varchar columns, regardless of the declared length.
In one of the comments to that answer, the question was asked how Postgres knows how much memory it needs to allocate to do the aggregation.
I guess this is based on the column statistics stored in pg_stats, but I am not sure.
So here is my question: how does Postgres estimate/know the memory needed for the aggregation? Or does it dynamically resize the memory if the initial assumption was wrong?
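For reference, the statistics I had in mind are the average column widths that Postgres keeps, e.g. (the table name is just a placeholder):

SELECT tablename, attname, avg_width
FROM pg_stats
WHERE tablename = 'some_table';

EXPLAIN also reports a width=... per row in its estimates, so my (unverified) guess is that this average width, times the estimated row count, is what the planner compares against work_mem.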
Thomas