From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, PostgreSQL-Dev <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vacuum: allow usage of more than 1GB of work mem
Date: 2016-09-06 18:16:11
Message-ID: CAGTBQpYLNZFwxkHFMY-W17Ju7Ssof18+yKf7FgrZiXweZWb3Ag@mail.gmail.com
Lists: pgsql-hackers
On Tue, Sep 6, 2016 at 3:11 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> We could get around (1) by something like Robert's idea of segmented
> allocation, but TBH I've seen nothing on this thread to make me think
> it's necessary or would even result in any performance improvement
> at all. The bigger we make that array, the worse index-cleaning
> is going to perform, and complicating the data structure will add
> another hit on top of that.
I wouldn't be so sure. I've seen cases where two binary searches were
faster than a single binary search, especially when working with
humongous arrays like this tid array, because touching fewer memory
pages per lookup pays off considerably.

I'd try it before giving up on the idea.
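For illustration, here's a minimal sketch of that two-level lookup idea
(everything here is a hypothetical stand-in, not the actual vacuum code or
ItemPointerData/lazy_tid_reaped): the first binary search only touches a
small directory of per-segment boundary tids, and only then does a second
search descend into the one large segment that can contain the key.

    #include <stddef.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct SimpleTid        /* simplified stand-in for a heap TID */
    {
        uint32_t blockno;
        uint16_t offset;
    } SimpleTid;

    typedef struct TidSegment
    {
        SimpleTid  *tids;           /* sorted dead TIDs in this segment */
        size_t      ntids;
        SimpleTid   last;           /* highest TID stored in the segment */
    } TidSegment;

    static int
    tid_cmp(const SimpleTid *a, const SimpleTid *b)
    {
        if (a->blockno != b->blockno)
            return (a->blockno < b->blockno) ? -1 : 1;
        if (a->offset != b->offset)
            return (a->offset < b->offset) ? -1 : 1;
        return 0;
    }

    /*
     * First binary search: find the segment whose highest TID is >= key.
     * Only the small directory of segment boundaries is touched here, so
     * the big per-segment arrays stay cold unless we descend into one.
     */
    static const TidSegment *
    find_segment(const TidSegment *segs, size_t nsegs, const SimpleTid *key)
    {
        size_t lo = 0, hi = nsegs;

        while (lo < hi)
        {
            size_t mid = lo + (hi - lo) / 2;

            if (tid_cmp(&segs[mid].last, key) < 0)
                lo = mid + 1;
            else
                hi = mid;
        }
        return (lo < nsegs) ? &segs[lo] : NULL;
    }

    /*
     * Second binary search: look for the key inside the chosen segment.
     */
    static bool
    tid_is_dead(const TidSegment *segs, size_t nsegs, const SimpleTid *key)
    {
        const TidSegment *seg = find_segment(segs, nsegs, key);
        size_t lo, hi;

        if (seg == NULL)
            return false;

        lo = 0;
        hi = seg->ntids;
        while (lo < hi)
        {
            size_t mid = lo + (hi - lo) / 2;
            int    c = tid_cmp(&seg->tids[mid], key);

            if (c == 0)
                return true;
            if (c < 0)
                lo = mid + 1;
            else
                hi = mid;
        }
        return false;
    }

The point is only that the extra level of search costs a handful of extra
comparisons against a directory that fits in cache, while the lookups into
the huge arrays stay confined to a single segment.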
The test results (which I'll post in a second) do lend credence to your
expectation that making the array bigger/more complex hurts index scan
performance. It's still faster than scanning the indexes several times,
though.