From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Israel Brewster <israel(at)ravnalaska(dot)net>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Improve PostGIS performance with 62 million rows?
Date: 2017-01-09 22:54:11
Message-ID: CACjxUsNOmjoHrMjJNmMR+Hso2oHRCr1qosSa6xDmdMB9q-V6VA@mail.gmail.com
Lists: pgsql-general
On Mon, Jan 9, 2017 at 11:49 AM, Israel Brewster <israel(at)ravnalaska(dot)net> wrote:
> [load of new data]
> Limit (cost=354643835.82..354643835.83 rows=1 width=9) (actual
> time=225998.319..225998.320 rows=1 loops=1)
> [...] I ran the query again [...]
> Limit (cost=354643835.82..354643835.83 rows=1 width=9) (actual
> time=9636.165..9636.166 rows=1 loops=1)
> So from four minutes on the first run to around 9 1/2 seconds on the second.
> Presumably this difference is due to caching?
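[One way to check how much of the difference is caching is to run the query with EXPLAIN (ANALYZE, BUFFERS), which reports shared buffer hits versus reads. This is a sketch, not the original poster's query; the table and column names are hypothetical.]

```sql
-- "shared hit" = pages found in PostgreSQL's buffer cache;
-- "read" = pages fetched from the OS / disk. A second run with
-- mostly hits and few reads points to caching as the cause.
EXPLAIN (ANALYZE, BUFFERS)
SELECT tail_number              -- hypothetical column
FROM flight_paths               -- hypothetical table
ORDER BY ST_Distance(geom, ST_MakePoint(-149.9, 61.2)::geography)
LIMIT 1;
```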
It is likely to be, at least in part. Did you run VACUUM on the
data before the first run? If not, hint bits may be another part
of it. The first access to each page after the bulk load would
require some extra work for visibility checking and would cause a
page rewrite for the hint bits.
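[The fix described above can be sketched as follows; the table name is hypothetical. Running VACUUM immediately after the bulk load sets the hint bits in one pass, so the first query does not pay that cost.]

```sql
-- After a bulk load, VACUUM visits every page and sets hint bits
-- (and ANALYZE refreshes planner statistics), so the first reader
-- doesn't have to do visibility checks and page rewrites itself.
VACUUM ANALYZE flight_paths;
```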
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company