From: Israel Brewster <israel(at)ravnalaska(dot)net>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Improve PostGIS performance with 62 million rows?
Date: 2017-01-10 23:12:59
Message-ID: C41AA42C-C008-4D79-9BD2-AEC5E040972B@ravnalaska.net
Lists: pgsql-general
> On Jan 9, 2017, at 1:54 PM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>
> On Mon, Jan 9, 2017 at 11:49 AM, Israel Brewster <israel(at)ravnalaska(dot)net> wrote:
>
>> [load of new data]
>
>> Limit (cost=354643835.82..354643835.83 rows=1 width=9) (actual
>> time=225998.319..225998.320 rows=1 loops=1)
>
>> [...] I ran the query again [...]
>
>> Limit (cost=354643835.82..354643835.83 rows=1 width=9) (actual
>> time=9636.165..9636.166 rows=1 loops=1)
>
>> So from four minutes on the first run to around 9 1/2 seconds on the second.
>> Presumably this difference is due to caching?
>
> It is likely to be, at least in part. Did you run VACUUM on the
> data before the first run? If not, hint bits may be another part
> of it. The first access to each page after the bulk load would
> require some extra work for visibility checking and would cause a
> page rewrite for the hint bits.
That could be - I had planned to run a VACUUM ANALYZE after creating the indexes, but forgot. By the time I got around to running the second query, autovacuum should have kicked in and done it for me.
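For reference, the post-load housekeeping discussed above can be done explicitly rather than waiting on autovacuum. A minimal sketch (the table name `flight_tracks` is hypothetical, not from this thread):

```sql
-- Hedged sketch: after a bulk load, an explicit VACUUM ANALYZE sets hint
-- bits and refreshes planner statistics in one pass, so the first real
-- query does not pay the per-page visibility check and hint-bit rewrite.
-- "flight_tracks" is an assumed table name for illustration only.
VACUUM (ANALYZE, VERBOSE) flight_tracks;
```

Running this once between the bulk load and the first query shifts the one-time page-rewrite cost out of query time, which is one way to separate caching effects from hint-bit effects when comparing first-run and second-run timings.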
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company