Re: INDEX Performance Issue

From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Mark Davidson <mark(at)4each(dot)co(dot)uk>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: INDEX Performance Issue
Date: 2013-04-15 19:30:51
Message-ID: CAMkU=1wpgv=QuOjXy2rOhXUFUH4cMTa3t+8RoRxr72DQy0r2ew@mail.gmail.com
Lists: pgsql-performance

On Fri, Apr 5, 2013 at 8:51 AM, Mark Davidson <mark(at)4each(dot)co(dot)uk> wrote:

> Hi All,
>
> Hoping someone can help me out with some performance issues I'm having
> with the INDEX on my database. I've got a database that has a data table
> containing ~55,000,000 rows which have point data and an area table
> containing ~3,500 rows which have polygon data. A user queries the data by
> selecting what areas they want to view and using some other filters such as
> datetime and what datasets they want to query. This all works fine, and
> previously the intersect of the data rows with the areas was being done on
> the fly with PostGIS ST_Intersects. However, as the data table grew we
> decided it would make sense to offload the data processing and not
> calculate the intersect for a row on the fly each time, but to
> pre-calculate it and store the result in a join table. As a result, this
> produced a table data_area which contains ~250,000,000 rows.
>

I think your old method is likely the better option, especially if the
intersect can be offloaded to the client or app server (I don't know enough
about ST_Intersects to know how likely that is).
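To make the comparison concrete, here is a rough sketch of the two query shapes as I understand them from your description (the table and column names here are guesses, not your actual schema):

    -- Old method: intersect computed on the fly with PostGIS.
    SELECT d.*
    FROM data d
    JOIN area a ON ST_Intersects(a.geom, d.point)
    WHERE a.id = ANY (:selected_area_ids)
      AND d.captured_at BETWEEN :start_time AND :end_time;

    -- New method: intersect pre-computed into the ~250M-row join table.
    SELECT d.*
    FROM data d
    JOIN data_area da ON da.data_id = d.id
    WHERE da.area_id = ANY (:selected_area_ids)
      AND d.captured_at BETWEEN :start_time AND :end_time;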

What is the difference in performance between the old method and the new
method?
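If you can run a representative query against both schemas and post the plans, that would show where the time is going, e.g.:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT ...;  -- the same representative query, once per schema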

Cheers,

Jeff
