From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: add_path optimization
Date: 2009-02-03 03:48:45
Message-ID: 20090203034845.GS8123@tamriel.snowman.net
Lists: pgsql-hackers
* Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
> I'm interested, but I need maybe a 1GB data set, or smaller. The
> thing that we are benchmarking is the planner, and planning times are
> related to the complexity of the database and the accompanying
> queries, not the raw volume of data. (It's not size that matters,
> it's how you use it?) In fact, in a large database, one could argue
> that there is less reason to care about the planner, because the
> execution time will dominate anyway. I'm interested in complex
> queries in web/OLTP type applications, where you need the query to be
> planned and executed in 400 ms at the outside (and preferably less
> than half of that).
We prefer that our geocoding be fast... :) Loading a single state should
give you about the right size (0.5 to 1 GB of data). I'll try to put
together a good test set this week.
Stephen
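
[Editor's note: the planning-versus-execution distinction Robert describes can be observed directly in psql. EXPLAIN without ANALYZE invokes only the planner, so with client-side timing enabled the reported elapsed time approximates planning cost. A minimal sketch, not part of the original thread; the table and query are hypothetical:]

```sql
-- In psql, enable client-side timing:
\timing on

-- EXPLAIN alone plans the query but does not execute it, so the
-- elapsed time is dominated by the planner (plus one network
-- round trip to the server).
EXPLAIN
SELECT a.id, s.name
FROM addresses a
JOIN streets s ON s.id = a.street_id
WHERE a.zip = '05401';

-- Recent PostgreSQL versions also report a "Planning Time" line
-- directly in EXPLAIN ANALYZE output, separating it from
-- "Execution Time".
```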