From: Ron Mayer <rm_pg(at)cheapcomplexdevices(dot)com>
To: Andrew Sullivan <ajs(at)crankycanuck(dot)ca>
Subject: Re: Dynamic Partitioning using Segment Visibility Maps
Date: 2008-01-07 20:53:31
Message-ID: 4782914B.6070400@cheapcomplexdevices.com
Lists: pgsql-hackers
Andrew Sullivan wrote:
> On Mon, Jan 07, 2008 at 07:16:35PM +0100, Markus Schiltknecht wrote:
>> ...the requirements: no single tuple in the segment may be
>> significantly out of the average bounds. Otherwise, the min/max gets
>> pretty useless and the segment can never be excluded.
>
> The segment can never be excluded in a search on that key, yes. But
> consider the likely cases we're looking at:
> ...you're searching for data in a given
> date range or for primary (sometimes artificial) identifiers in a range,
> and the source data increases (almost) monotonically.
It seems to me the idea discussed elsewhere in the thread[1]
about configurable segment sizes would make this limitation
much less problematic for some types of data.
Some of my biggest tables are clustered by zip code and are
insert-mostly. Common queries are "where state_provence='TX'" or
"where city='Dallas'". While I doubt I have enough data to fill
a 1GB segment for any but the largest cities, I certainly have
runs of many consecutive blocks, since clustering by zip tends
to cluster cities as well. Even though the table isn't monotonically
increasing or decreasing, like values for cities and states are
clustered together.
Is my understanding right that these Segment Visibility Maps could
help this case as well?
[1] http://archives.postgresql.org/pgsql-hackers/2008-01/msg00065.php
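To make the exclusion question concrete, here's a small sketch (my own
illustration, not the proposed implementation) of how per-segment
min/max bounds could let a scan skip segments of a zip-clustered table,
and how a single out-of-range tuple widens the bounds enough that the
segment can no longer be excluded, as Markus noted above:

```python
# Hypothetical sketch of min/max segment exclusion; segment layout,
# function names, and zip values are all invented for illustration.

def segment_bounds(segment):
    """Min/max of the relevant column within one on-disk segment."""
    return min(segment), max(segment)

def excludable(segment, value):
    """A segment can be skipped when `value` lies outside its bounds."""
    lo, hi = segment_bounds(segment)
    return value < lo or value > hi

# Clustering by zip keeps like cities in consecutive blocks, so a
# segment's bounds stay tight even without monotonic inserts.
seg_dallas = ["75201", "75202", "75203"]   # all Dallas-area zips
seg_mixed  = ["75204", "75205", "10001"]   # one stray NYC zip

# Query: WHERE zip = '10001'
print(excludable(seg_dallas, "10001"))  # True: segment skipped entirely
print(excludable(seg_mixed,  "10001"))  # False: the outlier widened the
                                        # bounds, so this segment is read
```

The second segment shows the limitation from the quoted text: one tuple
far outside the segment's typical range makes its min/max useless for
exclusion, which is why configurable (smaller) segment sizes could help.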