From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Constant time insertion into highly non-unique indexes
Date: 2005-04-14 17:36:44
Message-ID: 1113500204.16721.1951.camel@localhost.localdomain
Lists: pgsql-hackers
On Thu, 2005-04-14 at 12:10 -0400, Tom Lane wrote:
> The first of these should of course force a btree split on the first
> page each time it splits, while the second will involve the
> probabilistic moveright on each split. But the files will be exactly
> the same size.
>
> [tgl(at)rh1 ~]$ time psql -f zdecr10 test
> TRUNCATE TABLE
>
> real 1m41.681s
> user 0m1.424s
> sys 0m0.957s
> [tgl(at)rh1 ~]$ time psql -f zsame10 test
> TRUNCATE TABLE
>
> real 1m40.927s
> user 0m1.409s
> sys 0m0.896s
> [tgl(at)rh1 ~]$
I think that's conclusive.
> So the theory does work, at least for small index entries. Currently
> repeating with wider ones ...
I think we should adjust the probability for longer item sizes - many
identifiers can be 32 bytes, and there are many people with a non-unique
URL column, for example. An average of over 2 blocks/insert at 16 bytes
is still one too many for my liking, though I do understand the need for
the randomness.
I'd suggest a move-right probability of 97% (divide by 32) for itemsz >
16 bytes and 94% (divide by 16) when itemsz >= 128.
Though I think functional indexes are the way to go there.
Best Regards,
Simon Riggs