From: "Simon Riggs" <simon(at)2ndquadrant(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Balazs Nagy" <bnagy(at)thenewpush(dot)com>, pgsql-bugs(at)postgresql(dot)org, pgsql-performance(at)postgresql(dot)org
Subject: Re: [PERFORM] BUG #2737: hash indexing large table fails, while btree of same index works
Date: 2006-11-11 08:17:54
Message-ID: 1163233075.3634.944.camel@silverbirch.site
Lists: pgsql-bugs pgsql-performance
On Fri, 2006-11-10 at 18:55 -0500, Tom Lane wrote:
> [ cc'ing to pgsql-performance because of performance issue for hash indexes ]
>
> "Balazs Nagy" <bnagy(at)thenewpush(dot)com> writes:
> > Database table size: ~60 million rows
> > Field to index: varchar 127
>
> > CREATE INDEX ... USING hash ...
I'd be interested in a performance test showing that this is the best way
to index the table, though, especially on such a large column. No wonder
the index is 8GB.
> One thought that comes to mind is to require hash to do an smgrextend()
> addressing the last block it intends to use whenever it allocates a new
> batch of blocks, whereupon md.c could adopt a saner API: allow
> smgrextend but not other calls to address blocks beyond the current EOF.
> Thoughts?
Yes, do it.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com