From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Scott Marlowe <smarlowe(at)g2switchworks(dot)com>
Cc: Dennis Bjorklund <db(at)zigo(dot)dhs(dot)org>, Jaime Casanova <systemguards(at)gmail(dot)com>, Dan Armbrust <daniel(dot)armbrust(dot)list(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: index row size exceeds btree maximum, 2713 -
Date: 2005-07-19 16:26:03
Message-ID: 7061.1121790363@sss.pgh.pa.us
Lists: pgsql-general
Scott Marlowe <smarlowe(at)g2switchworks(dot)com> writes:
> On Tue, 2005-07-19 at 10:25, Tom Lane wrote:
>> None of the index types support entries larger than BLOCKSIZE-less-a-bit,
>> so switching to a different index type won't do more than push the
>> problem out by a factor of about 3.
> Are they compressed? It would look to me like maybe they are, or
> something strange like that. When I fed highly compressible data into
> an indexed field, it took a LOT of said text to get a failure mode.
Yes, we do try to compress large index entries --- so the BLOCKSIZE or
BLOCKSIZE/3 limitation applies after compression. That's independent
of index type AFAIK. What we don't have is a TOAST table backing every
index to allow out-of-line storage ...
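
The standard workaround for this limit is to index a fixed-size hash of the value rather than the raw text, via an expression index. A minimal sketch (the table and column names here are hypothetical, and the 2713-byte figure in the subject line corresponds to the default 8 kB block size):

```sql
-- Hypothetical table whose text column can grow past the btree entry limit.
CREATE TABLE docs (id serial PRIMARY KEY, body text);

-- A plain index on the column fails with
--   "index row size exceeds btree maximum"
-- once the value, after compression, exceeds roughly BLCKSZ/3 bytes:
-- CREATE INDEX docs_body_idx ON docs (body);

-- Index a fixed-size digest of the value instead:
CREATE INDEX docs_body_md5_idx ON docs (md5(body));

-- Equality lookups must then use the same expression so the planner
-- can match the index:
SELECT id FROM docs WHERE md5(body) = md5('some long document text');
```

Note the trade-off: a hash index expression supports only equality lookups, not range scans or pattern matching, and a unique index on the hash enforces uniqueness of the digest rather than of the text itself.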
regards, tom lane