From: Hannu Krosing <hannu(at)tm(dot)ee>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
Cc: Jan Wieck <wieck(at)debis(dot)com>, pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] Re: Jesus, what have I done (was: LONG)
Date: 1999-12-12 19:55:36
Message-ID: 3853FDB8.DA34A6E0@tm.ee
Lists: pgsql-hackers
Bruce Momjian wrote:
>
> > > If most joins, comparisons are done on the 10% in the main table, so
> > > much the better.
> >
> > Yes, but how would you want to judge which varsize value to
> > put onto the "secondary" relation, and which one to keep in
> > the "primary" table for fast comparisions?
>
> There is only one place in heap_insert that checks for tuple size and
> returns an error if it exceeds block size. I recommend when we exceed
> that we scan the tuple, and find the largest varlena type that is
> supported for long relations, and set the long bit and copy the data
> into the long table. Keep going until the tuple is small enough, and if
> not, throw an error on tuple size exceeded. Also, prevent indexed
> columns from being made long.
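The loop Bruce describes above could be sketched roughly as follows. This is a toy C model, not actual PostgreSQL code: the struct, the constants, and the function names are all hypothetical, and the real heap_insert would also have to copy the value into the LONG relation rather than just flag it.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define BLOCK_SIZE    8192   /* assumed page size */
#define LONG_PTR_SIZE 16     /* assumed size of an out-of-line reference */

typedef struct {
    size_t size;        /* current on-disk size of this attribute */
    bool   is_varlena;  /* only varlena values may be moved out */
    bool   is_indexed;  /* indexed columns must stay in the main tuple */
    bool   moved;       /* true once the value lives in the LONG relation */
} ToyAttr;

/* Tuple size, counting moved values as small out-of-line pointers. */
static size_t tuple_size(const ToyAttr *attrs, int natts)
{
    size_t total = 0;
    for (int i = 0; i < natts; i++)
        total += attrs[i].moved ? LONG_PTR_SIZE : attrs[i].size;
    return total;
}

/* Keep moving the largest movable varlena out until the tuple fits;
 * return false (i.e. "tuple size exceeded") if nothing movable is left. */
static bool shrink_tuple(ToyAttr *attrs, int natts)
{
    while (tuple_size(attrs, natts) > BLOCK_SIZE) {
        int best = -1;
        for (int i = 0; i < natts; i++) {
            if (attrs[i].is_varlena && !attrs[i].is_indexed &&
                !attrs[i].moved && attrs[i].size > LONG_PTR_SIZE &&
                (best < 0 || attrs[i].size > attrs[best].size))
                best = i;
        }
        if (best < 0)
            return false;         /* no movable varlena left: error out */
        attrs[best].moved = true; /* value would be copied to LONG here */
    }
    return true;
}
```

Note how the `is_indexed` check implements the last sentence of the quote: an indexed column is never a candidate, so a tuple dominated by one huge indexed field still fails, which is exactly the case Hannu raises below.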
And prevent indexes from being created later if fields in some records
are made long?
Or would it be enough here to give out a warning?
Or should one try to re-pack these tuples?
Or, for tables that have mostly 10-char fields but an occasional 10K field,
we could possibly approach the indexes as currently proposed for tables,
i.e. make the index's data part point to the same LONG relation ?
The latter would probably open another can of worms.
---------
Hannu
Next Message: Jan Wieck, 1999-12-12 20:45:56, Re: [HACKERS] LONG
Previous Message: Tom Lane, 1999-12-12 19:15:47, Work plan: aggregate(DISTINCT ...)