From: Leon <leon(at)udmnet(dot)ru>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers(at)postgreSQL(dot)org
Subject: Re[2]: [HACKERS] Fwd: Joins and links
Date: 1999-07-05 18:09:40
Message-ID: 2965.990705@udmnet.ru
Lists: pgsql-hackers
Hello Tom,
On Monday, July 05, 1999, you wrote:
T> If we did have such a concept, the speed penalties for supporting
T> hard links from one tuple to another would be enormous. Every time
T> you change a tuple, you'd have to try to figure out what other tuples
T> reference it, and update them all.
I'm afraid that's mainly because fields in Postgres have variable
length, so an updated tuple is rewritten at the end of the table. Am I
right? In that case such direct referencing could be done only for
tables with fixed-width rows, whose updates can naturally be done in
place, without moving the tuple. It is a small sacrifice, but it is
worth it.
T> Finally, I'm not convinced that the results would be materially faster
T> than a standard mergejoin (assuming that you have indexes on both the
T> fields being joined) or hashjoin (in the case that one table is small
T> enough to be loaded into memory).
Consider this: no indices, no optimizer work, no index lookups,
no nothing! Just the sequential number of the record multiplied by
the record size. Essentially three CPU operations: read, multiply,
look up. Can you see the gain now?
Best regards, Leon