From: Richard Huxton <dev(at)archonet(dot)com>
To: Bruno Wolff III <bruno(at)wolff(dot)to>
Cc: KÖPFERL Robert <robert(dot)koepferl(at)sonorys(dot)at>, "'dpandey(at)secf(dot)com'" <dpandey(at)secf(dot)com>, pgsql-general(at)postgresql(dot)org, 'PostgreSQL' <pgsql-sql(at)postgresql(dot)org>
Subject: Re: [SQL] index row size 2728 exceeds btree maximum, 27
Date: 2005-06-02 17:00:17
Message-ID: 429F3B21.9020209@archonet.com
Lists: pgsql-general pgsql-sql
Bruno Wolff III wrote:
> On Thu, Jun 02, 2005 at 13:40:53 +0100,
> Richard Huxton <dev(at)archonet(dot)com> wrote:
>
>>Actually, Dinesh didn't mention he was using this for the speed of
>>lookup. He'd defined the columns as being the PRIMARY KEY, presumably
>>because he feels they are/should be unique. Given that they are rows
>>from a logfile, I'm not convinced this is the case.
>
>
> Even for that case you could still use hashes. The odds of a false collision
> using SHA-1 are so small that some sort of disaster is more likely.
> Another possibility, if there is a fixed number of possible messages,
> is that they could be entered in their own table with a serial PK and
> the other table could reference the PK.
Certainly, but if the text in the logfile row is the same, then hashing
isn't going to make a blind bit of difference. That's the root of my
concern, and something only Dinesh knows.
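Both points above can be illustrated with a short sketch. The example below (a minimal Python illustration using the standard hashlib module; the sample message text is made up) shows why hashing solves the index-size problem but not the uniqueness problem: a SHA-1 digest is a fixed 40 hex characters regardless of message length, yet identical inputs always produce identical digests, so duplicate logfile rows would still collide under a hash-based PRIMARY KEY.

```python
import hashlib

def message_key(text: str) -> str:
    """Return a fixed-length key for an arbitrarily long log message.

    A SHA-1 hex digest is always 40 characters, so it fits comfortably
    within btree index row-size limits no matter how large the
    original message column is.
    """
    return hashlib.sha1(text.encode("utf-8")).hexdigest()

# Several kilobytes of text -- far too wide to index directly.
long_message = "ERROR: connection lost " * 300

key = message_key(long_message)
print(len(key))                                # 40
print(key == message_key(long_message))        # True: deterministic,
# so two genuinely identical logfile rows still produce the same key.
print(message_key("msg A") != message_key("msg B"))  # True: distinct
# inputs virtually never collide, per the SHA-1 odds discussed above.
```

Indexing such a digest (for example via a functional index on the hash expression) keeps lookups fast and index rows small, but whether the underlying rows are actually unique is a property of the data itself, as noted above.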
--
Richard Huxton
Archonet Ltd
pgsql-general:
Next Message: Vivek Khera, 2005-06-02 17:27:20, "Re: Using pg_dump in a cron"
Previous Message: Bruno Wolff III, 2005-06-02 16:35:42, "Re: [SQL] index row size 2728 exceeds btree maximum, 27"

pgsql-sql:
Next Message: Bruno Wolff III, 2005-06-02 19:33:24, "Re: [SQL] index row size 2728 exceeds btree maximum, 27"
Previous Message: Bruno Wolff III, 2005-06-02 16:35:42, "Re: [SQL] index row size 2728 exceeds btree maximum, 27"