On 02/19/2015 10:19 AM, brian wrote:
> On Thu, 19 Feb 2015 09:30:57 -0700, you wrote:
>
>> On 02/19/2015 09:10 AM, brian wrote:
>>> Hi folks,
>>>
>>> I have a single-user application which is growing beyond the
>>> fixed-format data files in which it currently holds its data, I need a
>>> proper database as the backend. The front end is written using Lazarus
>>> and FreePascal under Linux, should anyone feel that makes a
>>> difference. The database will need to grow to around 250,000 records.
>>>
>>> My problem is with the data field which is the (unique) key. It's
>>> really a single 192-bit integer (it holds various bits of bitmapped
>>> data) which I currently hold as six 32-bit integers, but can convert
>>> if needed when transferring the data.
>>>
>>> How would you advise that I hold this field in a Postgres database,
>>> given the requirement for the whole thing to be a unique key? The
>>> first 64 bits change relatively infrequently, the last 128 bits will
>>> change with virtually every record. The last 128 bits will ALMOST be
>>> unique in themselves, but not quite. :(
>>>
>>> Thanks,
>>>
>>> Brian.
>>>
>>>
>> If your application understands/parses/makes use of the data in those
>> 192 bits, I would reload with an additional unique id field. For the
>> intended number of rows of data a sequence would be fine, though I'm
>> partial to UUIDs. Alternatively, map the 192 bits to two fields and
>> make a unique key of both of them. A third alternative would be to use
>> a binary bit string as suggested by Brian.
>
> Thanks. The purpose of the field is purely as a check against the user
> feeding the same data in twice. Once I've constructed it, I never pull
> the field apart again. It had to be done this way, as otherwise the
> boolean statement to check for uniqueness was horrendous.
>
> Brian.
>
Then B. Dunavant's suggestion is probably best. Certainly the easiest. How
(else) does your app or reporting query this data? That could also
affect your choice.
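
For what it's worth, since the field is only a duplicate-feed check and is never
pulled apart again, the "single opaque value" approach can be sketched like this:
pack the six 32-bit integers into 24 bytes and store that in a BYTEA column with
a UNIQUE constraint. This is only an illustration (the function name and the
assumption that the six integers arrive as a Python list are mine, not from the
thread):

```python
import struct

def pack_key(parts):
    """Pack six unsigned 32-bit integers into a single 24-byte value.

    The result could be stored in a Postgres BYTEA column carrying a
    UNIQUE constraint; duplicate input data packs to identical bytes,
    so the index rejects the second insert for you.
    """
    if len(parts) != 6:
        raise ValueError("expected exactly six 32-bit integers")
    # Big-endian layout keeps the byte order matching the integer order,
    # so the "first 64 bits" of the key are the first 8 bytes.
    return struct.pack(">6I", *parts)

# The same six integers always produce the same 24-byte key.
key = pack_key([1, 2, 3, 4, 5, 6])
assert len(key) == 24
assert key == pack_key([1, 2, 3, 4, 5, 6])
```

On the client side the whole uniqueness test then collapses to a single
equality on one column, which avoids the "horrendous boolean statement"
mentioned above.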