From: Howard Wilkinson <howard(at)cohtech(dot)com>
To: Tino Wildenhain <tino(at)wildenhain(dot)de>
Cc: pgsql-general(at)postgresql(dot)org, mydns-users(at)lists(dot)sourceforge(dot)net, mydns-ng-users(at)lists(dot)sourceforge(dot)net, mydns-ng-devel(at)lists(dot)sourceforge(dot)net
Subject: Re: Querying the schema for column widths - what syntax do I use?
Date: 2008-02-22 11:38:33
Message-ID: 47BEB439.8050500@cohtech.com
Lists: pgsql-general

Tino Wildenhain wrote:
> Hi Howard,
>
> Howard Wilkinson wrote:
>> I am working on some upgrades to the MyDNS open source product. I
>> have some expertise in MySQL but am not overly familiar with
>> PostgreSQL and need some guidance on how to query the schema for the
>> maximum size of data a column can hold.
>>
>> In MySQL I can do either a "DESCRIBE" command or a "SELECT COLUMNS"
>> command and then parse the result for the length in the type column
>> of the row returned. How would I do something similar using
>> PostgreSQL? I have tried to find this in the manuals and in this
>> mailing list but have not found any pointers to get me started.
>>
>> Apologies for asking such a simple question but I am being a bit lazy
>> as I want to get on with releasing the MyDNS code.
>
> besides the correct answers you got regarding information_schema:
> since I do not know what MyDNS is or what you are doing with the
> maximum size of the column, are you aware that PostgreSQL bails out
> if you put in a string which exceeds the column size (so you can just
> try rather than check beforehand, if that's what you do)? You also
> get the column description in the cursor when you do a select on a table.
>
> Also, the text type could be used to hold potentially large strings without
> harm (if the string is short it makes no difference, but you can easily go
> up to over a gig).
>
> Regards
> Tino
The package is a DNS server originally hosted on top of a MySQL
database. I am extending it in a number of ways, but this particular need
arises because I need to store some data (binary in nature) in a field that
is part of the key for an index. The data is less than 65536 bytes in
length; however, most of the time it is very much smaller than that, e.g.
4 bytes. I also need to maintain backwards compatibility with previous
releases, which had limited storage capacity in this field.
I have therefore chosen to detect when the field overflows the maximum
storage capacity of the 'data' column and to split the data into 'data'
(the truncated portion) and 'edata' (the rest). As I do not know what size
the 'data' field is, I need to detect it dynamically and do the split
before storing data/edata. I have defined a further column, 'edatakey',
which holds an MD5 hash of 'edata' when present and is included in the
index on the table.
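
To show the kind of dynamic detection Tino's reply points at, here is a
minimal sketch of such a query; the table name 'rr' is an assumption
standing in for whatever table holds the 'data' column, and
information_schema.columns is available on both PostgreSQL and MySQL 5.0:

    -- Sketch only: read the declared maximum length of the 'data' column.
    -- 'rr' is an assumed table name; substitute the real MyDNS table.
    SELECT character_maximum_length
      FROM information_schema.columns
     WHERE table_name = 'rr'
       AND column_name = 'data';
    -- A NULL result means the column has no declared length limit (e.g. text).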
It all seems to be working on MySQL 5.0.46 - I have not tested the MD5
code yet - and I hope it will work with PostgreSQL when somebody tries it.
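
For reference, both databases ship a built-in MD5 function (MD5() on
MySQL, md5() on PostgreSQL), so the 'edatakey' column could be filled
server-side; a minimal sketch, again assuming a table named 'rr' with the
columns described above:

    -- Sketch only: populate 'edatakey' from the overflow part, so the
    -- index key stays short while still distinguishing long 'edata' values.
    UPDATE rr
       SET edatakey = md5(edata)
     WHERE edata IS NOT NULL;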
Howard.