From: Dorian Hoxha <dorian(dot)hoxha(at)gmail(dot)com>
To: Rob Sargentg <robjsargent(at)gmail(dot)com>
Cc: Fede Martinez <federicoemartinez(at)gmail(dot)com>, PostgreSql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Altering array(composite-types) without breaking code when inserting them and similar questions
Date: 2014-04-27 21:57:24
Message-ID: CANsFX07BghL_SHaxDUQUAt0U3BTrg0fN53LZCCd+9bASetfYYw@mail.gmail.com
Lists: pgsql-general
My alternative is JSON, which is heavier than composite types since the keys
must be stored in every row.
Updating a field of a specific composite_type inside an array of them is
done with: UPDATE table SET composite[2].x = 24;
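For context, that update can be sketched end to end. The type and table names below (inventory_item, orders) are illustrative, not from the thread, and the full-ROW assignment is shown as a fallback in case a given server version rejects the field-level form:

```sql
-- Illustrative composite type and a table holding an array of it
CREATE TYPE inventory_item AS (x integer, name text);

CREATE TABLE orders (
    id    serial PRIMARY KEY,
    items inventory_item[]
);

INSERT INTO orders (items)
    VALUES (ARRAY[ROW(1, 'bolt')::inventory_item,
                  ROW(2, 'nut')::inventory_item]);

-- Update one field of the second array element in place,
-- as in the syntax quoted above
UPDATE orders SET items[2].x = 24;

-- Alternatively: replace the whole element with a new ROW value
UPDATE orders SET items[2] = ROW(24, 'nut')::inventory_item;
```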
So the last remaining question: is it possible to insert an array of
composite_types without specifying all of the columns for each
composite_type? That way, if I later add other columns to the
composite_type, the insert query doesn't break.
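For reference, here is a sketch of why such inserts tend to break, assuming an illustrative composite type inventory_item(x integer, name text) and a table orders(items inventory_item[]) (names not from the thread). A positional ROW(...) constructor cast to a composite type is expected to supply a value for every attribute in declared order:

```sql
-- Positional construction: every field of the composite type is supplied
INSERT INTO orders (items)
    VALUES (ARRAY[ROW(1, 'bolt')::inventory_item]);

-- If a new attribute is later added to the type, e.g.
--   ALTER TYPE inventory_item ADD ATTRIBUTE weight numeric;
-- the two-value ROW(...) above no longer matches the type's field
-- count, so the cast is expected to fail until the insert is
-- rewritten (e.g. with an explicit NULL for the new attribute).
```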
Thanks
On Mon, Apr 21, 2014 at 1:46 PM, Dorian Hoxha <dorian(dot)hoxha(at)gmail(dot)com> wrote:
> Maybe the char array link is wrong? I don't think an array of arrays is
> good for my case. I'll probably go for json or a separate table, since it
> looks like it's not possible to use composite-types.
>
>
> On Mon, Apr 21, 2014 at 4:02 AM, Rob Sargentg <robjsargent(at)gmail(dot)com> wrote:
>
>> Sorry, I should not have top-posted (Dang iPhone). Continued below:
>>
>> On 04/20/2014 05:54 PM, Dorian Hoxha wrote:
>>
>> Because I always query the whole row, and the other way (many tables) I
>> will always need a join plus extra indexes.
>>
>>
>> On Sun, Apr 20, 2014 at 8:56 PM, Rob Sargent <robjsargent(at)gmail(dot)com> wrote:
>>
>>> Why do you think you need an array of theType vs. a dependent table of
>>> theType? That tack is of course immune to most future type changes.
>>>
>>> Sent from my iPhone
>>>
>>> Interesting. Of course any decent mapper will return "the whole
>> row". And would it be less disk intensive than an array of "struct (where
>> struct is implemented as an array)"? From other threads [1] [2] I've come
>> to understand that the datatype overhead per native type is applied per
>> type instance per array element.
>>
>> [1] 30K floats <http://postgresql.1045698.n5.nabble.com/Is-it-reasonable-to-store-double-arrays-of-30K-elements-td5790562.html>
>> [2] char array <http://postgresql.1045698.n5.nabble.com/COPY-v-java-performance-comparison-tc5798389.html>
>>
>
>