From: | RJ Ewing <ewing(dot)rj(at)gmail(dot)com> |
---|---|
To: | Dorian Hoxha <dorian(dot)hoxha(at)gmail(dot)com> |
Cc: | Poul Kristensen <bcc5226(at)gmail(dot)com>, PostgreSQL General <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: dynamic schema modeling and performance |
Date: | 2017-04-12 01:31:59 |
Message-ID: | B6EC2B58-34AB-4603-96C3-0AB833BB797D@gmail.com |
Lists: | pgsql-general |
I thought that might be an answer around here :)
I guess I was looking for what might be a better approach.
Is dynamically creating a table for each entity a bad idea? I can see something like creating a schema for each project (group of related entities) and then creating a table for each entity within that schema. I don't expect having more than a few thousand projects anytime soon. We have a relatively targeted audience.
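A sketch of what the schema-per-project, table-per-entity approach could look like (not from the thread; `quote_ident` and `create_entity_ddl` are hypothetical helpers). Since schema, table, and column names would come from users, they have to be quoted as PostgreSQL identifiers (wrapped in double quotes, with embedded double quotes doubled) before being interpolated into DDL:

```python
def quote_ident(name: str) -> str:
    """Quote an arbitrary string as a PostgreSQL identifier."""
    return '"' + name.replace('"', '""') + '"'

def create_entity_ddl(project: str, entity: str, columns: dict) -> str:
    """Build CREATE TABLE DDL for one user-defined entity.

    `columns` maps column name -> SQL type; the types should come from
    a server-side whitelist, never verbatim from user input.
    """
    cols = ", ".join(f"{quote_ident(c)} {t}" for c, t in columns.items())
    return (f"CREATE TABLE {quote_ident(project)}.{quote_ident(entity)} "
            f"(id bigserial PRIMARY KEY, {cols})")

ddl = create_entity_ddl("project_1", "photo",
                        {"taken_at": "timestamptz", "caption": "text"})
print(ddl)
```

In a real application a driver-level facility (e.g. psycopg2's `sql.Identifier`) would be preferable to hand-rolled quoting, but the principle is the same.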
Or would it be better to use jsonb data types and create a denormalized index elsewhere?
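For the jsonb route, a minimal sketch (table and column names are made up for illustration): a single table holds every entity's rows, user-defined fields live in a jsonb column, and a GIN index supports containment queries.

```sql
CREATE TABLE entity_record (
    id         bigserial PRIMARY KEY,
    entity_id  bigint NOT NULL,                       -- which user-defined entity
    parent_id  bigint REFERENCES entity_record (id),  -- parent-child relationship
    fields     jsonb  NOT NULL
);

-- GIN index makes jsonb containment (@>) queries efficient:
CREATE INDEX entity_record_fields_idx
    ON entity_record USING gin (fields);

-- Example: find records of one entity whose fields contain a key/value pair.
SELECT *
FROM entity_record
WHERE entity_id = 42
  AND fields @> '{"status": "published"}';
```

The trade-off versus real tables is that per-field types and constraints then have to be enforced in application code (or triggers) rather than by the schema.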
> On Apr 11, 2017, at 5:17 PM, Dorian Hoxha <dorian(dot)hoxha(at)gmail(dot)com> wrote:
>
> If you are asking if you should go nosql, 99% you should not.
>
>> On Tue, Apr 11, 2017 at 10:06 PM, Poul Kristensen <bcc5226(at)gmail(dot)com> wrote:
>> dataverse.org uses Postgresql and is well documented + it is completely user driven. Maybe the concept could be useful for you. I have installed and configured a few to be used by researchers.
>>
>> regards
>> Poul
>>
>>
>> 2017-04-11 19:46 GMT+02:00 Rj Ewing <ewing(dot)rj(at)gmail(dot)com>:
>>> I'm looking for thoughts on the best way to handle dynamic schemas.
>>>
>>> The application I am developing revolves around user defined entities. Each entity is a tabular dataset with user defined columns and data types. Entities can also be related to each other through Parent-Child relationships. Some entities will be 100% user driven, while others (such as an entity representing a photo) will be partially user driven (all photo entities will have common fields + custom user additions).
>>>
>>> I was hoping to get opinions on whether postgresql would be a suitable backend. A couple of options I have thought of are:
>>>
>>> 1. Each entity is represented as a table in psql. The schema would be dynamically updated (with limits) when an entity mapping is updated. I believe that this would provide the best data constraints and allow the best data normalization. A concern I have is that there could be an enormous amount of tables generated and the performance impacts this might have in the future. I could then run elasticsearch as a denormalized cache for efficient querying and full-text-search.
>>>
>>> 2. Use a nosql database. This provides the "dynamic" schema aspect. A concern here is the lack of relation support, thus leading to a more denormalized data structure and the potential for the data to become corrupted.
>>>
>>> Any opinions on the use of psql for this case, or other options would be greatly appreciated!
>>>
>>> RJ
>>
>>
>>
>> --
>> Med venlig hilsen / Best regards
>> Poul Kristensen
>> Linux-OS/Virtualization expert and Oracle DBA
>