From: | "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com> |
---|---|
To: | "Mike Diehl" <mdiehl(at)diehlnet(dot)com> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: Design question |
Date: | 2008-09-17 19:27:23 |
Message-ID: | dcc563d10809171227w6107faddted78b91a080f8a4@mail.gmail.com |
Lists: pgsql-general
On Wed, Sep 17, 2008 at 11:29 AM, Mike Diehl <mdiehl(at)diehlnet(dot)com> wrote:
> Hi all,
>
> I've got a design question that I need to ask before I go too far down what
> might be the wrong road.
>
> I've got a customer, who has multiple customers, who need to be able to upload
> an excel spreadsheet into Postgres. Then they want to be able to slice and
> dice that data.
>
> The problem is that probably none of these spreadsheets will have the same
> fields in them.
>
> There are two ways to do this, that I can think of...
>
> 1. Create a table for each spreadsheet, using column headings as field names.
> Every field would be a char/varchar. We might have a table to track which
> client owns which table. This could amount to 10's of tables being added to
> the db.
If you choose this method, you might want to split the customers out by
schema to make their tables easier to manage. Then all you have to do is
either prefix the table reference with the customer's schema name or set
it in your search_path after connecting.
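A minimal sketch of how that could look (the schema and table names here
are made up for illustration):

CREATE SCHEMA acme_corp;  -- one schema per customer

-- each uploaded spreadsheet becomes a table in that customer's schema
CREATE TABLE acme_corp.sales_2008 (
    region  varchar,
    amount  varchar,
    sold_on varchar
);

-- either qualify the table with the schema name...
SELECT * FROM acme_corp.sales_2008;

-- ...or set the search_path once after connecting
SET search_path TO acme_corp, public;
SELECT * FROM sales_2008;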
> 2. Create a table in which we store individual cells and associate them with
> an owner. Then each client would essentially have one (huge?) table that
> they can work with.
This is basically going to be an EAV (entity, attribute, value) type
setup. EAV schemas are notoriously hard to write useful queries against,
and they generally perform poorly.
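Roughly, you'd end up with a single cells table along these lines (the
names are hypothetical), and reassembling one logical spreadsheet row
takes a self-join per column, with every value coming back as untyped
text:

CREATE TABLE cells (
    owner_id   integer,
    sheet_id   integer,
    row_num    integer,
    col_name   varchar,
    cell_value varchar
);

-- pull "region" and "amount" back onto one row: one join per attribute
SELECT r.cell_value AS region,
       a.cell_value AS amount
FROM   cells r
JOIN   cells a ON a.sheet_id = r.sheet_id
              AND a.row_num  = r.row_num
              AND a.col_name = 'amount'
WHERE  r.col_name = 'region';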
I'd go with option 1 myself.