From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andreas Hartmann <andreas(at)apache(dot)org>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Modeling a table with arbitrary columns
Date: 2009-10-31 13:03:30
Message-ID: 603c8f070910310603o12132210h4403be0dbac92fe0@mail.gmail.com
Lists: pgsql-performance
On Thu, Oct 29, 2009 at 4:52 PM, Andreas Hartmann <andreas(at)apache(dot)org> wrote:
> Hi everyone,
>
> I want to model the following scenario for an online marketing application:
>
> Users can create mailings. The list of recipients can be uploaded as
> spreadsheets with arbitrary columns (each row is a recipient). I expect the
> following maximum quantities the DB will contain:
>
> * up to 5000 mailings
> * 0-10'000 recipients per mailing, average maybe 2000
> * approx. 20 columns per spreadsheet
>
> I see basically two approaches to store the recipients:
>
> A) A single table with a fixed number of generic columns. If the spreadsheet
> has fewer columns than the table, the remaining values will be null.
>
> CREATE TABLE recipient (
> mailing integer,
> "row" integer,
> col_1 text,
> …
> col_50 text,
> PRIMARY KEY (mailing, "row"),
> FOREIGN KEY (mailing) REFERENCES mailing(id)
> );
>
>
> B) Two tables, one for the recipients and one for the values:
>
> CREATE TABLE recipient (
> mailing integer,
> "row" integer,
> PRIMARY KEY (mailing, "row"),
> FOREIGN KEY (mailing) REFERENCES mailing(id)
> );
>
> CREATE TABLE recipient_value (
> mailing integer,
> "row" integer,
> "column" integer,
> value text,
> PRIMARY KEY (mailing, "row", "column"),
> FOREIGN KEY (mailing) REFERENCES mailing(id),
> FOREIGN KEY (mailing, "row") REFERENCES recipient(mailing, "row")
> );
>
>
> I have the feeling that the second approach is cleaner. But since the
> recipient_value table will contain approx. 20 times more rows than the
> recipient table in approach A, I'd expect a performance degradation.
>
> Is there a limit to the number of rows that should be stored in a table?
> With approach B the maximum number of rows could be about 200'000'000, which
> sounds quite a lot …
>
> Thanks a lot in advance for any suggestions!
Another possibility would be to create a table for each upload based
on the columns that are actually present. Just have your upload
script read the spreadsheet, determine the format, and create an
appropriate table for that particular upload.
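
For example (just a sketch, with made-up table and column names), the
upload script might generate something like:

CREATE TABLE recipient_upload_42 (
    "row" integer PRIMARY KEY,
    email text,
    first_name text,
    last_name text
);

i.e. one table per upload, containing exactly the columns found in that
spreadsheet's header row.
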
But a lot of it depends on what kinds of queries you want to write.
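
For instance (again just a sketch), fetching one recipient from a
per-upload table is a plain select:

SELECT email, first_name, last_name
FROM recipient_upload_42
WHERE "row" = 7;

whereas with the generic recipient_value layout from approach B you get
one output row per column and have to pivot in the application or with
extra joins:

SELECT "column", value
FROM recipient_value
WHERE mailing = 42 AND "row" = 7;
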
...Robert