From: | "Castle, Lindsay" <lindsay(dot)castle(at)eds(dot)com> |
---|---|
To: | pgsql-performance(at)postgresql(dot)org |
Subject: | One table or many tables for data set |
Date: | 2003-07-23 00:34:41 |
Message-ID: | B09017B65BC1A54BB0B76202F63DDCCA0532489F@auntm201 |
Lists: pgsql-performance
Hi all,
I'm working on a project that has a data set of approximately 6 million rows
covering about 12,000 different elements; each element has 7 columns of data.
I'm wondering what would be faster from a scanning perspective (SELECT
statements with some calculations) for this type of set-up:
  - one table for all the data
  - one table for each data element (12,000 tables)
  - one table per subset of elements (e.g. all elements that start with
    "a" in one table)
The data is static once it's in the database; only new records are added on a
regular basis.
I'd like to run quite a few different formulated scans in the longer term, so
having efficient scans is a high priority.
Can I do anything with indexing to help with performance? I suspect that for
the majority of scans I will need to evaluate an outcome based on 4 or 5 of
the 7 columns of data.
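For example, a typical scan might end up looking something like the sketch
below, with a multicolumn index on the columns most scans filter on (again,
the names, the formula and the choice of indexed columns are placeholders,
not the real queries):

    -- hypothetical index on the element identifier plus a column that
    -- a typical scan filters on
    CREATE INDEX element_data_scan_idx
        ON element_data (element_name, val1);

    -- a typical formulated scan might then look like:
    SELECT element_name,
           (val1 * val2) - val3 AS outcome
    FROM   element_data
    WHERE  element_name LIKE 'a%'
      AND  val1 > 100;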
Thanks in advance :-)
Linz