From: Kevin Old <kold(at)carolina(dot)rr(dot)com>
To: pgsql <pgsql-general(at)postgresql(dot)org>
Subject: Matching unique primary keys
Date: 2002-10-22 21:12:43
Message-ID: 1035321163.4491.139.camel@oc
Lists: pgsql-general
Hello all,
I have a very large table (502 columns), and whenever I query it I filter on
at most 4 columns, with only about 20 fields returned on average. I've come
up with a way to increase the speed of the queries: I could put the 20 fields
that are searched and returned into a separate table and then link the two
tables by a unique id. Sounds easy... but I'm stuck on something.
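To make the idea concrete, here is a minimal sketch of that split (all table
and column names here are made up for illustration):

```sql
-- Hypothetical names: "calls_full" is the existing 502-column table,
-- "calls_hot" holds only the ~20 frequently searched/returned columns.
CREATE TABLE calls_hot (
    id      bigint PRIMARY KEY,   -- same id stored in calls_full
    field_a text,
    field_b text
    -- ... the rest of the ~20 hot columns ...
);

-- Queries hit the narrow table; the wide table is joined back only
-- when one of the rarely used columns is actually needed:
-- SELECT h.*, f.rare_column
-- FROM calls_hot h JOIN calls_full f ON f.id = h.id
-- WHERE h.field_a = '...';
```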
I am dealing with about 1 million records per day.
One option is to put a sequence on the tables, but with so many records I'd
have to use a BIGINT, and with a ceiling of 9223372036854775807 it seems to
me that numbers this large might slow down the query process. I realize it
would take quite a while to reach that point, but I'd like others' opinions.
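For what it's worth, that option might look like the sketch below (hypothetical
names again). Note that a bigint occupies a fixed 8 bytes no matter how large
its value is, so the size of the number itself shouldn't slow comparisons:

```sql
-- BIGSERIAL is shorthand for a bigint column backed by a sequence,
-- so the id is assigned automatically on insert.
CREATE TABLE calls_hot (
    id bigserial PRIMARY KEY
    -- ... the ~20 hot columns ...
);
```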
Another option: I have 3 fields that, when combined, make each record unique.
Is there some way I can combine these dynamically and then use "views" of
them to reference my records and display them from the larger table?
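A sketch of that second option, again with hypothetical names: declare the
three fields as a composite primary key on the narrow table, and expose the
join back to the wide table as a view:

```sql
-- The three fields that combine to make each record unique
-- (src, dst, ts are invented names) form a composite primary key.
CREATE TABLE calls_hot (
    src text      NOT NULL,
    dst text      NOT NULL,
    ts  timestamp NOT NULL,
    -- ... the ~20 hot columns ...
    PRIMARY KEY (src, dst, ts)
);

-- A view joining on the same three fields; the wide table's columns
-- would need to be listed explicitly to avoid duplicate names.
CREATE VIEW calls_joined AS
SELECT h.src, h.dst, h.ts,
       f.rare_column_1, f.rare_column_2
FROM calls_hot h
JOIN calls_full f USING (src, dst, ts);
```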
I realize this is very confusing, please get back with me if I should
elaborate on something.
Any help is appreciated.
Thanks,
Kevin