From: Gaetano Mendola <mendola(at)bigfoot(dot)com>
To: Simon Stiefel <pgsqlml(at)nuclear-network(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Question on database structure
Date: 2003-11-06 00:36:39
Message-ID: 3FA99797.9040304@bigfoot.com
Lists: pgsql-general
Simon Stiefel wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Hi people,
>
> I want to migrate some old mysql-databases to postgresql.
> With this step I want to optimize some database structures.
>
> I have a (MySQL) database with all zip codes and cities in Germany.
> As there are a lot of them, I decided at the time to split them across several tables.
> So now there are 10 tables with zip codes (those starting with '0' in one table, those starting with '1' in another table, and so on).
>
> I also have all streets of Germany with their corresponding zip code.
> Like the zip-code tables, the streets are also split across 10 tables (same scheme as above).
> My question now is whether to keep that structure or to merge them into two big tables.
> Accessing the data would be easier with two tables, but I'm not sure about performance (since the street table would have about one million tuples).
If you keep that split structure, your queries are going to be a nightmare.
How many rows will you have in these two tables?
With PostgreSQL, tables with millions of rows and the right indexes are
queried in a few milliseconds.
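As a sketch, the ten per-prefix street tables could be merged into one
table with an index on the zip code (all table and column names below
are hypothetical, since I don't know your actual schema):

```sql
-- Hypothetical consolidation: copy the ten per-prefix street tables
-- into a single table, then index the zip-code column.
CREATE TABLE streets (
    street_name text    NOT NULL,
    zip_code    char(5) NOT NULL
);

INSERT INTO streets SELECT street_name, zip_code FROM streets_0;
INSERT INTO streets SELECT street_name, zip_code FROM streets_1;
-- ...and so on for the remaining eight tables...

CREATE INDEX streets_zip_idx ON streets (zip_code);
```

With that index in place, a lookup such as
`SELECT street_name FROM streets WHERE zip_code = '80331';`
should use an index scan even on a million rows; running the query
through EXPLAIN will confirm the plan.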
Regards
Gaetano Mendola