From: Tony Caduto <tony_caduto(at)amsoftwaredesign(dot)com>
To: Typing80wpm(at)aol(dot)com
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Thanks for insights into internal design
Date: 2005-04-28 14:58:55
Message-ID: 4270FA2F.2070502@amsoftwaredesign.com
Lists: pgsql-general
That's fine for a system like Access or DBASE, but you should never be making queries that large for
a production application.
Access or DBASE or any other local FILE based system will not have any problems bringing back 1 million
records because it does not have to bring the records across the wire via TCP/IP.
You should always limit queries by a date range or at least implement a paging system.
Bringing back 250,000 to 1 million rows is also going to suck up a huge amount of system memory on the client side.
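For example, a date range filter plus a simple LIMIT/OFFSET page keeps the client to one page of rows
at a time. This is only a rough sketch, and the table and column names (orders, order_date, etc.) are
made up purely for illustration:

  SELECT order_id, customer_name, order_date
  FROM orders
  WHERE order_date >= '2005-01-01'
    AND order_date <  '2005-05-01'
  ORDER BY order_date
  LIMIT 50 OFFSET 0;  -- next page: OFFSET 50, then 100, and so on

The date range keeps the server from scanning more than it needs to, and the LIMIT means the client
never holds more than one page of rows in memory at once.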
It does not seem like you are really catching on to the concept of a client/server based system.
It does not matter if there are a billion rows, because you should NEVER be letting an end user bring back
the full amount anyway. Think about it.
PostgreSQL is not a local file based system like Access or DBASE; you can't use the same testing methods,
or you will be in for a world of hurt.
> You give me valuable insight into the inner workings of such software.
> I am a firm believer in testing everything with very large files. One
> might spend months developing something, and have it in production for a
> year, and not realize what will happen when their files (tables) grow to
> several million records (rows). And it takes so little effort to create
> large test files.