| From: | "Derek Hamilton" <derek(at)capweb(dot)com> |
|---|---|
| To: | <pgsql-admin(at)postgresql(dot)org> |
| Subject: | Performance Expectations |
| Date: | 2003-04-18 18:24:01 |
| Message-ID: | 000501c305d7$b001a070$1b01a8c0@jcaves.net |
| Lists: | pgsql-admin |
Hello all,
We're using PostgreSQL with a fairly large database (about 2GB). I have one
table that currently exceeds 4.5 million records and will probably grow to
well over 5 million fairly soon. Searching this table is basically done on
one field, and I have a btree index set up on that field. My question is: if I
search this table and get the results back in about 6-7 seconds, is that
pretty good, not so good...? What are the things I should look at in
determining the performance on this?
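In case it helps, this is roughly how I've been checking what the planner does; the table and column names below are just placeholders, not our real schema:

```sql
-- Placeholder names: "big_table" and "search_col" stand in for the real schema.
-- EXPLAIN ANALYZE runs the query and reports the actual plan and timings,
-- which shows whether the btree index is used or the planner falls back
-- to a sequential scan over all 4.5 million rows.
EXPLAIN ANALYZE
SELECT *
FROM big_table
WHERE search_col = 'some value';

-- Keeping planner statistics and dead-tuple counts current also matters:
VACUUM ANALYZE big_table;
```

If the plan shows a Seq Scan rather than an Index Scan, I assume that would point to the index not being used for this kind of query.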
BTW, forgive the lack of information. I'd be happy to post more info on the
table, hardware, etc. I just didn't want to overwhelm the initial question.
Thanks,
Derek Hamilton