From: Chris Albertson <chrisalbertson90278(at)yahoo(dot)com>
To: Patrick Clery <patrick(at)phpforhire(dot)com>, Oleg Bartunov <oleg(at)sai(dot)msu(dot)su>
Cc: Pg_sphere development <pgsphere-dev(at)gborg(dot)postgresql(dot)org>, Pgsql Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [Pgsphere-dev] GIST index concurrency concern
Date: 2004-11-10 18:21:29
Message-ID: 20041110182129.85848.qmail@web41305.mail.yahoo.com
Lists: pgsql-hackers
> > > I expect my site to sustain something around 1000-3000 new user
> > > acquisitions per day, all of which will account for an insert into 3
> > > GIST indices.
Most people, when they talk about a large load on a DBMS,
talk about "transactions per second", as in "100 per second".
Even if we assume only 12-hour days, 3000 per day is one
transaction every 14 seconds. That's a trivial rate that
could be handled by an older Pentium II PC. Assume the
system runs for five years at 3000/day: that's only
about 5.5 million rows. In database terms that's not much. Don't
worry, you have a problem well within the limits of a small PC
running PostgreSQL.
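The back-of-envelope arithmetic above is easy to check directly (a minimal sketch; the 12-hour day and five-year horizon are the assumptions stated above):

```python
# Check the transaction-rate claims above.
signups_per_day = 3000          # upper end of the stated 1000-3000 range
active_seconds = 12 * 3600      # assume a 12-hour day

seconds_per_txn = active_seconds / signups_per_day
print(f"one transaction every {seconds_per_txn:.1f} seconds")  # 14.4

rows_in_five_years = signups_per_day * 365 * 5
print(f"about {rows_in_five_years:,} rows after five years")   # 5,475,000
```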
You will of course want to place the entire process of adding a
new user inside a BEGIN/COMMIT transaction. This provides
the type of "queue" you want: all of the inserts get done
when the COMMIT happens. You will also likely want to run the
user interface in its own process or thread. Those two things
will be all you need as long as your average transaction rate
remains so low. If there are ANY locks done in your code, you
need to remove them and rethink the design.
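In PostgreSQL this is simply BEGIN ... COMMIT around the three inserts. The same pattern can be sketched self-contained with Python's built-in sqlite3 module (the table and column names here are made up for illustration, not from the original post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for a real PostgreSQL connection
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("CREATE TABLE prefs (name TEXT)")
conn.execute("CREATE TABLE geo   (name TEXT)")

# Wrap the entire "add a new user" process in one transaction:
# all three inserts become visible together at COMMIT, or not at all.
with conn:  # issues BEGIN, then COMMIT on success / ROLLBACK on error
    conn.execute("INSERT INTO users VALUES ('patrick')")
    conn.execute("INSERT INTO prefs VALUES ('patrick')")
    conn.execute("INSERT INTO geo   VALUES ('patrick')")

rows = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(rows)  # 1
```

If any insert fails, the context manager rolls back and no partial user record is left behind.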
Everyone always thinks they have a "large" database project.
Even a 200,000 row table is small enough that it and its index
files can be cached in RAM.
Where you might run into the kinds of problems you are thinking about
is if you had automated sensor systems (looking either down at
the Earth or up at the sky) and software to automatically
extract features and catalog them into a DBMS. If you
have several of those sensors running, you get to the high
rates that drive concurrency issues. But if you only have four
or five users each doing a transaction per second, it's not an
issue. After you get past 100-transaction-per-second rates,
you are looking at Oracle on Sun hardware and terabyte-sized
disk arrays, like we have down in the lab here. But believe me,
you need automated data collection systems to generate enough
data to get you into trouble. I run low-end
stuff on my very old 500MHz PIII.
=====
Chris Albertson
Home: 310-376-1029 chrisalbertson90278(at)yahoo(dot)com
Cell: 310-990-7550
Office: 310-336-5189 Christopher(dot)J(dot)Albertson(at)aero(dot)org
KG6OMK