Re: Slow transfer speeds

From: Scott Marlowe <smarlowe(at)g2switchworks(dot)com>
To: hansell baran <hansellb(at)yahoo(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Slow transfer speeds
Date: 2006-08-07 18:05:45
Message-ID: 1154973945.20252.18.camel@state.g2switchworks.com
Lists: pgsql-performance

On Mon, 2006-08-07 at 12:26, hansell baran wrote:
> Hi. I'm new at using PostgreSQL. I have found posts related to this
> one but there is not a definite answer or solution. Here it goes.
> Where I work, all databases were built with MS Access. The Access
> files are hosted by computers with Windows 2000 and Windows XP. A new
> server is on its way and only Open Source Software is going to be
> installed. The OS is going to be SUSE Linux 10.1 and we are making
> comparisons between MySQL, PostgreSQL and MS Access. We installed
> MySQL and PostgreSQL on both SUSE and Windows XP (MySQL & PostgreSQL
> DO NOT run at the same time)(There is one HDD for Windows and one for
> Linux)
> The "Test Server" in which we install the DBMS has the following
> characteristics:
>
> CPU speed = 1.3 GHz
> RAM = 512 MB
> HDD = 40 GB

Just FYI, that's not only not much in terms of a server, it's not even
much in terms of a workstation. My laptop is about on par with that.

Just sayin.

OK, just so you know, you're comparing apples and oranges. A client-side
application like Access has little or none of the overhead that a real
database server has.

The advantage PostgreSQL has is that many people can read AND write to
the same data store simultaneously and the database server will make
sure that the underlying data in the files never gets corrupted.
Further, with proper constraints in place, it can make sure that the
data stays coherent (i.e., that data dependencies are honored).
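To make "data stays coherent" concrete, here's a minimal sketch of a foreign-key constraint rejecting an orphan row. SQLite is used here only so the demo is self-contained and runnable; the same DDL works in PostgreSQL, which enforces foreign keys without any special switch. The table and column names are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite needs this pragma to enforce foreign keys; PostgreSQL doesn't.
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id)
    )
""")
conn.execute("INSERT INTO customers VALUES (1, 'acme')")
conn.execute("INSERT INTO orders VALUES (1, 1)")   # fine: customer 1 exists

try:
    # No customer 99 exists, so the server refuses to store the row.
    conn.execute("INSERT INTO orders VALUES (2, 99)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

This is the kind of dependency checking the server does on every write, and it's part of the overhead you're measuring.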

As you can imagine, there's gonna be some overhead there. And it's
wholly unfair to compare a database's ability to stream out data in a
single read with Access's. That's the database's worst-case scenario.

Try having 30 employees connect to the SAME Access database and start
updating lots and lots of records. Have someone read out the data while
that's going on. Then repeat the test on PostgreSQL.
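A rough sketch of that kind of test, scaled down to stay self-contained: several threads hammer the same row with updates while the database serializes access, so no updates are lost. SQLite stands in for PostgreSQL here purely for portability (PostgreSQL handles this with MVCC and scales to far more concurrent writers); thread and table counts are arbitrary.

```python
import os
import sqlite3
import tempfile
import threading

db_path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE counter (id INTEGER PRIMARY KEY, n INTEGER)")
conn.execute("INSERT INTO counter VALUES (1, 0)")
conn.commit()
conn.close()

def worker(updates):
    # Each "employee" gets its own connection, like a separate client.
    c = sqlite3.connect(db_path, timeout=30)
    for _ in range(updates):
        # Atomic read-modify-write done server-side, one transaction each.
        c.execute("UPDATE counter SET n = n + 1 WHERE id = 1")
        c.commit()
    c.close()

threads = [threading.Thread(target=worker, args=(100,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

conn = sqlite3.connect(db_path)
total = conn.execute("SELECT n FROM counter").fetchone()[0]
print(total)  # 10 writers x 100 updates each: none lost
```

This serialization (and MVCC bookkeeping in PostgreSQL's case) is exactly the overhead a single-reader benchmark never exercises.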

If you're mostly going to be reading data, then maybe some intermediate
system is needed, something to "harvest" the data into some flat files.

But if your users need to read out 500,000 rows, change a few, and write
the whole thing back, your business process is likely not currently
suited to a database and needs to be rethought.
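The database-friendly version of that workflow is to push the change to the server instead of round-tripping every row to the client. A minimal sketch (SQLite again standing in for PostgreSQL; the SQL is the same, and the table is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (id INTEGER PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO parts VALUES (?, ?)",
                 [(i, 10.0) for i in range(1, 1001)])

# Change only the few rows that need changing, server-side, with one
# statement. The untouched rows never cross the wire.
cur = conn.execute("UPDATE parts SET price = price * 1.1 WHERE id <= 5")
print(cur.rowcount)  # 5 rows touched out of 1000
```

Reading all the rows out, changing five in the client, and writing everything back would do the same work hundreds of times over.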
