From: "Molenda, Mark P" <mark(dot)molenda(at)eds(dot)com>
To: pgsql-novice(at)postgresql(dot)org
Subject: Problem on Linux
Date: 2003-05-22 14:33:30
Message-ID: 424D6EA99E39D4118FA100508BDF097012A8801E@USCHM203
Lists: pgsql-novice
I'm migrating tables from Solaris to Linux. Other than Red Hat moving the
directories around a little, I expected a close match on performance.
The Sun version (compiled from source) handled 13 million rows with
(I know this is not efficient) Select * from tableName;
The Linux box bumped me out of psql on the exact same table structure, but
with only 4 million rows, doing the same select *.
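For what it's worth, the workaround I've seen suggested for pulling a huge table without the client buffering the whole result set is a server-side cursor; a sketch (cursor name and fetch size are placeholders):

```sql
-- Cursors must be declared inside a transaction block
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM tableName;
-- Fetch the rows in manageable batches instead of all at once
FETCH 10000 FROM big_cur;
-- ...repeat the FETCH until it returns no rows...
CLOSE big_cur;
COMMIT;
```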
I'm wondering if it is the startup of the system. I had to issue a huge
nohup command on the Sun, while on Linux it has a predefined
/etc/rc.d/init.d script.
Linux postgres gurus - I will have approx. 24 million rows per table
loaded each day. I don't care if it takes 4 hours to complete an SQL call,
as long as it does complete. I need to keep 90 days of data, so I'm
thinking of using a new (exact duplicate of) table each day for 90 days,
then truncating the oldest and starting over.
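The rotation I have in mind would look roughly like this (a sketch only; the daily table names and date-based naming convention are made up for illustration):

```sql
-- Day N: create today's table as an exact structural duplicate
-- (LIMIT 0 copies the column definitions but no rows)
CREATE TABLE daily_20030522 AS SELECT * FROM tableName LIMIT 0;

-- ...bulk-load the day's ~24 million rows into daily_20030522...

-- Day N+90: reclaim the oldest day's storage so it can be reused
TRUNCATE TABLE daily_20030223;
```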
Is there a better way? And how do I tune the stock Red Hat version? I
really don't want to have to load it again from scratch (from source).
P.S. I'm going from a 450MHz sparc to a 750MHz compaq with similar amounts
of real and swap memory.
-Mark