From: "Rick Gigger" <rick(at)alpinenetworking(dot)com>
To: "PgSQL General ML" <pgsql-general(at)postgresql(dot)org>
Subject: performance problem
Date: 2003-11-18 20:43:06
Message-ID: 01b201c3ae14$92bd3c00$0700a8c0@trogdor
Lists: pgsql-admin pgsql-general
I am currently trying to import a text data file with about 45,000
records. At the end of the import it does an update on each of the 45,000
records. Doing all of the inserts completes in a fairly short amount of
time (about 2 1/2 minutes). Once it gets to the updates, though, it slows
to a crawl. After about 10 minutes it's only done about 3,000 records.
Is that normal? Is it because it's inside such a large transaction? Is
there anything I can do to speed that up? It seems awfully slow to me.
I didn't think that giving it more shared buffers would help but I tried
anyway. It didn't help.
I tried doing a full vacuum with analyze (vacuumdb -z -f) and it cleaned up
a lot of stuff, but it didn't speed up the updates at all.
I am using a dual 800MHz Xeon box with 2 GB of RAM. I've tried anywhere
from about 16,000 to 65,000 shared buffers.
What other factors are involved here?
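[For anyone reading this thread later: the usual culprit in this pattern is issuing one UPDATE statement per record, which means 45,000 separate statements, each with its own index lookup and row rewrite; collapsing them into a single set-based UPDATE is the standard fix. A minimal sketch of the two patterns, using Python's sqlite3 as a portable stand-in (the table and column names here are hypothetical, not from the original import script):]

```python
import sqlite3

# Hypothetical schema standing in for the imported text-file data;
# table and column names are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE import_data (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO import_data (id, status) VALUES (?, 'new')",
                 [(i,) for i in range(45000)])

# Slow pattern: one UPDATE statement per record, 45,000 round trips in all
# (only the first 1,000 shown here for brevity).
for i in range(1000):
    conn.execute("UPDATE import_data SET status = 'done' WHERE id = ?", (i,))

# Usual fix: one set-based UPDATE covering every row in a single statement.
conn.execute("UPDATE import_data SET status = 'done'")
conn.commit()

done = conn.execute(
    "SELECT COUNT(*) FROM import_data WHERE status = 'done'").fetchone()[0]
print(done)  # 45000
```

[In PostgreSQL the set-based form would be a single `UPDATE ... FROM` or `UPDATE ... WHERE` joining against the staging data, rather than a client-side loop.]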