From: Andreas Wernitznig <andreas(at)insilico(dot)com>
To: pgsql-bugs(at)postgresql(dot)org
Subject: Re: low performance
Date: 2001-08-21 20:24:12
Message-ID: 20010821222412.2684c069.andreas@insilico.com
Lists: pgsql-bugs
I am aware of the performance drawbacks of indices and triggers. In fact I have a trigger and an index on the most populated table.
In my case it is not possible to remove the primary keys during the insert, because the database structure and its foreign keys validate my data during the import.
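For illustration only, a minimal sketch of the kind of layout meant here, with invented names: the referenced table's primary key has to stay in place so that the foreign key can validate the imported rows, while the most populated table carries the secondary index and the trigger (audit_measurement() stands in for a hypothetical trigger function that is not shown):

    -- Referenced table: its primary key must exist during the load so the
    -- foreign key below can validate every incoming row.
    CREATE TABLE sample (
        sample_id integer PRIMARY KEY,
        name      text
    );

    -- Most populated table: foreign key, secondary index, and trigger.
    CREATE TABLE measurement (
        sample_id integer REFERENCES sample (sample_id),
        value     float8
    );
    CREATE INDEX measurement_sample_idx ON measurement (sample_id);
    CREATE TRIGGER measurement_audit
        AFTER INSERT ON measurement
        FOR EACH ROW EXECUTE PROCEDURE audit_measurement();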
The problem is that sometimes the performance is good, and sometimes the database is awfully slow.
When it is slow, postgres eats up all the CPU time and it takes at least 150 times longer to insert the data.
I don't know why this happens or what to do about it.
Andreas
On Mon, 20 Aug 2001 19:39:31 -0400
Jonas Lindholm <jlindholm(at)rcn(dot)com> wrote:
> Do you have any indexes on the tables? Any triggers?
>
> If you want to insert 1 million rows you should drop the index, insert the data and then recreate the index.
> You should also try the COPY command to insert the data.
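A minimal sketch of that sequence, assuming the hypothetical measurement table and secondary index sketched above and a made-up server-side file path (in 7.1, COPY ... FROM 'file' needs an absolute path and superuser rights):

    -- Drop only the secondary index; primary and foreign keys stay in place.
    DROP INDEX measurement_sample_idx;

    -- Bulk-load with COPY instead of row-by-row INSERTs.
    COPY measurement FROM '/tmp/measurement.copy';

    -- Recreate the index once, after all rows are in.
    CREATE INDEX measurement_sample_idx ON measurement (sample_id);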
>
> You should also avoid having anyone connect to the database while you insert a lot of rows, and 1 million rows is a lot for any database.
>
> I've been able to insert 17 million records into one table in ~3 hours on a Compaq SMP 750 MHz with 512 MB
> by dropping the index, using several COPY commands at the same time to load different parts of the data, and then creating the index again.
> At the time of the inserts no processes other than the COPYs were connected to the database.
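A sketch of that parallel variant, again with invented table, index, and file names; each COPY runs in its own connection (for example its own psql session) against a different pre-split chunk of the data:

    -- Connection 1:
    COPY measurement FROM '/data/measurement.part1.copy';

    -- Connection 2, started at the same time:
    COPY measurement FROM '/data/measurement.part2.copy';

    -- After every chunk has finished loading, rebuild the index once:
    CREATE INDEX measurement_sample_idx ON measurement (sample_id);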
>
> /Jonas Lindholm
>
>
> Andreas Wernitznig wrote:
>
> > I am running the precompiled binary of PostgreSQL 7.1.2 on a Red Hat 7.1 system (a dual Celeron machine with 256 MB, kernels 2.4.4 and 2.4.5).
> > (Installing the new 7.1.3 doesn't seem to solve the problem.)
> >
> > I am connecting to the DB with a Perl Program (using Perl 5.6.0 with DBD-Pg-1.01 and DBI-1.19).
> > The program inserts some million rows into a db with about 30 tables. The processing takes (if everything works fine) about 10 hours to complete. Usually my Perl script and the database share the available CPU time 50:50.
> > But sometimes the database is very slow, eating up most (>98%) of the available CPU time.
> > (Of course I know VACUUM and VACUUM ANALYZE; this is not the problem.)
> >
> > The only thing that seems to help then is killing the Perl script, stopping postgresql, running "ipcclean", and starting again from the beginning. If it works from the beginning, the database is usually very fast until all data are processed.
> >
> > But if someone else connects (using psql), the database sometimes gets very slow, to the point where it is using all the CPU time.
> >
> > There are no error messages at postgres startup.
> > I have already increased the number of buffers to 2048 (it doesn't help).
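As an aside, a sketch of how that setting can be checked from a psql session under 7.1 (the value itself comes from postgresql.conf or the postmaster -B option; 2048 buffers at the default 8 kB block size is roughly 16 MB):

    -- Show the currently active buffer setting:
    SHOW shared_buffers;

    -- postgresql.conf (or equivalently: postmaster -B 2048):
    --   shared_buffers = 2048    # 2048 * 8 kB = 16 MB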
> >
> > I cannot reproduce these problems: sometimes the db is fast, sometimes very slow. The Perl script doesn't seem to be the problem, because I wrote all SQL commands to a file and processed them later ("psql dbname postgres < SQL-File").
> > Same thing: sometimes slow, sometimes fast.
> >
> > Andreas
>