From: Ferdinand Smit <ferdinand(at)telegraafnet(dot)nl>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: INSERT performance.
Date: 2002-01-30 09:42:36
Message-ID: 200201300942.g0U9gm509158@server9.telegraafnet.nl
Lists: pgsql-admin
Hi,
> > > > > We have a fresh database and have begun to observe performance
> > > > > degradation for INSERTs as a table went from empty to
> > > > > 100,000-ish rows. Initial INSERTs were sub second while after
> > > > > 30k rows, they were 1-3 seconds.
> > > >
> > > > we just hit this problem when moving an old database to a new
> > > > one (new schema). We had to insert approx. 1.5 million rows,
> > > > and from an initial 0.02s per insert it slowed to 20 seconds
> > > > per insert after several thousand inserts. What we did was
> > > > remove the foreign keys from the table we were inserting into.
> > > > It helped: we managed to put 300k records in ca. 2-3 hours.
> > >
> > > If possible, use the COPY command. We did 90K rows in about 40sec
> > > using this puppy on a Solaris U5 (took over 130sec for MySQL on
> > > the same box in case the performance geeks in the crowd are
> > > interested).
> >
> > We were transferring a MySQL database to a new Linux server (PIII-800
> > dual). I don't know how many rows, but the dump was 8 GB (not
> > zipped). It took us 4 hours to import, and 5 hours to create the
> > indexes.
>
> How were you inserting the data? Were you doing multiple inserts per
> transaction? Copy? That sounds really slow to me.
The database mainly contains integers, which represent the behavior of
internet users on our site, so it's very compact data.
We used multi-row inserts with MySQL, but I didn't find that option in Postgres.
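To make the "multiple inserts per transaction" suggestion concrete: by default each INSERT runs in its own transaction and waits for a disk sync, so wrapping batches of rows in a single BEGIN/COMMIT pays that cost once per batch instead of once per row. A minimal sketch of generating such a batched SQL script (the table and column names here are made up for illustration, not taken from the thread):

```python
def batch_insert_script(table, columns, rows, batch_size=1000):
    """Build a SQL script that groups plain INSERTs into transactions.

    Each BEGIN/COMMIT pair covers up to batch_size rows, so the server
    syncs to disk once per batch rather than once per INSERT. The
    values are emitted as-is; real data would need proper quoting.
    """
    lines = []
    for i, row in enumerate(rows):
        if i % batch_size == 0:
            lines.append("BEGIN;")
        values = ", ".join(str(v) for v in row)
        lines.append(
            f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({values});"
        )
        if i % batch_size == batch_size - 1:
            lines.append("COMMIT;")
    # Close the final, possibly partial, batch.
    if rows and len(rows) % batch_size != 0:
        lines.append("COMMIT;")
    return "\n".join(lines)
```

The resulting script can be fed to psql; COPY is still faster, but batched INSERTs alone usually cut hours off an import like the one described above.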
> > For testing we also created a Postgres database on another server
> > (same type). The COPY command didn't work, because of 'strange
> > characters', so we used normal inserts. It took us 12 hours to
> > import, and 10 hours to create the indexes.
>
> Have you tried to escape the data before you inserted it? That
> should solve things.
No, how do you do that?
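For reference, the 'strange characters' that break COPY are usually raw tabs, newlines, or backslashes in the data: COPY's text format uses tab as the field delimiter and newline as the row terminator, so those bytes must be escaped (and NULL is written as \N). A rough sketch of the escaping, assuming plain text fields (the function names are my own, not from any library):

```python
def escape_copy_field(value):
    """Escape one field for PostgreSQL's COPY text format.

    Backslash is escaped first so the escapes added afterwards are not
    themselves re-escaped; None becomes \\N, COPY's NULL marker.
    """
    if value is None:
        return r"\N"
    return (value.replace("\\", "\\\\")
                 .replace("\t", "\\t")
                 .replace("\n", "\\n")
                 .replace("\r", "\\r"))

def copy_line(fields):
    """Join escaped fields into one tab-delimited COPY input line."""
    return "\t".join(escape_copy_field(f) for f in fields)
```

Running the dump through escaping like this before feeding it to COPY should avoid falling back to row-by-row INSERTs.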
> http://www.postgresql.org/idocs/index.php?sql-copy.html
>
> > Although I like Postgres more, MySQL is still faster with huge
> > (simple) data.
>
> I've only found that to be the case in a few instances, actually...
> and typically with small data sets of less than 1M rows. Vacuum
> analyze and turn fsync off. :~) -sc
Of course, fsync was off and I increased the memory usage. Vacuum is not
usable during a full import ;-)
Ferdinand