From: "Gurupartap Davis" <partap(at)yahoo(dot)com>
To: "PostgreSQL-General" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Optimize for insertions?
Date: 2001-11-27 00:06:19
Message-ID: 015b01c176d7$577bec70$0f00a8c0@marlows
Lists: pgsql-general
Thanks for the advice, everyone... I've got my MySQL dump files (about 300
files of 50MB each) inserting at a rate of about 1 minute each now, versus
about 1.5 hours each before(!)
I'm worried about creating indexes after all the data is in there,
though... I'm going to need 4-5 indexes. Isn't that going to take a really
long time? How are indexes created, anyhow? I would assume a linear scan
through every record in the table. Would VACUUM help at all? How about
VACUUM ANALYZE?
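For context, re-creating the indexes after the load is one statement per index. A sketch, with hypothetical table and column names (the real schema isn't shown in this thread):

```sql
-- Hypothetical table/column names for illustration.
-- Each CREATE INDEX scans the table once and sorts the keys, so
-- building after the load is one bulk sort per index instead of
-- 200 million incremental index insertions.
CREATE INDEX readings_station_idx ON readings (station_id);
CREATE INDEX readings_ts_idx ON readings (recorded_at);

-- VACUUM ANALYZE afterward refreshes planner statistics; it does
-- not speed up the index builds themselves.
VACUUM ANALYZE readings;
```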
----- Original Message -----
From: "Gregory Wood" <gregw(at)com-stock(dot)com>
To: "Gurupartap Davis" <partap(at)yahoo(dot)com>
Cc: "PostgreSQL-General" <pgsql-general(at)postgresql(dot)org>
Sent: Monday, November 26, 2001 3:09 PM
Subject: Re: [GENERAL] Optimize for insertions?
> 1. Look into using the COPY command. This allows fast bulk imports.
> 2. Consider tweaking your transactions. If you are not using transactions
> (possible considering you are migrating from MySQL), understand that
> PostgreSQL creates an implicit transaction for each statement, unless an
> explicit transaction is created. That means that you are dealing with the
> additional overhead of 200 million transactions on top of the INSERTs
> themselves. Of course, doing everything inside a single transaction can be
> cumbersome too. Try stuffing a thousand or so INSERTs into a single
> transaction and see if that speeds things up.
> 3. If you created indexes, drop them during your import, and re-add them
> after you've finished. Maintaining those indexes slows things down as well.
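A sketch of tips 1 and 2 above (table name, file path, and row values are hypothetical; COPY's default input is tab-delimited text, and the file path is read by the backend process):

```sql
-- Tip 1: bulk-load a tab-delimited dump file with COPY.
COPY readings FROM '/tmp/dump_001.txt';

-- Tip 2: batch plain INSERTs into explicit transactions so each
-- statement does not pay for its own commit.
BEGIN;
INSERT INTO readings VALUES (1, '2001-11-26', 42.0);
INSERT INTO readings VALUES (2, '2001-11-26', 43.5);
-- ... roughly a thousand rows per batch ...
COMMIT;
```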
>
> Hope this helps!
>
> Greg
>
> ----- Original Message -----
> From: Gurupartap Davis
> To: pgsql-general(at)postgresql(dot)org
> Sent: Monday, November 26, 2001 4:17 PM
> Subject: [GENERAL] Optimize for insertions?
>
>
> Hi,
>
> I'm trying to migrate a MySQL database to PostgreSQL. I've got the tables
> all set up, but inserting the data from the MySQL dumps is taking forever.
> I've got about 200 million rows in the big table (growing by about 1.5
> million per day), but at the 100 inserts/second I'm getting, this will take
> over 3 weeks. MySQL on the same machine averages about 1100-1200
> inserts/second (no, I'm not running them both at the same time ;-)
>
> Any thoughts on how to tweak the postmaster for quick inserts? I've got
> fsync turned off, but that's about all I've tried so far.
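Beyond fsync, a minimal postgresql.conf fragment along these lines could be tried (the values here are illustrative guesses for a 256MB machine, not tuned recommendations):

```
# Illustrative values only -- adjust for available RAM.
fsync = false           # already disabled above; risks data loss on a crash
shared_buffers = 2048   # 8KB pages, so roughly 16MB of shared buffer cache
wal_buffers = 16        # WAL pages buffered in memory before write-out
```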
>
> I'm running PostgreSQL 7.1.3 on Linux (Red Hat 7.2 with ext3) with a 700MHz
> processor, 256MB RAM, and an 80GB IDE HD
>
> Thanks
> -partap
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 3: if posting/reading through Usenet, please send an appropriate
> subscribe-nomail command to majordomo(at)postgresql(dot)org so that your
> message can get through to the mailing list cleanly
>