From: Dan Armbrust <daniel(dot)armbrust(dot)list(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Slow Inserts on 1 table?
Date: 2005-08-02 15:31:29
Message-ID: 42EF91D1.5030104@gmail.com
Lists: pgsql-general
>> My loading is done programmatically, from another format, so COPY is
>> not an option.
>
>
> Why not? A lot of my bulk-loads are generated from other systems and I
> go through a temporary-file/pipe via COPY when I can. When I don't I
> block inserts into groups of e.g. 1000 and stick in an analyse/etc as
> required.

I guess I should clarify - my inserts are done by a Java application
running on a client machine, so this isn't a bulk load in the usual
sense. I don't have any problem with the speed of the inserts when they
are working correctly. The only problem is that the query analyzer is
making a really poor decision when it executes insert statements on
tables that have foreign keys.
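
For what it's worth, the batching-plus-ANALYZE workaround suggested above could be sketched roughly like this from the Java side (a minimal sketch only - the table names, the 1000-row interval, and the helper class are illustrative, not from my actual application):

```java
// Hypothetical sketch: batch inserts and refresh planner statistics
// periodically, so the foreign-key lookups on the referenced table
// have up-to-date stats and can use its index.
public class BatchLoader {
    static final int ANALYZE_EVERY = 1000; // illustrative interval

    // Pure helper: re-ANALYZE after every ANALYZE_EVERY inserted rows.
    static boolean shouldAnalyze(int rowCount) {
        return rowCount > 0 && rowCount % ANALYZE_EVERY == 0;
    }

    // In the real application this would run against a JDBC Connection:
    //   stmt.executeUpdate("INSERT INTO child (parent_id, val) VALUES (...)");
    //   if (shouldAnalyze(rowsInserted)) stmt.execute("ANALYZE parent");
}
```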
>
> So run ANALYSE in parallel with your load, or break the bulk-load into
> blocks and analyse in-line. I'm not sure ripping out PG's cost-based
> query analyser will be a popular solution just to address bulk-loads.

I never suggested that it needed to be ripped out. It just seems that
when it is checking foreign keys and the statistics are not up to date
(or have not yet been created), it should default to using the indexes
rather than skipping them. The time saved by using indexes when the
tables are big is FAR bigger than the time saved by not using them when
the tables are small.
Dan
--
****************************
Daniel Armbrust
Biomedical Informatics
Mayo Clinic Rochester
daniel.armbrust(at)mayo.edu
http://informatics.mayo.edu/