From: Scott Marlowe <smarlowe(at)g2switchworks(dot)com>
To: "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com>
Cc: "surabhi(dot)ahuja" <surabhi(dot)ahuja(at)iiitb(dot)ac(dot)in>, pgsql-general(at)postgresql(dot)org
Subject: Re: insertion becoming slow
Date: 2005-09-26 19:42:40
Message-ID: 1127763760.29347.5.camel@state.g2switchworks.com
Lists: pgsql-general
On Mon, 2005-09-26 at 12:58, Jim C. Nasby wrote:
> On Mon, Sep 26, 2005 at 05:55:18PM +0530, surabhi.ahuja wrote:
> > I have seen that after inserting, say, 20k rows, insertion into the table becomes slow. Why is that?
>
> Most likely due to indexes.
>
> > Is there any way you can stop the performance from degrading?
>
> If you're loading from scratch, don't create the indexes until after the
> load is done.
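
(A minimal sketch of that load-then-index pattern; the table, column, and
file names here are made up for illustration:)

    -- create the table bare, load it, then build the index
    CREATE TABLE measurements (id int, reading numeric);
    COPY measurements FROM '/tmp/measurements.dat';  -- or your 20k INSERTs
    CREATE INDEX measurements_id_idx ON measurements (id);
    ANALYZE measurements;  -- give the planner fresh stats after the load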
And don't forget the corollary: don't analyze an empty table and then
insert thousands of rows into it.
(I know Jim knows this; this is for surabhi.)
Let's say we do:

    truncate table;
    analyze table;          (alternately, analyze the parent table)
    insert into table ...   (repeat 20,000 times)
And say this table has foreign key references or check constraints that
must be checked on each insert. The query planner looks at the stats and
says: the table has only a row or so, so I'll sequentially scan it for
matches.
Well, it did have 0 or 1 rows when we started, but somewhere along the
line, as we approached 20,000 rows, the planner should have switched to an
index scan. With the stale statistics still claiming a near-empty table,
it keeps sequentially scanning, and each insert gets slower than the last.
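
To make that concrete, here's a sketch of the failure mode; the
parent/child tables are hypothetical:

    -- child rows reference parent rows via a foreign key
    CREATE TABLE parent (id int PRIMARY KEY);
    CREATE TABLE child  (id int, parent_id int REFERENCES parent (id));

    TRUNCATE parent, child;   -- both, since child references parent
    ANALYZE parent;           -- planner now believes parent is empty

    -- bulk load: each INSERT INTO child fires an FK lookup on parent,
    -- and that lookup is planned for a ~1 row table, i.e. a seq scan
    INSERT INTO parent VALUES (1);
    INSERT INTO child  VALUES (1, 1);
    -- ... repeated ~20,000 times; by the end, every FK check walks
    -- the whole parent table instead of using its index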
The simple fix: don't analyze an empty table right before a bulk insert.

The slightly less simple fix: analyze your table every 1,000 or so
inserts, especially at the beginning of the load, so the planner's row
estimates keep up with the table's actual size.
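
In SQL terms that might look something like this, using the same
hypothetical table as above (the batch size is illustrative):

    -- re-analyze as the table grows so row estimates keep pace
    INSERT INTO parent ... ;   -- inserts 1 through 1,000
    ANALYZE parent;
    INSERT INTO parent ... ;   -- inserts 1,001 through 2,000
    ANALYZE parent;
    -- once the table is large, the plans stop changing and an
    -- occasional analyze is enough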