From: Venkatesh Babu <venkatbabukr(at)yahoo(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Update command too slow
Date: 2005-02-08 10:13:09
Message-ID: 20050208101309.83070.qmail@web50407.mail.yahoo.com
Lists: pgsql-general
Hello,
Thanks for the information. Following Mr. Greg Stark's
suggestion, I tried disabling autocommit by issuing the
command "set autocommit to off", but got the following
error message:
ERROR: SET AUTOCOMMIT TO OFF is no longer supported
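For the archives: since server-side autocommit is no longer supported, my understanding is that the equivalent is to wrap the statements in an explicit transaction block (table and column names below are just placeholders, since I don't control the SQL our layer emits), something like:

```sql
BEGIN;
-- all the per-object updates go here, committed as one transaction
UPDATE some_table SET some_col = 'value 1' WHERE pk_col = 1;
UPDATE some_table SET some_col = 'value 2' WHERE pk_col = 2;
COMMIT;
```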
Also, I can't implement Mr. Christopher Browne's
suggestions, because I'm not working with the database
directly. There is an abstraction layer built over the
database, which provides an interface between
application objects and the data tables corresponding
to those objects, and our application is developed on
top of this layer. In fact, we are using the
"Collection" datatype provided by this layer. A
Collection is similar to a Java Vector in that it can
store any kind of persistable object; it also
implements a save method, which updates the tables
corresponding to each object present in the
collection, hence one update statement is generated
per object in the collection.
All I can do is play with the indexes on the tables or
change the Postgres settings. I hope the problem is
clear now. Also, Mr. Tom Lane's suggestions on
transaction blocking, prepared statements, and the
index on the primary key have been taken care of. I
forgot to mention that even though I deleted all the
indexes, I made sure the index on the primary key was
not deleted.
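For what it's worth, my understanding of the combined transaction-block-plus-prepared-statement approach is a sketch like the following (again, table and column names are placeholders, not the actual SQL our layer generates):

```sql
BEGIN;

-- Prepare the UPDATE once, so it is parsed and planned a single time
PREPARE upd_obj (text, integer) AS
    UPDATE some_table SET some_col = $1 WHERE pk_col = $2;

-- One EXECUTE per object in the collection, instead of one
-- independently parsed and committed UPDATE per object
EXECUTE upd_obj ('value 1', 1);
EXECUTE upd_obj ('value 2', 2);
-- ...

COMMIT;
DEALLOCATE upd_obj;
```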
To give more background: we migrated the database from
DB2 to Postgres, and things were fine in DB2. Could
this migration have any effect on the poor performance
of the updates (that is, could the problem be due to
an improper migration)?
Thanks,
Venkatesh
--- Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Venkatesh Babu <venkatbabukr(at)yahoo(dot)com> writes:
> > There aren't any triggers but there are 75262 update
> > statements. The problem is that we have a datatype
> > called as "Collection" and we are fetching the data
> > rows into it, modifying the data and call
> > Collection.save(). This save method generates one
> > update satement per record present in it.
>
> Well, that's going to be dog-slow in any case
> compared to putting the
> logic on the server side, but a couple of things you
> could possibly
> do: make sure all of this is in one transaction
> block (a commit per
> row updated is a lot of overhead) and use a prepared
> statement for the
> UPDATE to get you out from under the repeated
> parse/plan overhead.
> Check the UPDATE's plan, too, and make sure it's an
> indexscan on the
> primary key rather than anything less efficient.
>
> regards, tom lane
>