From: Joseph Shraibman <jks(at)selectacast(dot)net>
To: "Creager, Robert S" <CreagRS(at)LOUISVILLE(dot)STORTEK(dot)COM>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: SELECT performance drop v 6.5 -> 7.0.3
Date: 2001-03-08 03:56:40
Message-ID: 3AA702F8.1EA531DF@selectacast.net
Lists: pgsql-general
"Creager, Robert S" wrote:
>
> I've a question. I have often seen the 'trick' of dropping an index,
> importing large amounts of data, then re-creating the index to speed the
> import. The obvious problem with this is during the time from index drop to
> the index finishing re-creation, a large db is going to be essentially
> worthless to queries which use those indexes. I know nothing about the
> backend and how it does 'stuff', so I may be asking something absurd here.
> Why, when using transactions, are indexes updated on every insert? It seems
> logical (to someone who doesn't know better), that the indexes could be
> updated on the COMMIT.
>
> Please don't hurt me too bad...
> Rob
>
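For context, the "trick" described above usually looks something like this (a minimal sketch; the table, index, and file names are made up):

    -- Drop the index so the bulk load doesn't pay per-row index maintenance.
    DROP INDEX orders_customer_idx;

    -- Bulk-load the data; COPY is much faster than row-by-row INSERTs.
    COPY orders FROM '/tmp/orders.dat';

    -- Rebuild the index in a single pass over the table.
    CREATE INDEX orders_customer_idx ON orders (customer_id);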
I imagine it's because the transaction might do a SELECT on data it just
inserted or updated.
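A sketch of the problem, assuming a hypothetical "orders" table with an
index on customer_id:

    BEGIN;
    INSERT INTO orders (customer_id, total) VALUES (42, 19.99);
    -- The planner may answer this query with an index scan; if index
    -- entries were only written at COMMIT, the scan could miss the row
    -- this same transaction just inserted.
    SELECT * FROM orders WHERE customer_id = 42;
    COMMIT;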
--
Joseph Shraibman
jks(at)selectacast(dot)net
Increase signal to noise ratio. http://www.targabot.com