From: "Creager, Robert S" <CreagRS(at)LOUISVILLE(dot)STORTEK(dot)COM>
To: pgsql-general(at)postgresql(dot)org
Subject: RE: SELECT performance drop v 6.5 -> 7.0.3
Date: 2001-03-07 21:09:23
Message-ID: 10FE17AD5F7ED31188CE002048406DE8514CD4@lsv-msg06.stortek.com
Lists: pgsql-general
I've a question. I have often seen the 'trick' of dropping an index,
importing large amounts of data, then re-creating the index to speed up the
import. The obvious problem with this is that, from the time the index is
dropped until re-creation finishes, a large db is going to be essentially
worthless to queries which use those indexes. I know nothing about the
backend and how it does 'stuff', so I may be asking something absurd here.
Why, when using transactions, are indexes updated on every insert? It seems
logical (to someone who doesn't know better) that the indexes could instead
be updated on the COMMIT.
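For reference, the pattern I mean looks something like this (table, index,
and file names are made up for illustration):

```sql
-- Hypothetical names: big_table, big_table_key_idx, /tmp/bulk.dat
BEGIN;
DROP INDEX big_table_key_idx;            -- drop the index first
COPY big_table FROM '/tmp/bulk.dat';     -- bulk load with no index maintenance
CREATE INDEX big_table_key_idx
    ON big_table (key);                  -- rebuild once, at the end
COMMIT;
VACUUM ANALYZE big_table;                -- refresh planner statistics
```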
Please don't hurt me too bad...
Rob
Robert Creager
Senior Software Engineer
Client Server Library
303.673.2365 V
303.661.5379 F
888.912.4458 P
StorageTek
INFORMATION made POWERFUL
> -----Original Message-----
>
> As for the import process taking so long, you might want to try
> turning off fsync during the import. 7.1 improves performance with
> fsync on, but it's still in beta. Dropping non-required indexes
> before doing the import, then re-creating them after the import,
> will also help speed it up. Always make sure you run VACUUM ANALYZE
> afterwards.
>
> Matt
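For what it's worth, in the 7.0-era servers fsync is turned off with the
backend's -F flag, passed through the postmaster; something along these
lines (exact invocation depends on your installation):

```shell
# Start the postmaster with fsync disabled for the bulk import
# (-o passes options through to each backend; -F disables fsync calls).
postmaster -D /usr/local/pgsql/data -o -F
```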