From: mlw <markw(at)mohawksoft(dot)com>
To: Hackers List <pgsql-hackers(at)postgresql(dot)org>
Subject: Transactions vs speed.
Date: 2001-01-14 01:21:57
Message-ID: 3A60FF35.EDE3516C@mohawksoft.com
Lists: pgsql-hackers
I have a question about Postgres:
Take this update:
update table set field = 'X';
This is a very expensive operation when the table has millions of rows;
it takes over an hour. If I dump the database, process the data with
Perl, and then reload it, it takes minutes. Most of that time is spent
creating indexes.
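The kind of thing I am wondering about: would dropping the indexes,
doing the bulk update, and recreating them afterwards buy the same win
inside the database? A rough sketch of what I mean (the table and index
names are placeholders, not my real schema):

    -- drop the index, do the bulk update, then rebuild and re-analyze
    DROP INDEX mytable_field_idx;
    UPDATE mytable SET field = 'X';
    CREATE INDEX mytable_field_idx ON mytable (field);
    VACUUM ANALYZE mytable;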
I am not asking for a feature; I am just musing.
I have a database update procedure which has to merge our data with that
of more than one third party. It takes 6 hours to run.
Do you guys know of any tricks that would allow Postgres to operate
really fast, under the assumption that it is working on tables which are
not otherwise being used? LOCK does not seem to make much difference.
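For example, would batching the whole merge into a single transaction
and taking an exclusive lock up front, something like this sketch
(made-up table name again), make any real difference, or is the per-row
index maintenance the real cost?

    BEGIN;
    LOCK TABLE mytable IN EXCLUSIVE MODE;
    UPDATE mytable SET field = 'X';
    -- ...the rest of the merge statements...
    COMMIT;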
Any bit of info would be helpful.