From: hubert depesz lubaczewski <depesz(at)depesz(dot)com>
To: Len Walter <len(dot)walter(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Commit every N rows in PL/pgsql
Date: 2010-05-26 10:17:04
Message-ID: 20100526101704.GA22434@depesz.com
Lists: pgsql-general
On Wed, May 26, 2010 at 04:27:22PM +1000, Len Walter wrote:
> Hi,
>
> I need to populate a new column in a Postgres 8.3 table. The SQL would be
> something like "update t set col_c = col_a + col_b". Unfortunately, this
> table has 110 million rows, so running that query runs out of memory.
> In Oracle, I'd turn auto-commit off and write a pl/sql procedure that keeps
> a counter and commits every 10000 rows (pseudocode):
>
> define cursor curs as select col_a from t
> while fetch_from_cursor(curs) into a
> update t set col_c = col_a + col_b where col_a = a
> i++
> if i > 10000
> commit; i=0;
> end if;
> commit;
You can't do this easily with PL/pgSQL, because PL/pgSQL cannot control
transactions.
What you can do instead is use a client (like psql) and have it issue a
series of smaller queries, so each one commits on its own.
For example, let's assume your table t has a column id, which is the
primary key and contains values from 1 to 100000.
Then you can run:
perl -e 'for ($i=1; $i<100000; $i+=1000) {printf "update t set col_c = col_a + col_b where id between %u and %u;\n", $i, $i+999}' | psql -U ... -d ...
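If Perl isn't your thing, the same statement generator is easy to sketch in Python (a sketch, not part of the original thread; pipe its output to psql the same way, and adjust max_id and the batch size to your data):

```python
def batch_updates(max_id, batch_size=1000):
    """Yield one UPDATE per id range. Each statement, when fed to
    psql on stdin, runs and commits as its own transaction, so no
    single statement has to touch all rows at once."""
    for start in range(1, max_id + 1, batch_size):
        end = min(start + batch_size - 1, max_id)
        yield ("update t set col_c = col_a + col_b "
               f"where id between {start} and {end};")

if __name__ == "__main__":
    for stmt in batch_updates(100000):
        print(stmt)
```

Run it as `python gen_updates.py | psql -U ... -d ...` (script name is just an example).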
Best regards,
depesz
--
Linkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/
jid/gtalk: depesz(at)depesz(dot)com / aim:depeszhdl / skype:depesz_hdl / gg:6749007