From: Brian Cox <brian(dot)cox(at)ca(dot)com>
To: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: error updating a very large table
Date: 2009-04-15 00:41:24
Message-ID: 49E52D34.4080200@ca.com
Lists: pgsql-performance
ts_defect_meta_values has 460M rows. The following query, in retrospect
not too surprisingly, runs out of memory on a 32-bit postgres:
update ts_defect_meta_values set ts_defect_date=(select ts_occur_date
from ts_defects where ts_id=ts_defect_id)
I changed the logic to update the table in 1M row batches. However,
after 159M rows, I get:
ERROR: could not extend relation 1663/16385/19505: wrote only 4096 of
8192 bytes at block 7621407
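For reference, the batching scheme looks roughly like the sketch below. This uses an in-memory SQLite database purely for illustration; the id-range batching loop, the `id` primary-key column, and the tiny batch size are assumptions, since the actual batching code wasn't posted.

```python
# Sketch of a batched correlated UPDATE (SQLite stand-in for illustration).
# Table and column names follow the post; the id-range batching and the
# BATCH size are assumed, not the poster's actual code.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE ts_defects (ts_id INTEGER PRIMARY KEY, ts_occur_date TEXT)")
cur.execute("CREATE TABLE ts_defect_meta_values "
            "(id INTEGER PRIMARY KEY, ts_defect_id INTEGER, ts_defect_date TEXT)")
cur.executemany("INSERT INTO ts_defects VALUES (?, ?)",
                [(i, f"2009-04-{i:02d}") for i in range(1, 11)])
cur.executemany("INSERT INTO ts_defect_meta_values VALUES (?, ?, NULL)",
                [(i, (i % 10) + 1) for i in range(1, 101)])

BATCH = 25  # stand-in for the 1M-row batches described in the post
lo = 1
while True:
    # Same correlated-subquery UPDATE as the original, restricted to a
    # contiguous id range so each statement touches only BATCH rows.
    cur.execute(
        "UPDATE ts_defect_meta_values "
        "SET ts_defect_date = (SELECT ts_occur_date FROM ts_defects "
        "                      WHERE ts_id = ts_defect_id) "
        "WHERE id BETWEEN ? AND ?", (lo, lo + BATCH - 1))
    conn.commit()  # flush each batch's work before moving on
    if cur.rowcount == 0:
        break      # ran past the last id: done
    lo += BATCH

cur.execute("SELECT COUNT(*) FROM ts_defect_meta_values "
            "WHERE ts_defect_date IS NULL")
print(cur.fetchone()[0])  # 0 once every row has been back-filled
```

Note that (as the post says) running all batches inside one long-lived transaction loses most of the benefit of batching on the server side, since none of the work is visible or reclaimable until the final commit.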
A df run on this machine shows plenty of space:
[root(at)rql32xeoall03 tmp]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 276860796 152777744 110019352 59% /
/dev/sda1 101086 11283 84584 12% /boot
none 4155276 0 4155276 0% /dev/shm
The updates are done inside a single transaction. This is postgres 8.3.5.
Ideas on what is going on would be appreciated.
Thanks,
Brian