From: Grzegorz Jaśkiewicz <gryzman(at)gmail(dot)com>
To: Brian Cox <brian(dot)cox(at)ca(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: error updating a very large table
Date: 2009-04-15 11:19:11
Message-ID: 2f4958ff0904150419i7a8c2461m5c74acc508a0cfa6@mail.gmail.com
Lists: pgsql-performance
On Wed, Apr 15, 2009 at 1:41 AM, Brian Cox <brian(dot)cox(at)ca(dot)com> wrote:
> ts_defect_meta_values has 460M rows. The following query, in retrospect not
> too surprisingly, runs out of memory on a 32 bit postgres:
>
> update ts_defect_meta_values set ts_defect_date=(select ts_occur_date from
> ts_defects where ts_id=ts_defect_id)
>
> I changed the logic to update the table in 1M row batches. However, after
> 159M rows, I get:
>
> ERROR: could not extend relation 1663/16385/19505: wrote only 4096 of 8192
> bytes at block 7621407
>
> A df run on this machine shows plenty of space:
>
> [root(at)rql32xeoall03 tmp]# df
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/sda2 276860796 152777744 110019352 59% /
> /dev/sda1 101086 11283 84584 12% /boot
> none 4155276 0 4155276 0% /dev/shm
>
> The updates are done inside of a single transaction. postgres 8.3.5.
>
> Ideas on what is going on appreciated.
>
Any triggers on the updated table?
As for the update query's performance, try a different way of doing it:

update foo set bar = foo2.z FROM foo2 WHERE foo.id = foo2.id;
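Applied to your tables, that would be something like (untested; assumes
ts_defect_id corresponds to ts_defects.ts_id, as in your subselect):

update ts_defect_meta_values m set ts_defect_date = d.ts_occur_date
FROM ts_defects d WHERE m.ts_defect_id = d.ts_id;

That way the planner can join the two tables once, instead of executing
the subselect for every one of the 460M rows.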
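If you keep the 1M-row batches, a minimal sketch (assuming ts_defect_id is
roughly sequential; the range bounds here are made up), committing after
each range so every batch runs in its own transaction:

update ts_defect_meta_values m set ts_defect_date = d.ts_occur_date
FROM ts_defects d
WHERE m.ts_defect_id = d.ts_id
  AND m.ts_defect_id >= 0 AND m.ts_defect_id < 1000000;
-- then 1000000 .. 2000000, and so on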
--
GJ