From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: Decibel! <decibel(at)decibel(dot)org>
Cc: "Richard Huxton" <dev(at)archonet(dot)com>, "Mark Makarowsky" <bedrockconstruction(at)yahoo(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Update table performance
Date: 2007-08-07 19:36:18
Message-ID: dcc563d10708071236v4c79413fpd90a4077ad93eac9@mail.gmail.com
Lists: pgsql-performance
On 8/7/07, Decibel! <decibel(at)decibel(dot)org> wrote:
> On Tue, Aug 07, 2007 at 02:33:19PM +0100, Richard Huxton wrote:
> > Mark Makarowsky wrote:
> > >I have a table with 4,889,820 records in it. The
> > >table also has 47 fields. I'm having problems with
> > >update performance. Just as a test, I issued the
> > >following update:
> > >
> > >update valley set test='this is a test'
> > >
> > >This took 905641 ms. Isn't that kind of slow?
> >
> > The limiting factor here will be how fast you can write to your disk.
>
> Well, very possibly how fast you can read, too. Using your assumption of
> 1k per row, 5M rows means 5G of data, which might well not fit in
> memory. And if the entire table's been updated just once before, even
> with vacuuming you're now at 10G of data.
When you frequently need to update just one column of a wide table,
it's often a good idea to move that column into its own dependent
table.
Or just don't update that one column in every row of the table...
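
The dependent-table idea above can be sketched roughly as follows, reusing the table and column names from the original post; the new table's name (valley_test) and the join key (id) are assumptions, since the original schema wasn't shown:

```sql
-- Hypothetical narrow table holding only the frequently-updated column;
-- 'valley' and 'test' come from the original post, 'valley_test' and
-- 'id' are assumed names.
CREATE TABLE valley_test (
    id   integer PRIMARY KEY REFERENCES valley (id),
    test text
);

-- Updates now rewrite only the narrow table's rows,
-- not all 47 columns of the wide one:
UPDATE valley_test SET test = 'this is a test';

-- Queries that need the column join it back in:
SELECT v.*, vt.test
FROM valley v
LEFT JOIN valley_test vt ON vt.id = v.id;
```

The win is that each updated row version is a few bytes instead of ~1k, so both the write volume and the dead-tuple bloat that VACUUM must later reclaim are much smaller.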
Next Message: Decibel! | 2007-08-07 21:42:28 | Re: Update table performance
Previous Message: Chris Browne | 2007-08-07 18:03:06 | Re: Update table performance