Re: Postgres is using 100% CPU

From: Ashik S L <ashiksl178(at)gmail(dot)com>
To: Yves Dorfsman <yves(at)zioup(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Postgres is using 100% CPU
Date: 2015-06-01 05:38:33
Message-ID: CAC=-3Dd4uMqK-JuuaZ72T+BQSWAr_ZpjGbpBZk_G-hFz0G9MgA@mail.gmail.com
Lists: pgsql-bugs pgsql-performance

On Sun, May 31, 2015 at 7:53 PM, Yves Dorfsman <yves(at)zioup(dot)com> wrote:
> On 2015-05-31 07:04, Jean-David Beyer wrote:
>> On 05/30/2015 09:46 AM, Ashik S L wrote:
>>> We are using PostgreSQL version 8.4.17.
>>> The Postgres DB size is 900 MB and we are inserting 273 rows at once,
>>> each row about 60 bytes, so every insert writes 16380 bytes of data.
>>
>> Way back when, I was inserting a lot of rows of data (millions of rows)
>> and it was taking many hours on a machine with 6 10,000 rpm Ultra/320
>> SCSI hard drives and 8 GBytes of RAM. Each insert was a separate
>> transaction.
>>
>> When I bunched up lots of rows (thousands) into a single transaction,
>> the whole thing took less than an hour.
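
A rough sketch of that single-transaction idea, using a made-up "metrics"
table since the real schema is not part of this thread:

    -- Hypothetical table, only to make the example concrete
    CREATE TABLE metrics (id integer PRIMARY KEY, payload text);

    -- 273 single-row INSERTs wrapped in one transaction: one commit
    -- and one WAL flush instead of 273 of them
    BEGIN;
    INSERT INTO metrics (id, payload) VALUES (1, 'row 1');
    INSERT INTO metrics (id, payload) VALUES (2, 'row 2');
    -- ... remaining rows ...
    COMMIT;

    -- Or a single multi-row INSERT, which is one statement and one
    -- implicit transaction on its own (available since 8.2)
    INSERT INTO metrics (id, payload) VALUES
        (1, 'row 1'),
        (2, 'row 2'),
        (3, 'row 3');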

> Or use COPY, \copy if possible, or a "temporary" unlogged table to copy
> from later, etc...
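
A sketch of that COPY / staging-table route with the same made-up table
(file names are placeholders); note that UNLOGGED tables only exist from
9.1 on, so on 8.4 a TEMP table, which also skips WAL, is the closest
equivalent:

    -- \copy runs client-side and streams all rows as a single COPY
    \copy metrics (id, payload) FROM 'rows.csv' WITH CSV

    -- Server-side COPY (the file must be readable by the server process)
    COPY metrics (id, payload) FROM '/tmp/rows.csv' WITH CSV;

    -- Staging variant: bulk-load into a TEMP table, then move the rows
    CREATE TEMP TABLE metrics_stage (LIKE metrics);
    \copy metrics_stage FROM 'rows.csv' WITH CSV
    INSERT INTO metrics SELECT * FROM metrics_stage;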

>> Is it possible that when you insert 273 rows at once, you are doing it
>> as 273 transactions instead of one?
>
> That's the thing, even on an old laptop with a slow IDE disk, 273
> individual inserts should not take more than a second.

We are inserting 273 rows at once and it takes less than 1 second. But we
will be updating that batch of 273 rows over and over, and that is what
drives the CPU usage: it is like updating the 273 rows 2000 to 3000 times.
We will also be running multiple instances of Postgres.
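
To make those numbers concrete, here is roughly what one round of 273
updates looks like row by row versus as the kind of single statement and
single transaction suggested upthread (again with the made-up "metrics"
table, since the real schema and update pattern are not posted here):

    -- One round issued as per-row statements: 273 separate UPDATEs
    UPDATE metrics SET payload = 'new 1' WHERE id = 1;
    UPDATE metrics SET payload = 'new 2' WHERE id = 2;
    -- ... 271 more ...

    -- The same round as one statement in one transaction, driven by a
    -- VALUES list (this form works on 8.4)
    BEGIN;
    UPDATE metrics m
       SET payload = v.payload
      FROM (VALUES (1, 'new 1'),
                   (2, 'new 2'),
                   (3, 'new 3')) AS v(id, payload)
     WHERE m.id = v.id;
    COMMIT;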

