From: | Adam Tauno Williams <adamtaunowilliams(at)gmail(dot)com> |
---|---|
To: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: Simple select hangs while CPU close to 100% |
Date: | 2007-07-22 18:32:45 |
Message-ID: | 1185129165.4159.1.camel@aleph.whitemice.org |
Lists: | pgsql-performance |
> I’m having this very disturbing problem. I got a table with about
> 100,000 rows in it. Our software deletes the majority of these rows
> and then bulk loads another 100,000 rows into the same table. All this
> is happening within a single transaction. I then perform a simple
> “select count(*) from …” statement that never returns. In the mean
COUNT(*) is always slow; but either way, if the process is deleting and
then adding records, can't you just keep track of how many records you
loaded [aka count++] rather than turning around and asking the database
before any statistics have had a chance to be updated?
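The count++ idea above can be sketched as follows — a minimal illustration in Python, using the stdlib sqlite3 module in place of PostgreSQL so it is self-contained; the `items` table, its column, and the `bulk_reload` helper are all hypothetical names, not anything from the original software:

```python
import sqlite3

def bulk_reload(conn, rows):
    """Delete the existing rows and bulk-load new ones, tracking the
    row count in the application instead of querying the database."""
    cur = conn.cursor()
    cur.execute("DELETE FROM items")
    loaded = 0
    for name in rows:
        cur.execute("INSERT INTO items (name) VALUES (?)", (name,))
        loaded += 1          # count++ as each record is loaded
    conn.commit()
    return loaded            # no SELECT COUNT(*) needed afterward

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT)")
n = bulk_reload(conn, [f"row{i}" for i in range(100)])
print(n)
```

The same pattern applies with any PostgreSQL client library: the loader already touches every row, so it can return the count for free rather than forcing a full-table scan immediately after a large delete/insert cycle.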
> time, the backend Postgres process is taking close to 100% of the
> CPU. The hang-up does not always happen on the same statement but
> eventually it happens 2 out of 3 times. If I dump and then restore the
> schema where this table resides the problem is gone until the next
> time we run through the whole process of deleting, loading and
> querying the table.
> There is no other activity in the database. All requested locks are
> granted.