From: Craig Ringer <ringerc(at)ringerc(dot)id(dot)au>
To: C Pond <cpondwork(at)yahoo(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Embedded VACUUM
Date: 2011-09-05 09:26:54
Message-ID: 4E6495DE.2060601@ringerc.id.au
Lists: pgsql-performance
On 3/09/2011 8:25 AM, C Pond wrote:
> I'm running a labour-intensive series of queries on a medium-sized dataset (~100,000 rows) with geometry objects and both gist and btree indices.
>
> The queries are embedded in plpgsql, and have multiple updates, inserts and deletes to the tables as well as multiple selects which require the indices to function correctly for any kind of performance.
>
> My problem is that I can't embed a vacuum analyze to reset the indices and speed up processing, and the queries get slower and slower as the un-freed space builds up.
>
> From my understanding, transaction commits within batches are not allowed (so no vacuum embedded within queries). Are there plans to change this? Is there a way to reclaim dead space for tables that have repeated inserts, updates and deletes on them?
Not, AFAIK. Dead space can't be reclaimed until the transaction doing the
deletes/updates commits, and until any older SERIALIZABLE transactions and
any older still-running READ COMMITTED statements have finished as well.
This is one of the areas where Pg's lack of true stored procedures bites
you. You'll need to do the work from an out-of-process helper over a
regular connection, or use dblink to achieve the same effect.
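The dblink approach can be sketched roughly as follows (the table and
function names are hypothetical, and this assumes the contrib dblink
module is installed). dblink_exec opens a separate connection, so the
VACUUM runs outside the calling function's transaction block:

```sql
-- Hypothetical sketch: run VACUUM from inside plpgsql via dblink.
-- Assumes the dblink contrib module is installed and my_geom_table exists.
CREATE OR REPLACE FUNCTION process_batch() RETURNS void AS $$
BEGIN
    -- ... batch of INSERTs / UPDATEs / DELETEs on my_geom_table ...

    -- VACUUM can't run in a transaction block, so send it over a
    -- separate connection to the same database instead.
    PERFORM dblink_exec('dbname=' || current_database(),
                        'VACUUM ANALYZE my_geom_table');
END;
$$ LANGUAGE plpgsql;
```

Note the caveat above still applies: a VACUUM issued from the second
connection can't reclaim rows that are still visible to the calling
transaction or to older ones, so this is only useful between top-level
transactions, not mid-batch inside one big transaction.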
--
Craig Ringer