Re: Performance large tables.

From: William Yu <wyu(at)talisys(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Performance large tables.
Date: 2005-12-11 15:25:16
Message-ID: dnhgcl$1bhk$1@news.hub.org
Lists: pgsql-general

Benjamin Arai wrote:
> For the most part the updates are simple one-liners. I currently commit
> in large batches to increase performance, but it still takes a while as
> stated above. From evaluating the computer's performance during an
> update, the system is thrashing both memory and disk. I am currently
> using PostgreSQL 8.0.3.
>
> Example command "UPDATE data where name=x and date=y;".

Before you start throwing the baby out with the bathwater by totally
revamping your DB architecture, try some simple debugging first to see
why these queries take so long. Use EXPLAIN ANALYZE, test
vacuuming/analyzing mid-update, and fiddle with postgresql.conf
parameters (the WAL/commit settings especially). Try committing with
different numbers of statements per transaction -- the optimal batch
size won't be the same across all development tools.
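As a rough illustration of that kind of debugging (the table layout,
column names, and literal values here are guesses extrapolated from the
quoted one-liner, not the poster's actual schema):

```sql
-- Hypothetical schema; adjust names to match your own tables.
-- First, see the plan and where the time actually goes:
EXPLAIN ANALYZE
UPDATE data SET value = 42 WHERE name = 'x' AND date = '2005-12-01';

-- If that shows a sequential scan, an index on the lookup columns
-- is the usual first fix:
CREATE INDEX data_name_date_idx ON data (name, date);

-- Then batch many small updates into one transaction instead of
-- paying a commit (and WAL flush) per statement:
BEGIN;
UPDATE data SET value = 1 WHERE name = 'a' AND date = '2005-12-01';
UPDATE data SET value = 2 WHERE name = 'b' AND date = '2005-12-01';
-- ... a few thousand more ...
COMMIT;
```

These need a live database to run, so treat them as a sketch of the
approach rather than a drop-in script.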

My own experience is that periodic vacuuming and analyzing are very much
needed for batches of small UPDATE commands. For our batch processing,
autovacuum plus commits batched 1K-10K statements at a time did the
trick in keeping performance up.
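A driving script for that interleaving might look something like the
sketch below (the database name, table name, input file, and chunk size
are all placeholders; it also needs a running server and psql, so it is
illustrative only):

```shell
#!/bin/sh
# Apply one-liner updates in 5000-statement chunks, wrapping each
# chunk in a single transaction, and vacuum/analyze the table
# between chunks so dead tuples don't pile up mid-run.
split -l 5000 updates.sql chunk_
for f in chunk_*; do
    ( echo 'BEGIN;'; cat "$f"; echo 'COMMIT;' ) | psql mydb
    # VACUUM cannot run inside a transaction block, so it goes
    # in its own psql invocation:
    psql mydb -c 'VACUUM ANALYZE data;'
done
```

With autovacuum doing the reclaiming for you, the explicit VACUUM
ANALYZE step can usually be dropped and only the chunked commits kept.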
