Configuration tips for very large database

From: Nico Sabbi <nicola(dot)sabbi(at)poste(dot)it>
To: pgsql-performance(at)postgresql(dot)org
Subject: Configuration tips for very large database
Date: 2015-02-12 22:25:54
Message-ID: 54DD2872.3070603@poste.it
Lists: pgsql-performance

Hello,
I've been away from Postgres for several years, so please forgive me if
I've forgotten nearly everything :-)

I've just inherited a database collecting environmental data. There's a
background process continually inserting records (not very often, truth
be told) and a web interface for querying the data.
At the moment the database holds 250M records and is growing all the
time. The 3 main tables have just 3 columns each.
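A quick back-of-envelope check (my own arithmetic with assumed column types, not figures from the post) shows why a dataset like this is painful for a 16 GB machine: even three narrow fixed-width columns put the heap alone close to the size of RAM.

```python
# Back-of-envelope heap size for a 250M-row table with 3 fixed-width
# 8-byte columns (assumed types, e.g. bigint / timestamptz / float8).
ROWS = 250_000_000
TUPLE_HEADER = 24   # 23-byte heap tuple header, padded to 8-byte alignment
LINE_POINTER = 4    # per-tuple item pointer in the page header
DATA = 3 * 8        # three 8-byte columns

bytes_per_row = TUPLE_HEADER + LINE_POINTER + DATA
total_gb = ROWS * bytes_per_row / 1024**3
print(f"~{bytes_per_row} bytes/row, ~{total_gb:.1f} GB heap (indexes extra)")
# A full scan of a table this size cannot stay cached in 16 GB of RAM,
# which would match heavy I/O wait while querying.
```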

Queries run very, very slowly, on the order of 20 minutes. The most
evident problem I see is that I/O wait is almost always above 90% while
querying, and 30-40% when nominally "idle".
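Not part of the original post, but a diagnostic sketch for this situation: with tables this size, the first thing worth checking is whether the slow queries are doing full sequential scans from disk. `EXPLAIN (ANALYZE, BUFFERS)` shows both the chosen plan and how many blocks came from disk versus shared buffers (the table and column names below are hypothetical):

```sql
-- Hypothetical table/column names, for illustration only.
-- A Seq Scan node together with "Buffers: shared read=..." counting
-- millions of blocks confirms the query is I/O-bound on the heap.
EXPLAIN (ANALYZE, BUFFERS)
SELECT station_id, avg(value)
FROM   measurements
WHERE  measured_at >= now() - interval '30 days'
GROUP  BY station_id;

-- A matching index can turn that into an index (or bitmap) scan:
CREATE INDEX CONCURRENTLY measurements_measured_at_idx
    ON measurements (measured_at);
```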
Obviously disk access is to blame, but I'm a bit surprised because the
cluster this database runs on is not old iron: it's a VMware VM with
16 GB of RAM, 4 CPUs at 2.2 GHz, and a 128 GB disk (half of which is
used). The storage underlying VMware is quite powerful, and this
Postgres instance is the only system in the cluster that runs slowly.
I can increase resources if necessary, but...

Even before analyzing the queries (which I did), I'd like to know
whether anyone has succeeded in running Postgres with 200-300M records
and queries that run much faster than this. I'd like to compare the
current configuration with a well-optimized one to identify the
parameters that need to be changed.
Any link to a working configuration would be much appreciated.
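For comparison, here is a minimal postgresql.conf sketch of the settings most commonly adjusted on a dedicated read-heavy 16 GB machine. The values are starting-point assumptions, not a tuned configuration for this workload:

```
# postgresql.conf sketch for a dedicated 16 GB VM (assumed values; measure before adopting)
shared_buffers = 4GB            # common starting point: ~25% of RAM
effective_cache_size = 12GB     # planner hint: RAM available for caching
work_mem = 64MB                 # per sort/hash operation; keep modest with many sessions
maintenance_work_mem = 1GB      # speeds up CREATE INDEX and VACUUM
random_page_cost = 1.5          # lower than the default 4.0 on fast SAN/SSD storage
effective_io_concurrency = 32   # enables prefetching for bitmap heap scans
```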

Thanks for any help,
Nico
