From: Harry Jackson <harryjackson(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Crashing DB or Server?
Date: 2005-12-16 13:38:24
Message-ID: 45b42ce40512160538n7b77a2aexe9d13616f6c25a69@mail.gmail.com
Lists: pgsql-performance
On 12/16/05, Moritz Bayer <moritz(dot)bayer(at)googlemail(dot)com> wrote:
> This is really weird, just a few hours ago the machine run very smooth
> serving the data for a big portal.
Can you log the statements that are taking a long time and post them
to the list, along with the table structures and indexes for the
tables involved?
To do this, turn on logging of slow statements: edit the
postgresql.conf file and set the following parameter.
log_min_duration_statement = 2000 # log statements taking longer than 2000 ms (2 seconds)
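A minimal sketch of the relevant postgresql.conf fragment (the log-destination lines are an assumption about a typical 8.x setup, not something from the original post):

```
# postgresql.conf
log_min_duration_statement = 2000   # log any statement running longer than 2000 ms
# Where the log ends up depends on your existing configuration, e.g.:
# log_destination  = 'stderr'
# redirect_stderr  = on             # 8.x name for the log collector
```

After editing, reload the configuration (e.g. `pg_ctl reload`) for the change to take effect.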
Your log should now be catching the statements that are slow. Then use
those statements to get the explain plan, i.e.
dbname=# EXPLAIN [sql that's taking a long time]
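For example, against a hypothetical table (table and column names here are illustrative only, not from the original post):

```sql
-- EXPLAIN shows the plan the planner would choose, without running the query
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- EXPLAIN ANALYZE actually executes the statement and reports real row
-- counts and timings, which makes bad estimates easy to spot
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
```

Be careful with EXPLAIN ANALYZE on a loaded live server: it really runs the query.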
We would also need to see the table structures.
dbname=# \d [table name of each table in above explain plan]
> Has anybody an idea what might have happened here?
> I need a quick solution, since I'm talking about a live server that should
> be running 24 hours a day.
It may be that the planner has started to pick a bad plan. This can
happen if the database is changing regularly and the statistics are
out of date. I believe it can happen even when the stats are up to
date, but that is much less likely.
It might also be worth vacuuming the database.
dbname=# VACUUM ANALYZE;
Bear in mind that this will put significant load on the server while
it runs, though.
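If a database-wide VACUUM ANALYZE is too heavy for a live server, a lighter sketch is to target only the busiest tables (the table name below is hypothetical):

```sql
-- Vacuum and refresh planner statistics for a single hot table,
-- rather than the whole database at once
VACUUM ANALYZE orders;

-- ANALYZE alone just recomputes statistics and is cheaper still
ANALYZE orders;
```

Doing the hot tables one at a time spreads the load out and often fixes a bad plan just as well.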