Re: pg_xlog size growing until it fills the partition

From: Michal TOMA <mt(at)sicoop(dot)com>
To: pgsql-general(at)postgresql(dot)org
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Marcin Mańk <marcin(dot)mank(at)gmail(dot)com>
Subject: Re: pg_xlog size growing until it fills the partition
Date: 2013-10-07 18:44:28
Message-ID: 201310072044.31677.mt@sicoop.com
Lists: pgsql-general

I gave it in my first post. It is a software RAID 1 of average 7200 rpm disks
(Hitachi HDS723020BLE640) for the main tablespace, and a software RAID 1 of
SSDs for another tablespace as well as for the partition holding the pg_xlog
directory.
The problem is not the workload, as the application is a web crawler, so the
workload can be effectively unbounded. What I would expect Postgres to do is
to regulate the workload somehow instead of just crashing twice a day with a
"partition full" error followed by automatic recovery.
If the workload is too high to handle, query response time should degrade.
That would be perfectly acceptable for my application, and it is in fact the
behaviour I'm trying to tune for. What I have now is a very (too) good
response time for 10 hours, followed by a crash.
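For reference, these are the kinds of knobs that govern checkpoint pacing on a
9.x-era server; the values below are illustrative assumptions on my part, not
tested recommendations:

```
# postgresql.conf -- illustrative values, not tested recommendations
checkpoint_segments = 32            # allow more WAL before an xlog-triggered checkpoint
checkpoint_timeout = 10min          # start a checkpoint at least this often
checkpoint_completion_target = 0.9  # spread checkpoint writes across the interval
wal_buffers = 16MB                  # shared memory for WAL not yet written to disk
```

Raising checkpoint_completion_target spreads the write burst out; whether that
helps here depends on whether the disks can absorb the sync phase at all.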

On Monday 07 October 2013 18:07:57 Jeff Janes wrote:
> On Mon, Oct 7, 2013 at 6:23 AM, Marcin Mańk <marcin(dot)mank(at)gmail(dot)com> wrote:
> > On Thu, Oct 3, 2013 at 11:56 PM, Michal TOMA <mt(at)sicoop(dot)com> wrote:
> >> This is what I can see in the log:
> >> 2013-10-03 13:58:56 CEST LOG: checkpoint starting: xlog
> >> 2013-10-03 13:59:56 CEST LOG: checkpoint complete: wrote 448 buffers
> >> (0.2%); 0 transaction log file(s) added, 9 removed, 18 recycled;
> >> write=39.144 s, sync=12102.311 s, total=12234.608 s; sync files=667,
> >> longest=181.374 s, average=18.144 s
> >>
> >> 2013-10-03 22:30:25 CEST LOG: checkpoint starting: xlog time
> >
> > From your logs, it seems that the writes are spread all over the (fairly
> > large) database. Is that correct? What is the database size? What is the
> > size of the working data set (i.e. the set of rows that are in use)?
> >
> > I heard of people having good results with setting a low value for
> > shared_buffers (like 128MB) in high-write-activity scenarios. Setting
> > it that low would mean that checkpoints would have 16 times less to do.
>
> It looks like most of the actual writing is being done by either the
> background writer or the backends themselves, not the checkpoint. And the
> checkpointer still has to sync all the files, so lowering it further is
> unlikely to help.
>
> I don't think he ever gave us the specs of the RAID he is using. My guess is
> that it is way too small for the workload.
>
> Cheers,
>
> Jeff
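For what it's worth, the split Jeff describes can be checked from the standard
pg_stat_bgwriter view; the query below is a sketch (the column comments are my
interpretation):

```
-- Which component is doing the buffer writes?
SELECT checkpoints_timed,      -- checkpoints started by checkpoint_timeout
       checkpoints_req,        -- checkpoints forced by WAL volume
       buffers_checkpoint,     -- buffers written by the checkpointer
       buffers_clean,          -- buffers written by the background writer
       buffers_backend         -- buffers written directly by backends
FROM pg_stat_bgwriter;
```

If buffers_backend dominates buffers_checkpoint, lowering shared_buffers
further would indeed change little, as Jeff says.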
