| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Josh Berkus <josh(at)agliodbs(dot)com> |
| Cc: | PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: Return of the Solaris vacuum polling problem -- anyone remember this? |
| Date: | 2010-08-18 19:07:59 |
| Message-ID: | 8807.1282158479@sss.pgh.pa.us |
| Lists: | pgsql-hackers |
Josh Berkus <josh(at)agliodbs(dot)com> writes:
>> Rather, what you need to be thinking about is how
>> come vacuum seems to be making lots of pages dirty on only one of these
>> machines.
> This is an anti-wraparound vacuum, so it could have something to do with
> the hint bits. Maybe it's setting the freeze bit on every page, and
> writing them one page at a time?
That would explain all the writes, but it doesn't seem to explain why
your two servers aren't behaving similarly.
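
For readers unfamiliar with the mechanism Josh is describing: an anti-wraparound (freeze) vacuum rewrites the xmin of sufficiently old tuples to FrozenTransactionId, which dirties the page even when nothing on it is dead. The sketch below is a heavily simplified illustration, not PostgreSQL source; the struct names and the plain `<` comparison (which ignores XID wraparound arithmetic) are invented for brevity.

```c
/*
 * Simplified sketch of why a freeze vacuum can dirty nearly every page
 * of a table: any tuple whose xmin is older than the freeze cutoff is
 * rewritten with FrozenTransactionId, marking the page dirty even if
 * nothing on it is dead.  Illustrative only -- not lazy_scan_heap().
 */
typedef unsigned int TransactionId;

#define FrozenTransactionId ((TransactionId) 2)

struct tuple { TransactionId xmin; };
struct page  { int ntuples; struct tuple *tuples; int dirty; };

void
freeze_heap_page(struct page *pg, TransactionId freeze_cutoff)
{
    for (int i = 0; i < pg->ntuples; i++)
    {
        /* real code uses wraparound-aware XID comparison, not plain < */
        if (pg->tuples[i].xmin < freeze_cutoff)
        {
            pg->tuples[i].xmin = FrozenTransactionId;
            pg->dirty = 1;      /* the whole page must now be written out */
        }
    }
}
```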
> Still don't understand the call to pollsys, even so, though.
Most likely that's the libc implementation of the select()-based sleeps
for vacuum_cost_delay. I'm still suspicious that the writes are eating
more cost_delay points than you think.
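
To make the select()/pollsys connection concrete, here is a minimal sketch of how cost-based vacuum delay behaves: each page visited adds cost points to a running balance, dirtied pages add the largest increment, and once the balance crosses the limit the worker sleeps via a select()-based delay, which Solaris implements with pollsys(). This is illustrative code, not the actual vacuum_delay_point()/pg_usleep() implementation; the function and variable names are stand-ins, and the defaults shown are only the commonly cited ones for that era.

```c
/*
 * Minimal sketch of cost-based vacuum delay.  Each page touched adds
 * cost points; dirtying a page costs far more than a buffer hit, and
 * once the balance crosses the limit the worker sleeps for
 * vacuum_cost_delay milliseconds via select(), which shows up as
 * pollsys() in truss/DTrace on Solaris.
 */
#include <sys/select.h>
#include <sys/time.h>

/* GUC-like settings; values shown are the commonly cited defaults */
static int vacuum_cost_page_hit   = 1;
static int vacuum_cost_page_miss  = 10;
static int vacuum_cost_page_dirty = 20;   /* dirtying dominates the budget */
static int vacuum_cost_limit      = 200;
static int vacuum_cost_delay_ms   = 20;

static int vacuum_cost_balance = 0;

/* select()-based millisecond sleep, roughly what pg_usleep() did then */
static void
sleep_ms(int ms)
{
    struct timeval tv;

    tv.tv_sec = ms / 1000;
    tv.tv_usec = (ms % 1000) * 1000;
    (void) select(0, NULL, NULL, NULL, &tv);  /* pollsys on Solaris */
}

/* Called after each page; 'dirtied' means vacuum modified the page */
void
vacuum_delay_point(int buffer_hit, int dirtied)
{
    vacuum_cost_balance += buffer_hit ? vacuum_cost_page_hit
                                      : vacuum_cost_page_miss;
    if (dirtied)
        vacuum_cost_balance += vacuum_cost_page_dirty;

    if (vacuum_cost_balance >= vacuum_cost_limit)
    {
        sleep_ms(vacuum_cost_delay_ms);
        vacuum_cost_balance = 0;
    }
}
```

The point of the sketch is that a vacuum dirtying nearly every page burns through the cost limit far faster than one merely hitting pages in shared buffers, so the sleeps -- and hence the pollsys calls -- become correspondingly more frequent.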
regards, tom lane