From: Shridhar Daithankar <shridhar(at)frodo(dot)hserus(dot)net>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: WAL write of full pages
Date: 2004-03-16 14:09:20
Message-ID: 40570A90.1040708@frodo.hserus.net
Lists: pgsql-hackers
Bruce Momjian wrote:
> Shridhar Daithankar wrote:
>>I can not see why writing an 8K block is any more safe than writing just the
>>changes.
>>
>>I may be dead wrong but just putting my thoughts together..
> The problem is that we need to record what was on the page before we
> made the modification because there is no way to know that a write
> hasn't corrupted some part of the page.
OK... I think there is hardly any way around the fact that we need to flush a
full page the way we do now. But that is slow. So what do we do?
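To make the torn-page problem Bruce describes concrete, here is a minimal sketch (my own simplified model, not PostgreSQL internals): a crash partway through an 8K write leaves the page part old, part new, so a logged byte-range delta alone cannot reconstruct it, but a full page image logged in WAL before the change can.

```python
# Simplified model of a torn page and full-page-image recovery.
# (Illustration only; names like replay_full_page_image are hypothetical.)

PAGE_SIZE = 8192

def torn_write(page: bytearray, new_page: bytes, torn_at: int) -> None:
    """Simulate a crash mid-write: only a prefix of the new 8K lands."""
    page[:torn_at] = new_page[:torn_at]   # stale bytes remain after torn_at

def replay_full_page_image(page: bytearray, fpi: bytes,
                           delta_off: int, delta: bytes) -> None:
    """Recovery with a full page image: restore pre-image, reapply delta."""
    page[:] = fpi
    page[delta_off:delta_off + len(delta)] = delta

old = bytes([0xAA]) * PAGE_SIZE
delta_off, delta = 4000, b"new-row"       # the small logical change
new = bytearray(old)
new[delta_off:delta_off + len(delta)] = delta

page = bytearray(old)
torn_write(page, bytes(new), torn_at=4004)   # crash mid-change: page is torn,
                                             # matching neither old nor new
replay_full_page_image(page, old, delta_off, delta)
assert bytes(page) == bytes(new)             # page fully recovered
```

With only the delta in WAL there is no record of what the rest of the page held before the torn write, which is why the full pre-image has to be logged.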
How feasible would it be to push fsyncing those pages/files to the background
writer and have it done with priority? That way the disk I/O wait could move out
of the critical execution path. Maybe that could yield the performance benefit
we are looking for.
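The hand-off idea above could look something like this sketch (my assumption of the shape, not PostgreSQL's actual background writer): the backend only queues a flush request, and the slow fsync happens in a separate writer thread. The durability caveat is visible in the code: a commit is only safe once the background fsync has actually completed, so the backend still waits on that event before reporting success.

```python
# Hypothetical sketch: move fsync off the backend's critical path into a
# background writer. flush_requests and background_writer are illustrative
# names, not PostgreSQL identifiers.

import os
import queue
import tempfile
import threading

flush_requests: "queue.Queue" = queue.Queue()

def background_writer() -> None:
    while True:
        fd, done = flush_requests.get()
        os.fsync(fd)     # the disk I/O wait happens here, off the hot path
        done.set()       # signal any backend waiting for durability

writer = threading.Thread(target=background_writer, daemon=True)
writer.start()

fd = tempfile.mkstemp()[0]
os.write(fd, b"wal record")
done = threading.Event()
flush_requests.put((fd, done))   # backend continues without blocking...
done.wait()                      # ...until it must report the commit durable
os.close(fd)
```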
Also, just out of curiosity: is it possible for more than one transaction to
grab hold of different pages of WAL and write data to them simultaneously? In
such a case a single fsync could do the job for more than one backend, but
replaying the WAL would be akin to defragging a FAT partition..
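A rough sketch of that "one fsync serves many backends" idea (again my own simplification, not the server's implementation): several backends append WAL records, and whoever fsyncs makes everything written up to that point durable, so a single fsync can cover many pending commits.

```python
# Sketch of shared WAL appends where one fsync covers several commits.
# insert_wal_record / flush_wal are illustrative names only.

import os
import tempfile
import threading

wal_lock = threading.Lock()
fd = tempfile.mkstemp()[0]
flushed_upto = 0   # byte offset known to be durable

def insert_wal_record(rec: bytes) -> int:
    """Append a record; return the offset a commit of it must wait for."""
    with wal_lock:
        os.write(fd, rec)
        return os.lseek(fd, 0, os.SEEK_CUR)   # end of this record

def flush_wal(upto: int) -> None:
    """One fsync makes durable every record written before it."""
    global flushed_upto
    with wal_lock:
        if flushed_upto < upto:
            os.fsync(fd)
            flushed_upto = os.lseek(fd, 0, os.SEEK_CUR)

# Three "backends" insert records; a single flush then covers all three.
targets = [insert_wal_record(b"commit;") for _ in range(3)]
flush_wal(max(targets))
assert all(t <= flushed_upto for t in targets)
```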
Just a thought..
Shridhar
Next Message: Dennis Haney, 2004-03-16 14:14:48, Re: WAL write of full pages
Previous Message: Bruce Momjian, 2004-03-16 13:44:12, Re: Feature request: Dumping multiple tables at one step