From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: small parallel restore optimization
Date: 2009-03-06 17:20:12
Message-ID: 428.1236360012@sss.pgh.pa.us
Lists: pgsql-hackers
Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> Here's a little optimization for the parallel restore code, that
> inhibits reopening the archive file unless the worker will actually need
> to read from the file (i.e. a data member). It seems to work OK on both
> Linux and Windows, and I propose to apply it in a day or two.
I think you should close the file immediately at fork if you're not
going to reopen it --- otherwise it's a foot-gun waiting to fire.
IOW, not this, but something more like
    if (te->section == SECTION_DATA)
        (AH->ReopenPtr) (AH);
    else
        (AH->ClosePtr) (AH);

    ... worker task ...

    if (te->section == SECTION_DATA)
        (AH->ClosePtr) (AH);
> I've seen a recent error that suggests we are clobbering memory
> somewhere in the parallel code, as well as Olivier Prennant's reported
> error that suggests the same thing, although I'm blessed if I can see
> where it might be. Maybe some more eyeballs on the code would help.
Can you put together even a weakly reproducible test case? Something
that only fails every tenth or hundredth time would still help.
regards, tom lane