From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | "Qingqing Zhou" <zhouqq(at)cs(dot)toronto(dot)edu> |
Cc: | pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: unsafe use of hash_search(... HASH_ENTER ...) |
Date: | 2005-05-28 16:46:29 |
Message-ID: | 16797.1117298789@sss.pgh.pa.us |
Lists: | pgsql-hackers |
"Qingqing Zhou" <zhouqq(at)cs(dot)toronto(dot)edu> writes:
> Consider a scenario like this:
> Backends register some dirty segments in BgWriterShmem->requests; the
> bgwriter will AbsorbFsyncRequests() asynchronously, but may fail to record
> one of them in pendingOpsTable due to an "out of memory" error. All dirty
> segments remembered in "requests" after that one will never get a chance
> to be absorbed by the bgwriter.
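For context, the flow at issue looks roughly like this (a condensed sketch
following the 2005-era bgwriter.c/md.c names; details are from memory, not
the exact committed code):

    void
    AbsorbFsyncRequests(void)
    {
        BgWriterRequest *requests = NULL;
        int         n;
        int         i;

        /*
         * Copy the requests out of shared memory and reset the counter;
         * past this point the pending requests exist only in local memory.
         */
        LWLockAcquire(BgWriterCommLock, LW_EXCLUSIVE);
        n = BgWriterShmem->num_requests;
        if (n > 0)
        {
            requests = (BgWriterRequest *) palloc(n * sizeof(BgWriterRequest));
            memcpy(requests, BgWriterShmem->requests,
                   n * sizeof(BgWriterRequest));
        }
        BgWriterShmem->num_requests = 0;
        LWLockRelease(BgWriterCommLock);

        for (i = 0; i < n; i++)
            RememberFsyncRequest(requests[i].rnode, requests[i].segno);

        /*
         * If RememberFsyncRequest() raises ERROR partway through (e.g. its
         * hash_search(..., HASH_ENTER, ...) on pendingOpsTable hits out of
         * memory), the rest of requests[] is lost for good: the entries are
         * no longer in shared memory, so those segments never get fsync'd.
         */
        if (requests)
            pfree(requests);
    }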
So really we have to PANIC if we fail to record a dirty segment. That's
a bit nasty, but since the hashtable is so small (only 16 bytes per
gigabyte-sized dirty segment) it seems unlikely that the situation will
ever occur in practice.
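The 16-byte figure: each pendingOpsTable entry of that era was, if memory
serves, just a RelFileNode plus a segment number, and each segment covers
RELSEG_SIZE x BLCKSZ = 131072 x 8192 bytes = 1GB of relation data:

    /* Approximate 2005-era entry in md.c's pendingOpsTable (sketch) */
    typedef struct
    {
        RelFileNode rnode;      /* tablespace/database/relation OIDs: 12 bytes */
        BlockNumber segno;      /* which 1GB segment needs fsync: 4 bytes */
    } PendingOperationEntry;

So even with a terabyte of dirty segments pending, the keys come to only
about 16kB (plus dynahash overhead).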
I'll put a critical section around it --- seems the easiest way to
ensure a panic ...
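A minimal sketch of that fix, assuming the absorb loop above
(START_CRIT_SECTION()/END_CRIT_SECTION() are the real macros from
miscadmin.h; elog.c promotes any ERROR raised while CritSectionCount > 0
to PANIC):

    /*
     * Once the requests are cleared from shared memory, failing to absorb
     * them must not be survivable, so run the loop in a critical section;
     * an "out of memory" ERROR from hash_search() inside
     * RememberFsyncRequest() then becomes a PANIC.
     */
    START_CRIT_SECTION();

    for (i = 0; i < n; i++)
        RememberFsyncRequest(requests[i].rnode, requests[i].segno);

    END_CRIT_SECTION();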
regards, tom lane