From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: drop/truncate table sucks for large values of shared buffers
Date: 2015-06-30 04:14:51
Message-ID: CAA4eK1LVuRAqR3PF4QOC_BOQ-Saxv=7eXTANCgjWfjzZoocAsQ@mail.gmail.com
Lists: pgsql-hackers
On Mon, Jun 29, 2015 at 5:41 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
>
> On Sun, Jun 28, 2015 at 9:05 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> >
> > Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> writes:
> > > On Sat, Jun 27, 2015 at 7:40 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> > >> I don't like this too much because it will fail badly if the caller
> > >> is wrong about the maximum possible page number for the table, which
> > >> seems not exactly far-fetched. (For instance, remember those kernel bugs
> > >> we've seen that cause lseek to lie about the EOF position?)
> >
> > > Considering we already have an exclusive lock while doing this operation
> > > and nobody else can perform a write on this file, won't closing and
> > > opening it again avoid such problems?
> >
> > On what grounds do you base that touching faith?
>
> Sorry, but I don't get what problem you see in this touching?
>
On thinking about it again, I believe your concern is that if we close
and reopen the file, it could break a flush operation happening in
parallel via checkpoint. Still, I am not clear whether we want to
assume that we can't rely on lseek for the size of the file when there
can be parallel write activity on the file (even if that write doesn't
increase the file's size)?
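To make the lseek dependence concrete, here is a minimal standalone
sketch (the function name is mine, and it assumes the usual backend
headers for BlockNumber, BLCKSZ and elog; the real code path is
smgrnblocks() -> mdnblocks(), which does the same thing via FileSeek()):

#include <unistd.h>

/*
 * Sketch only (name is made up): derive a segment's block count from
 * lseek, which is effectively what mdnblocks() does via FileSeek().
 * If the kernel's lseek lies about the EOF position, this count is
 * wrong, and a caller that trusts it to bound a buffer-invalidation
 * scan will miss pages.
 */
static BlockNumber
sketch_nblocks_lseek(int fd)
{
    off_t       len = lseek(fd, 0, SEEK_END);

    if (len < 0)
        elog(ERROR, "could not seek to end of file: %m");
    return (BlockNumber) (len / BLCKSZ);
}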
If yes, then we have the options below:
a. Add some protection mechanism for file access (ignore the error when
the file is not present or is accessed during a flush) and clean the
buffers containing invalid objects, as is being discussed up-thread.
b. Use some other API, like stat, to get the size of the file (a rough
sketch follows).
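To illustrate (b), a hypothetical fstat-based variant (again the name
is mine):

#include <sys/stat.h>

/*
 * Sketch of option (b): ask the kernel via fstat() instead of lseek().
 * Note this only swaps the syscall; if the kernel misreports the file
 * size, st_size is presumably just as suspect as the lseek() result,
 * so it may not buy us anything against the bugs Tom mentioned.
 */
static BlockNumber
sketch_nblocks_fstat(int fd)
{
    struct stat st;

    if (fstat(fd, &st) < 0)
        elog(ERROR, "could not stat file: %m");
    return (BlockNumber) (st.st_size / BLCKSZ);
}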
Do you prefer either of these, or if you have a better idea, please
do share it.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com