From: "Hiroshi Inoue" <Inoue(at)tpf(dot)co(dot)jp>
To: "Bruce Momjian" <maillist(at)candle(dot)pha(dot)pa(dot)us>
Cc: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "PostgreSQL-development" <pgsql-hackers(at)postgreSQL(dot)org>
Subject: RE: [HACKERS] tables > 1 gig
Date: 1999-06-18 01:52:10
Message-ID: 000301beb92d$2df2b420$2801007e@cadzone.tpf.co.jp
Lists: pgsql-hackers
>
> > > I haven't been paying much attention, but I imagine that what's really
> > > going on here is that once vacuum has collected all the still-good
> > > tuples at the front of the relation, it doesn't bother to go through
> > > the remaining blocks of the relation and mark everything dead therein?
> > > It just truncates the file after the last block that it put tuples
> > > into, right?
> > >
> > > If this procedure works correctly for vacuuming a simple one-segment
> > > table, then it would seem that truncation of all the later segments to
> > > zero length should work correctly.
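For reference, truncating the later segments would look something like the
sketch below (hypothetical helper names and path format, not actual vacuum
code). The point is that truncate(2) keeps the inode in place, so file
descriptors other backends have cached stay usable, while unlink(2) pulls
it out from under them:

    /* Sketch only: zero out every segment past the new end of the
     * relation instead of unlinking it. */
    for (segno = new_last_seg + 1; segno <= old_last_seg; segno++)
    {
        char        path[MAXPGPATH];

        snprintf(path, sizeof(path), "%s.%d", relpath, segno);
        if (truncate(path, 0) < 0)  /* keep the file, drop its blocks */
            elog(ERROR, "could not truncate %s", path);
    }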
> >
> > Not sure about that. When we truncate a single-segment file, the table is
> > being destroyed, so we invalidate it in the catalog cache and tell other
> > backends. Also, we have a problem with DROP TABLE in a transaction
> > while others are using it, as described in a bug report a few days ago,
> > so I don't think we have that 100% either.
> >
The problem is that (virtual) file descriptors, relcache entries, etc. are
local to each process. I don't know of a reliable way to tell other
processes, just in time, that those resources should be invalidated.
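Here is a minimal stand-alone demonstration of that locality (my own
sketch, plain POSIX, not PostgreSQL code): a descriptor held by one
process keeps referencing the old inode even after another process
unlinks and recreates the file, and the holder is never told that
anything changed.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int
    main(void)
    {
        const char *path = "seg.demo";  /* stand-in for a relation segment */
        char        buf[9];
        int         fd;

        /* "backend A" opens the segment and caches the descriptor */
        fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);
        write(fd, "old data", 8);

        /* "backend B" (a child process here) unlinks and recreates it */
        if (fork() == 0)
        {
            int         nfd;

            unlink(path);
            nfd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);
            write(nfd, "new data", 8);
            close(nfd);
            _exit(0);
        }
        wait(NULL);

        /* backend A still reads the unlinked inode: nobody told it */
        lseek(fd, 0, SEEK_SET);
        read(fd, buf, 8);
        buf[8] = '\0';
        printf("A sees: %s\n", buf);    /* prints "old data" */
        close(fd);
        unlink(path);
        return 0;
    }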
> > That is interesting. I never thought of that. Hiroshi, can you test
> > that idea? If it is the non-existence of the file that other backends
> > are checking for, my earlier idea of rename() with a truncated file kept
> > in place may be better.
> >
> > Also, I see why we are not getting more bug reports. They only get this
> > when the table loses a segment, so it is OK to vacuum large tables as
> > long as the table doesn't lose a segment during the vacuum.
>
> OK, this is 100% wrong. We truncate from vacuum any time the table size
> changes, and vacuum of large tables will fail even if not removing a
> segment. I forgot vacuum does this to reduce disk table size.
>
> I wonder if truncating a file to reduce its size will cause other table
> readers to have problems.
The current implementation has a hidden bug: once the size of a segment
has reached RELSEG_SIZE, mdnblocks() never checks the real size of that
segment again.
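A minimal, self-contained illustration of that caching pattern (the names
here are made up; this only mimics what I remember of md.c, where each
MdfdVec keeps a cached block count):

    #include <stdio.h>

    #define RELSEG_SIZE 4   /* absurdly small, for the demo */

    typedef struct Seg
    {
        int         real_blocks;    /* what lseek(SEEK_END) would report */
        int         lstbcnt;        /* cached block count */
    } Seg;

    /*
     * Mirrors the buggy check: once the cache says "full", the real
     * size is never consulted again.
     */
    static int
    seg_nblocks(Seg *s)
    {
        if (s->lstbcnt == RELSEG_SIZE)
            return RELSEG_SIZE;         /* short-circuit: no real check */
        s->lstbcnt = s->real_blocks;    /* otherwise refresh the cache */
        return s->lstbcnt;
    }

    int
    main(void)
    {
        Seg         s = {RELSEG_SIZE, 0};

        printf("before truncate: %d blocks\n", seg_nblocks(&s));   /* 4 */

        s.real_blocks = 1;      /* vacuum truncates the segment... */

        /* ...but the cached count still reports it as full: */
        printf("after truncate:  %d blocks\n", seg_nblocks(&s));   /* 4 */
        return 0;
    }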
I'm not sure that other bugs of this kind don't exist. That is one of the
reasons why I don't recommend applying my trial patch to mdtruncate().
> I thought vacuum had an exclusive lock on the
> table during vacuum, and if so, why are other backends having troubles?
>
We saw no errors from unlinking segmented relations when commands were
executed sequentially. Vacuum calls RelationInvalidateHeapTuple() for the
pg_class tuple, and other backends recognize that the relcache entry must
be invalidated while executing StartTransaction() or
CommandCounterIncrement(). But even though the target relation is locked
exclusively by vacuum, other backends can still pass through
StartTransaction() and CommandCounterIncrement(), then parse, analyze,
rewrite, optimize, start the executor stage, and open relations. We
cannot rely on the exclusive lock that much.
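For example, one possible interleaving (a sketch of a schedule I believe
can happen, not an observed trace):

    backend B: StartTransaction()      -- pending SI messages drained here
    backend B: parses, analyzes, rewrites and optimizes the query, opening
               (and closing) the relation via its relcache entry
    vacuum:    holding the exclusive lock, truncates the relation and calls
               RelationInvalidateHeapTuple() for the pg_class tuple;
               commits and releases the lock
    backend B: enters the executor stage and opens the relation again with
               the now-stale relcache entry; the SI message is not read
               until its next StartTransaction()/CommandCounterIncrement()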
Regards.
Hiroshi Inoue
Inoue(at)tpf(dot)co(dot)jp