Re: [SQL] Questions about vacuum analyze

From: "Steven M(dot) Wheeler" <swheeler(at)sabre(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-sql(at)hub(dot)org
Subject: Re: [SQL] Questions about vacuum analyze
Date: 1999-10-07 11:29:38
Message-ID: 37FC8422.82295F9A@sabre.com
Lists: pgsql-sql

More information on the continuing vacuum saga:
The latest run came after having processed an additional 40K+ msgs (4M+ inserts).
Start: 10/06/1999@07:28:07
Complete: 10/06/1999@21:17:32
About 14 hours.
Two tables.
The currnt table has 39.5M+ rows in it. Select count(*) took 14m41s.
The history table only has 190K rows.
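
For reference, the timed steps were essentially the following (a sketch only,
shown here as a database-wide vacuum):

    -- Sketch of the timed run: vacuum analyze, then a count on the big table.
    VACUUM ANALYZE;                -- the ~14 hour run, covering both tables
    SELECT count(*) FROM currnt;   -- 14m41s over 39.5M+ rows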

I am now running into another problem. I need to do a select distinct on the
currnt table to get the dates for maintenance activity (rows to summarize and
move to history). Whenever I do, I get this message in my server.log file:

NOTICE: BufFileRead: should have flushed after writing

And I get 0 records selected. I can't find this message in the docs. Any
suggestions? Should I submit this on the SQL list?
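
In case it helps, the query is along these lines (a sketch only; "actdate" is
a stand-in name for the real date column, and the history column list below is
invented just to show the intent):

    -- The failing query: distinct dates out of the big table.
    -- actdate stands in for the actual date column in currnt.
    SELECT DISTINCT actdate FROM currnt;

    -- Roughly what the maintenance step would then do: summarize one
    -- day's rows into history and remove them from currnt.
    INSERT INTO history (actdate, msg_count)
    SELECT actdate, count(*) FROM currnt
    WHERE actdate = '1999-10-01'
    GROUP BY actdate;

    DELETE FROM currnt WHERE actdate = '1999-10-01';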

Thanks, your help is greatly appreciated!

--
Steven Wheeler
Mid-Range UNIX Engineering
Sabre Inc.
(918) 292-4119

Tom Lane wrote:

> "Steven M. Wheeler" <swheeler(at)sabre(dot)com> writes:
> > I believe that 6.5.2 and a little space reclamation in my DB directory may
> > have taken care of the vacuum problem. The last vacuum ran in about 10
> > minutes on an already vacuumed DB.
>
> Ah, that's more like it...
>
> > 1) With 6.5.2 should I still drop the indexes prior to running vacuum?
>
> Probably, but I'm not sure. Try it both ways and see.
>
> > 2) Is there a command/script that will cleanup the temp files and previous
> > table files in the DB directory. The reason I ask: I now have 3 copies of
> > my currnt table file (currnt, currnt.1, currnt.2) for a total usage of
> > about 3GB. Since it appears that currnt.2 is the one being accessed, can I
> > safely delete currnt & currnt.1?
>
> NO NO NO NO NO!!!!
>
> Files named like that are NOT temp files!! What they are are segments
> of a large table. We segment big tables into gigabyte-sized chunks
> to avoid problems on systems that have an int32-related limit on the
> size of individual files (which is most Unixes these days, I think).
>
> If you see anything named like pg_tempNNN.NNN, then that really is
> a temp file, and if it's not got a very recent mod time then it's
> probably left over from a crashed backend. Old temp files should be
> safe to get rid of. (IIRC, one of the components of the name is the
> PID of the creating backend, so you can check to be sure that the
> backend is no longer around if you want to be doubly sure.)
>
> regards, tom lane
