From: | "Zeugswetter Andreas DCP SD" <ZeugswetterA(at)spardat(dot)at> |
---|---|
To: | "Wes" <wespvp(at)syntegra(dot)com>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com>, <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: [GENERAL] Concurrency problem building indexes |
Date: | 2006-04-25 14:55:00 |
Message-ID: | E1539E0ED7043848906A8FF995BDA579FC326B@m0143.s-mxs.net |
Lists: pgsql-hackers
> > Wes, you could most likely solve your immediate problem if you did
> > an analyze before creating the indexes.
>
> I can try that. Is that going to be a reasonable thing to do when
> there's 100 million rows per table? I obviously want to minimize the
> number of sequential passes through the database.
No, I think it would only help if analyze produced the exact tuple count,
and for a large table you only get an exact count from a full scan
(i.e. use vacuum instead of analyze).
Then again, when the table is large, the different "create index"es
should finish at sufficiently different times anyway, so an analyze
might be enough to fix the problem for the small tables.
(Analyze stays fast on large tables because it only reads a sample.)
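A rough sketch of what I mean (the table and column names are just made
up for illustration, not from Wes' actual schema):

    -- vacuum scans the whole table, so the tuple count it stores in
    -- pg_class.reltuples is exact; plain analyze would only estimate
    -- it from a sample
    VACUUM ANALYZE messages;

    -- the index builds then start from an accurate tuple count
    CREATE INDEX messages_sender_idx ON messages (sender);
    CREATE INDEX messages_sent_at_idx ON messages (sent_at);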
Andreas