From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Bernhard Ankenbrand <b(dot)ankenbrand(at)media-one(dot)de>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Crash in postgres/linux on verly large database
Date: 2004-04-06 18:07:33
Message-ID: 28447.1081274853@sss.pgh.pa.us
Lists: pgsql-general
Bernhard Ankenbrand <b(dot)ankenbrand(at)media-one(dot)de> writes:
> we have a table with about 60,000,000 entries and about 4GB storage size.
> When creating an index on this table the whole linux box freezes and the
> reiser-fs file system is corrupted and not recoverable.
> Does anybody have experience with this amount of data in postgres 7.4.2?
> Is there a limit anywhere?
Many people run Postgres with databases far larger than that. In any
case, a Postgres bug could not cause a system-level freeze or filesystem
corruption, since it's not a privileged process.
I'd guess that you are dealing with a hardware problem: flaky disk
and/or bad RAM are the usual suspects. See memtest86 and badblocks
as the most readily available hardware test aids.
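[A sketch of how one might run those checks; device names like /dev/sdb and /dev/sdb1 are placeholders, substitute your actual disk and partition. memtest86 is booted standalone rather than run from a shell.]

```shell
# RAM check: memtest86 is not a normal program -- boot its standalone
# image (from a CD/USB or a boot-loader entry) and let it run several
# full passes overnight.

# Disk check: non-destructive read-only surface scan.
# -s shows progress, -v is verbose. /dev/sdb is a placeholder.
badblocks -sv /dev/sdb

# Filesystem check: verify the reiserfs volume. Unmount it first;
# /dev/sdb1 is a placeholder partition.
umount /dev/sdb1
reiserfsck --check /dev/sdb1
```

Both badblocks and reiserfsck need root, and the filesystem must be unmounted before reiserfsck is run against it.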
regards, tom lane