From: Joachim Worringen <joachim(dot)worringen(at)iathh(dot)de>
To: Postgresql General <pgsql-general(at)postgresql(dot)org>
Subject: coping with failing disks
Date: 2010-09-02 12:16:20
Message-ID: 4C7F9594.6090407@iathh.de
Lists: pgsql-general
Greetings,
we are setting up a new database server with a substantial number of disks
for our in-house PostgreSQL-based "data warehouse".
We are considering using separate sets of disks for indices (an index
tablespace on SSDs, in this case) and a tablespace for tables which are
used as temporary tables (but for various reasons are regular tables in
PostgreSQL). The storage for these should be as fast as possible, possibly
sacrificing reliability for speed.
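For illustration, the layout we have in mind would look roughly like this
(tablespace names, paths and table names are made up):

    -- tablespaces on the fast, non-redundant volumes
    CREATE TABLESPACE ssd_idx  LOCATION '/mnt/ssd_raid0/pg_idx';
    CREATE TABLESPACE fast_tmp LOCATION '/mnt/fast_raid0/pg_tmp';

    -- an index placed on the SSD tablespace
    CREATE INDEX facts_ts_idx ON facts (ts) TABLESPACE ssd_idx;

    -- one of the "temporary" (but regular) tables on the fast tablespace
    CREATE TABLE staging_facts (LIKE facts) TABLESPACE fast_tmp;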
If we set up the SSDs for the indices as a non-redundant RAID 0, it is
quite likely that this volume will fail at some point. Theoretically, this
shouldn't hurt us too much, as we would just have to rebuild the indices
from the existing, unharmed data. But is it that simple in practice? Would
the consistency of the database be affected if all indices were suddenly
gone?
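Our naive assumption is that recovery would then amount to something like
the following (just a sketch; it assumes the ssd_idx tablespace from above
and a database named dwh):

    -- list the indices that lived on the failed tablespace
    SELECT c.relname
      FROM pg_class c
      JOIN pg_tablespace t ON c.reltablespace = t.oid
     WHERE t.spcname = 'ssd_idx' AND c.relkind = 'i';

    -- after replacing the volume and recreating the empty directory,
    -- rebuild them individually, or simply:
    REINDEX DATABASE dwh;

Is that realistic?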
The same goes for the temporary tables. If their storage becomes
unavailable, only the queries running at that moment should be affected.
But how can we tell PostgreSQL to simply forget about those tables and
consider the remaining database consistent?
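What we imagine (again only a sketch, with the made-up names from above)
is something along these lines:

    -- drop the tables whose storage is gone, then recreate the tablespace
    DROP TABLE IF EXISTS staging_facts;
    DROP TABLESPACE fast_tmp;
    CREATE TABLESPACE fast_tmp LOCATION '/mnt/fast_raid0/pg_tmp';

But we don't know whether PostgreSQL will even let us drop objects whose
underlying files have vanished.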
We can afford some down time, obviously.
thanks, Joachim