I can't imagine how you wouldn't lose data. RAID creates a redundant disk array: it lays a partition over a set of disks, and all of your disks are already part of the array by the time your OS loads. I've never heard of someone setting up RAID around one disk's existing first partition. As far as I've seen it, you always install the OS across more than one disk using an alternate install option; with Windows it's F5, and with Linux it's just called the 'alternate' install. Or I think with GParted you can partition a set of disks into a RAID array, assuming you have a RAID controller on your server, which many server boards do have, though I'm not sure about yours.
I don't know, maybe it's possible to set up a RAID array that leaves the original partition alone and just starts at the second partition across another set of disks, but I can't imagine why you'd want to do this. Most people build RAID out of identically sized disks, all 250 GB, 500 GB, 1 TB, or whatever, so that whether the array mirrors the data or uses striping or parity, the disks fill up evenly.
Btw, RAID 5 doesn't mean 5 disks; it's the specification for a certain type of RAID. This is RAID 5:
RAID 5
[Diagram: a RAID 5 setup with distributed parity, each color representing the group of blocks in the respective parity block (a stripe); the diagram shows the left-asymmetric algorithm.]
RAID 5 uses block-level striping with parity data distributed across all member disks. RAID 5 has achieved popularity because of its low cost of redundancy, which you can see by comparing the number of drives needed to reach a given capacity. For an array of n drives, with S_min being the size of the smallest disk in the array, the other redundant RAID levels yield a storage capacity of only S_min (for RAID 1) or S_min × (n/2) (for RAID 1+0). In RAID 5, the yield is S_min × (n − 1). For example, four 1 TB drives can be made into two separate 1 TB redundant arrays under RAID 1, or a 2 TB array under RAID 1+0, but the same four drives can build a 3 TB array under RAID 5.

RAID 5 may be implemented in a disk controller: some controllers have hardware support for parity calculations (hardware RAID cards with onboard processors), while others use the main system processor (a form of software RAID in vendor drivers for inexpensive controllers). Many operating systems also provide software RAID support independently of the disk controller, such as Windows Dynamic Disks, Linux mdadm, or RAID-Z. In most implementations a minimum of three disks is required for a complete RAID 5 configuration. Some implementations allow a degraded RAID 5 set to be made (a three-disk set of which only two are online), and mdadm supports a fully functional (non-degraded) RAID 5 setup with two disks, which functions as a slow RAID 1 but can be expanded with further volumes.
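Just to make that capacity math concrete, here's a quick Python sketch (the function is mine, purely illustrative):

# Usable capacity for n drives of size s_min (TB here),
# using the formulas quoted above
def capacity(level, n, s_min):
    if level == "RAID 1":    # full mirror: one disk's worth per array
        return s_min
    if level == "RAID 1+0":  # striped mirrors: half the disks
        return s_min * n / 2
    if level == "RAID 5":    # one disk's worth lost to parity
        return s_min * (n - 1)
    raise ValueError(level)

for level in ("RAID 1", "RAID 1+0", "RAID 5"):
    print(level, capacity(level, 4, 1), "TB")  # four 1 TB drives -> 1, 2, 3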
In the example, a read request for block A1 would be serviced by disk 0. A simultaneous read request for block B1 would have to wait, but a read request for B2 could be serviced concurrently by disk 1.

RAID 5 parity handling
A concurrent series of blocks (one on each of the disks in an array) is collectively called a stripe. If another block, or some portion thereof, is written on that same stripe, the parity block, or some portion thereof, is recalculated and rewritten. For small writes, this requires the following steps (there's a quick sketch of the XOR step after the list):
1. Read the old data block.
2. Read the old parity block.
3. Compare the old data block with the write request. For each bit that has flipped (changed from 0 to 1, or from 1 to 0) in the data block, flip the corresponding bit in the parity block.
4. Write the new data block.
5. Write the new parity block.
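That "flip the corresponding bit" step is just XOR, which is why parity is cheap to update. A toy Python sketch (byte strings standing in for disk blocks; the function name is mine):

def update_parity(old_data, new_data, old_parity):
    # bits that changed in the data block
    delta = bytes(a ^ b for a, b in zip(old_data, new_data))
    # flip the same bits in the parity block
    return bytes(p ^ d for p, d in zip(old_parity, delta))

d0, d1 = b"\x0f\x0f", b"\xf0\x01"              # two data blocks
parity = bytes(a ^ b for a, b in zip(d0, d1))  # their parity block
new_d0 = b"\xff\x00"
parity = update_parity(d0, new_d0, parity)
d0 = new_d0
assert parity == bytes(a ^ b for a, b in zip(d0, d1))  # still consistent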
The disk used for the parity block is staggered from one stripe to the next, hence the term distributed parity blocks. RAID 5 writes are expensive in terms of disk operations and traffic between the disks and the controller.
The parity blocks are not read on data reads, since this would add unnecessary overhead and diminish performance. The parity blocks are read, however, when a read of blocks in the stripe fails due to failure of any one of the disks; the parity block in the stripe is then used to reconstruct the errant sector, so the CRC error is hidden from the main computer. Likewise, should a disk fail in the array, the parity blocks from the surviving disks are combined mathematically with the data blocks from the surviving disks to reconstruct the data from the failed drive on the fly.
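The reconstruction itself is the same XOR trick run in reverse: the missing block is the XOR of everything that survived, parity included. Another toy sketch (illustrative Python, not any real implementation):

def rebuild(surviving_blocks):
    # missing block = XOR of all surviving blocks in the stripe
    out = bytes(len(surviving_blocks[0]))
    for blk in surviving_blocks:
        out = bytes(a ^ b for a, b in zip(out, blk))
    return out

d0, d1 = b"\x0f\x0f", b"\xf0\x01"
parity = bytes(a ^ b for a, b in zip(d0, d1))
assert rebuild([d0, parity]) == d1   # pretend the disk holding d1 died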
That on-the-fly reconstruction is sometimes called Interim Data Recovery Mode. The computer knows that a disk drive has failed, but only so that the operating system can notify the administrator that a drive needs replacement; applications running on the computer are unaware of the failure. Reading and writing to the drive array continue seamlessly, though with some performance degradation.

RAID 5 recovery issues
In the event of a system failure while there are active writes, the parity of a stripe may become inconsistent with the data. If this is not detected and repaired before a disk or block fails, data loss may ensue, as incorrect parity will be used to reconstruct the missing block in that stripe. This potential vulnerability is sometimes known as the write hole. Battery-backed cache and similar techniques are commonly used to reduce the window of opportunity for this to occur. The same issue occurs for RAID 6.
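A scrub is basically the check below run over every stripe: recompute parity from the data blocks and compare it against what's on disk (a sketch in the same toy style as above):

def stripe_consistent(data_blocks, parity):
    # recompute parity from the data blocks and compare
    calc = bytes(len(parity))
    for blk in data_blocks:
        calc = bytes(a ^ b for a, b in zip(calc, blk))
    return calc == parity

d0, d1 = b"\x0f\x0f", b"\xf0\x01"
parity = bytes(a ^ b for a, b in zip(d0, d1))
print(stripe_consistent([d0, d1], parity))   # True unless a write hole hit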
I wouldn't use that RAID array; I'd use RAID 1+0, and go with six 1 TB disks if you REALLY want to go crazy:
"RAID 10": a stripe made of mirrors
So-called RAID 10 arrays consist of a top-level RAID 0 array (or stripe set) composed of two or more RAID 1 arrays (or mirrors). A single-drive failure in a RAID 10 configuration results in one of the lower-level mirrors entering degraded mode, but the top-level stripe may be configured to perform normally (except for the performance hit), as both of its constituent storage elements are still operable; whether it does is application-specific.
The failed drive is replaced with a spare, the low-level mirror is rebuilt from the remaining good drive(s), and no change is necessary for the stripe set. (Though the performance of the top-level RAID 0 stripe set will be degraded during the rebuild of the low-level RAID-1 mirror, stripe sets do not have a degraded mode per se).
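If it helps, here's how logical blocks land on the disks in a six-disk RAID 10 (three mirrored pairs); the disk numbering is my own toy convention, not anything a real controller promises:

def raid10_location(block, n_pairs):
    # RAID 0 layer: stripe blocks across the mirrored pairs
    pair = block % n_pairs
    offset = block // n_pairs
    # RAID 1 layer: the block lives on BOTH disks of its pair
    return (2 * pair, 2 * pair + 1), offset

for block in range(6):                 # six disks = three pairs
    disks, offset = raid10_location(block, 3)
    print(f"block {block} -> disks {disks}, stripe row {offset}")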
RAID 10 is pretty much RAID 0 and RAID 1 combined: mirroring across pairs of disks, with the data striped over the mirrored pairs (three pairs if you use six disks). There's no parity involved; the mirrors give you the redundancy and the striping gives you the speed, so data is written quickly across the stripe but fully copied within each mirror. I think I said that correctly. If you want to go with cheap RAID, go with RAID 1:
RAID 1
[Diagram: a RAID 1 setup]
An exact copy (or mirror) of a set of data on two disks. This is useful when read performance or reliability is more important than data storage capacity. Such an array can only be as big as the smallest member disk. A classic RAID 1 mirrored pair contains two disks, which increases reliability geometrically over a single disk. Since each member contains a complete copy and can be addressed independently, ordinary wear-and-tear reliability is raised by the power of the number of self-contained copies.

RAID 1 failure rate
As a simplified example, consider a RAID 1 array with two identical models of disk drive, each with a 5% probability of failing within three years. Provided that the failures are statistically independent, the probability of both disks failing during the three-year lifetime is 0.05 × 0.05 = 0.0025, i.e. 0.25%. Thus, the probability of losing all data is 0.25% over a three-year period if nothing is done to the array. If the first disk fails and is never replaced, there is a 5% chance the data will be lost. If only one of the disks fails, no data is lost; as long as a failed disk is replaced before the second disk fails, the data is safe.
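You can check that arithmetic in a couple of lines of Python (independence is the big assumption here):

p = 0.05                # chance one disk fails within three years
print(f"{p * p:.2%}")   # both fail (if independent): 0.25%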
However, since two identical disks are used and since their usage patterns are also identical, their failures cannot be assumed to be independent. Thus, the probability of losing all data, if the first failed disk is not replaced, may increase.
As a practical matter, in a well-managed system the above is irrelevant, because the failed hard drive will not be ignored but replaced. The reliability of the overall system is determined by the probability that the remaining drive continues to operate through the repair period, that is, the total time it takes to detect the failure, replace the failed drive, and rebuild it. If, for example, it takes one hour to replace the failed drive and nine hours to repopulate it, the overall system reliability is defined by the probability that the remaining drive operates for ten hours without failure.
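To put a rough number on that ten-hour window: if you back a constant failure rate out of the 5%-in-three-years figure (my assumption, not something from the post), the surviving drive almost certainly makes it:

import math
hours = 3 * 365 * 24                      # three years in hours
rate = -math.log(1 - 0.05) / hours        # assumed constant failure rate
print(f"{math.exp(-rate * 10):.6f}")      # survives the 10 h repair: ~0.999980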
While RAID 1 can be an effective protection against physical disk failure, it does not provide protection against data corruption due to viruses, accidental file changes or deletions, or any other data-specific changes. By design, any such changes are instantly mirrored to every drive in the array segment. A virus, for example, that damages data on one drive in a RAID 1 array will damage the same data on all other drives in the array at the same time. For this reason, systems using RAID 1 to protect against physical drive failure should also have a traditional data backup process in place to allow restoration to previous points in time. This, however, is also the case with other RAID levels: any system critical enough to require disk redundancy also needs the protection of reliable data backups.
I guess it is called RAID 1+0 because of the mirroring and striping. RAID 0 on its own is only used because your reads and writes are a little quicker, since the data is striped across two disks and is easier to write and access. RAID 1+0 is the way to go: striping plus mirroring for recovery safety, and with six disks you get three mirrored pairs. I'm pretty sure RAID 6 is like RAID 5 but with a second distributed parity block, so it can survive two disk failures; it's not related to RAID 1+0. I don't know, I'm not a RAID expert, but it makes a little more sense, right?
--------------------
I did not say to edit my signature, soulidarity! Now I'll never remember what I said about understanding the secrets of the universe by paying attention to subtleties!
I'm never giving you the password again. Jerk.