Intel® VROC RAID Advanced Usages in Linux
This chapter introduces advanced management operations available for Intel® VROC RAID on Linux*. Advanced usages include Online Capacity Expansion (OCE), RAID level migration, changing RAID chunk size, and enabling features such as Partial Parity Log (PPL) for RAID 5 write-hole protection and write-intent bitmaps. Many of these operations trigger background reshape or resync processes. While data is intended to remain intact, administrators are strongly advised to perform full data backups before executing any advanced operation.
6.1 Changing RAID Volume Name
Changing a RAID volume name should only be performed when the volume is inactive.
Check the current status of the Intel® VROC RAID volume to confirm it is not active.
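As a minimal sketch, assuming the volume is /dev/md/vol and the container is /dev/md/imsm (adjust the names for your system), the status can be checked and the volume stopped as follows:
cat /proc/mdstat
mdadm --detail /dev/md/vol
mdadm --stop /dev/md/vol
Stopping only the volume leaves the container assembled and the metadata on the member drives, so the volume can be renamed and reassembled afterwards.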
Run the update command to rename the volume. In this example, /dev/md/imsm is the container device, and the name of subarray 0 is changed to vol:
mdadm --update-subarray=0 --update=name --name=vol /dev/md/imsm
The --update-subarray parameter specifies the volume index inside the container. Indexing starts at 0. If multiple volumes exist, adjust the index accordingly. The index can be confirmed from the RAID metadata.
Expected result:
mdadm: Updated subarray-0 name from /dev/md/imsm, UUIDs may have changed
Reassemble the volume with the updated name to bring it back online.
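A minimal reassembly sketch, assuming the RAID configuration is recorded in the mdadm configuration file and the new volume name is vol:
mdadm --assemble --scan
mdadm --detail /dev/md/vol
The second command confirms that the volume is active again under the updated name.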
6.2 Enabling and Disabling PPL for RAID 5 RWH Protection
RAID Write Hole (RWH) is a potential fault scenario in parity-based RAID 5. It occurs when a power failure or system crash happens at the same time as, or very close to, a drive failure. Such correlated failures can lead to silent data corruption.
To address this, Intel® VROC RAID 5 supports the Partial Parity Log (PPL) feature:
With PPL enabled, resync of the array is not required after a dirty shutdown.
By default, PPL is disabled unless explicitly enabled at RAID creation.
PPL can also be enabled or disabled on an active RAID 5 volume.
Enable PPL on an active RAID 5 volume:
mdadm --grow --consistency-policy=ppl /dev/md/vol
Expected result: no direct output. Verify with mdadm --detail /dev/md/vol; the Consistency Policy should display ppl.
Disable PPL on an active RAID 5 volume:
mdadm --grow --consistency-policy=resync /dev/md/vol
Expected result: no direct output. Verify with mdadm --detail /dev/md/vol; the Consistency Policy should display resync.
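For a quick check, assuming the volume is /dev/md/vol:
mdadm --detail /dev/md/vol | grep -i 'consistency policy'
The output should show ppl after enabling PPL and resync after disabling it.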
6.3 Enabling and Disabling Write-Intent Bitmap
The write-intent bitmap is a RAID feature that records which blocks of data have been changed. After a dirty shutdown, only the changed blocks need to be resynchronized instead of the whole array. The bitmap works only on RAID levels with redundancy.
The bitmap feature can be enabled when creating the RAID volume. The following is an example of creating an Intel® VROC RAID 5 volume with the bitmap feature enabled:
# mdadm --create /dev/md/imsm0 /dev/nvme[0-2]n1 --raid-devices=3 --metadata=imsm
# mdadm --create /dev/md/r5 /dev/md/imsm0 --raid-devices=3 --level=5 --bitmap=internal
The bitmap feature can also be enabled or disabled on an active Intel® VROC RAID volume. The first step is to stop the active RAID volume because this is an offline operation. Remember to save the RAID configuration beforehand so the volume can be assembled again afterwards (see the sketch below). The following examples list the commands to enable and disable the bitmap feature.
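A minimal sketch for saving the configuration, assuming your distribution reads /etc/mdadm.conf (some distributions use /etc/mdadm/mdadm.conf instead):
# mdadm --detail --scan >> /etc/mdadm.conf
With the container and volume recorded in the configuration file, mdadm --assemble can bring the volume back online after the bitmap update.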
This is an example to enable write-intent bitmap on an active RAID volume:
# mdadm --stop /dev/md/<volume_name>
# mdadm --update-subarray=<subarray_index> --update=bitmap /dev/md/<container_name>
# mdadm --assemble /dev/md/<volume_name>
The --update-subarray parameter requires the subarray index inside the container, starting at 0. Adjust as needed if multiple volumes exist. The index can be confirmed from the RAID metadata.
This is an example to disable write-intent bitmap on an active RAID volume:
# mdadm --stop /dev/md/<volume_name>
# mdadm --update-subarray=<subarray_index> --update=no-bitmap /dev/md/<container_name>
# mdadm --assemble /dev/md/<volume_name>
Important: mdadm --grow operations cannot be performed while bitmap is enabled. Disable bitmap first, perform the grow operation, and then re-enable bitmap if required.
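As an illustrative sequence only, assuming subarray index 0, container /dev/md/imsm0, and volume /dev/md/r5, a grow operation with the bitmap temporarily disabled might look like this:
# mdadm --stop /dev/md/r5
# mdadm --update-subarray=0 --update=no-bitmap /dev/md/imsm0
# mdadm --assemble /dev/md/r5
# mdadm --grow /dev/md/r5 --size=max
# mdadm --stop /dev/md/r5
# mdadm --update-subarray=0 --update=bitmap /dev/md/imsm0
# mdadm --assemble /dev/md/r5
Wait for the background resync or reshape triggered by the grow operation to finish (check /proc/mdstat) before stopping the volume again to re-enable the bitmap.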
The bitmap status can be verified in several ways:
By checking the /proc/mdstat file. When bitmap is enabled, an additional line starting with bitmap is shown, for example:
bitmap: 0/8 pages [0KB], 65536KB chunk
This line reports how many bitmap pages are allocated and in use, the allocated size, and the size of the data chunk mapped to each bit in the bitmap.
By checking the RAID volume details. In the detailed information of a RAID volume, the enabled bitmap is shown as follows:
Intent Bitmap : Internal
Consistency Policy : bitmap
If bitmap is disabled, the “Intent Bitmap” field is not visible, and the “Consistency Policy” field shows “resync” or “ppl”.
By reading the RAID metadata. If the bitmap is enabled, the bitmap information can be observed in the RAID metadata.
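For example, the metadata on a member drive can be inspected with mdadm --examine, assuming /dev/nvme0n1 is a member of the RAID volume:
# mdadm --examine /dev/nvme0n1
When the bitmap feature is enabled, the bitmap-related fields appear in the printed metadata.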
6.4 Changing RAID Chunk Size
Caution: Even though the data is expected to remain intact, it is strongly recommended to back up all user data before changing the RAID chunk size.
A RAID array’s chunk size defines the minimum block of data written to each disk during I/O operations. The optimal chunk size depends on workload characteristics and drive specifications:
Small I/O workloads may benefit from smaller chunk sizes.
Sequential or large file workloads often benefit from larger chunk sizes.
Intel® VROC Linux* supports changing the chunk size for RAID 0 and RAID 5 arrays. Valid values range from 4KB to 128KB, with 128KB as the default.
Change RAID chunk size (example: 64KB):
mdadm --grow /dev/md/volume --chunk=64
By default, chunk size is specified in kilobytes. You may add the M suffix to specify megabytes. The provided chunk size must be a power of 2, with a minimum of 4KB.
Expected output:
mdadm: level of /dev/md/volume changed to raid4
After executing the command, a RAID reshape process begins in the background. For RAID 0, the array temporarily uses RAID 4 during reshaping. Migration progress can be monitored using /proc/mdstat or mdadm --detail.
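A minimal monitoring sketch, assuming the volume is /dev/md/volume:
cat /proc/mdstat
mdadm --detail /dev/md/volume
While the reshape is in progress, /proc/mdstat shows a reshape line with a completion percentage; once it completes, mdadm --detail reports the new chunk size.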
6.5 Online Capacity Expansion (OCE)
The Online Capacity Expansion (OCE) feature allows you to increase the capacity of a RAID volume while it remains online and in active use. This eliminates downtime and supports continuous business operations.
There are two supported methods for expanding capacity:
Expanding to maximum drive capacity or a larger size than the initial configuration.
Adding one or more drives to increase the overall RAID volume capacity.
Caution: Always back up user data before performing OCE operations.
OCE on Existing RAID Member Disks
By default, when creating an Intel® VROC RAID volume with mdadm, the maximum available drive capacity is used unless the --size option is specified. Administrators may also designate a smaller initial size with --size. Online Capacity Expansion (OCE) can then be applied later to extend the volume size, provided there is free space on each RAID member disk.
OCE on existing member disks is supported for RAID 1, RAID 5, and RAID 10. If multiple RAID volumes exist within one container, only the last volume can perform OCE.
Expansion sizes can be specified in two ways:
max: grow the volume to use the maximum available capacity.
Explicit value: specify the desired size.
Examples:
# mdadm --grow /dev/md/vol1 --size=max
# mdadm --grow /dev/md/vol2 --size=200G
# mdadm --grow /dev/md/vol3 --size=1T
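To confirm the result, for example, assuming the volume is /dev/md/vol1:
# mdadm --detail /dev/md/vol1 | grep -i 'array size'
The reported array size should reflect the expanded capacity.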
OCE by Adding New Drives
Intel® VROC Linux* also supports OCE by adding new drives. This method is available for the following configurations:
RAID 0
RAID 5
Intel® Matrix RAID with RAID 0 and RAID 5
Intel® Matrix RAID with two RAID 0 arrays
Intel® Matrix RAID with two RAID 5 arrays
OCE by adding drives is a two-step operation:
Add new spare drives to the container.
Perform OCE to increase the number of RAID member devices.
Example: Expand to 4 member devices in container /dev/md/imsm:
mdadm --grow --raid-devices=4 /dev/md/imsm
On success, the command produces no output. A background reshape process starts immediately.
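Putting both steps together, a sketch might look like the following, assuming /dev/nvme3n1 is the new spare drive being added to the /dev/md/imsm container:
mdadm --add /dev/md/imsm /dev/nvme3n1
mdadm --grow --raid-devices=4 /dev/md/imsm
cat /proc/mdstat
The last command shows the reshape progress.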
6.6 RAID Level Migration
The RAID level migration feature allows you to convert an existing RAID volume to a different RAID level while the array remains online. Data is preserved during migration; however, it is strongly recommended to back up all data before performing migration operations.
The following table shows the supported migration paths with Intel® IMSM metadata. Ensure that the required number of drives for the target RAID level is available, typically added to the container beforehand as spare drives.
Table 6-1. Migration Capabilities with IMSM

6.6.1 RAID 1 to RAID 0 Migration
Two steps are required to migrate from RAID 1 to RAID 0.
Migrate from 2-disk RAID 1 to 1-disk RAID 0
The following command shows an example of migrating from a 2-disk RAID 1 volume (/dev/md/volume) to a 1-disk RAID 0:
mdadm --grow /dev/md/volume --level=0
The command returns immediately with the following output:
mdadm: level of /dev/md/volume changed to raid0
The migration result can be verified by checking the /proc/mdstat file or by reading the RAID volume details.
Use OCE to migrate 1-disk RAID 0 to 2-disk RAID 0
The following command shows an example of growing the 1-disk RAID 0 to a 2-disk RAID 0 in the /dev/md/imsm container:
# mdadm --grow /dev/md/imsm --raid-devices=2
The RAID reshaping process will be performed in the background.
6.6.2 RAID 10 to RAID 0 Migration
RAID 10 is a combination of two RAID 1 arrays configured in RAID 0. Therefore, the migration process from RAID 10 to RAID 0 follows a similar two-step approach as RAID 1 to RAID 0 migration.
Migrate from 4-disk RAID 10 to 2-disk RAID 0
The following command shows an example of migrating from a 4-disk RAID 10 volume (/dev/md/volume) to a 2-disk RAID 0:
# mdadm --grow /dev/md/volume --level=0
The command returns immediately with the following output:
mdadm: level of /dev/md/volume changed to raid0
The migration result can be verified by checking the /proc/mdstat file or by reviewing the RAID volume details.
Use OCE to migrate 2-disk RAID 0 to 4-disk RAID 0
The following command shows an example of growing a 2-disk RAID 0 to a 4-disk RAID 0 in the /dev/md/imsm container:
# mdadm --grow /dev/md/imsm --raid-devices=4
The RAID reshaping process will be performed in the background.
6.6.3 RAID 0 to RAID 10 Migration
Intel® VROC RAID 10 supports configurations with four drives only. Therefore, migration is limited to transitioning from a 2-disk RAID 0 to a 4-disk RAID 10 configuration, as RAID 10 is essentially a striped (RAID 0) array over mirrored (RAID 1) pairs.
This process requires two main steps.
Add Two Spare Drives to the Container
The first step is to add two spare drives to the container.
The following commands show an example of adding two spare drives (nvme0n1 and nvme1n1) to the container device /dev/md/imsm0 for a 2-disk RAID 0 volume:
# mdadm --add /dev/md/imsm0 /dev/nvme0n1
# mdadm --add /dev/md/imsm0 /dev/nvme1n1
This ensures that sufficient drives are available for the RAID 10 migration.
Migrate 2-Disk RAID 0 to 4-Disk RAID 10
Next, perform the migration from the existing 2-disk RAID 0 volume (/dev/md/volume) to a 4-disk RAID 10 configuration using the following command:
# mdadm --grow /dev/md/volume --level=10
Expected output:
mdadm: level of /dev/md/volume changed to raid10
After executing the command, the RAID recovery (resyncing) process will automatically start in the background.
You can monitor the migration progress by checking the /proc/mdstat file.
6.6.4 RAID 0 to RAID 5 Migration
The migration from RAID 0 to RAID 5 requires adding a hot spare drive to the container before performing the operation. Intel® VROC RAID 5 requires a minimum of three drives. Therefore, the smallest supported configuration for migration is from a 2-disk RAID 0 to a 3-disk RAID 5. You can also expand a RAID 5 array to include more drives later using Online Capacity Expansion (OCE).
The following example illustrates how to migrate from a 2-disk RAID 0 to a 3-disk RAID 5.
Add a Spare Drive to the Container
Add one spare drive to the container before starting the migration.
The following command adds a spare drive (nvme0n1) to the container device /dev/md/imsm0 for a 2-disk RAID 0 volume:
# mdadm --add /dev/md/imsm0 /dev/nvme0n1
Migrate 2-Disk RAID 0 to 3-Disk RAID 5
Next, migrate the existing 2-disk RAID 0 volume to a 3-disk RAID 5. The RAID 5 layout must be set to left-asymmetric, which is the only layout supported by Intel® VROC. Use the following command to perform the migration:
# mdadm --grow /dev/md/volume --level=5 --layout=left-asymmetric
Expected output:
mdadm: level of /dev/md/volume changed to raid5
After executing the command, the RAID reshaping process will automatically start in the background.
You can monitor the migration progress or confirm completion by checking the /proc/mdstat file or by reading the RAID volume details.
6.6.5 RAID 1 to RAID 5 Migration
Migrating from RAID 1 to RAID 5 allows you to add redundancy with improved storage efficiency.
Migrate from 2-Disk RAID 1 to 2-Disk RAID 0
Convert the mirrored array (RAID 1) to a striped array (RAID 0) to prepare for expansion.
Migrate from 2-disk RAID 0 to 3-disk RAID 5
After converting to RAID 0, add one additional disk and migrate to RAID 5.
Users can further grow the 3-disk RAID 5 to a larger RAID 5 array with more drives through Online Capacity Expansion (OCE).
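A sketch of the complete sequence, reusing the commands from Sections 6.6.1 and 6.6.4 with illustrative device names; wait for each background reshape to finish (check /proc/mdstat) before starting the next step:
# mdadm --grow /dev/md/volume --level=0
# mdadm --grow /dev/md/imsm --raid-devices=2
# mdadm --add /dev/md/imsm /dev/nvme2n1
# mdadm --grow /dev/md/volume --level=5 --layout=left-asymmetric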
6.6.6 RAID 10 to RAID 5 Migration
Migration from a 4-disk RAID 10 to a 3-disk RAID 5 can be completed in two steps.
Migrate from 4-disk RAID 10 to 2-disk RAID 0.
Convert the mirrored-striped array (RAID 10) into a striped array (RAID 0) as the first stage of migration.
Migrate from 2-disk RAID 0 to 3-disk RAID 5.
Add one additional disk to the array and migrate from RAID 0 to RAID 5.
Users can further expand the 3-disk RAID 5 to a larger RAID 5 array with additional drives using Online Capacity Expansion (OCE).