I’ve tested two different ways to restore a failed node in a 2-node S2D Cluster with one Storage Pool. Here are the necessary steps to restore a failed node using the existing physical server (OS re-install) and using a new physical server (hardware failure).
Existing Physical Server (OS re-install)
Let’s assume that one of the nodes failed and you need to re-install your Operating System.
Remove the failed Node from the S2D Cluster.
Remove-ClusterNode <Name>
Re-install the failed node and update it (you can use the same computer name and IP address).
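If you want to script the basic name and network configuration after the re-install, a minimal sketch could look like this. The node name, interface alias, and addresses below are placeholders for illustration, so adjust them to match the failed node’s original settings.
# Placeholder name and addresses - adjust to your environment
Rename-Computer -NewName "S2D-Node2" -Restart
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 10.0.0.12 -PrefixLength 24 -DefaultGateway 10.0.0.1
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 10.0.0.10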
Don’t modify or remove any disks from your Storage Pool.
Install roles and features needed for the Cluster.
Install-WindowsFeature -Name "Hyper-V", "Failover-Clustering", "Data-Center-Bridging", "RSAT-Clustering-PowerShell", "Hyper-V-PowerShell", "FS-FileServer"
Add the node back to the cluster (it’s recommended to run cluster validation before adding the node back).
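For example, assuming placeholder names S2D-Node1 and S2D-Node2 for the nodes and S2D-Cluster for the cluster, the validation and join could look like this:
Test-Cluster -Node "S2D-Node1", "S2D-Node2" -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
Add-ClusterNode -Cluster "S2D-Cluster" -Name "S2D-Node2"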
Once done, monitor the status of the disks in your Storage Pool. The missing disks should become healthy in a short time.
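One way to watch them, assuming your Storage Pool’s friendly name contains S2D (adjust the wildcard to your pool name):
Get-StoragePool *S2D* | Get-PhysicalDisk | Select-Object FriendlyName, SerialNumber, HealthStatus, OperationalStatus, Usage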

When the disks are healthy, check the status of your Virtual Disks with the Get-VirtualDisk cmdlet and wait until the status changes to Healthy.
Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus
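While the repair is running, Get-StorageJob can show the progress of the rebuild jobs; a quick check could look like this:
Get-StorageJob | Select-Object Name, JobState, PercentComplete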

You should now have a healthy S2D Cluster.
New Physical Server (hardware failure)
In this example, let’s assume that you need to replace your physical server (with new disks for the Storage Pool) due to a hardware failure.
Change the Usage of the failed disks to Retired.
$disk = Get-StoragePool *S2D* | Get-PhysicalDisk | ? OperationalStatus -NE "OK"
Set-PhysicalDisk -InputObject $disk -Usage Retired

Remove the failed Node from the Cluster.
Remove-ClusterNode <Name>
Install roles and features on your new node.
Install-WindowsFeature -Name "Hyper-V", "Failover-Clustering", "Data-Center-Bridging", "RSAT-Clustering-PowerShell", "Hyper-V-PowerShell", "FS-FileServer"
Add the new node to the Cluster (it’s recommended to run cluster validation before adding the new node to the cluster).
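The same validation and join approach as in the first scenario applies here; S2D should automatically claim the new server’s eligible disks and add them to the Storage Pool once the node has joined. The node and cluster names below are placeholders:
Test-Cluster -Node "S2D-Node1", "S2D-Node3" -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
Add-ClusterNode -Cluster "S2D-Cluster" -Name "S2D-Node3"
# Check that the new server's disks now show up in the Storage Pool
Get-StoragePool *S2D* | Get-PhysicalDisk | Select-Object FriendlyName, HealthStatus, Usage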
Monitor the health of the Virtual Disks and wait until the status is Healthy.
Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus
Remove the failed disks (with Usage set to Retired) from the Storage Pool.
$Pool = Get-StoragePool *S2D*
$RetiredDisk = Get-StoragePool *S2D* | Get-PhysicalDisk | ? Usage -EQ "Retired"
Remove-PhysicalDisk -StoragePool $Pool -PhysicalDisks $RetiredDisk
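To confirm the cleanup, you can verify that no Retired disks remain and that the pool itself reports as healthy:
Get-StoragePool *S2D* | Get-PhysicalDisk | ? Usage -EQ "Retired"
Get-StoragePool *S2D* | Select-Object FriendlyName, HealthStatus, OperationalStatus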

Conclusion
Keep in mind that this was tested on a 2-node S2D Cluster with only one Storage Pool. If you are using multiple Storage Pools, you will need to adjust the cmdlets to remove or add the disks to the correct Storage Pool.
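For example, instead of the *S2D* wildcard you could target a specific pool by its friendly name; the pool name below is just a placeholder:
$Pool = Get-StoragePool -FriendlyName "Pool01"
$RetiredDisk = $Pool | Get-PhysicalDisk | ? Usage -EQ "Retired"
Remove-PhysicalDisk -StoragePool $Pool -PhysicalDisks $RetiredDisk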