Windows Server Failover Clustering (WSFC) is a built-in feature of Windows Server that groups multiple servers into a cluster to provide high availability for workloads like SQL Server, Hyper-V, and file services. When one node fails, another node automatically takes over the workload with minimal downtime. This guide walks through setting up Windows Server 2022/2025 failover clustering from scratch – covering prerequisites, installation, cluster creation, quorum configuration, shared storage, highly available roles, networking, Cluster-Aware Updating, and PowerShell management commands.

Prerequisites for Windows Server Failover Clustering

Before creating a failover cluster on Windows Server 2022 or 2025, make sure these requirements are met:

  • Minimum 2 servers running Windows Server 2022 or 2025 (Datacenter or Standard edition). All nodes must run the same edition and version
  • Active Directory domain membership – all cluster nodes must be joined to the same AD domain. The cluster computer object (CNO) is created in AD during cluster creation. If you need to set up AD first, follow the guide on installing Active Directory Domain Services on Windows Server
  • Shared storage – iSCSI SAN, Fibre Channel, or Storage Spaces Direct (S2D) for clustered disks. At minimum, one shared disk for quorum witness
  • Two network adapters per node – one for client/management traffic and one dedicated for cluster heartbeat (private network)
  • Same subnet or routed connectivity between all nodes. Multi-subnet clusters are supported but require additional DNS and IP configuration
  • DNS resolution – all nodes must resolve each other by hostname. A properly configured DNS server on Windows Server is essential
  • Domain admin or delegated permissions to create computer objects in the target OU
  • Identical hardware configuration recommended across all nodes (same NIC drivers, firmware, storage HBA)
  • Windows Firewall – allow cluster traffic: TCP and UDP 3343 (cluster service), TCP 135 plus the dynamic RPC range (49152-65535), TCP 445 (SMB), UDP 137 (NetBIOS name service), and TCP 5985 (WinRM). Refer to our guide on opening ports in Windows Server Firewall for detailed steps
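Rather than opening ports individually, the predefined firewall rule groups can be enabled on each node. A minimal sketch, run in an elevated PowerShell session (the DisplayGroup names assume an English-language install, and the "Failover Clusters" group appears once the feature is installed):

```powershell
# Enable the built-in Failover Clustering firewall rules on each node
Enable-NetFirewallRule -DisplayGroup "Failover Clusters"

# WinRM (TCP 5985) is needed for the remote installs used later in this guide
Enable-NetFirewallRule -DisplayGroup "Windows Remote Management"
```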

Step 1: Install the Failover Clustering Feature

The Failover Clustering feature must be installed on every node that will join the cluster. You can install it using PowerShell or Server Manager.

Option A: Install Using PowerShell

Open an elevated PowerShell session on each node and run:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

Expected output shows True under the Success column and Success as the Exit Code; the Restart Needed column reads Yes when a reboot is required:

Success Restart Needed Exit Code      Feature Result
------- -------------- ---------      --------------
True    No             Success        {Failover Clustering}

To install the feature on all nodes at once from a single management machine, use PowerShell remoting:

$nodes = "NODE1", "NODE2", "NODE3"
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
}

Verify the feature is installed on all nodes:

Invoke-Command -ComputerName $nodes -ScriptBlock {
    Get-WindowsFeature Failover-Clustering | Select-Object Name, InstallState
}

Option B: Install Using Server Manager

If you prefer the GUI approach:

  • Open Server Manager and click Manage > Add Roles and Features
  • Click Next through the wizard until you reach the Features page
  • Check Failover Clustering. When prompted, also add the management tools
  • Click Next and then Install
  • Repeat on every node

Reboot the servers if prompted.

Step 2: Run the Cluster Validation Wizard

Before creating the cluster, run the validation wizard. This is a mandatory step that checks hardware compatibility, network configuration, storage access, and system configuration across all intended nodes. Microsoft support requires a validated configuration.

Validate Using PowerShell

To run selected validation test categories from any node:

Test-Cluster -Node NODE1, NODE2 -Include "Storage", "Inventory", "Network", "System Configuration"

For a full validation with all test categories:

Test-Cluster -Node NODE1, NODE2

The validation report is saved as an HTML file. Review it for any errors or warnings:

$report = Test-Cluster -Node NODE1, NODE2
Start-Process $report.FullName

Validate Using Failover Cluster Manager

Open Failover Cluster Manager from Server Manager or by running cluadmin.msc. In the center pane under Management, click Validate Configuration. Add the server names and run all tests. Review the report for any failures before proceeding.

Common validation failures and fixes:

  • Network warning about single network – add a dedicated heartbeat NIC on a separate subnet
  • Storage test failures – verify all nodes can see the shared LUNs through iSCSI Initiator or disk management
  • Software update differences – install the same Windows patches on all nodes
  • Domain membership issues – confirm all nodes are in the same AD domain and can resolve each other
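To track down the patch-level differences the validation report complains about, one approach is to compare installed updates across nodes. A sketch (NODE1 and NODE2 are placeholders):

```powershell
# Compare installed updates between two nodes
$a = Get-HotFix -ComputerName NODE1 | Select-Object -ExpandProperty HotFixID
$b = Get-HotFix -ComputerName NODE2 | Select-Object -ExpandProperty HotFixID

# "<=" means the update exists only on NODE1, "=>" only on NODE2
Compare-Object $a $b
```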

Step 3: Create the Failover Cluster

Once validation passes, create the cluster. You need a cluster name (the CNO – Cluster Name Object) and a static IP address for the cluster.

Create Cluster Using PowerShell

Run this command from any one of the validated nodes:

New-Cluster -Name "YOURCLUSTER" -Node NODE1, NODE2 -StaticAddress 192.168.1.100 -NoStorage

The -NoStorage flag skips automatic addition of shared disks. This is recommended so you can add and configure storage manually afterward. Replace YOURCLUSTER with your desired cluster name and 192.168.1.100 with the IP you have reserved for the cluster.

Verify the cluster was created successfully:

Get-Cluster | Format-List Name, Domain, SharedVolumesRoot

Sample output:

Name             : YOURCLUSTER
Domain           : yourdomain.local
SharedVolumesRoot : C:\ClusterStorage

Check the nodes are online:

Get-ClusterNode | Format-Table Name, State, NodeWeight

Expected output:

Name   State NodeWeight
----   ----- ----------
NODE1  Up             1
NODE2  Up             1

Create Cluster Using Failover Cluster Manager

In Failover Cluster Manager, click Create Cluster in the Actions pane. The wizard walks through selecting servers, running validation (if not already done), specifying the cluster name and IP address, and confirming the configuration. The cluster is created after you click Finish.

Step 4: Configure Cluster Quorum

Quorum determines how many node failures the cluster can survive while remaining online. A proper quorum configuration is critical for preventing split-brain scenarios where both halves of a partitioned cluster try to own the same resources.
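Before changing anything, it helps to see how votes are currently assigned. Windows Server 2022/2025 enables dynamic quorum by default, which adjusts vote counts as nodes go offline; a quick check:

```powershell
# Current witness and quorum resource
Get-ClusterQuorum | Format-List Cluster, QuorumResource

# NodeWeight is the configured vote, DynamicWeight the vote currently in effect
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight

# Dynamic quorum is enabled when this returns 1
(Get-Cluster).DynamicQuorum
```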

Quorum Models Explained

Windows Server 2022/2025 supports these quorum witness types:

  • Node Majority – each node gets one vote; the cluster stays online as long as a majority of votes remain. Best for an odd number of nodes (3, 5, 7)
  • Disk Witness – a small shared disk (512 MB minimum) gets one vote. Best for an even number of nodes with shared storage
  • Cloud Witness – an Azure blob storage account acts as the witness. Best for multi-site clusters and clusters without shared storage
  • File Share Witness – an SMB file share on a separate server gets one vote. Best for legacy setups and cross-site clusters

Configure Disk Witness

For a two-node cluster with shared storage, a disk witness is the most common choice. First add the small shared disk to the cluster, then set it as the quorum witness:

Set-ClusterQuorum -DiskWitness "Cluster Disk 1"

Verify the quorum configuration:

Get-ClusterQuorum | Format-List Cluster, QuorumResource, QuorumType

Configure Cloud Witness (Azure)

Cloud witness is the recommended approach for clusters without shared storage or for stretched (multi-site) clusters. You need an Azure Storage Account with a general-purpose account (not Blob-only).

Create the Azure storage account first (through Azure Portal or Azure CLI), then configure the cluster:

Set-ClusterQuorum -CloudWitness `
    -AccountName "mystorageaccount" `
    -AccessKey "YourStorageAccountAccessKey"

For Azure Government or other sovereign clouds, add the -Endpoint parameter with the appropriate blob endpoint URL.
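If you prefer to script the storage account creation as well, a sketch using the Az PowerShell module (the resource group, account name, and region are placeholders; the account name must be globally unique):

```powershell
# Requires the Az.Storage module and an authenticated session (Connect-AzAccount)
New-AzStorageAccount -ResourceGroupName "rg-cluster" `
    -Name "mystorageaccount" `
    -Location "westeurope" `
    -SkuName Standard_LRS `
    -Kind StorageV2    # general-purpose v2, as required for a cloud witness

# Retrieve the access key to pass to Set-ClusterQuorum -AccessKey
(Get-AzStorageAccountKey -ResourceGroupName "rg-cluster" -Name "mystorageaccount")[0].Value
```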

Configure File Share Witness

Create an SMB share on a server that is not part of the cluster. Grant the cluster computer object (CNO) Full Control on the share and NTFS permissions. Then configure:

Set-ClusterQuorum -FileShareWitness "\\FILESERVER\ClusterWitness"
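The share and permissions described above can be created on the file server with a sketch like the following (the path, share name, and the CNO account YOURDOMAIN\YOURCLUSTER$ are placeholders):

```powershell
# Run on the witness file server, not on a cluster node
New-Item -Path "C:\Shares\ClusterWitness" -ItemType Directory -Force

# Grant the cluster computer object (CNO) Full Control on the share
New-SmbShare -Name "ClusterWitness" `
    -Path "C:\Shares\ClusterWitness" `
    -FullAccess "YOURDOMAIN\YOURCLUSTER$"

# Mirror the permission at the NTFS level
$acl  = Get-Acl "C:\Shares\ClusterWitness"
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule(
    "YOURDOMAIN\YOURCLUSTER$", "FullControl",
    "ContainerInherit,ObjectInherit", "None", "Allow")
$acl.AddAccessRule($rule)
Set-Acl "C:\Shares\ClusterWitness" $acl
```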

Step 5: Add Shared Storage to the Cluster

Clustered workloads need shared storage accessible from all nodes. The two primary approaches are iSCSI-based shared disks and Storage Spaces Direct (S2D).

Option A: Connect iSCSI Shared Storage

On each cluster node, open the iSCSI Initiator and connect to your iSCSI target. Start by enabling the iSCSI service:

Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic

Connect to the iSCSI target:

New-IscsiTargetPortal -TargetPortalAddress 192.168.1.50
Connect-IscsiTarget -NodeAddress "iqn.2024-01.com.storage:target01" -IsPersistent $true

Repeat on all cluster nodes. Initialize and format the disk on only one node:

Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "ClusterData"

Now add the disk to the cluster:

Get-ClusterAvailableDisk | Add-ClusterDisk

Verify the disks are visible in the cluster:

Get-ClusterResource | Where-Object ResourceType -eq "Physical Disk"

Option B: Enable Storage Spaces Direct (S2D)

Storage Spaces Direct uses local disks on each node to create a software-defined shared storage pool. It requires Windows Server Datacenter edition and a minimum of 2 nodes, each with at least 4 capacity drives (per Microsoft's S2D hardware requirements).

After creating the cluster (with -NoStorage), enable S2D:

Enable-ClusterStorageSpacesDirect -Confirm:$false

This pools all eligible local disks across nodes into a single storage pool. Create a virtual disk and volume:

New-Volume -StoragePoolFriendlyName "S2D on YOURCLUSTER" `
    -FriendlyName "ClusterVol01" `
    -FileSystem CSVFS_ReFS `
    -Size 500GB `
    -ResiliencySettingName Mirror

The volume is automatically added as a Cluster Shared Volume (CSV) and mounted at C:\ClusterStorage\ClusterVol01 on all nodes.

Verify S2D health:

Get-StorageSubSystem *Cluster* | Get-StorageHealthReport

Step 6: Configure Cluster Networking

A properly designed cluster network uses separate networks for different traffic types. At minimum, configure two networks: one for client access and one for internal cluster communication (heartbeat).

View Cluster Networks

List the networks the cluster has detected:

Get-ClusterNetwork | Format-Table Name, State, Role, Address

The Role property controls what each network is used for:

  • 0 – Network is not used by the cluster
  • 1 – Cluster communication only (heartbeat)
  • 3 – Both cluster communication and client access (default)

Configure Heartbeat Network

Rename the networks for clarity and assign the correct roles. The heartbeat network should be on a dedicated private subnet (for example, 10.10.10.0/24):

# Rename networks for clarity
(Get-ClusterNetwork -Name "Cluster Network 1").Name = "Client-Network"
(Get-ClusterNetwork -Name "Cluster Network 2").Name = "Heartbeat-Network"

# Set heartbeat network to cluster-only communication
(Get-ClusterNetwork -Name "Heartbeat-Network").Role = 1

# Set client network for both client and cluster traffic
(Get-ClusterNetwork -Name "Client-Network").Role = 3

Verify the configuration:

Get-ClusterNetwork | Format-Table Name, Role, State, Address -AutoSize

Configure Live Migration Network

For Hyper-V clusters, dedicate a network for live migration traffic to prevent VM migrations from saturating your client or heartbeat networks:

Get-ClusterResourceType -Name "Virtual Machine" |
    Set-ClusterParameter -Name MigrationExcludeNetworks -Value (
        (Get-ClusterNetwork -Name "Client-Network").Id
    )

Step 7: Create Highly Available Roles

With the cluster running and storage configured, deploy highly available roles. These are the workloads that failover between nodes automatically.

Highly Available File Server

Create a clustered file server role with a virtual name and IP that clients connect to:

Add-ClusterFileServerRole -Name "FS01" `
    -Storage "Cluster Disk 2" `
    -StaticAddress 192.168.1.110

Then create a shared folder on the clustered file server:

New-SmbShare -Name "SharedDocs" `
    -Path "D:\SharedDocs" `
    -ScopeName "FS01" `
    -FullAccess "YOURDOMAIN\Domain Users"

For Scale-Out File Server (SOFS) used with Hyper-V or SQL Server workloads, create it with the SOFS flag:

Add-ClusterScaleOutFileServerRole -Name "SOFS01"

SQL Server Always On Availability Group

SQL Server Always On Availability Groups (AG) require an underlying WSFC cluster. After installing SQL Server on each node, enable Always On and create the AG:

Enable Always On in SQL Server Configuration Manager on each node, or with the SqlServer PowerShell module:

Enable-SqlAlwaysOn -ServerInstance "NODE1" -Force
Enable-SqlAlwaysOn -ServerInstance "NODE2" -Force

Create the Availability Group using T-SQL in SQL Server Management Studio:

CREATE AVAILABILITY GROUP [AG01]
WITH (AUTOMATED_BACKUP_PREFERENCE = SECONDARY)
FOR DATABASE [YourDatabase]
REPLICA ON
    N'NODE1' WITH (
        ENDPOINT_URL = N'TCP://NODE1.yourdomain.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'NODE2' WITH (
        ENDPOINT_URL = N'TCP://NODE2.yourdomain.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC);

ALTER AVAILABILITY GROUP [AG01]
ADD LISTENER N'AG01-Listener' (
    WITH IP ((N'192.168.1.120', N'255.255.255.0')),
    PORT = 1433);

Hyper-V Virtual Machine High Availability

Install the Hyper-V role on all cluster nodes first:

Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

After reboot, configure the cluster to host highly available VMs. Store VM files on Cluster Shared Volumes (CSV):

# Add a cluster disk as CSV
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# Create a new highly available VM
New-VM -Name "WebServer01" `
    -MemoryStartupBytes 4GB `
    -Path "C:\ClusterStorage\Volume1" `
    -NewVHDPath "C:\ClusterStorage\Volume1\WebServer01\disk0.vhdx" `
    -NewVHDSizeBytes 100GB `
    -Generation 2

# Make the VM highly available
Add-ClusterVirtualMachineRole -VMName "WebServer01"

Verify the VM is clustered:

Get-ClusterGroup | Where-Object GroupType -eq "VirtualMachine" | Format-Table Name, State, OwnerNode

Step 8: Set Up Cluster-Aware Updating (CAU)

Cluster-Aware Updating automates Windows Update across cluster nodes, draining and pausing one node at a time to maintain availability throughout the patching process.

Enable CAU Self-Updating Mode

In self-updating mode, the cluster patches itself on a schedule without external intervention. Install the CAU role on the cluster:

Add-CauClusterRole -ClusterName "YOURCLUSTER" `
    -DaysOfWeek Tuesday `
    -WeeksOfMonth 2 `
    -MaxRetriesPerNode 3 `
    -RequireAllNodesOnline `
    -Force

This schedules updates for the second Tuesday of every month (aligning with Microsoft Patch Tuesday). The cluster will drain workloads from one node, apply updates, reboot if needed, then move to the next node.
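To confirm the schedule took effect, the CAU role configuration and the history of previous runs can be read back:

```powershell
# Show the self-updating schedule and run options configured above
Get-CauClusterRole -ClusterName "YOURCLUSTER"

# Show the report from the most recent updating run
Get-CauReport -ClusterName "YOURCLUSTER" -Last
```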

Run CAU Manually

To trigger an immediate update run:

Invoke-CauRun -ClusterName "YOURCLUSTER" -MaxRetriesPerNode 3 -Force

Check CAU status and results:

Get-CauRun -ClusterName "YOURCLUSTER" -Detailed

CAU Using Failover Cluster Manager

Open Failover Cluster Manager, connect to the cluster, and click Cluster-Aware Updating in the left pane. From there you can configure self-updating options, preview applicable updates, or manually initiate an update run with point-and-click controls.

Step 9: Monitoring and Troubleshooting the Cluster

Regular monitoring helps catch issues before they cause downtime. Windows Server 2022/2025 provides multiple tools for cluster health monitoring.

Check Cluster Health

Get an overview of all cluster resources and their states:

Get-ClusterGroup | Format-Table Name, State, OwnerNode -AutoSize
Get-ClusterResource | Format-Table Name, State, ResourceType, OwnerGroup -AutoSize

Check for any failed resources:

Get-ClusterResource | Where-Object State -eq "Failed" | Format-Table Name, ResourceType, OwnerGroup

Review Cluster Events

Cluster events are logged in the system and dedicated cluster event channels. Query recent failover events:

Get-WinEvent -LogName "Microsoft-Windows-FailoverClustering/Operational" -MaxEvents 50 |
    Where-Object LevelDisplayName -eq "Error" |
    Format-Table TimeCreated, Id, Message -Wrap

Generate a cluster diagnostic log for deeper troubleshooting:

Get-ClusterLog -Destination C:\Temp -TimeSpan 60

This collects the last 60 minutes of cluster logs from all nodes and saves them to C:\Temp.

Common Troubleshooting Commands

Test cluster network connectivity between nodes:

Get-ClusterNetwork | Get-ClusterNetworkInterface | Format-Table Name, Node, Network, State

Check cluster validation to identify emerging issues:

Test-Cluster -Node (Get-ClusterNode).Name -Include "Network", "System Configuration"

Repair a cluster node that is in a quarantined state:

Start-ClusterNode -Name "NODE2" -ClearQuarantine

PowerShell Commands for Failover Cluster Management

Here is a reference of the most useful PowerShell cmdlets for day-to-day cluster management on Windows Server 2022/2025.

Cluster Information

# View cluster details
Get-Cluster | Format-List *

# List all cluster nodes
Get-ClusterNode | Format-Table Name, State, NodeWeight

# List all cluster groups (roles)
Get-ClusterGroup | Format-Table Name, State, OwnerNode, GroupType

# List all cluster resources
Get-ClusterResource | Format-Table Name, State, ResourceType, OwnerGroup

# List cluster shared volumes
Get-ClusterSharedVolume | Format-Table Name, State, OwnerNode

Node Management

# Pause a node (drain roles before maintenance)
Suspend-ClusterNode -Name "NODE1" -Drain

# Resume a paused node
Resume-ClusterNode -Name "NODE1"

# Evict a node from the cluster
Remove-ClusterNode -Name "NODE3" -Force

# Add a new node to existing cluster
Add-ClusterNode -Name "NODE3" -Cluster "YOURCLUSTER"

Resource and Group Management

# Move a cluster group to another node
Move-ClusterGroup -Name "SQL Server (MSSQLSERVER)" -Node "NODE2"

# Start a stopped cluster resource
Start-ClusterResource -Name "SQL Server"

# Stop a cluster resource
Stop-ClusterResource -Name "SQL Server"

# Manually fail over all groups to another node
Get-ClusterGroup | Move-ClusterGroup -Node "NODE1"

# Test failover of a specific group
Move-ClusterGroup -Name "FS01" -Node "NODE2"

Storage Management

# List available disks to add
Get-ClusterAvailableDisk

# Add available disks to the cluster
Get-ClusterAvailableDisk | Add-ClusterDisk

# Convert a cluster disk to CSV
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# View a disk resource's private parameters (disk ID, path)
Get-ClusterResource -Name "Cluster Disk 1" | Get-ClusterParameter

Conclusion

You now have a fully functional Windows Server 2022/2025 failover cluster with properly configured quorum, shared storage, cluster networking, and highly available roles. The cluster will automatically fail over workloads between nodes during planned maintenance or unexpected failures. For production environments, back up the cluster configuration regularly (a system state backup on each node captures the cluster database), monitor cluster health with System Center or Windows Admin Center, and keep all nodes patched using Cluster-Aware Updating on a regular schedule.
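As a starting point for routine monitoring, a small health-report script like the following sketch can be scheduled on one node (the event source name ClusterHealthCheck is a placeholder; adapt the alerting to your environment):

```powershell
# Minimal cluster health summary - a sketch, not a full monitoring solution
$down   = Get-ClusterNode     | Where-Object State -ne "Up"
$failed = Get-ClusterResource | Where-Object State -in "Failed", "Offline"

if ($down -or $failed) {
    $body = ($down   | Format-Table Name, State | Out-String) +
            ($failed | Format-Table Name, ResourceType, OwnerGroup, State | Out-String)

    # Replace with your own alerting (Send-MailMessage, webhook, etc.).
    # Register the source once first: New-EventLog -LogName Application -Source "ClusterHealthCheck"
    Write-EventLog -LogName Application -Source "ClusterHealthCheck" `
        -EntryType Warning -EventId 1000 -Message $body
}
```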
