We’ve been building standalone virtualization solutions since Virtual Server 2005. We’ve been building Hyper-V virtualization solutions since Longhorn. We built out our first cluster not long after Windows Server 2008 RTMed, though it took about six months to figure the whole setup out!
Here are some points to consider when looking to build a virtualization solution whether standalone or clustered on Hyper-V.
- CPU: GHz over Cores
- Memory: Largest, fastest for CPU, prefer one stick per channel, and same size/speed on all channels
- BIOS: Disable C3/C6 States
- BIOS: Enable High Performance Profile
- Disk subsystem: Hardware RAID, 1GB Cache, Non-Volatile or Battery backed
- Disk subsystem: SAS only, 10K spindles to start, and 8 or more preferable
- RAID 6 with a 90GB logical disk for the OS and the balance for VHDX and ISO files
- Networking: Intel only, 2x Dual-Port NICs or more ports.
- Networking: Team Port 0 on both NICs for management, Port 1+ for the vSwitch
- OPTION: Team Port 0 for Management, bind one vSwitch per remaining port, and team _within_ the VM OS
- Networking: Broadcom NICs: Disable VMQ for ALL physical ports
- Hyper-V: Server Core has a reduced attack surface plus a lower update count, thus requiring fewer reboots
- Hyper-V: Fixed VHDX files preferred unless a dedicated LUN/Partition is in place (see the sketch after this list)
- We set a cut-off of about 12 VMs before we look to deploy one or two LUNs/Partitions for VHDX files
- Hyper-V: Max vCPUs to Assign = # Physical Cores on ONE CPU – 1
- Hyper-V: Leave ~1.5GB of physical RAM for the host OS
- Hyper-V: Set a 4192MB static swap file for the host OS on C:
- Hyper-V: For standalone hosts, prefer keeping the host in a Workgroup and managing it with HVRemote
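A minimal PowerShell sketch of the last few Hyper-V items, run elevated on the host. The VM name, VHDX path, and disk size are placeholders, and the vCPU count assumes a single 6-core CPU.

```powershell
# Fixed-size VHDX for a new VM (path and size are examples).
New-VHD -Path "D:\VHDX\SRV01.vhdx" -SizeBytes 75GB -Fixed

# Cap vCPUs at (# physical cores on ONE CPU - 1); 5 assumes a 6-core socket.
Set-VMProcessor -VMName "SRV01" -Count 5

# Static 4192MB swap (page) file on C: for the host OS.
$cs = Get-WmiObject -Class Win32_ComputerSystem -EnableAllPrivileges
$cs.AutomaticManagedPagefile = $false
$cs.Put() | Out-Null

$pf = Get-WmiObject -Class Win32_PageFileSetting | Where-Object { $_.Name -like "C:*" }
if (-not $pf) {
    # Create an entry if none exists once automatic management is turned off.
    $pf = Set-WmiInstance -Class Win32_PageFileSetting -Arguments @{ Name = "C:\pagefile.sys" }
}
$pf.InitialSize = 4192
$pf.MaximumSize = 4192
$pf.Put() | Out-Null
```

A reboot is needed before the swap file change takes effect.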
BIOS Settings
The C3/C6 sleep states can impact Live Migration performance, storage performance, and more, so it is best to disable them from the get-go.
It is a good idea to enable the High Performance profile for the server. Doing so enables a number of settings that improve data flow throughout the system as well as cooling profiles that help keep system temperatures down.
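The BIOS profile itself is set in the vendor's setup utility, but it is worth confirming the operating system side matches. A quick check and set using the built-in High performance power plan:

```powershell
# Show the currently active power plan on the host.
powercfg /getactivescheme

# Switch to the built-in High performance plan (SCHEME_MIN is its alias).
powercfg /setactive SCHEME_MIN
```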
Network Adapters
Our preference is for Intel NICs since they tend to run far more stably than Broadcom NICs do. Witness the ongoing issues with Broadcom firmware, drivers, and VMQ. If Broadcom NICs are in place, make sure to disable VMQ to improve network access performance to the VMs.
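A short sketch of the VMQ change, assuming the in-box NetAdapter cmdlets on Server 2012 or later and an elevated prompt on the host:

```powershell
# Disable VMQ on every physical adapter, per the note above.
Get-NetAdapter -Physical | ForEach-Object {
    Disable-NetAdapterVmq -Name $_.Name
}

# Confirm the setting took.
Get-NetAdapterVmq | Format-Table Name, Enabled
```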
A minimum of 2 NICs should be in place. A pair of teams, one for management and one for the vSwitch, utilizing one port from each NIC in a dual-port setup, is best to protect against NIC failure. If using quad-port NICs, team port 0 on both NICs for management and team ports 1-3 on both NICs for the vSwitch. It is preferable to _never_ dedicate a single NIC port to a VM. This defeats the redundancy virtualization brings to the table.
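Here is one way the dual-port layout might look in PowerShell on Server 2012 or later. The adapter names ("NIC1-Port0" and so on) are examples and need to match what Get-NetAdapter reports on the host:

```powershell
# Management team: port 0 from each physical NIC.
New-NetLbfoTeam -Name "Team-MGMT" -TeamMembers "NIC1-Port0","NIC2-Port0" -TeamingMode SwitchIndependent

# vSwitch team: port 1 from each physical NIC.
New-NetLbfoTeam -Name "Team-vSwitch" -TeamMembers "NIC1-Port1","NIC2-Port1" -TeamingMode SwitchIndependent

# Bind the external vSwitch to the vSwitch team and keep the host off of it.
New-VMSwitch -Name "vSwitch" -NetAdapterName "Team-vSwitch" -AllowManagementOS $false
```

The team interface name defaults to the team name, which is why "Team-vSwitch" can be passed straight to New-VMSwitch.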