Host Resource Protection on Hyper-V 2016
Host Resource Protection is a security mechanism that continuously monitors the virtual machines on a Hyper-V host, detects those that are not "playing well", and prevents excessive CPU usage. When the mechanism detects a virtual machine with excessive activity, that virtual machine is given fewer CPU resources and slows down.
This mechanism is disabled by default and can only be enabled through PowerShell, using the following cmdlet:
Set-VMProcessor -VMName "NameoftheVM" -EnableHostResourceProtection $True
To disable Host Resource Protection:
Set-VMProcessor -VMName "NameoftheVM" -EnableHostResourceProtection $False
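To confirm the current state for a VM, you can query its processor settings; a quick check, assuming "NameoftheVM" is replaced with your VM's name:

```powershell
# Check whether Host Resource Protection is enabled for a given VM
Get-VMProcessor -VMName "NameoftheVM" |
    Select-Object VMName, EnableHostResourceProtection
```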
PowerShell Direct in Windows Server 2016
PowerShell Direct is a cool new feature that came along with Windows 10 and Windows Server 2016 (available since the Technical Preview). It enables you to run arbitrary PowerShell in a Windows 10 or Windows Server VM guest directly from your Hyper-V host, without worrying about the network configuration or remote management settings.
Operating system requirements:
- Host: Windows 10, Windows Server Technical Preview 2, or later running Hyper-V.
- Guest/Virtual Machine: Windows 10, Windows Server Technical Preview 2, or later.
To connect to and manage a VM guest using PowerShell Direct, just do the following:
1. Open an elevated PowerShell prompt.
2. Find the name of the VM guest you wish to connect to by executing "Get-VM | Select-Object Name". Note down the name of the VM.
3. Execute Enter-PSSession -VMName "nameofthevm" -Credential "username" to open an interactive session inside the guest.
4. To exit the connected session, execute “Exit-PSSession”.
What else can you do with PowerShell Direct? You can invoke commands and scripts (Invoke-Command) or even copy files (Copy-Item)!
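A sketch of those two scenarios; the VM name, service name, and file paths below are placeholder assumptions:

```powershell
# Prompt for guest credentials once and reuse them
$cred = Get-Credential

# Run a command inside the guest over PowerShell Direct
Invoke-Command -VMName "nameofthevm" -Credential $cred -ScriptBlock {
    Get-Service -Name "wuauserv"
}

# Copy a file from the host into the guest through a Direct session
$session = New-PSSession -VMName "nameofthevm" -Credential $cred
Copy-Item -Path "C:\Temp\setup.exe" -Destination "C:\Temp\" -ToSession $session
Remove-PSSession $session
```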
AVMA in Windows Server 2016
What is AVMA?
AVMA (Automatic Virtual Machine Activation) lets you activate virtual machines running on a licensed virtualization host without having to deal with each individual virtual machine. The process takes place during the startup process of the virtual machine.
What are the requirements for AVMA?
- Datacenter Edition of Windows Server 2012R2 or 2016 as the management operating system.
- Hyper-V Role Enabled.
What can you achieve with AVMA?
- Activate virtual machines in remote locations. The supported OS level of host and virtual machines is Server 2012R2 and higher.
- Activate virtual machines with or without an internet connection.
- Track virtual machine usage and licenses from the virtualization server, without requiring any access rights on the virtualized systems.
- Use it with SPLA to transparently activate tenants' virtual machines.
AVMA in Windows Server 2016
- Can activate virtual machines running Server 2012R2 and 2016.
- Supported editions of Windows 2016 are Full GUI and Server Core. Nano Server installation option is not supported yet.
- Create a virtual machine and install a supported server operating system on it (2012R2 or 2016).
- Install the AVMA key in the virtual machine using GUI or by running the following command from an elevated command prompt.
slmgr /ipk "AVMA Key"
After completing the above steps, the virtual machine will automatically activate against the virtualization host on which it resides.
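To verify that AVMA activation succeeded, you can check the license status from inside the guest; for example, from an elevated command prompt:

```shell
REM Show detailed license and activation information for the guest
slmgr /dlv
```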
AVMA keys for Windows Server 2016
AVMA keys for Windows Server 2012R2
Additional information about AVMA can be found in the link below:
Run Hyper-V in a Virtual Machine with Nested Virtualization
By definition, nested virtualization is a feature that allows you to run a hypervisor inside a guest virtual machine. Along with the release of Windows Server 2016, lots of cool new features made their appearance, and one of them is support for nested virtualization. The requirements are:
- A Hyper-V host running Windows Server 2016 or Windows 10 Anniversary Update (1607).
- A Hyper-V VM running Windows Server 2016 or Windows 10 Anniversary Update (1607).
- A Hyper-V VM with configuration version 8.0 or greater.
- An Intel processor with VT-x and EPT technology.
Configure Nested Virtualization
- First of all, create a virtual machine!
- While the virtual machine is in the OFF state, run the following PowerShell cmdlet on the physical Hyper-V host to enable nested virtualization for the VM guest.
Set-VMProcessor -VMName "NameoftheVMGuest" -ExposeVirtualizationExtensions $true
- Start the virtual machine.
- Install the Hyper-V role within the virtual machine, just like you would on a physical server.
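For example, from an elevated PowerShell prompt inside the guest (this assumes an immediate restart is acceptable):

```powershell
# Install the Hyper-V role plus management tools inside the guest VM
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```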
Disable Nested Virtualization
You can disable nested virtualization for a stopped virtual machine using the following PowerShell cmdlet:
Set-VMProcessor -VMName "NameoftheVMGuest" -ExposeVirtualizationExtensions $false
A few important notes:
- Do not enable Dynamic Memory for the nested Hyper-V host; preallocate all the memory from the start. At least 4GB of RAM is recommended.
- Enable MAC address spoofing on the VM's network adapter so that packets can be routed properly, since you will be running a virtual switch within another virtual switch (inception).
- The VMware ESXi hypervisor is also supported as a nested option.
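Both settings can be applied from the physical host while the VM is off; a sketch, with a placeholder VM name:

```powershell
# Preallocate 4 GB of static memory (Dynamic Memory disabled)
Set-VMMemory -VMName "NameoftheVMGuest" -DynamicMemoryEnabled $false -StartupBytes 4GB

# Allow the nested virtual switch to send frames with its own VMs' MAC addresses
Get-VMNetworkAdapter -VMName "NameoftheVMGuest" | Set-VMNetworkAdapter -MacAddressSpoofing On
```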
Node fairness on Hyper-V 2016, VM Load Balancing for the SMB!
A new virtual machine load-balancing feature, Node Fairness, was introduced along with Windows Server 2016. What can it do for you? Well, it optimizes the utilization of your Hyper-V cluster nodes by load balancing the virtualized workloads within a failover cluster.
How does it work?
It works straight out of the box, actually. It is enabled by default within a Hyper-V 2016 failover cluster and is triggered based on the following heuristics:
- Based on the Host’s CPU utilization and defined threshold.
- Based on the Host’s memory pressure and defined threshold.
Evaluation occurs every 5 minutes.
|Aggressiveness value|Level|Behavior|
|---|---|---|
|1 (default)|Low|Move when host is more than 80% loaded|
|2|Medium|Move when host is more than 70% loaded|
|3|High|Move when host is more than 60% loaded|
Thresholds are node-centric.
How often is Node Fairness triggered?
- Every 30 minutes (default value).
- Every time a new cluster node joins or rejoins the cluster.
Will it cause Performance issues?
My initial response is "no". You won't experience any network, I/O, or storage load; workloads are rebalanced using Live Migration.
Should I disable it?
It depends on your actual needs! There is always the old-fashioned manual way of setting preferred owners for your workloads in order to distribute them across the cluster.
Are Node Fairness and SCVMM Dynamic Optimization the same? Can they be used in parallel or combined?
Node Fairness and SCVMM Dynamic Optimization are not the same. Dynamic Optimization can be applied manually or on a schedule. Node Fairness cannot be used within a cluster that is managed by SCVMM; Dynamic Optimization takes over automatically.
How can I set it up?
We have 2 options here:
- Using PowerShell
- Using Failover Cluster Manager Console
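With PowerShell, Node Fairness is controlled through two cluster common properties: AutoBalancerMode (0 = disabled, 1 = balance only when a node joins, 2 = balance on join and every 30 minutes, the default) and AutoBalancerLevel (1 = low, 2 = medium, 3 = high). A sketch, run from a cluster node:

```powershell
# View the current Node Fairness settings
Get-Cluster | Format-List AutoBalancerMode, AutoBalancerLevel

# Set the aggressiveness to High (move VMs when a host is over 60% loaded)
(Get-Cluster).AutoBalancerLevel = 3

# Balance only when a node joins or rejoins the cluster
(Get-Cluster).AutoBalancerMode = 1
```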
To configure Node Fairness using the Failover Cluster Manager console, follow the steps below:
1. In Failover Cluster Manager, right-click the cluster and click Properties.
2. Navigate to the "Balancer" tab, enable or disable Node Fairness, and configure the aggressiveness and the VM load balancing mode.
More information can be found in the links below: