
How to Transfer FSMO Roles in Server 2016/2019 Using PowerShell

In any Active Directory environment it's always good practice to have some form of redundancy and the resiliency to go along with it. In the case of FSMO roles (Flexible Single Master Operations roles), it's an excellent idea to have them spread across multiple Domain Controllers. The idea of "having all of your eggs in one basket" applies here, and it's something we definitely want to avoid if we can control it. So in this article I am going to show you how to transfer FSMO roles in Server 2019 using PowerShell. In case you're wondering, this also works on Server 2016, Server 2012 R2, and even 2008 R2.

What Are Active Directory FSMO Roles And What Do They Do

If you're new to the world of Active Directory administration, you might have heard the term FSMO roles (pronounced "fizzmo"). FSMO roles are the roles needed to keep an Active Directory environment healthy and running smoothly. There are 5 Flexible Single Master Operations roles in total. Here's what they are and what they do:

  • PDC Emulator Role
    • This role is the most used of all FSMO roles and has the widest range of functions
    • The PDC Emulator is the authoritative DC in the domain and the domain source for time synchronization for all other domain controllers
    • The PDC Emulator changes passwords, responds to authentication requests and manages Group Policy Objects
  • RID Master Role (Relative ID)
    • The RID Master is the single DC responsible for processing RID Pool requests from all domain controllers within a given domain
    • Responds to requests by retrieving RIDs from the domain’s unallocated RID pool and assigns them to the pool of the requesting DC
  • Infrastructure Master Role
    • The Infrastructure Master role ensures that cross-domain object references are correctly handled
  • Schema Master Role
    • The Schema Master is the only DC that can write changes to the Active Directory schema; those changes then replicate to all other domain controllers in the forest
    • Typical implementations that involve schema changes are Exchange Server, SCCM, Skype for Business etc.
  • Domain Naming Master Role
    • This role processes all changes to the forest-wide domain namespace
    • Adding a child domain is an example of the Domain Naming Master role in use

How to Query FSMO Roles

Before we decide to change any FSMO roles, we’ll want to check which roles belong to which Domain Controllers. To do this we’ll perform the following steps.

  • Open a PowerShell window
  • Type: netdom query fsmo

Netdom Query FSMO
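If you prefer to stay in PowerShell, the ActiveDirectory module can report the same information. A quick sketch, run from any machine with the module installed:

Get-ADDomain | Select-Object PDCEmulator, RIDMaster, InfrastructureMaster
Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster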

 

Why It’s Important To Move FSMO Roles Across Multiple DCs

It's important to make sure you move FSMO roles across multiple domain controllers in your environment. As I mentioned before, you don't want to keep all of your eggs in one basket in the event that a server goes down for any number of reasons. A good rule to keep in mind that I learned early on is: two is one and one is none. It means you should always strive to have some form of redundancy with everything in IT.

Transfer FSMO Roles Using Powershell

Another thing to note is that you must have the ActiveDirectory module imported into PowerShell for this to work. Domain Controllers have it by default.

In my example above we have all of our eggs in one basket, so let's use PowerShell to move the roles to a different DC. The single command to transfer FSMO roles is:

Move-ADDirectoryServerOperationMasterRole -Identity "Target_DC_Name" -OperationMasterRole 0,1,2,3,4 -Confirm:$false -Force

Move-ADDirectoryServerOperationMasterRole

 

The numbers passed to -OperationMasterRole map to the roles as follows. This is critical to know beforehand, because you don't want to inadvertently transfer the wrong FSMO role to an unwanted domain controller.

0 = PDCEmulator
1 = RIDMaster
2 = InfrastructureMaster
3 = SchemaMaster
4 = DomainNamingMaster
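You can also pass the role names instead of the numbers, which reads more safely in scripts. For example, to move only the PDC Emulator (the DC name here is a placeholder):

Move-ADDirectoryServerOperationMasterRole -Identity "DC02" -OperationMasterRole PDCEmulator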

If you've searched around on how to move FSMO roles, there is a lot of content out there showing you how to do it from the GUI, but this method is so much easier. I prefer it for the simplicity and ease of the command.

Hopefully you were able to get what you were looking for, and now you know how to use PowerShell to transfer FSMO roles should you ever need to. If you like using PowerShell or want to get more involved, check out our gallery of real-world scripts. Also make sure you head over to this YouTube channel for general sysadmin content as well.


How to Clean Up After a Failed Hyper-V Checkpoint

Hyper-V’s checkpointing system typically does a perfect job of coordinating all its moving parts. However, it sometimes fails to completely clean up afterward. That can cause parts of a checkpoint, often called “lingering checkpoints”, to remain. You can easily take care of these leftover bits, but you must proceed with caution. A misstep can cause a full failure that will require you to rebuild your virtual machine. Read on to find out how to clean up after a failed checkpoint.

Avoid Mistakes When Cleaning up a Hyper-V Checkpoint

The most common mistake is starting your repair attempt by manually merging the AVHDX file into its parent. If you do that, then you cannot use any of Hyper-V’s tools to clean up. You will have no further option except to recreate the virtual machine’s files. The “A” in “AVHDX” stands for “automatic”. An AVHDX file is only one part of a checkpoint. A manual file merge violates the overall integrity of a checkpoint and renders it unusable. A manual merge of the AVHDX files should almost be the last thing that you try.

Also, do not start off by deleting the virtual machine. That may or may not trigger a cleanup of AVHDX files. Don’t take the gamble.

Before you try anything, check your backup application. If it is in the middle of a backup or indicates that it needs attention from you, get through all of that first. Interrupting a backup can cause all sorts of problems.

How to Clean Up a Failed Hyper-V Checkpoint

We have multiple options to try, from simple and safe to difficult and dangerous. Start with the easy things first and only try something harder if that doesn’t work.

Method 1: Delete the Checkpoint

If you can, right-click the checkpoint in Hyper-V Manager and use the Delete Checkpoint or Delete Checkpoint Subtree option:

Delete a Hyper-V checkpoint in the GUI

This usually does not work on lingering checkpoints, but it never hurts to try.

Sometimes the checkpoint does not present a Delete option in Hyper-V Manager.

Missing checkpoint delete option in Hyper-V GUI

Sometimes, the checkpoint doesn’t even appear.

In any of these situations, PowerShell can usually see and manipulate the checkpoint.

The easiest way: remove all checkpoints on the host at once, as shown in the sketch below.
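A minimal PowerShell sketch of that operation, assuming the Hyper-V module is available on the host:

# Delete every checkpoint on every VM on this host
Get-VM | Remove-VMSnapshot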

If the script completes without error, you can verify in Hyper-V Manager that it successfully removed all checkpoints. You can also use PowerShell:
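For example, a sketch that lists any checkpoints still present on the host; an empty result means the cleanup worked:

Get-VMSnapshot -VMName *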

This clears up the majority of leftover checkpoints.

Method 2: Create a New Checkpoint and Delete It

Everyone has had one of those toilets that won't stop running. Sometimes you get lucky, and you just need to jiggle the handle to remind the mechanism that it needs to drop the flapper ALL the way over the hole. This method is something of a "jiggle the handle" fix. We just tap Hyper-V's checkpointing system on the shoulder and remind it what to do.

In the Hyper-V Manager interface, right-click on the virtual machine (not a checkpoint), and click Checkpoint:

Create a new Hyper-V checkpoint

Now, at the root of all of the VM’s checkpoints, right-click on the topmost and click Delete checkpoint subtree:

Delete a Hyper-V checkpoint subtree

If this option does not appear, then our “jiggle the handle” fix won’t work. Try to delete the checkpoint that you just made, if possible.

The equivalent PowerShell is Checkpoint-VM -VMName demovm followed by Remove-VMCheckpoint -VMName demovm.

Regroup Before Proceeding

I do not know how pass-through disks or vSANs affect these processes. If you have any and the above didn’t work, I recommend shutting the VM down, disconnecting those devices, and starting the preceding steps over. You can reconnect your devices afterward.

If your checkpoint persists after trying the above, then you now face some potentially difficult choices. If you can, I would first try shutting down the virtual machine, restarting the Hyper-V Virtual Machine Management service, and trying the above steps while the VM stays off. This is a bit more involved “jiggle the handle” type of fix, but it’s also easy. If you want to take a really long shot, you can also restart the host. I do not expect that to have any effect, but I have not yet seen everything.
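Restarting the management service from PowerShell is a one-liner; vmms is the service name for Hyper-V Virtual Machine Management, and restarting it does not stop running VMs, which keep running under their worker processes:

Restart-Service vmms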

Take a Backup!

Up to this point, we have followed non-destructive procedures. The remaining fixes involve potential data loss. If possible, back up your virtual machine. Unfortunately, you might only have this problem because of a failed backup. In that case, export the virtual machine. I would personally shut the VM down beforehand so as to only capture the most recent data.

Virtual Machine Backup

If you have a good backup or an export, then you cannot lose anything else except time.

Method 3: Reload the Virtual Machine’s Configuration

This method presents a moderate risk of data loss. It is easy to make a mistake. Check your backup! This is a more involved “jiggle the handle” type of fix.

Procedure:

  1. Shut the VM down
  2. Take note of the virtual machine’s configuration file location, its virtual disk file names and locations, and the virtual controller positions that connect them (IDE 1 position 0, SCSI 2 position 12, etc.)
    Hyper-V Manager virtual disk information
  3. On each virtual disk, follow the AVHDX tree, recording each file name, until you find the parent VHDX. In Hyper-V Manager, do this with the Inspect button on the VM’s disk sheet, then the Inspect Parent on each subsequent dialog box that opens.
    Hyper-V's inspect virtual disk dialog
  4. Modify the virtual machine to remove all of its hard disks. If the virtual machine is clustered, you’ll need to do this in Failover Cluster Manager (or PowerShell). It will prompt to create a checkpoint, but since you already tried that, I would skip it.
    Remove virtual hard disk in Hyper-V Manager
  5. Export the virtual machine configuration
    Export a VM in Hyper-V
  6. Delete the virtual machine. If the VM is clustered, record any special clustering properties (like Preferred Hosts), and delete it from Failover Cluster Manager.
    Delete virtual machine in Hyper-V
  7. Import the virtual machine configuration from step 5 into the location you recorded in step 2. When prompted, choose the Restore option.
    Start a virtual machine import in Hyper-V
    Restore mode of Hyper-V Manager's virtual machine import function
  8. This will bring back the VM with its checkpoints. Start at method 1 and try to clean them up.
  9. Reattach the VHDX. If, for some reason, the checkpoint process did not merge the disks, do that manually first. If you need instructions, look at the section after the final method.
  10. Re-establish clustering, if applicable.

We use this method to give Hyper-V one final chance to rethink the error of its ways. After this, we start invoking manual processes.
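A rough PowerShell sketch of the export (step 5) and import (step 7) legs, with demovm and the paths as placeholders. Import-VM with no switches registers the VM in place; -Copy corresponds to the wizard's Restore behavior:

Export-VM -Name demovm -Path 'D:\Exports'
# ... delete the VM, then locate and import the exported configuration
$vmcx = Get-ChildItem 'D:\Exports\demovm\Virtual Machines' -Filter *.vmcx | Select-Object -First 1
Import-VM -Path $vmcx.FullName -Copy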

Method 4: Restore the VM Configuration and Manually Merge the Disks

For this one to work, you need a single good backup of the virtual machine. It does not need to be recent. We only care about its configuration. This process carries a somewhat greater level of risk than method 3. Once we introduce the manual merge process, the odds of human error increase dramatically.

  1. Follow steps 1, 2, and 3 from method 3 (turn the VM off and record its configuration information). If you are not certain about the state of your backup, also follow steps 5 and 6 (export and delete the VM). If you have confidence in your backup, or if you already followed method 3 and still have the export, then you can skip step 5 (exporting the VM).
  2. Manually merge the VM’s virtual hard disk(s) (see the section after the methods for directions). Move the final VHDX(s) to a safe location. It can be temporary.
  3. Restore the virtual machine from backup. I don’t think that I’ve ever seen a Hyper-V backup application that will allow you to only restore the virtual machine configuration files, but if one exists and you happen to have it, use that feature.
  4. Follow whatever steps your backup application needs to make the restored VM usable. For instance, Altaro VM Backup for Hyper-V restores your VM as a clone with a different name and in a different location unless you override the defaults.
  5. Remove the restored virtual disks from the VM (see step 4 of Method 3). Then, delete the restored virtual hard disk file(s) (they’re older and perfectly safe on backup).
  6. Copy or move the merged VHDX file from step 2 back to its original location.
  7. On the virtual machine’s Settings dialog, add the VHDX(s) back to the controllers and locations that you recorded in step 1.
    Add a virtual disk to a virtual machine in Hyper-V Manager.
  8. Check on any supporting tools that identify VMs by ID instead of name (like backup). Rejoin the cluster, if applicable.

This particular method can be time-consuming since it involves restoring virtual disks that you don’t intend to keep. As a tradeoff, it retains the major configuration data of the virtual machine. Altaro VM Backup for Hyper-V will use a different VM ID from the original to prevent collisions, but it retains all of the VM’s hardware IDs and other identifiers such as the BIOS GUID. I assume that other Hyper-V backup tools exhibit similar behavior. Keeping hardware IDs means that your applications that use them for licensing purposes will not trigger an activation event after you follow this method.

Method 5: Rebuild the VM’s Configuration and Manually Merge the Disks

If you've gotten to this point, then you have reached the "nuclear option". The risk of data loss is about the same as method 4. This process is faster to perform but has a lot of side effects that will almost certainly require more post-recovery action on your part.

  1. Access the VM's settings page and record every detail that you can from every property sheet. That means CPU, memory, network, disk, file location settings… everything. You definitely must gather the VHDX/AVHDX connections and parent-child-grandchild (etc.) order (method 3, step 3). If your organization utilizes special BIOSGUID settings and other advanced VM properties, then record those as well. I assume that if such fields are important to you, you already know how to retrieve them. If not, you can use my free tool. A PowerShell sketch for collecting the basics follows this list.
  2. Check your backups and/or make an export.
  3. Delete the virtual machine (Method 3 step 6 has a screenshot, mind the note about failover clustering as well).
  4. Recreate the virtual machine from the data that you collected in step 1, with the exception of the virtual hard disk files. Leave those unconnected for now.
  5. Follow the steps in the next section to merge the AVHDX files into the root VHDX
  6. Connect the VHDX files to the locations that you noted in step 1 (method 4, step 7 has a screenshot).
  7. Check on any supporting tools that identify VMs by ID instead of name (like backup). Rejoin the cluster, if applicable.
  8. In the VM’s guest operating system, check for and deal with any problems that arise from changing all of the hardware IDs.
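A sketch of collecting the basics with PowerShell before you delete anything; demovm is a placeholder, and you would redirect this output to a file to keep a record:

Get-VM -Name demovm | Format-List *
Get-VMHardDiskDrive -VMName demovm |
    Format-List VMName, ControllerType, ControllerNumber, ControllerLocation, Path
Get-VMNetworkAdapter -VMName demovm | Format-List Name, SwitchName, MacAddress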

Since you don't have to perform a restore operation, it takes less time to get to the end of this method than method 4. Unfortunately, swapping out all of your hardware IDs can have negative impacts. Windows will need to activate again, and it will not re-use the previous licensing instance. Other software may react similarly, or worse.

How to Manually Merge AVHDX Files

I put this part of the article near the end for a reason. I cannot over-emphasize that you should not start here.

Prerequisites for Merging AVHDX Files

If you precisely followed one of the methods above that redirected you here, then you already satisfied these requirements. Go over them again anyway. If you do not perform your merges in precisely the correct order, you will permanently orphan data.

  1. Merge the files in their original location. I had you merge the files before moving or copying them for a reason. Each differencing disk (AVHDX) contains the FULL path of its parent. If you relocate the files, they will throw errors when you attempt to merge them. If you can't get them back to their original location, then read below for steps on updating each of the files.
  2. You will have the best results if you merge the files in the order that they were created. A differencing disk knows about its parent, but no parent virtual disk file knows about its children. If you merge them out of order, you can correct it — with some effort. But, if any virtual hard disk file changes while it has children, you will have no way to recover the data in those children.

If merged in the original location and in the correct order, AVHDX merging poses no risks.

Manual AVHDX Merge Process in PowerShell

I recommend that you perform merges with PowerShell because you can do it more quickly. Starting with the AVHDX that the virtual machine used as its active disk, issue the following command:
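The cmdlet in question is Merge-VHD. A sketch with a placeholder file name; with no -DestinationPath, it merges the file one step up, into its immediate parent:

Merge-VHD -Path 'D:\VMs\demovm-checkpoint.avhdx'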

Once that finishes, move to the next file in your list. Use tab completion! Double-check the file names from your list!

Once you have nothing left but the root VHDX, you can attach it to the virtual machine.

Manual AVHDX Merge Process in Hyper-V Manager

Hyper-V Manager has a wizard for merging differencing disks. If you have more than a couple of disks to merge, you will find this process tedious.

  1. In Hyper-V Manager, click Edit disk in the far right pane.
    Edit disk in Hyper-V Manager
  2. Click Next on the wizard’s intro page if it appears.
  3. Browse to the last AVHDX file in the chain.
    Browse to virtual disk in Hyper-V Manager
  4. Choose the Merge option and click Next.
    Disk merge option in Hyper-V Manager
  5. Choose to merge directly to the parent disk and click Next.
    Option to merge virtual disk to parent in Hyper-V Manager
  6. Click Finish on the last screen.
  7. Repeat until you only have the root VHDX left. Reattach it to the VM.

Fixing Parent Problems with AVHDX Files

In this section, I will show you how to correct invalid parent chains. If you have merged virtual disk files in the incorrect order or moved them out of their original location, you can correct it.
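The tool for this is Set-VHD (named again at the end of this section). A sketch with placeholder paths, reconnecting child C directly to parent A:

Set-VHD -Path 'D:\VMs\C.avhdx' -ParentPath 'D:\VMs\A.vhdx'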

The above cmdlet will work even if the disk files have moved from their original locations. If you had a disk chain of A->B->C and merged B into A, then you can use it to set the parent of C to A, provided that nothing else happened to A in the interim.

The virtual disk system uses IDs to track valid parentage. If a child does not match its parent, you will get an ID mismatch error.

You could use the IgnoreIdMismatch switch to ignore this message, but a merge operation will almost certainly cause damage.

Alternatively, if you go through the Edit Disk wizard as shown in the manual merge instructions above, then at step 4, you can sometimes choose to reconnect the disk. Sometimes though, the GUI crashes. I would not use this tool.

Errors Encountered on AVHDX Files with an Invalid Parent

The errors that you get when you have an AVHDX with an invalid parent usually do not help you reach that conclusion.

In PowerShell, the error output lists the child AVHDX in both locations, along with an empty string where the parent name should appear, so it might seem that the child file has the problem.

In Hyper-V Manager, you will get an error about "one of the command line parameters". It will follow that up with a really unhelpful "Property 'MaxInternalSize' does not exist in class 'Msvm_VirtualHardDiskSettingData'". All of this just means that it can't find the parent disk.

Use Set-VHD as shown above to correct these errors.

Other Checkpoint Cleanup Work

Checkpoints involve more than AVHDX files. Checkpoints also grab the VM configuration and sometimes its memory contents. To root these out, look for folders and files whose names contain GUIDs that do not belong to the VM or any surviving checkpoint. You can safely delete them all. If you do not feel comfortable doing this, then use Storage Migration to move the VM elsewhere. It will only move active files. You can safely delete any files that remain.

What Causes Checkpoints to Linger?

I do not know that anyone has ever determined the central cause of this problem. We do know that Hyper-V-aware backups will trigger Hyper-V’s checkpointing mechanism to create a special backup checkpoint. Once the program notifies VSS that the backup has completed, it should automatically merge the checkpoint. Look in the event viewer for any clues as to why that didn’t happen.


Force synchronization for DFSR-replicated SYSVOL

One of my clients had a problem with GPO processing on client computers. Different computers applied different settings from the same GPO, because they pulled it from different domain controllers. All tests related to replication were successful and all GPOs were applied, but replication between the domain controllers was the problem, and because of that most clients had a different GPO configuration.

I had a similar problem with a newly promoted domain controller which I previously blogged about here.

Scenarios where this problem typically occurs:

  • Replication was moved from FRS to DFSR
  • Demoting an old domain controller in the environment
  • When there is a problem with the DFS replication of the SYSVOL folder

To solve this problem, I had to manually perform an authoritative synchronization between the domain controllers.

I am including steps for authoritative and non-authoritative synchronization, but before we get started we need to see the state of the replication.

Steps:

  1. Find the replication state. Typically the problem DCs will be at state 0 or 2. The goal is to get to state 4.
  2. Get to State 2
  3. Get to State 4

Find the replication state of all DCs

Wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo get replicationgroupname,replicatedfoldername,state

0 = Uninitialized
1 = Initialized
2 = Initial Sync
3 = Auto Recovery
4 = Normal
5 = In Error
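The same query in modern PowerShell, if you prefer CIM over wmic (a sketch, run per DC):

Get-CimInstance -Namespace root\MicrosoftDFS -ClassName DfsrReplicatedFolderInfo |
    Select-Object ReplicationGroupName, ReplicatedFolderName, State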

Non-authoritative synchronization of DFSR-replicated SYSVOL

  • Stop the DFS Replication service ( net stop dfsr).
  • In the ADSIEDIT.MSC tool modify the following distinguished name (DN) value and attribute on each of the domain controllers that you want to make non-authoritative:
    CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=<the server name>,OU=Domain Controllers,DC=<domain>
    msDFSR-Enabled=FALSE
  • Tip: easiest is to connect adsiedit.msc to DC=***s,DC=nl and then browse upwards from there

  • Force Active Directory replication throughout the domain  ( repadmin /syncall primary_dc_name /APed )
  • Run the following command from an elevated command prompt on the same servers that you set as non-authoritative:
    DFSRDIAG POLLAD 
  • You will see Event ID 4114 in the DFSR event log indicating SYSVOL is no longer being replicated (Open up event viewer and navigate to Applications and Services Logs -> DFS Replication).
  • On the same DN as in the earlier step, set:
    msDFSR-Enabled=TRUE
  • Force Active Directory replication throughout the domain ( repadmin /syncall primary_dc_name /APed).
  • Start the DFS Replication service ( net start dfsr).
  • Run the following command from an elevated command prompt on the same servers that you set as non-authoritative:
    DFSRDIAG POLLAD
  • You will see Event ID 4614 and 4604 in the DFSR event log indicating SYSVOL has been initialized. That domain controller has now done non-authoritative sync of SYSVOL.
  • Run Wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo get replicationgroupname,replicatedfoldername,state and make sure the state is at 4. If it is at 2, it may take some time to reach state 4. Wait a few minutes and try again until all DCs are at state 4.

Authoritative synchronization of DFSR-replicated SYSVOL

  1. Find the PDC Emulator (elevated command prompt: netdom query fsmo), which is usually the most up to date for SYSVOL contents, or the server holding all the policies and scripts. Consider this the primary server.
  2. Stop the DFS Replication service ( net stop dfsr) on the primary server.
  3. On the primary server, In the ADSIEDIT.MSC tool, modify the following DN and two attributes to make authoritative:
    CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=<the server name>,OU=Domain Controllers,DC=<domain>
    msDFSR-Enabled=FALSE
    msDFSR-options=1
  4. Modify the following DN and single attribute on all other domain controllers in that domain:
    CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=<each other server name>,OU=Domain Controllers,DC=<domain>
    msDFSR-Enabled=FALSE
  5. Force Active Directory replication throughout the domain and validate its success on all DCs ( repadmin /syncall primary_dc_name /APed). You may need to run the same command 3-4 times.
  6. Start the DFSR service set as authoritative ( net start dfsr) on the primary DC.
  7. You will see Event ID 4114 in the DFSR event log indicating SYSVOL is no longer being replicated (Open up event viewer and navigate to Applications and Services Logs -> DFS Replication).
  8. On the same DN from step 3, set:
    msDFSR-Enabled=TRUE
  9. Force Active Directory replication throughout the domain and validate its success on all DCs ( repadmin /syncall primary_dc_name /APed ). You may need to run the same command 3-4 times.
  10. Run the following command from an elevated command prompt on the same server that you set as authoritative (primary server):
    DFSRDIAG POLLAD 
  11. Wait a few minutes you will see Event ID 4602 in the DFSR event log (Open up event viewer and navigate to Applications and Services Logs -> DFS Replication) indicating SYSVOL has been initialized. That domain controller has now done an authoritative sync of SYSVOL.
  12. Start the DFSR service on the other non-authoritative DCs ( net start dfsr). You will see Event ID 4114 in the DFSR event log indicating SYSVOL is no longer being replicated on each of them.
  13. Modify the following DN and single attribute on all other domain controllers in that domain:
    CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=<each other server name>,OU=Domain Controllers,DC=<domain>
    msDFSR-Enabled=TRUE
  14. Run the following command from an elevated command prompt on all non-authoritative DCs (i.e. all but the formerly authoritative one):
    DFSRDIAG POLLAD
  15. Verify you see Event ID 2002 and 4602 on all other domain controllers.
  16. Run Wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo get replicationgroupname,replicatedfoldername,state and make sure the state is at 4. If it is at 2, it may take some time to reach state 4. Wait a few minutes and try again until all DCs are at state 4.
Note: if you set the authoritative flag on one DC, you must non-authoritatively synchronize all other DCs in the domain. Otherwise, you will see conflicts on DCs originating from any DCs where you did not set auth/non-auth and restarted the DFSR service. For example, if all logon scripts were accidentally deleted and a manual copy of them was placed back on the PDC Emulator role holder, making that server authoritative and all other servers non-authoritative would guarantee success and prevent conflicts.

If making any DC authoritative, the PDC Emulator is preferable, since its SYSVOL contents are usually the most up to date. The authoritative flag is only necessary if you need to force synchronization of all DCs. If you are only repairing one DC, simply make it non-authoritative and do not touch the other servers. This article is written with a 2-DC environment in mind, for simplicity of description; if you have more than one affected DC, expand the steps to include ALL of those as well. It also assumes you have the ability to restore data that was deleted, overwritten, or damaged previously, if this is a disaster recovery scenario on all DCs in the domain.

After these actions, all problems with GPO processing and SYSVOL replication disappeared. 🙂

The same solution in my own words: see below.

SYSVOL Replication Error on Windows 2012 R2


Hi Guys

Recently we migrated one of our customer's Active Directory domain controllers to a virtualized environment. During the DC migration my colleague noticed that the SYSVOL and NETLOGON folders were not replicating their contents from the existing domain controller, so he copied the contents manually. But after some time clients started reporting errors like:

  • Group Policy is not getting updated or propagated to all the workstations / users.
  • Logon scripts stopped working.

When we dug into the problem we were able to track the issue down to DFSR-based SYSVOL replication. Most importantly, the old DC had not been replicating for approximately 1300 days (Figure 1). The event IDs in the DFS Replication event log helped us track down the issue.

When we started troubleshooting we tried running the commands stated in Event Viewer (see the attached file), but to no avail.

We also ran the command below:

For /f %i IN ('dsquery server -o rdn') do @echo %i && @wmic /node:"%i" /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo WHERE replicatedfoldername='SYSVOL share' get replicationgroupname,replicatedfoldername,state

Strangely, the status on all the servers showed 2, which is Initial Sync (one of the reasons for the problem). Also, in our case the content had been offline for more than 1000 days, while by default Windows sets MaxOfflineTimeInDays to 60 days. We needed to extend it to 1800 days to cover that offset, so we ran the command below to force the servers to allow content freshness for more than 1000 days.

wmic.exe /namespace:\\root\microsoftdfs path DfsrMachineConfig set MaxOfflineTimeInDays=1800

(Do not forget to bring it back to the original value of 60 days.)

But still to no avail. Then we decided on an authoritative restore of the SYSVOL folder. We ran the command set below, which was extracted from the Microsoft KB: https://support.microsoft.com/en-us/help/2218556/how-to-force-an-authoritative-and-non-authoritative-synchronization-fo


Do this step on the PDC Emulator role holder

Stop the DFSR Service

#net stop dfsr

Open the ADSIEDIT.MSC tool, modify the following DN and two attributes on the domain controller you want to make authoritative (preferably the PDC Emulator, which is usually the most up to date for SYSVOL contents):

CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=<the server name>,OU=Domain Controllers,DC=<domain>

msDFSR-Enabled=FALSE
msDFSR-options=1

Modify the following DN and single attribute on all other domain controllers in that domain:

CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=<each other server name>,OU=Domain Controllers,DC=<domain>

msDFSR-Enabled=FALSE

Stop the DFSR service on all the remaining controllers

#net stop dfsr

Force Active Directory replication throughout the domain and validate its success on all DCs.

Start the DFSR service set as authoritative:(On the PDC emulator)

#net start dfsr

You will see Event ID 4114 in the DFSR event log indicating SYSVOL is no longer being replicated.

On the same DN from Step 1, set:

msDFSR-Enabled=TRUE

Run the below command to force Active Directory replication throughout the domain and validate its success on all DCs.

#repadmin /syncall /AdP

Run the following command from an elevated command prompt on the same server that you set as authoritative:

DFSRDIAG POLLAD

You will see Event ID 4602 in the DFSR event log indicating SYSVOL has been initialized. That domain controller has now done a “D4” of SYSVOL.

Start the DFSR service on the other non-authoritative DCs.

#net start dfsr

You will see Event ID 4114 in the DFSR event log indicating SYSVOL is no longer being replicated on each of them.

Modify the following DN and single attribute on all other domain controllers in that domain:

CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=<each other server name>,OU=Domain Controllers,DC=<domain>

msDFSR-Enabled=TRUE

Run the following command from an elevated command prompt on all non-authoritative DCs (i.e. all but the formerly authoritative one):

DFSRDIAG POLLAD

————————————————————————————-

Voila! We could see that replication started working, and when we checked the replication status via the command

For /f %i IN ('dsquery server -o rdn') do @echo %i && @wmic /node:"%i" /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo WHERE replicatedfoldername='SYSVOL share' get replicationgroupname,replicatedfoldername,state

it showed status 4 (which is all synced).

I am listing the below articles which helped me in the initial troubleshooting.

https://support.microsoft.com/en-us/help/967336/a-newly-promoted-windows-2008-domain-controller-may-fail-to-advertise

http://www.itprotoday.com/windows-8/fixing-broken-sysvol-replication

https://support.microsoft.com/en-us/help/2218556/how-to-force-an-authoritative-and-non-authoritative-synchronization-fo

http://kpytko.pl/active-directory-domain-services/non-authoritative-sysvol-restore-dfs-r

http://kpytko.pl/active-directory-domain-services/authoritative-sysvol-restore-dfs-r/

Good Luck

Jacco Straathof

Hyper-V S2D and ReFS

Configuring Storage Spaces Direct and Resilient File System (ReFS)


This blog on Hyper-V Storage Configuration is a three-part series. We will cover a number of different storage configurations with Microsoft Hyper-V, including their characteristics, features, configuration, and use cases.

In the first part, we discussed the Hyper-V related storage technologies – Direct Attached Storage, Shared Storage, Cluster Shared Volumes, Storage Spaces Direct & ReFS and looked at the process of configuring Hyper-V Direct Attached Storage.

In the previous post – second part, we discussed the process of configuring Hyper-V Shared Storage and Cluster Shared Volumes for Hyper-V.

In this last part – we’ll look at the process of configuring and managing Storage Spaces Direct and Resilient File System (ReFS).

Configuring Storage Spaces Direct

As mentioned in the overview portion, Storage Spaces Direct (S2D) is a software-defined storage solution that pools the internal storage of each Hyper-V cluster node into shared storage. It is extremely important with Storage Spaces Direct to purchase a validated hardware/software solution, where the hardware has been validated to work with Storage Spaces Direct. Keep this design consideration in mind. Outside of that basic requirement, there are other considerations and requirements:

  • Minimum of (2) servers, maximum of 16
  • All servers are recommended to be of the same manufacturer and model
  • Intel Nehalem class or higher/AMD EPYC or later
  • 4 GB of RAM per TB of cache drive capacity on each server for S2D metadata
  • Any boot device supported by Windows Server
  • NICs that are RDMA capable, iWARP or RoCE
  • Drives supported include direct-attached SATA, SAS, or NVMe drives that are physically attached. Cache and capacity drives are required as part of the configuration. Shared SAS is not supported. RAID cards must support simple pass-through mode.

Make sure the storage configuration for each S2D host is compatible (Image courtesy of Microsoft)

Beginning Storage Spaces Direct Configuration

Before you start configuring Storage Spaces Direct, you need to ensure your drives are free of any other partitions or data. The following PowerShell script, as provided by Microsoft, will clean all drives besides the OS boot drive.
# Fill in these variables with your values
$ServerList = "Server01", "Server02", "Server03", "Server04"

Invoke-Command ($ServerList) {
    Update-StorageProviderCache
    Get-StoragePool | ? IsPrimordial -eq $false | Set-StoragePool -IsReadOnly:$false -ErrorAction SilentlyContinue
    Get-StoragePool | ? IsPrimordial -eq $false | Get-VirtualDisk | Remove-VirtualDisk -Confirm:$false -ErrorAction SilentlyContinue
    Get-StoragePool | ? IsPrimordial -eq $false | Remove-StoragePool -Confirm:$false -ErrorAction SilentlyContinue
    Get-PhysicalDisk | Reset-PhysicalDisk -ErrorAction SilentlyContinue
    Get-Disk | ? Number -ne $null | ? IsBoot -ne $true | ? IsSystem -ne $true | ? PartitionStyle -ne RAW | % {
        $_ | Set-Disk -isoffline:$false
        $_ | Set-Disk -isreadonly:$false
        $_ | Clear-Disk -RemoveData -RemoveOEM -Confirm:$false
        $_ | Set-Disk -isreadonly:$true
        $_ | Set-Disk -isoffline:$true
    }
    Get-Disk | Where Number -Ne $Null | Where IsBoot -Ne $True | Where IsSystem -Ne $True | Where PartitionStyle -Eq RAW | Group -NoElement -Property FriendlyName
} | Sort -Property PsComputerName, Count

Validating and Creating Your Cluster

You want to make sure your potential cluster nodes will pass the requirements for enabling Storage Spaces Direct. Microsoft has this covered with the Test-Cluster cmdlet.

  • Test-Cluster -Node <MachineName1,MachineName2,MachineName3,MachineName4> -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

After your cluster passes validation with the Test-Cluster cmdlet, you need to actually create the cluster. You can do that with the following command. Note the NoStorage option specified: we want this since we need to create the storage using the special Storage Spaces Direct cmdlets and syntax.

  • New-Cluster -Name <ClusterName> -Node <MachineName1,MachineName2,MachineName3,MachineName4> -NoStorage

Enable Storage Spaces Direct in your Newly Formed Cluster

Storage Spaces Direct has a special cmdlet to put the storage system into the Storage Spaces Direct mode and do some things automatically, including:

  • Creates a storage pool
  • Configures Storage Spaces Direct caches automatically; it looks at the available drive types and chooses the fastest drives as cache drives
  • Creates two default tiers, a capacity tier and a performance tier

Use the following cmdlet to enable Storage Spaces Direct:

  • Enable-ClusterStorageSpacesDirect -CimSession <ClusterName>

Enabling Storage Spaces Direct on a Hyper-V Cluster

Create Storage Spaces Direct Storage Pool

After enabling Storage Spaces Direct, verify that you have a storage pool; if you need to create one manually, this can be accomplished with a PowerShell cmdlet:

  • New-StoragePool -StorageSubSystemFriendlyName "*Cluster*" -FriendlyName S2D -ProvisioningTypeDefault Fixed -PhysicalDisk (Get-PhysicalDisk | ? CanPool -eq $true)
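Once the pool exists, volumes are typically carved out with New-Volume. A sketch, with the friendly names and size as placeholders:

New-Volume -StoragePoolFriendlyName S2D -FriendlyName Volume01 -FileSystem CSVFS_ReFS -Size 1TB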

Managing Storage Spaces Direct

Managing Storage Spaces Direct is easily accomplished using either PowerShell or the new management tool released by Microsoft, Windows Admin Center.

Windows Admin Center (WAC) is a modern web-based UI built from the ground up to manage Microsoft's HCI platform with Windows Server 2019 and Storage Spaces Direct. It is the GUI method of choice to interact with and manage Storage Spaces Direct.

Creating a Storage Spaces Direct Volume with WAC

Let’s see how we can create a new Storage Spaces Direct Volume using the Windows Admin Center.

Using the Windows Admin Center to create a new Storage Spaces Direct Volume
On the Create Volume dialog box, choose the name of the volume, resiliency setting, size and whether you want to use deduplication and compression.

Creating a new Storage Spaces Direct Volume and configuring the settings for the new volume
The new S2D volume is created using the Windows Admin Center. We can see the new volume in WAC after creation.

New Storage Spaces Direct Volume created using Windows Admin Center

Resilient File System (ReFS)

Resilient File System (ReFS) touts many advantages and improvements over NTFS, including the following:

  • Integrity streams – ReFS uses checksums for metadata and optionally for file data, so it can detect corruption. This includes Storage Spaces integration, where it can automatically repair detected corruption.
  • Data integrity scrubber proactively corrects errors – This periodically scans the volume and identifies corruptions and triggers a repair of the volume.
  • Real-time tier optimization – On Storage Spaces Direct, this not only tweaks capacity but also delivers high performance to your workloads. This includes creating tiers for your data to live on both for hot data and cold data. Hot data would be data that needs fast storage and cold data that is a capacity tier. So, the data gets written to the hot tier first and then moved over to the capacity tier.
  • Accelerated virtual machine operations – Hyper-V virtualized workloads specifically benefit from ReFS. With the block cloning technology built into ReFS 3.1, block data is no longer moved but simply referenced with pointers to blocks, which enables rapid merge operations for checkpoints in Hyper-V. Sparse VDL technology allows files to be zeroed out rapidly, which tremendously speeds up creating fixed-size VHDX files.
  • Scalability – Scalability improvements starting with Windows Server 2016 ReFS also allow for extremely large datasets without any impact on performance, unlike previous file systems.

New features for Resilient File System (ReFS) with Windows Server 2019:

  • Deduplication and Compression – This leads to huge space savings with Hyper-V virtual machines since there is a lot of duplication at the block level for virtual machines, especially in VDI implementations.

***Note*** – As mentioned earlier in this series, it is not recommended to use Resilient File System (ReFS) with Cluster Shared Volumes outside of Storage Spaces Direct, as this causes all I/O to be performed in file system redirection mode, which can lead to huge performance issues. ReFS is the recommended file system for Storage Spaces Direct implementations used with Hyper-V.

Implementing Resilient File System is as simple as formatting a new volume with the new file system. This can be done using the Disk Management console, diskpart, or PowerShell.
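A PowerShell sketch of the same task, with the drive letter and label as placeholders:

Format-Volume -DriveLetter E -FileSystem ReFS -NewFileSystemLabel "VMStorage"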

Formatting a new Windows Server 2019 volume with ReFS

Summarizing the Hyper-V Storage Configuration Guide

We have covered a lot of territory with the Hyper-V storage configuration guide. There are many different types of storage configuration options available with Windows Server 2019 Hyper-V. These include direct-attached storage, shared storage, cluster shared volumes, storage spaces direct, and ReFS.

Depending on the type of storage chosen in the Hyper-V environment, each storage configuration has a different set of characteristics, including the ease with which it can be configured, its requirements, the complexity of the solution, and the features and capabilities it possesses.

While direct-attached storage is the easiest type of Hyper-V storage to make use of, it is fairly limited in the enterprise features that can be utilized when using it. When you want to begin using the true enterprise Hyper-V features that come along with Hyper-V Cluster implementations, you have to step up into shared storage or Storage Spaces Direct implementations to take advantage of these.

Other Hyper-V storage technologies like Cluster Shared Volumes (CSV) and Resilient File System (ReFS) are not necessarily dependent on the storage implementation; however, you need to note when each should be used, as in the case of ReFS on Cluster Shared Volumes backed by shared storage, which can lead to performance issues.

Windows Server 2019 brings to the table the most diverse storage feature set of any Windows Server release and allows you as the IT admin to have a number of options to fit your specific use cases. By being informed on how these technologies are implemented and their various features, you are well-equipped to make a good decision when choosing a solution to store your data in a Hyper-V environment.

Hyper-V Shared Storage

Configuring Hyper-V Shared Storage and Cluster Shared Volumes


This blog on Hyper-V Storage Configuration is a three-part series. We will cover a number of different storage configurations with Microsoft Hyper-V, including their characteristics, features, configuration, and use cases.

In the previous post – first part, we discussed the Hyper-V related storage technologies – Direct Attached Storage, Shared Storage, Cluster Shared Volumes, Storage Spaces Direct & ReFS and looked at the process of configuring Hyper-V Direct Attached Storage.

In this second part, we’ll discuss the process of configuring Hyper-V Shared Storage and the process of configuring Cluster Shared Volumes for Hyper-V.

Configuring Hyper-V Shared Storage

Once you decide to take your Hyper-V environment from running on top of standalone hosts with direct-attached storage and start utilizing a Hyper-V cluster configuration, you will need to start looking at shared storage.

Shared storage is one of the primary requirements needed to configure a Hyper-V cluster. Why is this?

Shared storage is required for Hyper-V clusters as all hosts in the cluster need to be able to see the storage for all the virtual machines being managed by the cluster.

Having shared storage provisioned between the Hyper-V cluster hosts allows you to take advantage of many of the great enterprise features that justify a Hyper-V cluster in the first place. Features like high-availability of the virtual machines running in the Hyper-V cluster as well as mobility of the VMs are two enterprise features that you will no doubt benefit from. Both of these require shared storage.

Hyper-V clusters take advantage of Windows Failover Cluster services running on the servers that are part of the Failover Cluster. The Hyper-V role is installed on the members of the Windows Failover Cluster. Virtual Machines that are running in the Hyper-V Failover Cluster can be added as highly available under the Virtual Machine role. In this way, when a Hyper-V host goes down due to a hardware or other failure, the virtual machine will be migrated to a healthy host in the cluster.

In this scenario the need for shared storage becomes apparent. When storage is shared between all the hosts in the cluster, there is no need to copy files to a different host to bring up the VM. The shared storage between the Hyper-V hosts means the VM files simply stay in place and a healthy host assumes ownership of compute/memory for the VM.

Generally, when thinking about configuring shared storage, this is accomplished by means of a Storage Area Network (SAN) where storage is provisioned on a SAN appliance and the SAN and Hyper-V hosts are connected to one another by means of a high speed (at least 10 GbE) network.

Let’s take a look at configuring shared storage on a couple of Hyper-V hosts that are part of a Hyper-V cluster. We will do this by means of an iSCSI LUN that is presented from a storage device.

To add an iSCSI LUN to a Hyper-V host, we first need to enable and start the Microsoft iSCSI service. You can do that by simply typing the following command:

iscsicpl

You will be presented with a message asking to enable and start the service.


Enabling and starting the Microsoft iSCSI service
After enabling and starting the service, you then need to add the target using the Quick Connect feature to quickly add the iSCSI targets presented.
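If you prefer PowerShell over the iscsicpl GUI, the iSCSI cmdlets can do the same. A sketch, with the portal address as a placeholder:

# Start the Microsoft iSCSI Initiator service and set it to start automatically
Start-Service msiscsi
Set-Service msiscsi -StartupType Automatic
# Register the target portal, then connect to the targets it presents
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.50
Get-IscsiTarget | Connect-IscsiTarget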


Adding an iSCSI target using the Quick Connection functionality

New iSCSI volumes added to Windows Server
When you add the volumes on both hosts that are going to participate in the Hyper-V cluster, the cluster formation process will run several checks on the available disks to ensure disks meet certain requirements and are accessible from all hosts.


Checking the disks presented on the Hyper-V cluster nodes to ensure they are properly configured
In the Failover Cluster Manager, under Storage, you will see the shared disks listed as cluster resources.


Shared Cluster disks listed in Failover Cluster Manager
Now, you have satisfied the requirements for the Hyper-V cluster having shared disks between the cluster nodes.

As you can see above, one is a Disk Witness in Quorum to provide tie-breaker functionality in a “split-brain” scenario. The other volume is Available Storage for storing resources like virtual machines.

Configuring Cluster Shared Volumes

Another extremely important configuration related to cluster storage, specifically with Hyper-V, is enabling Cluster Shared Volumes (CSV).

What are Cluster Shared Volumes?

Cluster Shared Volumes enable multiple nodes in a failover cluster to simultaneously have read-write access to the same LUN that is provisioned as an NTFS volume.

With CSV enabled, clustered roles can failover quickly from one node to another node without changing the drive ownership or dismounting and remounting a volume.

When looking at the architecture of Cluster Shared Volumes, they are a general-purpose, cluster-aware file system that sits on top of NTFS or ReFS (starting in Windows Server 2012 R2). Specifically related to Hyper-V, Cluster Shared Volumes provide special-purpose functionality to the following:

  • Hyper-V virtual machines that have VHD files hosted by a Hyper-V cluster made possible by Windows Failover Cluster services.
  • Scale-out file servers that can host data such as Hyper-V virtual machine files.

Cluster Shared Volumes allow multiple Hyper-V hosts to have simultaneous read-write access to the same shared storage. When a given node performs disk I/O, the node is communicating directly with the storage appliance. However, a single node that is referred to as the coordinator node “owns” the physical disk resource that is associated with the LUN. This coordinator node as displayed in Failover Cluster Manager is designated as Owner Node.

Changes in the CSV volume file system are synchronized with the other members of the Hyper-V cluster. This is done through a special kind of metadata that is shared between the hosts. Examples of CSV activity that is synchronized include Hyper-V virtual machines being created, started, stopped, or deleted. Migration of virtual machines also needs to be synchronized on each of the physical nodes that access the VM.

The synchronization between the hosts is taken care of using SMB 3.0. In cases of storage connectivity failures and certain storage operations that can prevent a Hyper-V host from communicating directly with storage, the node redirects the disk I/O through a cluster network to the coordinator node where the disk is currently mounted. If the coordinator node fails, the disk I/O is queued while another coordinator node is designated that does have access.

When choosing a file system for formatting a Cluster Shared Volume, you need to take this I/O redirection into account along with the type of Hyper-V cluster storage being mounted.

It is highly recommended if you are not using Storage Spaces Direct, to use NTFS instead of ReFS. The reason for this is that when ReFS is used for Cluster Shared Volumes, it always runs in file system redirection mode which means all the I/O is redirected back through the coordinator node for the volume. This can lead to serious performance issues outside of Storage Spaces Direct.

How is the Cluster Shared Volume configured or enabled?

This is an extremely easy part of the process. You can enable a Cluster Shared Volume by right-clicking the volume you want to use for your virtual machine storage and selecting Add to Cluster Shared Volumes.
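The PowerShell equivalent is a one-liner; the disk name here is a placeholder (Get-ClusterResource lists the real names):

Add-ClusterSharedVolume -Name "Cluster Disk 2"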


Creating a Cluster Shared Volume

After adding to Cluster Shared Volume
After you add the volume to a Cluster Shared Volume, the Assigned To column is designated as Cluster Shared Volume.

You can check and make sure you are not operating in File System Redirected Access mode by looking at the properties of the CSV volume.


Checking the File System Redirected Access mode
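You can also check this from PowerShell. A sketch; the StateInfo column reports whether access is Direct or FileSystemRedirected:

Get-ClusterSharedVolumeState | Select-Object Name, Node, StateInfo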
In the next post and the last part of this series, we’ll look at the process of configuring Storage Spaces Direct and Resilient File System (ReFS).

Hyper-V Checkpoints

What are Hyper-V Checkpoints? Part 1: Why and How to Use Them

Microsoft's Hyper-V hypervisor is a powerful virtualization platform for enterprises looking to run production workloads. Especially with Windows Server 2019, Hyper-V has continued to mature and contains many tremendously powerful features and enhancements that allow implementing, maintaining, and provisioning servers and other business-critical workloads in a very effective and efficient manner.

One of the very handy and extremely useful features of Hyper-V is the ability to create checkpoints on virtual machines. Checkpoints can be used for many useful purposes in the environment.

This post will be a two-part series.

In this first part, we will take a much closer look at what Hyper-V checkpoints are exactly, why you would use them, and how to create them using Hyper-V Manager and PowerShell.

In part two, we will dig deeper into the inner-workings of checkpoints, types of checkpoints and how they are each used, managing them, and looking into checkpoints vs backup technologies.

How to Use Hyper-V Virtual Machine Checkpoints

Hyper-V Checkpoints are an extremely useful feature in the realm of Hyper-V infrastructure.

In fact, they serve an extremely important role in the lifecycle management of Hyper-V virtual machines.

Let’s take a look at the following important aspects of Hyper-V checkpoints and why each are important to consider.

  • What are Hyper-V Checkpoints?
  • Why Use Hyper-V Checkpoints?
  • How to Create Hyper-V Checkpoints

By understanding these topics in part one, it will lay the foundation of the fundamental basics of Hyper-V checkpoints before moving on to more advanced topics of consideration.

What are Hyper-V Checkpoints?

Before delving into the various topics about Hyper-V checkpoints, let’s first get an understanding of “what” they really are.

A Hyper-V checkpoint is a "snapshot" of a virtual machine at a point in time.

In other words, checkpoints give you the ability to “freeze” time for a virtual machine and capture that “state” in a point in time mechanism that you can save for use later, and then revert back to at any time.

Hyper-V checkpoints can typically contain the memory state of the virtual machine, but they can also be created without capturing the memory state of the VM. Another great feature or ability of the Hyper-V checkpoint is, they can be created while a virtual machine is running. So, this point of frozen time and state for the virtual machine can be created without any downtime to the VM or any disruption to end users who may be connecting to the virtual machine for resources. This creates some very interesting and powerful use cases for utilizing the Hyper-V checkpoint functionality.

Why Use Hyper-V Checkpoints?

Now that we understand what a Hyper-V Checkpoint is, “why” and “how” do you use them?

Let’s first look at the why part of the question.

If you have the ability to create a point in time capture of a specific virtual machine state and save that for later, what would you use it for? Actually, there are many potential use cases for having this ability.

However, perhaps the most powerful use of a Hyper-V checkpoint is to have a quick rollback mechanism to use during software updates or upgrades.

A classic example of this is applying Windows Updates to Windows Servers. If you have managed any number of Windows Servers over the years, you know this is a necessary evil that must be taken care of. For the most part, Windows Server patches are fairly stable, however, from time to time, bad patches are released. By utilizing the Hyper-V checkpoint, a checkpoint of the VM can be taken before software patches, updates, or upgrades are performed. If something fails or things go badly, the VM can simply be reverted back to the checkpoint taken prior to the upgrade. This provides an extremely easy and powerful rollback mechanism for applying software patches, updates, and upgrades.

Here are some other use cases for Hyper-V virtual machine checkpoints:

  • Installing new software – Before installing software, especially if it isn’t known what interactions the software may have with other software coexisting on the system, it is a great idea to create a checkpoint just in case problems arise
  • Changing the system configuration – Changing Windows roles or features, IIS configuration, network configuration, route tables, drivers, files, directory moves, DLL registrations, etc. warrants having a good state to roll back to
  • Apply registry changes – Modifying the registry can be powerful but also dangerous. If the wrong key is modified, deleted, or added, it can lead to major system issues
  • Before troubleshooting an issue – If steps are taken during a troubleshooting session to resolve an issue that ultimately may not resolve the issue, it is good to have a point to go back to before changes were made
  • Dev/STG/UAT environments – Checkpoints are a great tool to use in DEV/STG/UAT environments. Before code is rolled in or changed or testing is implemented, checkpoints provide very quick rollbacks to known good states for virtual machines. This allows developers to roll in code, revert, roll in code again, and keep running this process until the code is at a good point for testing

The above use cases provide really good examples of when checkpoints may be utilized by a Hyper-V administrator. By having a checkpoint in place in the above scenarios, administrators can have extremely quick rollback mechanisms in place that can potentially save hours or days’ worth of time.

Are there virtual machines or situations where it would be best practice not to use Hyper-V checkpoints?

The following points come to mind:

  • Never use checkpoints as a replacement for proper backups (we will address this further in part two)
  • Don’t use checkpoints on domain controllers
  • Be selective when using checkpoints on multi-tiered applications

Not Backups

As mentioned we will look at this in more depth in part two, however, checkpoints are not designed to be or supported as a replacement for properly backing up Hyper-V virtual machines.

Domain Controllers

Using any type of “snapshot” technology on domain controllers can result in very bad things happening in the domain environment. It is always best never to use checkpoints on domain controllers. Rolling back to a checkpoint on a domain controller can lead to a situation called a “USN Rollback” that can result in Active Directory replication being broken.

Multi-tiered applications

Certain applications that are multi-tiered are very difficult if not impossible to properly place a checkpoint due to the highly connected and dependent way the web/application servers are tied to the backend database servers. Using checkpoints in this design architecture can potentially lead to data loss.

Now that we understand the powerful benefits and capabilities possible with Hyper-V checkpoints, how are they used on those servers where checkpoints make sense and are supported?

In examining how Hyper-V checkpoints are used, we will look at the scenario presented above – software updates.

To use Hyper-V checkpoints, a checkpoint would be created before applying the software changes to a target virtual machine. After the checkpoint is created, the software updates can be run with that failsafe in place. If the software updates are successful, the checkpoint can be deleted. If the software updates fail or create problems on the virtual machine, it can be reverted to the checkpoint in order to return it to a pristine condition prior to the updates being applied.
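A sketch of that patch-day workflow in PowerShell, with the VM and checkpoint names as placeholders:

# Take the failsafe checkpoint before patching
Checkpoint-VM -Name demovm -SnapshotName 'Pre-Updates'
# If the updates succeed, delete the checkpoint
Remove-VMSnapshot -VMName demovm -Name 'Pre-Updates'
# If the updates fail, revert the VM instead
Restore-VMSnapshot -VMName demovm -Name 'Pre-Updates' -Confirm:$false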

How to Create Hyper-V Checkpoints

Let’s see how to do this using both the Hyper-V Manager and PowerShell to create the Hyper-V checkpoint.

Open Hyper-V Manager, right-click on a virtual machine running on a Hyper-V host and select Checkpoint.


Creating a Hyper-V checkpoint in Hyper-V Manager
The checkpoint creation process begins. You will see the Creating Checkpoint with a percentage displayed under the Status column.

After the checkpoint is created, you will see the checkpoint appear under the Checkpoints window pane in the Hyper-V Manager when the focus is on the virtual machine running on a Hyper-V host.


Checkpoint is successfully created and listed on the Hyper-V virtual machine
The process to create a Hyper-V virtual machine checkpoint using PowerShell is very easy as well. It is a simple one-liner. Also worth noting: using PowerShell you can dictate the name of the snapshot, whereas in Hyper-V Manager you cannot.

The PowerShell code to create a new Hyper-V checkpoint is as follows:

Checkpoint-VM -Name <VMName> -SnapshotName 'Testing Snapshot'


Creating a Hyper-V checkpoint on a VM using PowerShell
Below, you can see the two snapshots that have been created so far using the Hyper-V checkpoint functionality. The first was created using Hyper-V Manager. The second was created with PowerShell.


Hyper-V Checkpoints on a virtual machine using both Hyper-V Manager and PowerShell
Wrapping Up

In this first part of the Hyper-V checkpoints series, we have looked at the basics of what Hyper-V checkpoints are, why they are used, and how to use them with either the Hyper-V Manager or PowerShell.

In part two of looking at Hyper-V checkpoints we will take a look at the following topics:

  • Types of Hyper-V Checkpoints
  • How do Hyper-V Checkpoints Work?
  • Managing Hyper-V Checkpoints
  • Differences between Hyper-V Checkpoints and Hyper-V Backup