Detecting a hypervisor on Windows 10 is relatively simple, but because the currently published detection vectors are so simplistic, they are likely just as simple to spoof or remove. In this article we’ll detail a few ways of detecting a hypervisor’s presence on Windows 10, determining whether it’s a Microsoft hypervisor, and other indirect ways of determining whether you’re operating in a virtualized environment.
All tests were performed on Windows 10 x64, Version 1803.
Hypervisor Library
In the Windows kernel there are various system routines that allow the system programmer to query information about the environment they’re operating in. One of these is the Windows Hypervisor library (routine prefixes: Hvl/Hvi/Hvp/Hvlp). The system routine of interest in this article is HvlQueryDetailInfo, which queries the virtualization state of the system. It calls internal routines such as HviGetHypervisorInterface, which determines the presence of a hypervisor and, if one is present, executes cpuid leaf 0x40000001, which places a 4-byte interface signature in the EAX register (“Hv#1”). It also calls HviIsHypervisorVendorMicrosoft, which, if a hypervisor is present, executes cpuid leaf 0x40000000 and stores the vendor name in EBX, ECX, and EDX, respectively. This routine returns true if reading those registers in succession yields “Microsoft Hv”.
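To make that concrete, here is a minimal user-mode sketch of the same two checks using the __cpuid intrinsic. The helper names are my own rather than the Hvi routine names, and the constants simply follow the behavior described above.

#include <intrin.h>
#include <string.h>

// CPUID.1:ECX[31] is the hypervisor-present bit.
static int HvIsHypervisorPresent( void )
{
    int regs[ 4 ] = { 0 };
    __cpuid( regs, 1 );
    return ( ( unsigned int )regs[ 2 ] >> 31 ) & 1;
}

// Mirrors what HviIsHypervisorVendorMicrosoft does: leaf 0x40000000 returns the
// vendor signature in EBX, ECX, EDX ("Microsoft Hv" for Microsoft hypervisors).
static int HvIsVendorMicrosoft( void )
{
    int regs[ 4 ] = { 0 };
    char vendor[ 13 ] = { 0 };

    if( !HvIsHypervisorPresent() )
        return 0;

    __cpuid( regs, 0x40000000 );
    memcpy( vendor + 0, &regs[ 1 ], 4 );   // EBX
    memcpy( vendor + 4, &regs[ 2 ], 4 );   // ECX
    memcpy( vendor + 8, &regs[ 3 ], 4 );   // EDX
    return strcmp( vendor, "Microsoft Hv" ) == 0;
}

// Mirrors the interface check: leaf 0x40000001 returns the interface signature
// in EAX; 0x31237648 is "Hv#1".
static int HvIsMicrosoftInterface( void )
{
    int regs[ 4 ] = { 0 };

    if( !HvIsHypervisorPresent() )
        return 0;

    __cpuid( regs, 0x40000001 );
    return regs[ 0 ] == 0x31237648;
}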
We’re going to take a look at a variety of indirect ways to determine whether the environment you’re executing in is virtualized, from both user-mode and the kernel.
Using NtQuerySystemInformation (User and Kernel)
Among the SYSTEM_INFORMATION_CLASS values available as of Version 1803 there is an information class SystemHypervisorDetailInformation (0x9f). This class can be used to fill a SYSTEM_HYPERVISOR_DETAIL_INFORMATION buffer with information about the hypervisor’s interfaces, features, vendor, and other implementation details. To tie this back to the introduction, this information class results in a call to HvlQueryDetailInfo. This information can be incredibly useful for operators both in user-mode and the kernel: if the hypervisor is custom, you can query information about it and determine its presence, which is useful for protected applications that do not want to execute in a virtualized environment.
Figure 1. Disassembly of HvlQueryDetailInfo in Windows 10 Version 1803.
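Below is a hedged user-mode sketch of querying this class through NtQuerySystemInformation. The SYSTEM_HYPERVISOR_DETAIL_INFORMATION layout shown here follows commonly published reconstructions of the undocumented structure and should be treated as an assumption rather than an official definition.

#include <windows.h>
#include <winternl.h>
#include <stdio.h>

#define SystemHypervisorDetailInformation ( ( SYSTEM_INFORMATION_CLASS )0x9f )

// Assumed layout: seven HV_DETAILS entries, one per hypervisor cpuid leaf.
typedef struct _HV_DETAILS
{
    ULONG Data[ 4 ];
} HV_DETAILS;

typedef struct _SYSTEM_HYPERVISOR_DETAIL_INFORMATION
{
    HV_DETAILS HvVendorAndMaxFunction;   // leaf 0x40000000
    HV_DETAILS HypervisorInterface;      // leaf 0x40000001
    HV_DETAILS HypervisorVersion;        // leaf 0x40000002
    HV_DETAILS HvFeatures;               // leaf 0x40000003
    HV_DETAILS HwFeatures;
    HV_DETAILS EnlightenmentInfo;
    HV_DETAILS ImplementationLimits;
} SYSTEM_HYPERVISOR_DETAIL_INFORMATION;

int main( void )
{
    SYSTEM_HYPERVISOR_DETAIL_INFORMATION Details = { 0 };

    // Resolve NtQuerySystemInformation dynamically so the sketch builds without ntdll.lib.
    typedef NTSTATUS( NTAPI *NtQsi_t )( SYSTEM_INFORMATION_CLASS, PVOID, ULONG, PULONG );
    NtQsi_t NtQsi = ( NtQsi_t )GetProcAddress( GetModuleHandleW( L"ntdll.dll" ),
                                               "NtQuerySystemInformation" );
    if( !NtQsi )
        return 1;

    NTSTATUS Status = NtQsi( SystemHypervisorDetailInformation,
                             &Details, sizeof( Details ), NULL );
    if( Status < 0 )
    {
        // A failure here (e.g. an invalid info class) suggests no hypervisor detail is available.
        printf( "Query failed: 0x%08lx\n", ( unsigned long )Status );
        return 1;
    }

    printf( "Interface signature: 0x%08lx\n",
            ( unsigned long )Details.HypervisorInterface.Data[ 0 ] );
    return 0;
}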
Using CPUID
Anti-malware suites are beginning to use more indirect and undocumented methods of detecting hypervisors, since hypervisor developers can return false output or spoof information returned to the guest. One such method takes advantage of the reserved responses the processor always returns unless the system is subverted. It issues cpuid with two leaf values, one valid (the hypervisor leaf 0x40000000) and one invalid, and compares the results: on a bare-metal machine both are out of range and return the processor’s reserved default, so a difference between them indicates that a hypervisor answered the valid leaf. Sample code for this method of detection is shown below.
— UmIsVirtualized
#include <intrin.h>

// Illustrative return values; the original snippet leaves these constants undefined.
#define STATUS_HYPERV_DETECTED  1
#define STATUS_HV_NOT_PRESENT   0

unsigned long UmIsVirtualized( void )
{
    unsigned int invalid_leaf = 0x13371337;
    unsigned int valid_leaf   = 0x40000000;

    struct _HV_DETAILS
    {
        unsigned int Data[ 4 ];
    };

    _HV_DETAILS InvalidLeafResponse = { 0 };
    _HV_DETAILS ValidLeafResponse   = { 0 };

    // On bare metal both leaves are out of range and return the same default data;
    // a hypervisor answers 0x40000000, so the two responses differ.
    __cpuid( ( int * )InvalidLeafResponse.Data, ( int )invalid_leaf );
    __cpuid( ( int * )ValidLeafResponse.Data, ( int )valid_leaf );

    if( ( InvalidLeafResponse.Data[ 0 ] != ValidLeafResponse.Data[ 0 ] ) ||
        ( InvalidLeafResponse.Data[ 1 ] != ValidLeafResponse.Data[ 1 ] ) ||
        ( InvalidLeafResponse.Data[ 2 ] != ValidLeafResponse.Data[ 2 ] ) ||
        ( InvalidLeafResponse.Data[ 3 ] != ValidLeafResponse.Data[ 3 ] ) )
        return STATUS_HYPERV_DETECTED;

    return STATUS_HV_NOT_PRESENT;
}
The above snippet is only an example, but it demonstrates how presence can be determined from the processor’s default responses when the requested hypervisor function is undefined. Currently, this is one of the ways anti-malware suites detect hypervisor-based malware.
Abusing TLB behavior
Every time memory is accessed, the processor consults a buffer called the TLB and searches it for the relevant entry. The TLB is a cache of recently used virtual-to-physical address translations: if the accessed virtual address is present in the TLB, the cached physical address is used to complete the access. Anyone familiar with hypervisor development or the behavior of VM-exits knows that the TLB is flushed every time a VM-exit occurs. Without a hypervisor this behavior is not present unless, for example, a mov to CR4 that toggles CR4.SMEP is executed; this happens with multiple vulnerable drivers that allow SMEP to be toggled from user-mode via their IOCTL handlers. Other mov operations on control registers can also cause a flush, but they’re rare and not worth expanding on here; if you’re interested, refer to Section 4.10.4.1 of the Intel 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.
Another detection relies on an instruction pair and an oversight in some hypervisors: INVD (Invalidate Internal Caches) and WBINVD (Write Back and Invalidate Cache). The detection is performed by accessing a page in memory, which causes the processor to place the page’s translation into one of the TLBs. Executing WBINVD or INVD then flushes the TLB, so if the previously accessed page is no longer present in it, a TLB miss and page fault will occur. By registering a page fault handler prior to executing one of these instructions, that handler can detect whether a hypervisor is in use. This particular instruction pair was used for detecting BOCHS [1].
Side Note: TLB-based detection does not work on newer AMD and Intel CPUs, where a VM-exit does not flush the TLB.
Edit 9/21/2018: It’s important to note that the TLB is not flushed on every VM-exit if VPID is enabled. Thank you @PetrBenes for the reminder.
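The page-fault-handler approach above requires kernel code, so as a rough user-mode approximation of the same idea, the sketch below primes a translation, forces a VM-exit with CPUID, and times the re-access. On a hypervisor that flushes the TLB on exits (no VPID, per the note above) the re-access pays for a page walk. This is my own illustrative variation, not the method described above; results are noisy and the numbers are only suggestive.

#include <windows.h>
#include <intrin.h>
#include <stdio.h>

static unsigned __int64 TimeReaccess( volatile char *Page, int ForceExit )
{
    int regs[ 4 ];
    unsigned int aux;

    *Page = 1;                       // prime the TLB entry for this page

    if( ForceExit )
        __cpuid( regs, 0 );          // CPUID is an unconditional VM-exit under a hypervisor

    unsigned __int64 start = __rdtscp( &aux );
    ( void )*Page;                   // re-access: TLB hit on bare metal, page walk if flushed
    unsigned __int64 end = __rdtscp( &aux );
    return end - start;
}

int main( void )
{
    volatile char *Page = ( volatile char * )VirtualAlloc( NULL, 4096,
                              MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE );
    if( !Page )
        return 1;

    unsigned __int64 baseline = 0, withExit = 0;
    const int iterations = 100000;

    for( int i = 0; i < iterations; ++i )
    {
        baseline += TimeReaccess( Page, 0 );
        withExit += TimeReaccess( Page, 1 );
    }

    printf( "avg re-access: baseline=%llu cycles, after CPUID=%llu cycles\n",
            baseline / iterations, withExit / iterations );
    return 0;
}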
Conclusion
Something to be expected in the future is the (ab)use of CPU bugs and intricate hypervisor implementation details by anti-cheat software, given the rise in popularity of hypervisors for deceiving anti-cheat software. There are many more ways to detect hypervisor use than cpuid, checking the hiberfile, the registry, and system firmware information; one of them is the execution of an instruction that results in a system hang if a hypervisor is present. More research should be done on the variety of scenarios that can indicate a hypervisor’s presence. In the future I plan to look into instruction execution times on standard systems, and the variance of instruction execution time on vanilla versus virtualized machines.
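As a taste of that direction, here is a minimal sketch of the simplest instruction-timing measurement: how long CPUID itself takes. On bare metal it retires in on the order of one or two hundred cycles, while under a hypervisor every CPUID is a VM-exit and typically costs thousands; the cutoff used here is an illustrative assumption that would need calibration per machine.

#include <intrin.h>
#include <stdio.h>

int main( void )
{
    int regs[ 4 ];
    unsigned int aux;
    unsigned __int64 total = 0;
    const int iterations = 1000;

    for( int i = 0; i < iterations; ++i )
    {
        unsigned __int64 start = __rdtscp( &aux );
        __cpuid( regs, 0 );                      // exits to the hypervisor if one is present
        unsigned __int64 end = __rdtscp( &aux );
        total += end - start;
    }

    unsigned __int64 average = total / iterations;

    // Assumed cutoff: VM-exits usually push the average well past 1000 cycles.
    printf( "average CPUID cost: %llu cycles%s\n", average,
            average > 1000 ? " (hypervisor likely)" : "" );
    return 0;
}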
As always, comments and feedback are welcome! Thanks for reading.