Deployment and configuration
- By Orin Thomas
- 5/26/2020
Virtual Machine Manager
For most organizations, the majority of their server deployments involve virtual machines rather than deploying to bare metal physical hardware. While tools such as the Hyper-V console and WDS are adequate for smaller environments, if you need to deploy hundreds or thousands of virtual machines each year, you need a more comprehensive set of tools than those that are included with the Windows Server operating system.
System Center Virtual Machine Manager is one tool that you can use to manage your organization’s entire virtualization infrastructure, from virtualization hosts, clusters, and VMs, to managing the entire networking and storage stack. In this section, you’ll learn about VMM templates, storage, networking, and host groups.
Virtual machine templates
A Virtual Machine Manager VM template allows you to rapidly deploy virtual machines with a consistent set of settings. A VMM VM template is an XML object that is stored in a VMM library, and it includes one or more of the following components:
Guest Operating System Profile. A guest operating system profile that includes operating system settings.
Hardware Profile. A hardware profile that includes VM hardware settings.
Virtual Hard Disk. This can be a blank hard disk or a virtual hard disk that hosts a specially prepared (sysprepped, in the case of Windows-based operating systems) version of an operating system.
You can create VM templates based on existing virtual machines deployed on a virtualization host managed by VMM, based on virtual hard disks stored in a VMM library, or by using an existing VM template.
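The following rough sketch shows how this creation process can be scripted from a VMM PowerShell session (the virtualmachinemanager module installed with the VMM console). The disk and profile names are placeholders, and you should confirm the exact parameters with Get-Help New-SCVMTemplate for your VMM version:

```powershell
# Sketch: build a VM template from a sysprepped VHDX plus existing hardware and
# guest OS profiles stored in the VMM library. All object names are examples.
Import-Module VirtualMachineManager

# Locate the sysprepped virtual hard disk in the VMM library
$vhd = Get-SCVirtualHardDisk | Where-Object Name -eq "WS2019-Sysprepped.vhdx"

# Reuse existing hardware and guest operating system profiles
$hwProfile = Get-SCHardwareProfile | Where-Object Name -eq "Standard-2vCPU-8GB"
$osProfile = Get-SCGuestOSProfile | Where-Object Name -eq "WS2019-Workgroup"

# Combine the three components into a template
New-SCVMTemplate -Name "WS2019-Template" -VirtualHardDisk $vhd `
    -HardwareProfile $hwProfile -GuestOSProfile $osProfile
```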
VM templates have the following limitations:
A VM template allows you to customize IP address settings, but you can only assign a static IP address from a pool to a specific VM when you deploy that VM from the template.
Application and SQL Server deployment settings are only used when you deploy a VM as part of a service.
When creating a template from an existing VM, ensure that the VM is a member of a workgroup and is not joined to a domain.
You should create a separate local administrator account on a VM before using it as the basis of a template. Using the built-in administrator account causes the sysprep operation to fail.
VMM storage
VMM can use local and remote storage. Local storage consists of storage devices that are directly attached to the server; remote storage is made available through a storage area network. VMM can use:
File storage. VMM can use file shares that support the SMB 3.0 protocol. This protocol is supported by file shares on computers running Windows Server 2012 and later. SMB 3.0 is also supported by third-party vendors of network-attached storage (NAS) devices.
Block storage. VMM can use block-level storage devices that host LUNs (Logical Unit Numbers), connected using the iSCSI, Serial Attached SCSI (SAS), or Fibre Channel protocols.
VMM supports automatically discovering local and remote storage. This includes automatic discovery of:
Storage arrays
Storage pools
Storage volumes
LUNs
Disks
Virtual disks
Using VMM, you can create new storage from storage capacity discovered by VMM and assign that storage to a Hyper-V virtualization host or host cluster. You can use VMM to provision storage to Hyper-V virtualization hosts or host clusters using the following methods:
From available capacity. Allows you to create storage volumes or LUNs from an existing storage pool.
From a writable snapshot of a virtual disk. VMM supports creating storage from writable snapshots of existing virtual disks.
From a clone of a virtual disk. You can provision storage by creating a copy of a virtual disk. This uses storage space less efficiently than creating storage from snapshots.
From SMB 3.0 file shares. You can provision storage from SMB 3.0 file shares.
VMM supports the creation of a thin provisioned logical unit on a storage pool. This allows you to allocate a greater amount of capacity than is currently available in the pool, and it is only possible when:
The storage array supports thin provisioning
The storage administrator has enabled thin provisioning for the storage pool
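As a rough sketch under the assumptions above, a thin provisioned logical unit can be created from a VMM-managed storage pool with the VMM PowerShell module; the pool name and size are placeholders, and the -ProvisioningType parameter should be verified with Get-Help New-SCStorageLogicalUnit:

```powershell
# Sketch: create a 1 TB thin provisioned LUN from a managed storage pool.
# The array must support thin provisioning and have it enabled for the pool.
Import-Module VirtualMachineManager

$pool = Get-SCStoragePool | Where-Object Name -eq "Pool01"

New-SCStorageLogicalUnit -StoragePool $pool -Name "ThinLUN01" `
    -DiskSizeMB 1048576 -ProvisioningType Thin
```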
VMM 2019 supports balancing virtual disks across cluster shared volumes (CSVs) to ensure that no single CSV is overcommitted.
VMM networking
A VMM logical network is a collection of network sites, VLAN information, and IP subnet information. A VMM deployment needs to have at least one logical network before you can use it to deploy VMs or services. When you add a Hyper-V based virtualization host to VMM, one of the following happens:
If the physical adapter is associated with an existing logical network, it remains associated with that network once added to VMM.
If the physical adapter is not already associated with a logical network, VMM creates a new logical network, associating it with the physical adapter’s DNS suffix.
You can create logical networks with the following properties:
One Connected Network. Choose this option when network sites that compose this network can route traffic to each other, and you can use this logical network as a single connected network. You have the additional option of allowing VM networks created on this logical network to use network virtualization.
VLAN-Based Independent Networks. The sites in this logical network are independent networks. The network sites that compose this network can route traffic to each other, though this is not required.
Private VLAN (PVLAN) Networks. Choose this option when you want network sites within the logical network to be isolated independent networks.
You create network sites after you have created a VMM logical network. You use network sites to associate IP subnets, VLANs, and PVLANs with a VMM logical network.
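A minimal sketch of both steps with the VMM PowerShell module: create a One Connected Network logical network with network virtualization enabled, then define a network site scoped to the All Hosts group. The names, subnet, and VLAN ID are placeholders; verify the parameters with Get-Help before use.

```powershell
# Sketch: logical network plus a network site (logical network definition).
Import-Module VirtualMachineManager

$logicalNet = New-SCLogicalNetwork -Name "Datacenter" `
    -LogicalNetworkDefinitionIsolation $false -EnableNetworkVirtualization $true

$hostGroup  = Get-SCVMHostGroup -Name "All Hosts"
$subnetVlan = New-SCSubnetVLan -Subnet "172.16.10.0/24" -VLanID 0

New-SCLogicalNetworkDefinition -Name "Datacenter-Site-MEL" -LogicalNetwork $logicalNet `
    -VMHostGroup $hostGroup -SubnetVLan $subnetVlan
```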
Logical switches
VMM logical switches store network adapter configuration settings for use with VMM-managed virtualization hosts. You configure the network adapters of one or more virtualization hosts by applying the logical switch configuration information to them.
You should perform the following tasks before creating a logical switch:
Create logical networks and define network sites.
Install the providers for any Hyper-V extensible virtual switch extensions.
Create any required native port profiles for virtual adapters that you use to define port settings for the native Hyper-V virtual switch.
When you configure a VMM logical switch, you configure the following:
Extensions
Uplinks
Virtual Ports
Extensions
You use logical switch extensions to configure how the logical switch interacts with network traffic. VMM includes the following switch extensions:
Monitoring. Allows the logical switch to monitor but not modify network traffic.
Capturing. Allows the logical switch to inspect but not modify network traffic.
Filtering. Allows the logical switch to modify, defragment, or block packets.
Forwarding. Allows the logical switch to alter the destination of network traffic based on the properties of that traffic.
Uplink port profiles
Uplink port profiles specify which set of logical networks should be associated with physical network adapters. If a virtualization host has multiple network adapters, an uplink port profile specifies whether and how those adapters participate in teaming. Teaming allows network adapters to aggregate bandwidth and provide redundancy for network connections.
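The sketch below ties these pieces together with the VMM PowerShell module: it creates a native uplink port profile that carries the network site defined earlier, creates a logical switch, and binds the uplink profile to the switch through an uplink port profile set. The names and teaming settings are placeholders, and additional logical switch parameters (such as uplink mode or SR-IOV support) may be needed in your environment; check Get-Help New-SCLogicalSwitch.

```powershell
# Sketch: uplink port profile -> logical switch -> uplink port profile set.
Import-Module VirtualMachineManager

$siteDef = Get-SCLogicalNetworkDefinition -Name "Datacenter-Site-MEL"

$uplink = New-SCNativeUplinkPortProfile -Name "Datacenter-Uplink" `
    -LogicalNetworkDefinition $siteDef `
    -LBFOLoadBalancingAlgorithm "HostDefault" -LBFOTeamMode "SwitchIndependent"

$switch = New-SCLogicalSwitch -Name "Production-Switch"

# The uplink port profile set makes the uplink profile available on the switch
New-SCUplinkPortProfileSet -Name "Production-Uplinks" -LogicalSwitch $switch `
    -NativeUplinkPortProfile $uplink
```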
Virtual port profiles
You use port classifications to apply port settings to virtual network adapters based on functionality. The following port classifications are available:
SR-IOV. Allows a virtual network adapter to use SR-IOV (Single Root Input Output Virtualization)
Host Management. For network adapters used to manage the virtualization host using RDP, PowerShell, or another management technology
Network Load Balancing. To be used with network adapters that participate in Microsoft Network Load Balancing
Guest Dynamic IP. Used with network adapters that require guest dynamic IP addresses, such as those provided by DHCP
Live Migration Workload. Used with network adapters that support VM live migration workloads between virtualization hosts
Medium Bandwidth. Assign to network adapters that need to support medium-bandwidth workloads
Host Cluster Workload. Assign to network adapters that are used to support host clusters
Low Bandwidth. Assign to network adapters that need to support low-bandwidth workloads
High Bandwidth. Assign to network adapters that are used to support high-bandwidth workloads
iSCSI Workload. Assign to network adapters that are used to connect to SAN resources using the iSCSI protocol
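If you want to see which classifications exist in your deployment before assigning them, a quick read-only check from a VMM PowerShell session is enough:

```powershell
# Sketch: list the port classifications known to VMM, including the built-in ones above.
Import-Module VirtualMachineManager

Get-SCPortClassification | Sort-Object Name | Select-Object Name
```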
Virtual machine networks
In VMM, virtual machines connect to a VMM logical network through a VMM virtual machine network. You connect a virtual machine’s network adapter to the virtual machine network rather than the logical network. You can have VMM automatically create an associated virtual machine network when you create a logical network. If you have configured a logical network to support network virtualization, you can connect multiple VM networks to the logical network, and they will be isolated from each other. Also, you can configure virtual networks to support encryption.
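As a rough sketch, the following creates an isolated VM network on top of a network-virtualization-enabled logical network, along with a VM subnet; the names and address range are placeholders:

```powershell
# Sketch: VM network using Hyper-V network virtualization for isolation.
Import-Module VirtualMachineManager

$logicalNet = Get-SCLogicalNetwork -Name "Datacenter"

$vmNet = New-SCVMNetwork -Name "TenantAlpha" -LogicalNetwork $logicalNet `
    -IsolationType "WindowsNetworkVirtualization"

$subnetVlan = New-SCSubnetVLan -Subnet "172.16.10.0/24"
New-SCVMSubnet -Name "TenantAlpha-Subnet" -VMNetwork $vmNet -SubnetVLan $subnetVlan
```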
You can use network virtualization to configure logical networks in such a manner that different VM tenants can utilize the same IP address space on the same virtualization host without collisions occurring. For example, tenant alpha and tenant beta can each use the 172.16.10.x address space even when their workloads are hosted on the same virtualization host cluster. Even though tenant alpha and tenant beta have virtual machines that use the same IPv4 address, network virtualization ensures that conflicts do not occur.
When you configure network virtualization, each network adapter is assigned two IP addresses:
Customer IP address. This IP address is the one used by the customer. The customer IP address is the address visible within the VM when you run a command such as ipconfig or Get-NetIPConfiguration.
Provider IP address. This IP address is used by and is visible to VMM. It is not visible within the VM operating system.
MAC address pools
A MAC address pool gives you a pool of MAC addresses that can be assigned to virtual machine network adapters across a group of virtualization hosts. Without MAC address pools, virtual machines are assigned MAC addresses on a per-virtualization-host basis. While unlikely, it is possible for separate virtualization hosts to assign the same MAC address in environments with a very large number of virtualization hosts. Using a central MAC address pool ensures that this doesn't happen. When creating a MAC address pool, you specify the starting and ending MAC addresses of the range.
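A minimal sketch of creating such a pool with the VMM PowerShell module follows; the pool name and MAC address range are placeholders:

```powershell
# Sketch: central MAC address pool scoped to a host group.
Import-Module VirtualMachineManager

$hostGroup = Get-SCVMHostGroup -Name "All Hosts"

New-SCMACAddressPool -Name "Central-MAC-Pool" -VMHostGroup $hostGroup `
    -MACAddressRangeStart "00:1D:D8:B7:1C:00" -MACAddressRangeEnd "00:1D:D8:B7:1F:FF"
```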
Static IP address pools
An IP address pool is a collection of IP addresses that, through an IP subnet, is associated with a network site. VMM can assign IP addresses from the static IP address pool to virtual machines running Windows operating systems if those virtual machines use the logical network associated with the pool. Static IP address pools can contain default gateway, DNS server, and WINS server information. Static IP address pools aren't mandatory; VMs can instead be assigned IP address information by DHCP servers running on the network.
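The sketch below creates a static IP address pool on the network site used in the earlier examples, including gateway and DNS information. All addresses are placeholders; confirm the parameters with Get-Help New-SCStaticIPAddressPool.

```powershell
# Sketch: static IP address pool with default gateway and DNS servers.
Import-Module VirtualMachineManager

$siteDef = Get-SCLogicalNetworkDefinition -Name "Datacenter-Site-MEL"
$gateway = New-SCDefaultGateway -IPAddress "172.16.10.1" -Automatic

New-SCStaticIPAddressPool -Name "Datacenter-Pool" -LogicalNetworkDefinition $siteDef `
    -Subnet "172.16.10.0/24" -IPAddressRangeStart "172.16.10.50" `
    -IPAddressRangeEnd "172.16.10.250" -DefaultGateway $gateway `
    -DNSServer "172.16.10.2","172.16.10.3"
```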
Private VLANS
VLANs segment network traffic by adding tags to packets. A VLAN ID is a 12-bit number, allowing you to configure VLAN IDs between 1 and 4094. While this is more than adequate for the majority of on-premises deployments, large hosting providers often have more than 5,000 clients, so they must use an alternative method to segment network traffic. A PVLAN is an extension to VLANs that uses a secondary VLAN ID with the original VLAN ID to segment a VLAN into isolated subnetworks.
You can implement VLANs and PVLANs in VMM by creating a Private VLAN logical network. Private VLAN logical networks allow you to specify the VLAN and/or PVLAN ID, as well as the IPv4 or IPv6 network.
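The following is a sketch only: it assumes that New-SCSubnetVLan accepts a secondary VLAN ID for PVLAN definitions, which you should verify with Get-Help New-SCSubnetVLan in your VMM version. The VLAN numbers and names are placeholders.

```powershell
# Sketch: PVLAN logical network with a site that pairs a primary and secondary VLAN ID.
Import-Module VirtualMachineManager

$pvlanNet  = New-SCLogicalNetwork -Name "Hoster-PVLAN" -IsPVLAN $true
$hostGroup = Get-SCVMHostGroup -Name "All Hosts"

# -SecondaryVLanID is assumed here for the PVLAN pairing; confirm before use
$subnetVlan = New-SCSubnetVLan -Subnet "192.168.50.0/24" -VLanID 100 -SecondaryVLanID 201

New-SCLogicalNetworkDefinition -Name "Hoster-PVLAN-Site" -LogicalNetwork $pvlanNet `
    -VMHostGroup $hostGroup -SubnetVLan $subnetVlan
```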
Adding a WDS server to VMM
In a scalable environment, you'll need to add additional Hyper-V host servers on a frequent basis, either as standalone servers or as part of a failover cluster, to increase your capacity. While it's possible to use another technology to deploy new Hyper-V host servers to bare metal, the advantage of integrating virtualization host deployment with VMM is that you can fully automate the process. The process works in the following general manner:
Discovery of the chassis occurs. This may be done by providing the chassis network adapter's MAC address to VMM.
The chassis performs a PXE boot and locates the Windows Deployment Services (WDS) server that you have integrated with VMM as a managed server role. When you integrate WDS with VMM, the WDS server hosts a VMM provider that handles PXE traffic from the bare metal chassis started using the VMM provisioning tool.
The VMM provider on the WDS server queries the VMM server to verify that the bare metal chassis is an authorized target for managed virtualization host deployment. In the event that the bare metal chassis isn't authorized, WDS attempts to deploy another operating system to the chassis. If that isn't possible, PXE deployment fails.
If the bare metal chassis is authorized, a special Windows PE (Preinstallation Environment) image is transmitted to the bare metal chassis. This special Windows PE image includes a VMM agent that manages the operating system deployment.
Depending on how you configure it, the VMM agent in the Windows PE image can run scripts to update firmware on the bare metal chassis, configure RAID volumes, and prepare local storage.
A specially prepared virtual hard disk (in either .vhdx or .vhd format) containing the virtualization host operating system is copied to the bare metal chassis from a VMM library server.
The VMM agent in the Windows PE image configures the bare metal chassis to boot from the newly placed virtual hard disk.
The bare metal chassis boots into the virtual hard disk. If necessary, the newly deployed operating system can obtain additional drivers not included in the virtual hard disk from a VMM library server.
Post-deployment customization of the newly deployed operating system occurs. This includes setting a name for the new host and joining an Active Directory Domain Services domain.
The Hyper-V server role is deployed, and the newly deployed virtualization host is connected to VMM and placed in a host group.
The PXE server needs to provide the PXE service through Windows Deployment Services. When you add the VMM agent to an existing Windows Deployment Services server, VMM only manages the deployment process if the computer making the request is designated as a new virtualization host by VMM.
To integrate the WDS server with VMM to function as the VMM PXE server, you need to use an account on the VMM server that is a member of the local Administrators group on the WDS server. PXE servers need to be on the same subnet as the bare metal chassis to which they deploy the virtualization host operating system.
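A minimal sketch of that integration step from a VMM PowerShell session follows; the server name and Run As account are placeholders, and the account must be a member of the local Administrators group on the WDS server:

```powershell
# Sketch: register an existing WDS server with VMM as the managed PXE server.
Import-Module VirtualMachineManager

$runAs = Get-SCRunAsAccount -Name "WDS-Admin"

Add-SCPXEServer -ComputerName "wds01.contoso.internal" -RunAsAccount $runAs
```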
VMM host groups
Host groups allow you to simplify the management of virtualization hosts by allowing you to apply the same settings across multiple hosts. VMM includes the All Hosts group by default. You can create additional host groups as required in a hierarchical structure. Child host groups inherit settings from the parent host group. However, if you move a child host group to a new parent host group, the child host group retains its original settings, except for any Performance and Resource Optimization (PRO) configuration. When you configure changes to a parent host group, VMM displays a dialog box asking whether you would like to apply the changed settings to child host groups.
You can assign network and storage resources to host groups. Host group networks are the networks assigned to the host group; these resources include IP address pools, load balancers, logical networks, and MAC address pools. Host group storage allows you to allocate logical units or storage pools that are accessible to the VMM server for a specific host group.
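As a closing sketch, the following builds a small host group hierarchy and moves an existing managed host into it; the group and host names are placeholders:

```powershell
# Sketch: create parent and child host groups, then move a Hyper-V host into the child.
Import-Module VirtualMachineManager

$allHosts  = Get-SCVMHostGroup -Name "All Hosts"
$melbourne = New-SCVMHostGroup -Name "Melbourne" -ParentHostGroup $allHosts
New-SCVMHostGroup -Name "Melbourne-Production" -ParentHostGroup $melbourne

# Move a managed Hyper-V host into the new child host group
$vmHost = Get-SCVMHost -ComputerName "hv-node-01.contoso.internal"
Move-SCVMHost -VMHost $vmHost -ParentHostGroup (Get-SCVMHostGroup -Name "Melbourne-Production")
```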