Configure and Manage Virtual Networks AZ-104: Exam Objectives Explained
To succeed on the AZ-104 objective of configuring and managing virtual networks, candidates must demonstrate a deep understanding of how software-defined networking operates within the Microsoft Cloud ecosystem. This objective represents one of the most significant portions of the Azure Administrator exam, as networking provides the fundamental plumbing for all other services, including virtual machines, storage, and databases. Mastery requires more than knowing how to click through the portal; you must understand the underlying mechanics of IP address allocation, the nuances of routing between disparate environments, and the security implications of traffic isolation. This guide breaks down the core competencies needed to design scalable, secure, and highly available network topologies that meet the rigorous standards of the AZ-104 assessment.
Planning IP Addressing and Subnet Architecture
Effective subnet planning for AZ-104 begins with a comprehensive understanding of the RFC 1918 address space. When designing a Virtual Network (VNet), you must define an address space that provides sufficient headroom for growth while strictly avoiding overlaps with other connected networks. If you plan to connect a VNet to an on-premises data center or another Azure VNet, overlapping CIDR blocks will prevent successful routing. Azure reserves five IP addresses within every subnet: the first address (network address), the second (default gateway), the third and fourth (DNS mapping), and the last (broadcast address). For an exam scenario involving a /24 subnet (256 addresses), only 251 are actually usable for your resources.
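The usable-address math above can be checked with Python's standard ipaddress module. This is a minimal sketch for exam prep, not an Azure API call; the five-address reservation is the Azure behavior described above:

```python
import ipaddress

def usable_hosts(cidr: str, azure_reserved: int = 5) -> int:
    """Usable IPs in an Azure subnet: total addresses minus Azure's
    five reserved addresses (network, gateway, two DNS, broadcast)."""
    network = ipaddress.ip_network(cidr)
    return network.num_addresses - azure_reserved

print(usable_hosts("10.0.1.0/24"))  # 256 - 5 = 251
print(usable_hosts("10.0.2.0/27"))  # 32 - 5 = 27
```

Running this against a /24 confirms the 251 figure quoted in exam scenarios.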
Candidates must be able to calculate the required size of a subnet based on the number of projected hosts and service requirements. For instance, specific Azure services require dedicated subnets with precise names, such as the GatewaySubnet for VPN Gateways or the AzureFirewallSubnet. These subnets cannot host other resources like virtual machines. Furthermore, you should plan for segmentation based on security tiers (web, application, and database) to apply granular traffic filtering. Understanding the relationship between the VNet address space and its constituent subnets is critical, as resizing a peered VNet's address space has traditionally required deleting the peering connection first.
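Sizing a subnet from a projected host count is a small arithmetic exercise: add Azure's five reserved addresses, round up to the next power of two, and convert to a prefix length. A sketch of that calculation:

```python
import math

def smallest_subnet_prefix(required_hosts: int, azure_reserved: int = 5) -> int:
    """Smallest subnet (largest prefix number) that fits required_hosts
    usable addresses after Azure's five reserved IPs are subtracted."""
    total_needed = required_hosts + azure_reserved
    host_bits = math.ceil(math.log2(total_needed))
    return 32 - host_bits

print(smallest_subnet_prefix(50))   # 55 addresses needed -> 64 total -> /26
print(smallest_subnet_prefix(251))  # 256 addresses needed -> /24
```

For 50 hosts you need 55 addresses, which rounds up to 64, i.e. a /26 subnet.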
Creating and Configuring Virtual Networks and Subnets
Once the architectural plan is finalized, the implementation phase involves defining the VNet properties and deploying subnets. In the context of the AZ-104 exam, you must be familiar with both the Azure Portal and Azure PowerShell or CLI methods for deployment. A key concept here is the Service Endpoint, which can be enabled at the subnet level to provide a direct, secure path to Azure PaaS services. When creating a VNet, you also specify the location (region) and the subscription, which are immutable after creation. Subnets act as the primary boundary for Network Security Groups (NSGs) and User Defined Routes (UDRs).
Scoring on the exam often depends on your ability to identify the correct sequence of operations. For example, to deploy a VPN Gateway, you must first create a VNet, then create the GatewaySubnet, and finally deploy the Gateway itself. You must also understand the role of IP Forwarding on a Network Interface (NIC), which is essential when a virtual machine is acting as a Network Virtual Appliance (NVA). If you fail to enable IP forwarding at the NIC level, the Azure fabric will drop any traffic where the source or destination IP does not match the NIC's own address, effectively breaking your custom routing logic.
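The IP forwarding behavior described above can be modeled conceptually: without forwarding enabled, the fabric only delivers packets whose source or destination matches the NIC's own IP. This is a simplified illustration for study purposes, not the real fabric logic:

```python
def fabric_allows(packet_src: str, packet_dst: str,
                  nic_ip: str, ip_forwarding: bool) -> bool:
    """Simplified model: with IP forwarding off, a NIC may only carry
    traffic addressed to or from its own IP address."""
    if ip_forwarding:
        return True
    return packet_src == nic_ip or packet_dst == nic_ip

# An NVA at 10.0.0.4 relaying traffic between two other hosts:
print(fabric_allows("10.1.0.4", "10.2.0.4", "10.0.0.4", False))  # False: dropped
print(fabric_allows("10.1.0.4", "10.2.0.4", "10.0.0.4", True))   # True: forwarded
```

The first call shows exactly the failure mode the exam tests: a UDR points at the NVA, but the fabric silently drops the relayed packets until forwarding is enabled on the NIC.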
Connecting Virtual Networks with VNet Peering
Configuring Regional and Global VNet Peering
The VNet peering objectives of AZ-104 focus on the ability to link two virtual networks via the Microsoft backbone infrastructure. This connection allows resources in different VNets to communicate with low latency and high bandwidth as if they were on the same network. Regional peering connects VNets in the same Azure region, while Global VNet Peering connects VNets across different geographical regions. A critical exam concept is that peering is non-transitive. If VNet A is peered with VNet B, and VNet B is peered with VNet C, VNet A and VNet C cannot communicate through VNet B unless you implement a hub-and-spoke topology with a Network Virtual Appliance or a VPN Gateway in the middle.
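The non-transitive rule can be made concrete with a tiny reachability check: only direct peering links count, so A↔B and B↔C does not imply A↔C. A minimal sketch:

```python
# Peerings are undirected but NON-transitive links: A<->B and B<->C.
peerings = {("A", "B"), ("B", "C")}

def can_communicate(vnet1: str, vnet2: str) -> bool:
    """Only a direct peering permits traffic; Azure never forwards
    through a middle VNet unless an NVA or gateway routes in the hub."""
    return (vnet1, vnet2) in peerings or (vnet2, vnet1) in peerings

print(can_communicate("A", "B"))  # True: direct peering
print(can_communicate("A", "C"))  # False: no transit through B
```

This is the exact trap in hub-and-spoke questions: two spokes peered to the same hub still cannot reach each other without an NVA or gateway in the hub.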
When configuring peering, you must manage settings for Allow forwarded traffic and Allow gateway transit. The "Use remote gateways" option allows a spoke VNet to use a VPN gateway located in a hub VNet to reach on-premises networks. This is a common architectural pattern used to centralize hybrid connectivity. You must ensure that the "Allow virtual network access" setting is enabled on both sides of the peering relationship to permit bidirectional communication. From a billing perspective, remember that while VNet creation is free, VNet peering incurs data transfer charges for both ingress and egress traffic, which is a common consideration in architectural exam questions.
Troubleshooting Peering Connectivity and Address Conflicts
Troubleshooting peering requires a systematic approach to identifying why two resources cannot communicate. The most frequent cause of peering failure is an address space overlap. Azure will prevent the creation of a peering link if any part of the address prefixes in the two VNets intersect. If you need to add a new address range to a VNet that is already peered, the traditional workflow is to remove the peering, update the address space, and then recreate the peering links; Azure has since added the ability to sync peerings after an address-space change, but exam scenarios often still assume the older process. This disruption is a significant operational consideration in exam case studies.
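Overlap detection is the first check to run when peering creation fails, and it can be reproduced locally with the standard ipaddress module:

```python
import ipaddress

def peering_would_fail(cidr_a: str, cidr_b: str) -> bool:
    """Azure rejects a peering if any address prefixes intersect."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(peering_would_fail("10.0.0.0/16", "10.0.128.0/17"))  # True: overlapping ranges
print(peering_would_fail("10.0.0.0/16", "10.1.0.0/16"))    # False: safe to peer
```

Checking every prefix pair between two VNets this way mirrors the validation Azure performs before accepting the peering.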
Another common issue involves the Effective Routes table on a Virtual Machine's NIC. If a peering link is established but communication fails, you should inspect the effective routes to ensure the peered VNet's address range appears with a "VNet Peering" next hop. If the routes are present, the bottleneck is likely a Network Security Group (NSG) or an OS-level firewall (like Windows Firewall or iptables) blocking the traffic. Candidates should be comfortable using Network Watcher Connection Troubleshoot to verify if a TCP connection can be established between two peered endpoints, as this tool provides a hop-by-hop analysis of the traffic path.
Implementing Name Resolution with Azure DNS
Configuring Public and Private DNS Zones
For the Azure DNS private zones objective, exam candidates must distinguish between public DNS, used for internet-facing records, and private DNS, used for internal name resolution within a VNet. Azure DNS Private Zones provide a reliable, secure DNS service to manage and resolve domain names in a virtual network without needing a custom DNS solution. When you create a private zone, you must link it to one or more VNets. If you enable autoregistration on the virtual network link, Azure automatically creates DNS records for the virtual machines deployed in that VNet, tracking their lifecycle by adding or removing records as VMs are created or deleted.
For the exam, understand that a single private DNS zone can be linked to multiple VNets across different regions and subscriptions. This allows for centralized name resolution in a multi-VNet environment. However, a VNet can only be linked to one private zone for autoregistration purposes. When designing these zones, you use standard DNS record types like A, AAAA, CNAME, MX, PTR, SOA, SRV, and TXT. The benefit of using Azure DNS over traditional VM-based DNS is the built-in high availability and scalability provided by the Azure fabric, eliminating the overhead of managing DNS server patches and backups.
Integrating with On-Premises DNS Using Custom DNS Servers
In many hybrid scenarios, Azure’s default DNS (168.63.129.16) is insufficient because it cannot resolve on-premises hostnames. To bridge this gap, you must configure Custom DNS servers within the VNet settings. When you specify a custom DNS server, Azure pushes that IP address to all resources in the VNet via DHCP. This is often an on-premises Domain Controller or a DNS forwarder located in Azure. The exam will likely test your knowledge of the "DNS transition" process: when you change the DNS server settings on a VNet, you must restart the VMs (or renew their DHCP leases) for the changes to take effect.
Advanced scenarios might involve the Azure DNS Private Resolver. This is a cloud-native service that allows you to query Azure DNS private zones from on-premises environments and vice versa without the complexity of managing VM-based DNS forwarders. It uses Inbound Endpoints to receive queries from on-premises and Outbound Endpoints with Forwarding Rules to send queries to on-premises DNS servers. Understanding the flow of a DNS query in a hybrid environment—from a cloud VM to a local resolver, then potentially forwarded across a VPN to an on-premises server—is essential for solving complex connectivity problems on the AZ-104 exam.
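The forwarding-rule behavior can be illustrated with a longest-suffix match: a query whose name ends in a configured domain suffix goes to the on-premises target, and everything else falls back to Azure's default resolver. The rule set and IPs below are hypothetical examples, and this sketch only models the selection logic, not actual DNS resolution:

```python
def pick_forwarder(query: str, rules: dict[str, str], default: str) -> str:
    """Choose a DNS target by longest matching domain suffix, the way
    resolver forwarding rules behave conceptually."""
    best = None
    for suffix in rules:
        if query.endswith(suffix) and (best is None or len(suffix) > len(best)):
            best = suffix
    return rules[best] if best else default

# Hypothetical rule: corp.contoso.com queries forward to on-prem DNS at 10.50.0.4.
rules = {"corp.contoso.com": "10.50.0.4"}
print(pick_forwarder("fileserver.corp.contoso.com", rules, "168.63.129.16"))
print(pick_forwarder("myvm.other.example", rules, "168.63.129.16"))
```

The fallback address 168.63.129.16 is Azure's well-known internal DNS endpoint mentioned above; unmatched names stay with Azure-provided resolution.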
Securing Network Traffic with NSGs and Service Endpoints
Designing and Applying Network Security Group Rules
Network Security Groups (NSGs) are the primary tool for filtering network traffic to and from Azure resources. An NSG contains a list of security rules that allow or deny traffic based on 5-tuple information: source, source port, destination, destination port, and protocol. NSGs can be applied to either a subnet or an individual NIC. The best practice, and the one emphasized in the AZ-104 exam, is to apply NSGs at the subnet level to simplify management. Rules are processed in priority order (lower numbers have higher priority). Once a match is found, no further rules are processed.
Every NSG contains default rules that cannot be deleted but can be overridden by higher-priority rules. These include rules that allow all traffic within the VNet and allow traffic from the Azure Load Balancer. A critical pair to remember is the DenyAllInbound and DenyAllOutBound rules at the end of the list (both at priority 65500), which block all traffic not explicitly allowed; note that the default AllowInternetOutBound rule still permits outbound internet traffic unless you override it. When configuring rules, you should use Service Tags (like Storage, SQL, or Internet) instead of specific IP addresses to simplify rule maintenance. For example, using the VirtualNetwork service tag encompasses all address ranges defined in the VNet, its peered VNets, and connected on-premises networks.
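The first-match, priority-ordered evaluation described above can be sketched as a short simulation. The rule set is an illustrative example, and the matching logic is simplified to direction, protocol, and destination port:

```python
# (priority, direction, protocol, dest_port, action) — simplified rule tuples.
# None for dest_port means "any port"; "*" means "any protocol".
rules = [
    (100, "Inbound", "Tcp", 443, "Allow"),
    (200, "Inbound", "Tcp", 3389, "Deny"),
    (65500, "Inbound", "*", None, "Deny"),   # built-in DenyAllInbound
]

def evaluate(direction: str, protocol: str, port: int) -> str:
    """Rules are checked in ascending priority order; the first match
    wins and all later rules are ignored."""
    for _, rdir, rproto, rport, action in sorted(rules):
        if rdir == direction and rproto in (protocol, "*") and rport in (port, None):
            return action
    return "Deny"

print(evaluate("Inbound", "Tcp", 443))   # Allow: matched by rule 100
print(evaluate("Inbound", "Tcp", 22))    # Deny: falls through to 65500
```

Port 22 traffic illustrates the default-deny behavior: nothing explicitly allows it, so the built-in rule at priority 65500 catches it.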
Securing PaaS Services with Virtual Network Service Endpoints
Service Endpoints extend your virtual network private address space and the identity of your VNet to Azure PaaS services over a direct connection. This ensures that traffic from your VNet to the Azure service (such as Azure Storage or Azure SQL) always stays within the Microsoft Azure backbone network. By enabling a service endpoint for Microsoft.Storage on a subnet, you can then configure the Storage Account firewall to only allow traffic from that specific subnet. This effectively "locks down" the PaaS service, preventing any access from the public internet.
On the exam, you must distinguish between Service Endpoints and Private Link. While Service Endpoints keep traffic on the backbone, the PaaS service still maintains a public IP address. In contrast, Private Link (and Private Endpoints) assigns a private IP address from your VNet to the PaaS service, making it appear as a local resource. Service Endpoints are easier to set up but do not support access from on-premises via VPN or ExpressRoute. If a question asks how to provide secure access to a SQL database from an on-premises server using a private IP, Private Link is the correct answer; if it asks for a cost-effective way to secure a VNet's access to Storage, Service Endpoints are the solution.
Establishing Hybrid Connectivity with VPN Gateway
Deploying and Configuring Site-to-Site VPN Connections
VPN Gateway configuration on the AZ-104 exam requires knowledge of how to connect an on-premises network to an Azure VNet over the public internet using an encrypted tunnel. A Site-to-Site (S2S) VPN requires a VPN Gateway in Azure and a customer gateway device on-premises. The Azure VPN Gateway must be deployed into the specifically named GatewaySubnet. There are different SKUs available (e.g., VpnGw1, VpnGw2), each offering different performance tiers and features. For most modern implementations, Route-based VPNs are preferred over Policy-based VPNs because they support features like IKEv2, Point-to-Site connections, and BGP.
To establish an S2S connection, you must create a Local Network Gateway in Azure, which represents the on-premises VPN device's public IP and its local address prefixes. The "Connection" resource then links the Virtual Network Gateway and the Local Network Gateway using a shared key. If you are using Border Gateway Protocol (BGP), Azure can dynamically learn on-premises routes, which is vital for complex networks with multiple subnets. This eliminates the need to manually update the Local Network Gateway every time a new subnet is added to the local data center. Remember that Microsoft recommends a GatewaySubnet of /27 or larger to accommodate future gateway scaling or additional configurations.
Enabling Remote User Access with Point-to-Site VPN
Point-to-Site (P2S) VPN allows individual client computers to connect securely to an Azure VNet from any location. This is particularly useful for remote administrators. To configure P2S, you must define a Client Address Pool, which is a range of private IP addresses that will be assigned to the connecting clients. These addresses must not overlap with the VNet or on-premises ranges. The AZ-104 exam often covers authentication methods for P2S, which include Azure certificate-based authentication, RADIUS, or Microsoft Entra ID (formerly Azure AD) authentication.
If using certificates, you must generate a Root Certificate and upload the public key (.cer file) to the Azure VPN Gateway. Each client then requires a Client Certificate installed in their local certificate store. For Entra ID authentication, the OpenVPN protocol must be used, and the user must consent to the Azure VPN app. Once configured, you download the VPN client configuration package from the portal, which contains the settings required for the client machine to establish the tunnel. This "teleworker" scenario is a staple of Azure administration, and you should be prepared to identify the steps for troubleshooting a failed connection, such as expired certificates or mismatched client address pools.
Designing High-Bandwidth Connections with ExpressRoute
Comparing ExpressRoute and Site-to-Site VPN for AZ-104
When evaluating ExpressRoute versus Site-to-Site VPN for AZ-104, the choice typically hinges on bandwidth, latency, and security requirements. A Site-to-Site VPN is generally faster to deploy and more cost-effective as it uses the public internet. However, its performance is subject to internet congestion. In contrast, ExpressRoute provides a dedicated, private connection to Microsoft's global network through a connectivity provider. It does not traverse the public internet, offering higher reliability, faster speeds (up to 100 Gbps with ExpressRoute Direct), and lower latencies.
For the exam, remember that ExpressRoute is suited for enterprise-grade connectivity, large data migrations, and high-volume backup/recovery. While a VPN Gateway's throughput is capped at 10 Gbps (in specific configurations), ExpressRoute scales much higher. Another key difference is the routing: ExpressRoute uses BGP exclusively to manage routes between on-premises and Azure. If a scenario requires a "predictable" network experience or must meet strict compliance standards that forbid traffic from touching the public internet, ExpressRoute is the mandatory choice. You may also see "VPN as a backup for ExpressRoute" scenarios, where a S2S VPN is configured to take over if the ExpressRoute circuit fails.
Configuring ExpressRoute Gateway and Monitoring Circuit Health
Implementing ExpressRoute involves several distinct components: the ExpressRoute Circuit, the Virtual Network Gateway (specifically an "ExpressRoute" type, not "VPN"), and the connection between them. The circuit is a logical connection between your on-premises infrastructure and Microsoft Cloud through a provider. Each circuit has a unique Service Key (s-key), which you must provide to your connectivity provider to initiate the provisioning process. Once the provider completes their side, the circuit status will change to "Provisioned."
Monitoring the health of an ExpressRoute circuit is a critical administrative task. You should be familiar with the ExpressRoute Direct and Global Reach features; Global Reach allows you to connect two on-premises networks via their ExpressRoute circuits, using the Microsoft backbone as a transit network. To monitor performance, Azure provides metrics through Azure Monitor, showing bits in/out and dropped packets. For deeper insights, Network Performance Monitor (NPM) could track latency and loss across the circuit's peerings (private or Microsoft peering; the older public peering is deprecated), though NPM has since been retired in favor of Connection Monitor in Network Watcher. Understanding how to interpret the "Circuit Status" and "Provider Status" is essential for pinpointing whether a connectivity issue lies with Microsoft, the provider, or the local hardware.