The Essential AWS SAA Cheat Sheet for Key Exam Concepts
Mastering the SAA-C03 exam requires more than just a surface-level understanding of cloud computing; it demands a precise grasp of how various services interact to form resilient, cost-effective architectures. This AWS SAA cheat sheet serves as a high-level technical synthesis designed for candidates in the final stages of their preparation. By focusing on the nuances of service limits, performance trade-offs, and architectural decision-making, this guide bridges the gap between theoretical knowledge and the scenario-based questions encountered on the actual exam. Whether you are validating your understanding of VPC peering or refining your strategy for database scaling, these notes provide the technical depth necessary to navigate complex multi-tier application designs and security requirements effectively.
AWS SAA Cheat Sheet: Core Compute and Networking Services
EC2 Instance Types, Pricing Models, and Auto Scaling Triggers
Elastic Compute Cloud (EC2) remains a cornerstone of the SAA exam, requiring candidates to differentiate between instance families and purchasing options based on workload characteristics. On-Demand Instances are best for short-term, unpredictable workloads, whereas Reserved Instances (RIs) and Savings Plans offer significant discounts for committed usage over one or three years. For fault-tolerant, stateless applications, Spot Instances provide the highest cost savings (up to 90%) but carry the risk of a two-minute interruption notice. Understanding Launch Templates vs. Launch Configurations is also vital: templates are versioned, support newer features such as T2/T3 Unlimited and Capacity Reservations, and are now recommended over launch configurations, which no longer receive new features.
Scaling is managed via Auto Scaling Groups (ASG), which utilize scaling policies to maintain application availability. Target Tracking Scaling adjusts capacity based on a specific metric, such as average CPU utilization, while Step Scaling responds to CloudWatch alarms with incremental adjustments. It is critical to recognize the Cooldown Period, which prevents the ASG from launching or terminating additional instances before the previous scaling activity takes effect. In exam scenarios, always look for the requirement of "lowest cost" vs. "highest availability" to choose between Spot and On-Demand strategies within an ASG.
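The proportional adjustment behind target tracking can be sketched in a few lines. This is a simplification of what the service actually computes; the function name and bounds handling are illustrative:

```python
import math

def target_tracking_capacity(current_capacity: int, metric_value: float,
                             target_value: float, max_size: int, min_size: int) -> int:
    """Scale capacity so the average per-instance metric returns to the target,
    clamped to the ASG's min/max size (simplified model)."""
    desired = math.ceil(current_capacity * metric_value / target_value)
    return max(min_size, min(max_size, desired))

# 4 instances averaging 90% CPU against a 60% target: scale out to 6
print(target_tracking_capacity(4, 90.0, 60.0, max_size=10, min_size=2))  # 6
```

Note how the same formula also scales in: at 30% average CPU the desired capacity drops to the minimum of 2, which is where the cooldown period matters to avoid flapping.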
VPC Components: Subnets, Route Tables, NACLs, and Security Groups
Networking in AWS centers on the Virtual Private Cloud (VPC), a logically isolated section of the AWS Cloud. Candidates must understand the functional differences between Security Groups and Network Access Control Lists (NACLs). Security Groups are stateful, meaning return traffic is automatically allowed, and operate at the instance level. Conversely, NACLs are stateless, requiring explicit rules for both inbound and outbound traffic, and operate at the subnet level. This distinction is crucial for troubleshooting connectivity issues in multi-tier architectures.
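NACL rules are numbered and evaluated from the lowest number up, with the first match deciding the packet's fate and an implicit deny catching everything else. The sketch below models that behavior (ports only, no CIDR matching; the function name is invented):

```python
def nacl_evaluate(rules, port):
    """Check numbered NACL rules in ascending order; the first rule whose port
    range matches wins. Unmatched traffic hits the implicit deny ('*' rule)."""
    for rule_number, (low, high), action in sorted(rules):
        if low <= port <= high:
            return action
    return "DENY"  # implicit deny

rules = [
    (100, (443, 443), "ALLOW"),   # HTTPS inbound
    (200, (0, 65535), "DENY"),    # explicitly deny everything else
]
print(nacl_evaluate(rules, 443))  # ALLOW
print(nacl_evaluate(rules, 22))   # DENY
```

Because NACLs are stateless, an equivalent rule set must also exist for the return traffic (ephemeral ports), which is a classic exam trap.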
Routing is governed by Route Tables, which direct traffic from subnets to destinations like the Internet Gateway (IGW) for public subnets or a NAT Gateway for private subnets. Remember that a NAT Gateway must reside in a public subnet and requires an Elastic IP address to function. For internal communication between VPCs, VPC Peering or Transit Gateway is used. While peering is a one-to-one connection that does not support transitive routing, Transit Gateway acts as a hub-and-spoke model, simplifying management for complex, multi-account environments. High-scoring candidates will also recall that VPC Endpoints (Interface and Gateway types) allow private connectivity to AWS services without traversing the public internet.
Load Balancer Comparison: ALB vs. NLB vs. GLB
Elastic Load Balancing (ELB) distributes incoming traffic across multiple targets. The Application Load Balancer (ALB) operates at Layer 7 (HTTP/HTTPS) and is ideal for microservices and container-based applications, supporting path-based and host-based routing. It also integrates with AWS WAF for web application security. The Network Load Balancer (NLB) operates at Layer 4 (TCP/UDP/TLS) and is designed for ultra-high performance, low latency, and the ability to handle millions of requests per second. A key feature of the NLB is its ability to provide a static IP address per Availability Zone, which is often a requirement for whitelisting in on-premises firewalls.
For third-party virtual appliances, the Gateway Load Balancer (GLB) is the primary choice, combining a transparent network gateway with load balancing capabilities. This is frequently used for scaling security appliances like firewalls or deep packet inspection tools. On the exam, the choice often hinges on the protocol: use ALB for advanced routing of web traffic and NLB for raw performance or non-HTTP protocols. Note that Cross-Zone Load Balancing is enabled by default for ALB but must be manually configured for NLB to ensure even traffic distribution across all registered targets in different AZs.
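As a quick decision aid, the protocol-driven choice described above can be condensed into a sketch. This is deliberately simplified; real designs also weigh WAF integration, TLS termination, and cost:

```python
def choose_load_balancer(protocol: str, needs_static_ip: bool = False,
                         inline_appliances: bool = False) -> str:
    """Simplified exam-style selector for the three ELB types."""
    if inline_appliances:
        return "GLB"   # transparent gateway for firewalls / packet inspection
    if needs_static_ip or protocol.upper() in ("TCP", "UDP", "TLS"):
        return "NLB"   # Layer 4, static IP per AZ, extreme throughput
    return "ALB"       # Layer 7 routing for HTTP/HTTPS

print(choose_load_balancer("HTTP"))                       # ALB
print(choose_load_balancer("TCP", needs_static_ip=True))  # NLB
```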
Storage and Database Service Quick Reference
S3 Storage Classes: Durability, Availability, and Cost Trade-Offs
Amazon S3 is a foundational service offering 99.999999999% (11 nines) of durability. For storage questions, the key trade-off is access frequency versus cost. S3 Standard is for frequently accessed data, while S3 Intelligent-Tiering automatically moves objects between tiers based on changing access patterns, making it the best choice for data with unknown or unpredictable access patterns. For long-term retention, S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive offer the lowest costs, though retrieval times range from minutes to 12 hours.
Lifecycle policies are essential for automating the transition of objects between these classes to optimize spend. For example, a policy might move logs from S3 Standard to S3 One Zone-IA after 30 days and then to Glacier after 90 days. Be aware of the minimum storage duration and minimum billable object size for classes like IA (Infrequent Access), which can lead to unexpected costs if applied to small, short-lived files. In disaster recovery scenarios, S3 Cross-Region Replication (CRR) is the go-to feature for meeting low Recovery Point Objective (RPO) requirements across different geographic locations.
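The example lifecycle (Standard, then One Zone-IA at 30 days, then Glacier at 90) maps object age to a storage class like this. The thresholds are the example values above, and the class strings mirror the S3 API constants:

```python
def lifecycle_storage_class(age_days: int) -> str:
    """Return the storage class an object would occupy under the example
    lifecycle policy: Standard -> One Zone-IA at 30 days -> Glacier at 90."""
    if age_days >= 90:
        return "GLACIER"
    if age_days >= 30:
        return "ONEZONE_IA"
    return "STANDARD"

print(lifecycle_storage_class(10))   # STANDARD
print(lifecycle_storage_class(45))   # ONEZONE_IA
print(lifecycle_storage_class(120))  # GLACIER
```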
EBS Volume Types and Use Cases (gp3, io2, st1)
Elastic Block Store (EBS) provides persistent block storage for EC2 instances. The General Purpose SSD (gp3) is the default choice, offering a balance of price and performance with the ability to scale IOPS and throughput independently of storage capacity. For mission-critical applications requiring sustained high IOPS (up to 256,000 per volume), Provisioned IOPS SSD (io2 Block Express) is the standard. This is typically used for large relational databases or NoSQL environments where low latency is non-negotiable.
Hard Disk Drive (HDD) options include Throughput Optimized HDD (st1), designed for frequently accessed, throughput-intensive workloads like Big Data or Log processing, and Cold HDD (sc1) for less frequent access. Note that HDD volumes cannot be used as boot volumes. A critical exam concept is EBS Multi-Attach, which allows an io1/io2 volume to be attached to multiple Nitro-based instances in the same AZ, facilitating high-availability for cluster-aware applications. Furthermore, EBS Snapshots are incremental and stored in S3, providing a reliable backup mechanism that can be shared across accounts or regions.
Database Decision Matrix: RDS, DynamoDB, Redshift, Aurora
Choosing the right database depends on the data structure and access patterns. Amazon RDS is the managed service for relational databases (SQL), supporting engines like MySQL and PostgreSQL. For high-end relational needs, Amazon Aurora provides up to 5x the throughput of standard MySQL and features a self-healing, auto-scaling storage system. DynamoDB is the premier NoSQL choice, offering single-digit millisecond latency at any scale. It is a key component of serverless architectures and uses DynamoDB Accelerator (DAX) for even faster in-memory caching.
For analytical workloads and data warehousing, Amazon Redshift uses columnar storage and parallel processing to run complex queries on petabytes of data. If the scenario involves "real-time" session data or caching to reduce database load, Amazon ElastiCache (Redis or Memcached) is the correct answer. The SAA exam often tests the ability to distinguish between OLTP (Online Transactional Processing) via RDS and OLAP (Online Analytical Processing) via Redshift. Remember that RDS is not globally distributed by default, but Aurora Global Database can provide fast local reads across multiple regions with a typical latency of less than one second.
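A minimal decision map for these engines might look like the following sketch; the workload labels are invented shorthand for common exam phrasings:

```python
def pick_database(workload: str) -> str:
    """Map a workload shorthand to the service this section recommends."""
    mapping = {
        "relational_oltp": "RDS",
        "relational_high_throughput": "Aurora",
        "key_value_any_scale": "DynamoDB",
        "analytics_olap": "Redshift",
        "in_memory_cache": "ElastiCache",
    }
    return mapping.get(workload, "re-read the requirements")

print(pick_database("analytics_olap"))      # Redshift
print(pick_database("key_value_any_scale")) # DynamoDB
```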
Security, Identity, and Compliance Essentials
IAM Policy Structure and Key Permission Best Practices
Identity and Access Management (IAM) is the gatekeeper of AWS resources. An IAM policy is a JSON document built from elements such as Effect (Allow/Deny), Action (e.g., s3:GetObject), Resource (ARN), and an optional Condition. The Principle of Least Privilege is the most important best practice, ensuring users have only the permissions necessary to perform their tasks. For cross-account access, IAM Roles are preferred over long-term credentials, as they provide temporary security tokens through the AWS Security Token Service (STS).
When managing multiple accounts, AWS Organizations allows for centralized billing and policy management. Service Control Policies (SCPs) are used to set permission guardrails at the account or Organizational Unit (OU) level. It is vital to remember that an SCP does not grant permissions; it only limits what the IAM users and roles in those accounts can do. Even if an IAM user has AdministratorAccess, if an SCP denies a specific action, the user cannot perform it. This "explicit deny" always overrides any "allow" in the evaluation logic, a concept frequently tested in complex security scenarios.
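The evaluation logic above (an explicit deny beats any allow, and an SCP filters rather than grants) can be modeled in miniature. Real IAM evaluation also involves resource policies, permission boundaries, and conditions, so treat this as a sketch:

```python
def evaluate(statements, action):
    """Return 'Deny', 'Allow', or 'ImplicitDeny' for one policy's statements.
    Statements are (effect, action) pairs; '*' matches any action."""
    allowed = False
    for effect, act in statements:
        if act in (action, "*"):
            if effect == "Deny":
                return "Deny"  # explicit deny short-circuits everything
            allowed = True
    return "Allow" if allowed else "ImplicitDeny"

def is_permitted(action, identity_statements, scp_statements):
    """The SCP must allow (it never grants on its own) AND the identity
    policy must allow; an explicit Deny in either blocks the call."""
    return (evaluate(scp_statements, action) == "Allow"
            and evaluate(identity_statements, action) == "Allow")

# AdministratorAccess-style Allow-* still loses to an SCP Deny on the action
admin = [("Allow", "*")]
scp_deny_s3 = [("Allow", "*"), ("Deny", "s3:DeleteBucket")]
print(is_permitted("s3:DeleteBucket", admin, scp_deny_s3))    # False
print(is_permitted("ec2:StartInstances", admin, scp_deny_s3)) # True
```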
Encryption Services: KMS, CloudHSM, and Certificate Manager
Data protection involves encryption at rest and in transit. AWS Key Management Service (KMS) is a managed service that makes it easy to create and control cryptographic keys. KMS uses KMS keys (formerly called Customer Master Keys, or CMKs), which can be AWS-managed or customer-managed. For regulatory requirements demanding dedicated hardware, AWS CloudHSM provides FIPS 140-2 Level 3 compliant hardware security modules. Understanding the difference is key: KMS is a multi-tenant, shared service, while CloudHSM provides dedicated hardware control.
For encryption in transit, AWS Certificate Manager (ACM) handles the complexity of creating, storing, and renewing SSL/TLS certificates. Certificates provided by ACM can be deployed on ALBs, CloudFront distributions, and API Gateways. If the exam mentions "end-to-end encryption," you must ensure encryption is handled not just at the load balancer, but also from the load balancer to the backend instances. Additionally, Secrets Manager is often grouped here; it allows for the secure storage and automatic rotation of sensitive credentials, such as database passwords, which integrates directly with RDS.
Monitoring and Auditing Tools: CloudTrail, Config, and GuardDuty
Visibility is provided through several distinct services. AWS CloudTrail records API calls made within an account, providing a history of "who did what, when, and from where." This is the primary tool for auditing and compliance. AWS Config provides a detailed inventory of AWS resources and a history of configuration changes. It allows you to define "Rules" (e.g., "all EBS volumes must be encrypted") and can trigger automated remediations via Lambda functions or Systems Manager Automation.
For threat detection, Amazon GuardDuty uses machine learning to monitor VPC Flow Logs, DNS logs, and CloudTrail events for malicious activity, such as crypto-mining or unusual data exfiltration patterns. Amazon Inspector focuses on automated security assessments for EC2 instances and ECR images, looking for vulnerabilities or deviations from best practices. On the exam, distinguish these by their purpose: CloudTrail for "who called the API," Config for "compliance and resource history," and GuardDuty for "intelligent threat detection."
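A Config rule like the EBS-encryption example above boils down to a compliance check over a recorded configuration item. This sketch uses an invented function name and a simplified volume dict:

```python
def ebs_encryption_rule(volume: dict) -> str:
    """Sketch of the check a Config rule such as 'encrypted-volumes' performs:
    inspect the recorded configuration item and report compliance."""
    return "COMPLIANT" if volume.get("Encrypted") else "NON_COMPLIANT"

print(ebs_encryption_rule({"VolumeId": "vol-1", "Encrypted": True}))   # COMPLIANT
print(ebs_encryption_rule({"VolumeId": "vol-2", "Encrypted": False}))  # NON_COMPLIANT
```

In a real deployment the NON_COMPLIANT result is what triggers the automated remediation (Lambda or Systems Manager Automation) mentioned above.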
Architectural Patterns for High Availability and Disaster Recovery
Multi-AZ vs. Read Replicas for Database Resilience
High availability in RDS is achieved through Multi-AZ Deployments. This creates a synchronous standby instance in a different Availability Zone. In the event of a failure, AWS automatically updates the DNS record to point to the standby, so no manual intervention is required for failover. This is strictly a high-availability feature and does not improve read performance, as the standby instance cannot accept traffic.
To scale read-heavy workloads, Read Replicas are used. These are asynchronous copies of the primary database. While they improve performance, they are not primarily a high-availability feature, although a Read Replica can be promoted to a standalone database if the primary fails. For the SAA-C03, the distinction is clear: Multi-AZ is for durability and high availability (synchronous), while Read Replicas are for performance and scalability (asynchronous). Aurora takes this further by using a shared storage layer, making both replication and failover significantly faster than standard RDS.
Disaster Recovery Strategies: Backup & Restore to Multi-Site Active-Active
Disaster Recovery (DR) strategies are evaluated based on RTO (Recovery Time Objective) and RPO (Recovery Point Objective). The simplest and cheapest is Backup and Restore, which has the highest RTO/RPO. The Pilot Light approach maintains a minimal version of the environment (usually just the data) always running in a secondary region. Warm Standby keeps a scaled-down but functional version of the full environment running, allowing for faster failover than Pilot Light.
The most expensive and resilient strategy is Multi-Site Active-Active, where traffic is balanced across two or more regions simultaneously. This provides near-zero RTO/RPO. For the exam, you must match the strategy to the business requirement. If the question emphasizes cost-effectiveness and can tolerate hours of downtime, Backup and Restore is the answer. If the requirement is "minimal downtime" or "critical business continuity," Warm Standby or Multi-Site is required. Route 53 Health Checks and Failover Routing Policies are the technical mechanisms that enable these cross-region transitions.
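Matching RTO tolerance to a DR pattern can be sketched as a threshold check. The hour thresholds here are illustrative assumptions, not official AWS guidance:

```python
def dr_strategy(rto_hours: float, budget_sensitive: bool) -> str:
    """Pick one of the four DR patterns from tolerated downtime (RTO)."""
    if rto_hours >= 24 and budget_sensitive:
        return "Backup and Restore"       # cheapest, highest RTO/RPO
    if rto_hours >= 4:
        return "Pilot Light"              # minimal core always running
    if rto_hours >= 1:
        return "Warm Standby"             # scaled-down full environment
    return "Multi-Site Active-Active"     # near-zero RTO/RPO, highest cost

print(dr_strategy(48, budget_sensitive=True))    # Backup and Restore
print(dr_strategy(0.1, budget_sensitive=False))  # Multi-Site Active-Active
```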
Serverless Design Patterns with Lambda and API Gateway
Serverless architectures eliminate the need to manage underlying servers, shifting the focus to code and events. AWS Lambda is the core compute service, executing code in response to triggers like S3 uploads or DynamoDB updates. It has a maximum execution time of 15 minutes and scales horizontally automatically. Amazon API Gateway acts as the "front door," allowing developers to create, publish, and secure APIs. It supports throttling, caching, and versioning, which are essential for maintaining stable performance under load.
Common patterns include the Serverless Web Backend, where API Gateway triggers Lambda, which then interacts with DynamoDB. This pattern is highly scalable and cost-effective because you only pay for the execution time and requests. For long-running or complex workflows, AWS Step Functions is used to orchestrate multiple Lambda functions into a state machine. This prevents the "spaghetti code" of functions calling functions and provides a visual workflow for error handling and retries. Remember that Lambda functions in a VPC require a NAT Gateway to access the public internet, a common "gotcha" in networking scenarios.
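The API Gateway → Lambda → DynamoDB backend described above might look like this handler sketch. The `table` parameter stands in for a boto3 DynamoDB Table resource and is injected so the example runs without AWS:

```python
import json

def handler(event, context, table=None):
    """Serverless web backend sketch: look up an item by the path parameter
    and return an API Gateway proxy-style response."""
    item_id = event["pathParameters"]["id"]
    item = table.get_item(Key={"id": item_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item)}

class FakeTable:
    """Stub standing in for a DynamoDB Table resource."""
    def get_item(self, Key):
        return {"Item": {"id": Key["id"], "name": "demo"}} if Key["id"] == "42" else {}

resp = handler({"pathParameters": {"id": "42"}}, None, table=FakeTable())
print(resp["statusCode"])  # 200
```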
Cost Optimization and Migration Strategies
Compute Savings: Reserved Instances vs. Savings Plans vs. Spot Instances
Cost optimization is a fundamental pillar of the AWS Well-Architected Framework. Savings Plans offer more flexibility than traditional RIs: Compute Savings Plans apply to usage across EC2, Lambda, and Fargate regardless of instance family or region. EC2 Instance Savings Plans provide higher discounts (up to 72%) but require a commitment to a specific instance family within a region.
For the exam, the strategy usually involves a tiered approach: use Reserved Instances or Savings Plans for baseline, predictable load; use Auto Scaling with On-Demand instances for the variable part of the load; and use Spot Instances for any background, batch, or non-critical processing that can handle interruptions. This combination ensures maximum availability at the lowest possible price point. Always look for keywords like "steady state" for RIs and "flexible start/end times" for Spot Instances.
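The tiered purchasing strategy can be sanity-checked with simple blended-cost arithmetic. Every rate and discount below is a made-up assumption for illustration, not AWS pricing:

```python
def monthly_compute_cost(baseline_hrs, variable_hrs, spot_hrs,
                         on_demand_rate=0.10, ri_discount=0.40, spot_discount=0.70):
    """Blend the three tiers: committed (RI/Savings Plan) for baseline load,
    On-Demand for the variable part, Spot for interruptible work."""
    cost = baseline_hrs * on_demand_rate * (1 - ri_discount)   # committed tier
    cost += variable_hrs * on_demand_rate                      # On-Demand tier
    cost += spot_hrs * on_demand_rate * (1 - spot_discount)    # Spot tier
    return round(cost, 2)

# 720 h baseline + 200 h variable + 300 h batch at the assumed rates
print(monthly_compute_cost(baseline_hrs=720, variable_hrs=200, spot_hrs=300))
```

Running the same hours purely On-Demand would cost 122.00 under these assumptions, which is the kind of gap exam scenarios expect you to close.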
Trusted Advisor Checks and Cost Explorer for Analysis
AWS Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices. It inspects your environment and makes recommendations across five categories: Cost Optimization, Performance, Security, Fault Tolerance, and Service Limits. For example, it can identify unassociated Elastic IP addresses or underutilized EBS volumes.
AWS Cost Explorer is used for visualizing and analyzing your AWS spend over time. It allows you to create custom reports and forecasts, helping identify trends and anomalies. For granular cost tracking, Cost Allocation Tags are essential; they allow you to categorize costs by department, project, or environment. In exam questions, if you need to "identify which department is spending the most," the answer involves tagging and Cost Explorer. If you need "automated recommendations for security and cost," Trusted Advisor is the tool of choice.
The 6 Rs of Cloud Migration: Rehost, Replatform, Refactor, etc.
When moving to the cloud, AWS defines several strategies known as the "6 Rs." Rehosting (Lift-and-Shift) involves moving applications to the cloud without any changes, often using AWS Application Migration Service (MGN). Replatforming (Lift-and-Reshape) involves making minor optimizations, such as moving a self-managed database to Amazon RDS. Refactoring involves re-architecting the application to be cloud-native, often using serverless or microservices.
Other strategies include Retiring (turning off useless apps), Retaining (keeping apps on-premises for now), and Repurchasing (moving to a SaaS model). For large-scale data migration, AWS Snowball Edge and Snowmobile are used when moving terabytes or petabytes of data over the internet is impractical due to bandwidth constraints. AWS DataSync is the preferred tool for online data transfer between on-premises storage and S3, EFS, or FSx, offering speeds up to 10x faster than open-source tools.
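The bandwidth constraint that pushes a migration toward Snowball is easy to quantify. This back-of-envelope helper uses decimal units and an assumed sustained utilization:

```python
def transfer_days(data_tb: float, link_mbps: float, utilization: float = 0.8) -> float:
    """Days needed to push `data_tb` terabytes over a `link_mbps` link,
    assuming the link sustains `utilization` of its rated throughput."""
    bits = data_tb * 8e12                                # TB -> bits (decimal)
    seconds = bits / (link_mbps * 1e6 * utilization)
    return round(seconds / 86400, 1)

# 100 TB over a 1 Gbps link at 80% utilization: roughly 11.6 days
print(transfer_days(100, 1000))
```

At petabyte scale the same math yields months of transfer time, which is exactly when physical devices beat the network.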
Integration and Deployment Services at a Glance
Messaging and Queuing: SQS, SNS, and EventBridge Differences
Decoupling components is vital for building resilient systems. Amazon SQS (Simple Queue Service) is a distributed message queuing service. Standard Queues offer nearly unlimited throughput and at-least-once delivery, while FIFO Queues ensure messages are processed exactly once in the order they were sent. SQS is used to "buffer" requests, preventing a spike in traffic from overwhelming downstream services.
Amazon SNS (Simple Notification Service) is a pub/sub messaging service. It pushes messages to multiple subscribers (SQS, Lambda, HTTP, Email) simultaneously. Amazon EventBridge is a serverless event bus that makes it easy to connect applications using data from your own apps, integrated SaaS apps, and AWS services. Unlike SNS, EventBridge allows for advanced filtering and routing based on the content of the event. On the exam, use SQS for "decoupling/polling" and SNS or EventBridge for "fan-out/reactive" architectures.
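EventBridge's content-based routing, the key differentiator from SNS noted above, can be modeled as a simplified pattern match. Real event patterns also support nesting, prefix matching, and numeric ranges:

```python
def matches(pattern: dict, event: dict) -> bool:
    """Simplified EventBridge filtering: every pattern key must be present in
    the event, and the event's value must be one of the listed candidates."""
    return all(event.get(key) in candidates for key, candidates in pattern.items())

pattern = {"source": ["aws.ec2"],
           "detail-type": ["EC2 Instance State-change Notification"]}
event = {"source": "aws.ec2",
         "detail-type": "EC2 Instance State-change Notification",
         "detail": {"state": "terminated"}}
print(matches(pattern, event))                 # True
print(matches(pattern, {"source": "aws.s3"}))  # False
```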
CI/CD Pipeline Components: CodeCommit, CodeBuild, CodeDeploy, CodePipeline
AWS provides a suite of tools for Continuous Integration and Continuous Deployment. CodeCommit is a managed source control service based on Git. CodeBuild compiles source code, runs tests, and produces software packages, scaling automatically so you don't have to manage build servers. CodeDeploy automates code deployments to EC2, Fargate, Lambda, or on-premises servers, supporting deployment strategies like Canary and Blue/Green to minimize downtime.
AWS CodePipeline is the orchestrator that automates the steps required to release software. It links the other "Code" services together into a cohesive workflow. For the SAA exam, focus on the deployment strategies: Blue/Green deployment involves creating a new environment (Green) and shifting traffic to it, allowing for an easy rollback (returning to Blue) if issues arise. This is a common requirement for "zero-downtime" updates in high-availability scenarios.
Infrastructure as Code: CloudFormation vs. Terraform (Conceptual)
AWS CloudFormation allows you to model and set up your AWS resources using JSON or YAML templates. This "Infrastructure as Code" (IaC) approach ensures consistency and repeatability across environments. Key concepts include Stacks (a collection of resources managed as a single unit), StackSets (for deploying stacks across multiple accounts and regions), and Change Sets (to preview changes before they are applied).
While Terraform is a third-party tool by HashiCorp that supports multiple cloud providers, the exam focuses on CloudFormation's integration with the AWS ecosystem. Understanding Intrinsic Functions (like !Ref or !GetAtt) and Mappings is essential for customizing templates. If a scenario asks how to quickly replicate a production environment for testing, CloudFormation is the standard answer. It prevents manual configuration errors and provides a "source of truth" for the infrastructure state.
Using Your Cheat Sheet for Effective Final Review
Creating Mental Maps from Service Lists to Scenarios
In the final days before the exam, transition from memorizing definitions to building mental maps. For every service in your last-minute review notes, ask: "What problem does this solve?" and "What are its limitations?" For example, instead of just knowing what EFS is, map it to the scenario of "multiple EC2 instances needing shared, scalable, POSIX-compliant file storage."
This exercise pays off in the pattern-recognition phase, where you combine services into solutions. If a question describes a high-traffic web app with a global audience, your mental map should immediately link Route 53 (latency routing) → CloudFront (caching) → ALB (distribution) → EC2/ASG (compute). This pattern recognition is the fastest way to eliminate incorrect distractors on the exam, as AWS often includes services that sound plausible but don't fit the specific architectural requirement.
Practicing Recall with Blank Diagrams and Service Comparisons
Active recall is significantly more effective than passive reading. Take a blank page and try to draw the "Three-Tier Architecture" from memory, including the placement of public/private subnets, NAT Gateways, and Load Balancers. Label the security groups and NACLs. This physical act of diagramming reinforces the flow of traffic and the security boundaries required by the SAA-C03 domains.
Compare similar services side-by-side to clarify their use cases. For instance, contrast AWS Fargate (serverless containers) with ECS on EC2 (managed containers on your own instances). Fargate is the "low-overhead" choice, while EC2 provides more control over the underlying host. Use these comparisons to build a "decision tree" for the exam. If the requirement is "minimize administrative effort," choose the serverless or fully managed option (Fargate, Aurora, S3) over the self-managed one.
Integrating Cheat Sheet Points with Practice Question Explanations
When taking practice exams, use this cheat sheet to verify why the correct answer is right and why the others are wrong. Most SAA questions follow a pattern: they provide a business constraint (e.g., "minimize cost") and a technical requirement (e.g., "no data loss"). If you choose an answer that meets the technical goal but is expensive, you have failed the "Cost-Optimized Architecture" domain requirement.
Pay close attention to the scoring or assessment detail provided in practice feedback. The SAA-C03 uses a scaled score of 100–1000, with a passing score of 720. Points are often lost on "multi-select" questions where two or more services must be chosen. By integrating these cheat sheet points—such as knowing that Storage Gateway has three modes (File, Volume, Tape)—you can accurately select all necessary components for hybrid cloud scenarios. Use the cheat sheet as a living document, adding notes on any specific service limits or "gotchas" you encounter during your final mock exams.