An AWS architecture uses an Application Load Balancer (ALB) in front of an Auto Scaling Group. What does the Auto Scaling Group do?
An Auto Scaling Group (ASG) maintains a fleet of EC2 instances at the capacity you specify: it replaces instances that fail health checks, launches new instances when load increases (scale out), and terminates instances when load decreases (scale in). You configure a minimum, desired, and maximum instance count. Scaling is triggered by scaling policies (e.g. CPU > 70%) or scheduled actions.
The ALB is a separate service that distributes incoming traffic across healthy instances in the ASG. Together they provide both horizontal scaling and high availability.
SSL/TLS termination happens at the ALB, not the ASG. Multi-AZ replication is a database or S3 concept, not an ASG feature.
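The scale-out/scale-in decision described above can be sketched in plain Python. This is an illustration of the policy logic only, not actual AWS code; the thresholds and counts are hypothetical examples:

```python
# Illustrative sketch of a simple CPU-based scaling policy decision.
# Not AWS code; thresholds (70% / 30%) and bounds are hypothetical.

def desired_capacity(current: int, cpu_percent: float,
                     minimum: int = 2, maximum: int = 10) -> int:
    """Return the new desired instance count, clamped to [minimum, maximum]."""
    if cpu_percent > 70:        # scale out: add an instance
        current += 1
    elif cpu_percent < 30:      # scale in: remove an instance
        current -= 1
    return max(minimum, min(maximum, current))

print(desired_capacity(4, 85))   # high CPU, scales out -> 5
print(desired_capacity(4, 20))   # low CPU, scales in -> 3
print(desired_capacity(10, 90))  # already at maximum -> 10
```

Note how the minimum/maximum clamp mirrors the ASG's configured bounds: no policy can push the fleet outside them.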
2 / 5
"We use S3 Lifecycle policies to move objects to cheaper storage classes." What does transitioning an object to S3 Glacier Instant Retrieval mean?
S3 Lifecycle policies automate moving objects between storage classes to reduce cost as data ages. Key classes to know:
• S3 Standard — frequent access, highest cost
• S3 Intelligent-Tiering — auto-moves based on access patterns
• S3 Standard-IA (Infrequent Access) — lower cost, retrieval fee
• S3 Glacier Instant Retrieval — archive, millisecond retrieval
• S3 Glacier Flexible Retrieval — archive, minutes to hours retrieval
• S3 Glacier Deep Archive — cheapest, retrieval within 12 hours
Lifecycle policies do NOT delete objects unless you configure an expiration action. Replication is a separate feature (Cross-Region Replication). Compression/dedup are not native S3 features.
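As a sketch of what such a policy looks like in practice, here is a lifecycle rule written as a Python dict in the shape boto3's `put_bucket_lifecycle_configuration` expects. The prefix and day values are hypothetical examples:

```python
# Sketch of an S3 lifecycle rule in the dict shape boto3's
# put_bucket_lifecycle_configuration expects. The "logs/" prefix
# and the day values are hypothetical examples.
lifecycle = {
    "Rules": [{
        "ID": "age-out-logs",
        "Filter": {"Prefix": "logs/"},      # applies only to logs/ objects
        "Status": "Enabled",
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
            {"Days": 90, "StorageClass": "GLACIER_IR"},   # Glacier Instant Retrieval
        ],
        # Objects are deleted ONLY because of this explicit expiration action.
        "Expiration": {"Days": 365},
    }]
}

# Sanity check: transition days must increase as storage gets colder.
days = [t["Days"] for t in lifecycle["Rules"][0]["Transitions"]]
assert days == sorted(days)
```

Without the `Expiration` action, objects would simply sit in Glacier Instant Retrieval forever, which matches the point above that lifecycle policies never delete anything unless you tell them to.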
3 / 5
Complete the sentence: "We attach an IAM role to the EC2 instance instead of storing AWS credentials in the application code — this follows the principle of _____."
"Least privilege" means granting only the permissions actually needed — nothing more. An IAM role attached to an EC2 instance provides temporary credentials automatically via the instance metadata service (IMDS). The application uses the AWS SDK, which fetches these credentials without any hardcoded access keys.
This is the AWS-recommended approach because:
• Credentials rotate automatically — no expired keys
• No secrets stored in code, environment variables, or config files
• Permissions are scoped to only what the application needs
Defence in depth = using multiple security layers. Zero trust = never trust, always verify network identity. Both are valid security concepts but not what this sentence is about.
On the SAA-C03 exam, questions about "securely providing AWS credentials to an application running on EC2" always point to IAM roles, not access keys.
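To make the two policy documents behind an EC2 role concrete, here is a sketch of both as Python dicts: the trust policy that lets EC2 assume the role, and a permissions policy scoped to one action. The bucket name and action are hypothetical examples:

```python
# Sketch of the trust (assume-role) policy that lets EC2 instances
# use an IAM role. The application never sees long-lived keys; the
# SDK pulls temporary credentials from IMDS automatically.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Least privilege lives in the role's permissions policy: grant only
# the specific actions the app needs. Action and bucket here are
# hypothetical examples.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],                   # only what's needed
        "Resource": "arn:aws:s3:::example-bucket/*",  # hypothetical bucket
    }],
}
```

The split matters on the exam: the trust policy answers "who may assume this role", while the permissions policy answers "what may they do".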
4 / 5
A company needs to connect its on-premises data centre to its AWS VPC with a dedicated, private connection (not over the public internet). The correct AWS service is:
AWS Direct Connect provides a dedicated, private physical connection between your on-premises network and AWS. It doesn't go over the public internet, so it offers:
• Consistent low latency
• Higher bandwidth (1 Gbps, 10 Gbps, 100 Gbps)
• Reduced data transfer costs vs internet egress
Key vocabulary distinctions for the exam:
• AWS Site-to-Site VPN — encrypted tunnel over the public internet, quick to set up, variable latency
• Direct Connect — dedicated line, weeks of lead time, consistent latency — used when the question says "private", "dedicated", or "consistent bandwidth"
• Transit Gateway — connects multiple VPCs and on-premises networks together (a hub), not a connectivity method itself
• VPC Peering — connects two VPCs, not on-premises to AWS
Exam trigger words: "dedicated connection", "private connection", "not public internet" → always Direct Connect.
5 / 5
A solution uses Amazon SQS between a web tier and a processing tier. What problem does SQS solve here?
Amazon SQS (Simple Queue Service) is a fully managed message queue. It enables loose coupling between components — a key architectural principle tested throughout the SAA-C03 exam.
Without SQS: if the web tier sends 1,000 requests/second and the processing tier can only handle 200/second, the processing tier is overwhelmed and requests fail or are dropped.
With SQS: the web tier enqueues 1,000 messages. The processing tier polls at 200/second. Messages queue up and are processed when capacity is available. Neither tier knows or cares about the other's speed.
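That buffering behaviour can be simulated with a plain in-memory queue. This is a toy stdlib sketch of the decoupling idea, not boto3 or real SQS; the batch size of 200 stands in for the processing tier's throughput:

```python
# Toy simulation of the decoupling a queue provides (stdlib, not boto3):
# a fast producer bursts 1,000 messages, a slow consumer drains them
# 200 at a time. The queue absorbs the mismatch in speed.
import queue

buffer = queue.Queue()

# Web tier: enqueues a burst of 1,000 messages immediately.
for i in range(1000):
    buffer.put(f"request-{i}")

# Processing tier: drains at its own pace, here 200 per "second".
processed = 0
while not buffer.empty():
    batch = [buffer.get() for _ in range(min(200, buffer.qsize()))]
    processed += len(batch)

print(processed)  # -> 1000: every message is eventually processed
```

Neither loop references the other's speed, which is exactly the loose coupling the exam is testing for; real SQS adds durability, retries, and visibility timeouts on top of this basic buffering.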
Exam vocabulary:
• Decouple = separate so neither depends on the other's availability or speed
• Enqueue / produce = add a message to the queue
• Dequeue / consume / poll = take a message from the queue
• Dead-letter queue (DLQ) = where failed messages go after max retries
• Visibility timeout = how long a message is hidden after being read (prevents double-processing)
SQS does NOT cache results (that's ElastiCache) or route by content (that's SNS message filtering).