
Case Studies: PrivateLink & Network Firewall

Case Study 1

A financial services company operates a multi-account AWS environment with a shared services VPC that hosts centralized APIs for customer data retrieval. The company has 47 application VPCs across different AWS accounts that need to consume these APIs. Security requirements mandate that all traffic between application VPCs and the shared services VPC must remain on the AWS backbone network and never traverse the public internet. The current implementation uses VPC peering, but the network team is concerned about the growing complexity of managing route tables and security group rules as the number of application VPCs increases. The solutions architect has been asked to redesign the architecture to reduce operational overhead while maintaining private connectivity and ensuring that application teams cannot directly access the underlying EC2 instances hosting the APIs-only the API endpoints themselves.

Which solution best meets these requirements with the least operational complexity?

  1. Deploy a Transit Gateway and attach all VPCs to it. Configure route tables to direct API traffic through the Transit Gateway. Implement security groups on the API servers to restrict access to known CIDR blocks from application VPCs.
  2. Create a VPC endpoint service (AWS PrivateLink) in the shared services VPC associated with a Network Load Balancer fronting the API servers. Create interface VPC endpoints in each application VPC to consume the service. Use endpoint policies to control access.
  3. Establish AWS Direct Connect connections from each application VPC to the shared services VPC. Configure private virtual interfaces and BGP routing to enable private communication between VPCs.
  4. Deploy a VPN concentrator in the shared services VPC and create site-to-site VPN connections from each application VPC. Route API traffic through the encrypted VPN tunnels and use NACLs to control access.

Answer & Explanation

Correct Answer: 2 - Create a VPC endpoint service (AWS PrivateLink) in the shared services VPC associated with a Network Load Balancer fronting the API servers

Why this is correct: AWS PrivateLink is specifically designed for this use case-providing private connectivity to services across multiple VPCs and accounts without the complexity of managing routing or exposing underlying infrastructure. By creating a VPC endpoint service backed by a Network Load Balancer, the shared services team exposes only the API endpoints, not the EC2 instances. Application teams simply create interface endpoints in their VPCs with private IP addresses, and DNS resolution automatically directs traffic privately through the AWS backbone. This eliminates the need to manage route tables across 47 VPCs, scales easily as new application VPCs are added, and enforces the principle of least privilege by exposing only the service interface. Endpoint policies provide granular access control without managing security group rules for each VPC CIDR block.
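
The provider-side onboarding described above can be scripted. As a minimal sketch, the shared services team allow-lists each consuming account on the endpoint service; the helper below builds the principal ARNs that would be passed to `ec2 modify-vpc-endpoint-service-permissions` (account IDs are hypothetical placeholders):

```python
# Minimal sketch: build the allowed-principal ARNs a service provider would
# pass when permitting consumer accounts on a VPC endpoint service.
# Account IDs here are hypothetical placeholders, not values from the scenario.
def allowed_principals(account_ids):
    return [f"arn:aws:iam::{acct}:root" for acct in account_ids]

# One ARN per application-VPC account; with 47 accounts this list is
# generated once instead of maintaining 47 sets of routes and rules.
print(allowed_principals(["111122223333", "444455556666"]))
```

In practice these ARNs would be supplied via the AWS CLI or an SDK call; the point is that onboarding a new application VPC is a one-line permission change on the provider side, not a routing change.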

Why the other options are wrong:

  • Option 1: While Transit Gateway provides private connectivity, it requires managing route tables and still exposes the underlying network architecture. Application VPCs would have direct network-level access to the EC2 instances if security groups were misconfigured. The operational overhead of managing Transit Gateway route tables and security group rules for 47 VPCs is significant and doesn't reduce complexity compared to the current VPC peering approach.
  • Option 3: AWS Direct Connect is designed for hybrid connectivity between on-premises datacenters and AWS, not for inter-VPC communication within AWS. This solution is architecturally incorrect, prohibitively expensive, and unnecessarily complex for VPC-to-VPC communication that should remain entirely within AWS.
  • Option 4: Site-to-site VPN connections are intended for hybrid connectivity scenarios, not for native AWS VPC-to-VPC communication. Creating VPN connections for 47 VPCs introduces massive operational overhead, encryption/decryption performance penalties, VPN tunnel limits, and higher costs. This approach is fundamentally misaligned with AWS best practices for inter-VPC connectivity.

Key Insight: AWS PrivateLink is purpose-built to expose services privately across VPCs without exposing underlying infrastructure or requiring complex routing. The key trap is recognizing that while Transit Gateway enables private connectivity, it doesn't abstract away the underlying network architecture or reduce operational complexity for service consumption patterns-PrivateLink does both.

Case Study 2

A healthcare technology company operates a SaaS platform on AWS that processes protected health information (PHI). The platform runs in a single VPC with application servers distributed across three Availability Zones. The security team has identified that inbound and outbound traffic inspection is required to meet HIPAA compliance requirements, including the ability to perform deep packet inspection, intrusion detection, domain-based filtering for outbound traffic, and the blocking of specific threat signatures. Currently, the company uses security groups and network ACLs, but auditors have determined these are insufficient because they only provide stateless or basic stateful filtering without content inspection capabilities. The solution must inspect all traffic entering and leaving the VPC, including traffic between subnets within the VPC. The company wants to minimize the operational burden of managing firewall infrastructure and automatically scale inspection capacity with traffic demands.

Which solution provides the required traffic inspection capabilities while minimizing operational overhead?

  1. Deploy a fleet of EC2 instances running open-source firewall software (such as pfSense) in an Auto Scaling group behind a Network Load Balancer. Route all traffic through these instances using route tables and manage firewall rules through configuration management tools.
  2. Implement AWS Network Firewall with stateful rule groups configured for deep packet inspection, intrusion prevention, and domain filtering. Deploy Network Firewall endpoints in dedicated subnets in each Availability Zone and configure route tables to direct traffic through the firewall endpoints.
  3. Deploy AWS WAF on an Application Load Balancer positioned in front of the application servers. Configure WAF rules to inspect inbound web traffic and use AWS Shield Advanced for additional threat detection capabilities.
  4. Subscribe to third-party firewall appliances from AWS Marketplace and deploy them as Gateway Load Balancer endpoints. Configure VPC route tables to forward all traffic to the Gateway Load Balancer for inspection before routing to destinations.

Answer & Explanation

Correct Answer: 2 - Implement AWS Network Firewall with stateful rule groups configured for deep packet inspection, intrusion prevention, and domain filtering

Why this is correct: AWS Network Firewall is a managed service specifically designed for this exact use case-providing stateful, network-level traffic inspection including deep packet inspection, intrusion prevention (using Suricata-compatible rules), domain-based filtering, and threat signature detection. It automatically scales to handle traffic volume without requiring the operational overhead of managing firewall instances. By deploying Network Firewall endpoints in each Availability Zone and configuring route tables appropriately, all traffic entering, leaving, or traversing the VPC can be inspected. The service is managed by AWS (no patching, scaling, or availability management required by the customer), integrates with AWS Firewall Manager for centralized rule management, and provides detailed logging to S3, CloudWatch, and Kinesis Data Firehose for compliance auditing. This directly addresses the HIPAA compliance requirement for comprehensive traffic inspection while minimizing operational burden.
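
The domain filtering described above is expressed in Network Firewall as a stateful rule group whose rules source is a domain list. A minimal sketch of that structure, following the service's `RulesSourceList` shape (the domain name is illustrative):

```python
import json

def domain_allowlist_rule_group(domains):
    # Shape follows Network Firewall's RulesSourceList structure for
    # domain-based filtering; the domains passed in are illustrative.
    return {
        "RulesSource": {
            "RulesSourceList": {
                "Targets": domains,                      # domains to match
                "TargetTypes": ["TLS_SNI", "HTTP_HOST"], # match SNI and Host header
                "GeneratedRulesType": "ALLOWLIST",       # allow listed, deny the rest
            }
        }
    }

print(json.dumps(domain_allowlist_rule_group([".example-partner.com"]), indent=2))
```

A leading dot (".example-partner.com") matches the domain and its subdomains; `DENYLIST` is the inverse choice for blocking known-bad domains.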

Why the other options are wrong:

  • Option 1: Deploying EC2-based firewall instances creates significant operational overhead-the company must manage instance patching, configuration, high availability, scaling policies, monitoring, and routing complexity. This directly contradicts the requirement to "minimize the operational burden of managing firewall infrastructure." While technically capable of performing the required inspection, this approach requires dedicated network security expertise and ongoing operational investment.
  • Option 3: AWS WAF operates at Layer 7 (application layer) and is designed to protect web applications from common web exploits like SQL injection and XSS. It only inspects HTTP/HTTPS traffic to resources behind an ALB, CloudFront, or API Gateway-it cannot inspect all network traffic entering and leaving the VPC, nor can it inspect traffic between subnets or non-HTTP protocols. It does not provide network-level deep packet inspection or intrusion detection, making it insufficient for the comprehensive traffic inspection requirements.
  • Option 4: While Gateway Load Balancer with third-party appliances can provide deep packet inspection capabilities, this approach introduces vendor management complexity, licensing costs, and the operational burden of managing the appliance fleet (even if auto-scaled). The requirement explicitly states the desire to "minimize the operational burden"-using a fully managed AWS service (Network Firewall) is operationally simpler than managing third-party appliances, even when distributed through GWLB.

Key Insight: The critical distinction is understanding that Network Firewall is AWS's managed service for network-level traffic inspection with stateful deep packet inspection, while WAF is layer 7 application protection and Gateway Load Balancer requires managing third-party appliances. Many candidates confuse the scope of WAF or overlook Network Firewall's managed nature when operational simplicity is a stated requirement.

Case Study 3

A global media streaming company has deployed its video transcoding service as containerized microservices running on Amazon ECS in a private subnet within their production VPC. The transcoding service needs to access proprietary machine learning models stored in Amazon S3, and the security team has mandated that this traffic must never traverse the public internet due to data sensitivity concerns. The current architecture routes S3 traffic through a NAT Gateway to reach the S3 public endpoints. The finance team has noticed that NAT Gateway data processing charges have exceeded $12,000 monthly and is requesting a cost optimization solution. The network team wants to maintain the same level of security while eliminating the NAT Gateway dependency for S3 traffic. The solution must not require changes to application code or container configurations, and DNS resolution for S3 must continue to work as currently implemented.

What is the MOST cost-effective solution that meets all these requirements?

  1. Create a VPC endpoint for S3 using the interface endpoint type, attach it to the private subnets, and modify the route tables to remove the NAT Gateway route for S3 traffic.
  2. Create a VPC endpoint for S3 using the gateway endpoint type, update the route table associated with the private subnets to route S3 traffic to the gateway endpoint, and apply an endpoint policy restricting access to the specific S3 buckets.
  3. Configure VPC peering between the production VPC and a dedicated S3 VPC where an S3 interface endpoint is deployed, then route S3 traffic through the peered connection.
  4. Deploy AWS PrivateLink connections for S3 in each Availability Zone where ECS tasks run, configure security groups to allow outbound HTTPS traffic to the PrivateLink endpoints, and remove the NAT Gateway.

Answer & Explanation

Correct Answer: 2 - Create a VPC endpoint for S3 using the gateway endpoint type, update the route table associated with the private subnets to route S3 traffic to the gateway endpoint

Why this is correct: S3 Gateway Endpoints are specifically designed for this scenario and are the most cost-effective solution because they are completely free-there are no hourly charges or data processing charges, immediately eliminating the $12,000 monthly NAT Gateway costs. Gateway endpoints for S3 work by adding a route table entry that directs S3 traffic through the AWS backbone network using prefix lists, ensuring traffic never traverses the public internet. This requires no application code changes because the ECS tasks continue using the same S3 API endpoints and DNS names-the routing change is transparent to the application. Endpoint policies can restrict access to specific buckets for additional security. This solution fully satisfies all requirements: eliminates NAT Gateway costs, maintains security by keeping traffic private, requires no application changes, and preserves existing DNS resolution behavior.
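
The cost argument can be made concrete. Assuming an illustrative NAT Gateway data-processing rate of $0.045/GB and an interface-endpoint rate of $0.01/GB (actual rates vary by Region), the $12,000 monthly bill implies roughly:

```python
# Back-of-envelope cost comparison; the per-GB rates are illustrative
# list prices and vary by Region.
NAT_RATE = 0.045        # USD per GB processed by a NAT Gateway (assumed)
INTERFACE_RATE = 0.01   # USD per GB processed by an interface endpoint (assumed)
GATEWAY_RATE = 0.0      # gateway endpoints have no data processing charge

monthly_nat_bill = 12_000.0
gb_per_month = monthly_nat_bill / NAT_RATE        # implied transfer volume

interface_cost = gb_per_month * INTERFACE_RATE    # data processing only,
                                                  # before per-AZ hourly charges
gateway_cost = gb_per_month * GATEWAY_RATE

print(f"{gb_per_month:,.0f} GB/month")
print(f"interface endpoint data cost: ${interface_cost:,.0f}")
print(f"gateway endpoint cost: ${gateway_cost:,.0f}")
```

Under these assumptions an interface endpoint would still cost thousands per month on data processing alone, while the gateway endpoint eliminates the charge entirely.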

Why the other options are wrong:

  • Option 1: While S3 does support interface endpoints (powered by PrivateLink), they incur hourly endpoint charges and data processing charges per GB-typically around $0.01 per GB processed plus $0.01 per hour per AZ. For a media company with high S3 data transfer volumes, this could cost thousands of dollars monthly, making it significantly more expensive than the free gateway endpoint option. Interface endpoints are useful when you need to access S3 from on-premises over Direct Connect or VPN, but for in-VPC access, gateway endpoints are the cost-optimal choice.
  • Option 3: S3 is a regional service that doesn't reside in a customer VPC, so the concept of creating an "S3 VPC" and using VPC peering to access S3 is architecturally incorrect. VPC endpoints (either gateway or interface) are the proper mechanism to access S3 privately from a VPC. This option demonstrates a fundamental misunderstanding of how S3 service architecture works.
  • Option 4: This option confuses terminology-AWS PrivateLink is the underlying technology for interface endpoints, not a separate connection type. While the intent seems to be deploying interface endpoints, as noted in Option 1's explanation, these incur charges that make them more expensive than gateway endpoints for pure in-VPC S3 access. Additionally, interface endpoints are deployed per Availability Zone, and each AZ endpoint accrues its own hourly charge on top of per-GB data processing fees, compounding the cost.

Key Insight: The key differentiator is knowing that S3 offers both gateway and interface endpoints, but gateway endpoints are completely free while interface endpoints incur charges. When cost optimization is the primary constraint and access is purely from within the VPC (not from on-premises), gateway endpoints are always the correct choice. Many candidates select interface endpoints thinking they're "more advanced" without considering the cost implications.

Case Study 4

An e-commerce company has implemented AWS Network Firewall to inspect traffic leaving their production VPC to the internet. The firewall is configured with stateful domain-based rules to block access to known malicious domains and only allow outbound HTTPS traffic to approved third-party payment processors and shipping APIs. After deployment, the application team reports that legitimate outbound API calls are intermittently failing with timeout errors, but the same calls succeed consistently when tested from a development VPC that doesn't use Network Firewall. CloudWatch metrics show that Network Firewall is processing traffic, but no dropped packets are being logged. The firewall rule groups show the approved domains are correctly configured in the allow list. The application runs on EC2 instances in private subnets across three Availability Zones, and route tables direct outbound traffic to the Network Firewall endpoint in each AZ before routing to a NAT Gateway. VPC Flow Logs show traffic reaching the Network Firewall endpoints, but some flows show no return traffic.

What is the MOST likely cause of the intermittent connection failures?

  1. The Network Firewall stateful rule group is configured for strict rule ordering instead of default action order, causing certain legitimate traffic patterns to match against an implicit deny rule before reaching the explicit allow rules.
  2. The route table configuration is incorrect-return traffic from the internet through the NAT Gateway is not being routed back through the Network Firewall endpoints, breaking the symmetric routing requirement for stateful inspection.
  3. The Network Firewall endpoint capacity in each Availability Zone is undersized for the traffic volume, causing packet drops during traffic spikes even though CloudWatch metrics show average capacity utilization below threshold.
  4. The stateful domain-based rules are performing DNS resolution at firewall evaluation time, and DNS TTL caching is causing intermittent failures when third-party services rotate their IP addresses faster than the firewall's DNS cache refresh interval.

Answer & Explanation

Correct Answer: 2 - Return traffic from the internet through the NAT Gateway is not being routed back through the Network Firewall endpoints, breaking the symmetric routing requirement

Why this is correct: AWS Network Firewall performs stateful inspection, which requires both request and response traffic to flow through the same firewall endpoint to maintain connection state. The scenario describes traffic reaching the firewall endpoints outbound (request direction) but VPC Flow Logs showing "no return traffic" for some flows. This symptom indicates asymmetric routing-outbound traffic goes through the firewall, but return traffic from the NAT Gateway bypasses the firewall and goes directly to the EC2 instances. When the firewall sees the outbound connection initiation but never sees the return traffic, it cannot properly track the connection state. Some connections may succeed due to timing or firewall timeout windows, while others fail, causing the reported intermittency. The solution requires configuring the NAT Gateway subnet route table to send return traffic back through the Network Firewall endpoint before routing to the application subnets, ensuring symmetric routing for stateful inspection.
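
The routing fix can be pictured as hop sequences for the request and the response. A toy model (resource names are illustrative, not real AWS IDs) shows why stateful inspection breaks when only one direction traverses the firewall:

```python
# Toy model of the routing problem: hop sequences for request and response.
# Names are illustrative placeholders, not real AWS resource IDs.
asymmetric = {
    "outbound": ["app-subnet", "firewall-endpoint", "nat-gateway", "internet"],
    "return":   ["internet", "nat-gateway", "app-subnet"],  # bypasses firewall
}
symmetric = {
    "outbound": ["app-subnet", "firewall-endpoint", "nat-gateway", "internet"],
    "return":   ["internet", "nat-gateway", "firewall-endpoint", "app-subnet"],
}

def stateful_inspection_ok(paths):
    # Stateful inspection only works if BOTH directions traverse the firewall.
    return all("firewall-endpoint" in hops for hops in paths.values())

print(stateful_inspection_ok(asymmetric))  # False
print(stateful_inspection_ok(symmetric))   # True
```

The "symmetric" case corresponds to adding a route in the NAT Gateway subnet's route table that sends traffic destined for the application subnets back through the firewall endpoint.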

Why the other options are wrong:

  • Option 1: Rule ordering issues would cause consistent blocking behavior for specific traffic patterns, not intermittent failures. If traffic was matching an implicit deny rule, Network Firewall would log the dropped packets as alert logs. The scenario explicitly states "no dropped packets are being logged," which rules out rule matching problems. Additionally, the approved domains are confirmed to be correctly configured in the allow list.
  • Option 3: Network Firewall automatically scales capacity within an endpoint, and genuine capacity issues would manifest as consistent performance degradation during high traffic periods or would show elevated processing latency metrics in CloudWatch. The scenario shows intermittent failures that aren't correlated with traffic volume patterns and no indication of capacity-related CloudWatch alarms. Additionally, AWS Network Firewall abstracts capacity management-it doesn't have "undersized endpoints" that customers manually configure.
  • Option 4: While DNS resolution for domain-based rules could theoretically cause issues, AWS Network Firewall maintains DNS caching and handles IP address changes for domain rules appropriately. If DNS resolution were the issue, the failures would correlate with specific domains that recently changed IP addresses, and the problem would resolve itself after the DNS cache refresh. The scenario describes intermittent failures without this pattern, and the same calls work consistently in the development VPC, indicating a routing/network issue rather than DNS caching.

Key Insight: Stateful firewall inspection requires symmetric routing-both directions of traffic must traverse the same firewall endpoint. This is a common deployment mistake with Network Firewall, and the clue is VPC Flow Logs showing traffic reaching the firewall but missing return traffic. Many candidates focus on rule configuration or capacity when troubleshooting firewall issues, missing the fundamental routing architecture requirement.

Case Study 5

A software-as-a-service company provides a data analytics platform to enterprise customers. Each customer's data is isolated in a separate AWS account within an AWS Organization. The company's central machine learning service runs in a shared services account and needs to be accessible by all customer accounts for running analytics jobs. For compliance reasons, the machine learning service cannot be exposed to the public internet, and customer accounts must not have network-level visibility into the shared services VPC infrastructure or other customers' network traffic. The company currently has 200 customer accounts and adds approximately 15 new customers monthly. The architecture team wants to avoid the complexity of managing VPC peering connections as the customer base grows and needs a solution where customer accounts can access the service using private IP addresses without the shared services team having to modify infrastructure for each new customer. Customer accounts are not allowed to modify DNS settings for the service endpoint.

Which solution best meets these requirements?

  1. Create a VPC endpoint service (PrivateLink) in the shared services account backed by a Network Load Balancer in front of the machine learning service. Configure the endpoint service to allow principals from the AWS Organization. Customer accounts create interface VPC endpoints to the service, which are automatically approved based on organizational membership.
  2. Deploy the machine learning service in a Transit Gateway shared services model. Create a central Transit Gateway, attach the shared services VPC and all customer account VPCs, and use Transit Gateway route tables with appropriate isolation to prevent cross-customer traffic while allowing customer-to-shared-services communication.
  3. Configure AWS Resource Access Manager to share the machine learning service's Application Load Balancer with all customer accounts in the organization. Customer accounts access the shared ALB using PrivateLink connections that are automatically established through the RAM sharing mechanism.
  4. Implement VPC peering between each customer VPC and the shared services VPC with automated peering creation using AWS Lambda triggered by new account creation events. Use VPC peering security groups configured to deny inter-customer traffic while allowing customer-to-service communication.

Answer & Explanation

Correct Answer: 1 - Create a VPC endpoint service (PrivateLink) backed by a Network Load Balancer and configure the endpoint service to allow principals from the AWS Organization

Why this is correct: AWS PrivateLink is the optimal solution for this multi-tenant service exposure pattern. By creating a VPC endpoint service in the shared services account and configuring it to allow principals from the entire AWS Organization (or specific organizational units), customer accounts can create interface VPC endpoints that are automatically approved without requiring the shared services team to manually approve each connection. Each customer's interface endpoint provides a private IP address in their own VPC for accessing the service, with DNS names that resolve privately. Critically, PrivateLink provides complete network isolation-customer accounts cannot see the shared services VPC infrastructure or route to other customers' networks; they only access the service interface. The architecture scales effortlessly as new customers are added because the endpoint service configuration is done once, and each customer independently creates their interface endpoint. This eliminates peering mesh complexity while meeting all security, isolation, and operational requirements. The service provider can also use endpoint policies for additional access control if needed.
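
The one-time, organization-wide permission described above amounts to a single service-permissions change. A sketch of the request payload for `ec2:ModifyVpcEndpointServicePermissions`, using an Organizations ARN as the allowed principal (the service ID, account ID, organization ID, and exact ARN format shown are assumptions for illustration):

```python
def allow_org_request(service_id: str, management_account_id: str, org_id: str) -> dict:
    # Request shape for ec2:ModifyVpcEndpointServicePermissions; all IDs
    # below are hypothetical placeholders, and the Organizations ARN format
    # shown is an assumption for illustration.
    return {
        "ServiceId": service_id,
        "AddAllowedPrincipals": [
            f"arn:aws:organizations::{management_account_id}:organization/{org_id}"
        ],
    }

print(allow_org_request("vpce-svc-0123456789abcdef0", "123456789012", "o-exampleorgid"))
```

With the organization allow-listed once, each new customer account self-serves by creating its own interface endpoint; the shared services team changes nothing per customer.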

Why the other options are wrong:

  • Option 2: While Transit Gateway can provide connectivity between multiple VPCs and support route isolation, it creates network-level connectivity that exposes CIDR ranges across accounts. Customer VPCs would have potential network-level visibility (depending on route table configuration), and the architecture requires careful management of route tables and security groups to prevent cross-customer access. As the customer base grows to 200+ accounts with 15 monthly additions, managing Transit Gateway attachments and route tables becomes operationally complex. Additionally, Transit Gateway doesn't provide the service abstraction layer that PrivateLink offers-customers see network connectivity rather than a service endpoint abstraction.
  • Option 3: AWS Resource Access Manager does not support sharing Application Load Balancers across accounts. RAM supports sharing specific resource types like subnets, Transit Gateway attachments, Route 53 Resolver rules, and License Manager configurations, but not ALBs. This option reflects a fundamental misunderstanding of RAM capabilities. Even if it were possible, it wouldn't provide the network isolation that PrivateLink offers, as shared resources typically operate within a networking context that requires more visibility than the requirements allow.
  • Option 4: VPC peering creates a one-to-one network relationship between VPCs, so this hub-and-spoke design needs a separate peering connection for every customer VPC. With 200 existing accounts and roughly 15 added monthly, this creates a management nightmare even with automation. Each peering connection requires coordination of non-overlapping CIDR blocks, route table entries, and security group rules. The architecture team explicitly wants to "avoid the complexity of managing VPC peering connections as the customer base grows." While Lambda automation could handle creation, the underlying complexity of maintaining and troubleshooting hundreds of peering connections remains, and the network-level visibility still exposes more infrastructure than necessary.

Key Insight: PrivateLink is the AWS-preferred pattern for exposing services privately across many accounts in a hub-and-spoke model without creating network-level visibility or mesh connectivity complexity. The key is recognizing that when you need service-level access without network-level infrastructure exposure across many consuming accounts, PrivateLink is architecturally superior to Transit Gateway or VPC peering despite their technical capability to connect VPCs.

Case Study 6

A pharmaceutical research company operates a highly regulated environment on AWS where clinical trial data is processed. The company has implemented AWS Network Firewall with comprehensive stateful rules for traffic inspection. Compliance requirements dictate that all firewall policy changes must be tracked with detailed audit logs showing who made changes, when changes occurred, what specific rules were modified, and the approval chain. The security team needs to receive real-time alerts when firewall rules are modified, particularly when rules are deleted or when allow-list rules are added that could permit access to new external destinations. The company's incident response team requires the ability to query historical firewall configuration changes over a 7-year retention period to support regulatory audits. The solution must integrate with the company's existing SIEM system that ingests events from Amazon EventBridge.

Which combination of AWS services should be implemented to meet these requirements? (Select TWO)

  1. Enable AWS CloudTrail logging with data events for Network Firewall API calls, store CloudTrail logs in S3 with S3 Object Lock configured for compliance mode with 7-year retention, and use CloudTrail Insights to detect unusual rule modification patterns.
  2. Configure AWS Config rules to monitor Network Firewall policy resources, record configuration changes with 7-year retention, and use Amazon EventBridge to trigger SNS notifications to the security team when Network Firewall resources are modified.
  3. Enable Network Firewall alert logs to capture stateful rule actions, send alert logs to Amazon CloudWatch Logs with 7-year retention, and create CloudWatch metric filters to detect rule modification events that trigger SNS notifications.
  4. Enable AWS GuardDuty for VPC Flow Log analysis, configure GuardDuty to monitor for Network Firewall configuration changes, store findings in S3 with 7-year retention, and send findings to EventBridge for SIEM integration.
  5. Use AWS Config to continuously record Network Firewall configuration changes, deliver configuration history to S3 with a lifecycle policy for 7-year retention, and configure EventBridge rules that trigger on AWS Config configuration change events for Network Firewall resources.

Answer & Explanation

Correct Answers: 1 and 5 - Enable AWS CloudTrail logging for Network Firewall API calls, and Use AWS Config to continuously record Network Firewall configuration changes

Why these are correct: This scenario requires both API-level audit trails and configuration change tracking. CloudTrail (Option 1) captures all API calls made to Network Firewall, including who (IAM principal) made the call, when (timestamp), what actions were performed (CreateFirewallPolicy, UpdateFirewallPolicy, DeleteRuleGroup), and the source IP address. Management events for Network Firewall are captured by CloudTrail by default, and storing these logs in S3 with Object Lock ensures immutability for compliance. CloudTrail provides the "who and when" audit trail with detailed API call parameters. AWS Config (Option 5) continuously records the actual configuration state of Network Firewall resources (firewall policies, rule groups, firewall instances) and tracks configuration drift over time. Config captures what the configuration looked like before and after changes, enabling compliance teams to see exactly which rules were added, modified, or deleted. Config integrates natively with EventBridge, allowing configuration change events to trigger real-time alerts to the security team and flow to the SIEM. Together, CloudTrail and Config provide comprehensive audit coverage: CloudTrail shows the API activity and actor, while Config shows the configuration state changes with historical comparison.
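
The EventBridge integration mentioned above can be scoped to Network Firewall resources specifically. A sketch of an event pattern matching AWS Config configuration change events (the resource-type strings follow Config's AWS::NetworkFirewall::* naming; treat the exact values as assumptions to verify against current documentation):

```python
import json

# Sketch of an EventBridge event pattern that matches AWS Config
# configuration change events for Network Firewall resources. The
# resource-type strings are assumptions following Config's naming scheme.
pattern = {
    "source": ["aws.config"],
    "detail-type": ["Config Configuration Item Change"],
    "detail": {
        "configurationItem": {
            "resourceType": [
                "AWS::NetworkFirewall::Firewall",
                "AWS::NetworkFirewall::FirewallPolicy",
                "AWS::NetworkFirewall::RuleGroup",
            ]
        }
    },
}
print(json.dumps(pattern, indent=2))
```

A rule with this pattern can fan out to both an SNS topic for real-time security alerts and the SIEM's EventBridge ingestion target.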

Why the other options are wrong:

  • Option 2: This option is partially correct-AWS Config does record configuration changes and integrates with EventBridge. Its flaw is that it leads with "AWS Config rules," which evaluate whether resources comply with desired states; rules do not themselves capture change history. Config has two distinct functions: configuration recording (which is what the audit requirement needs) and compliance rules (which evaluate resources against desired states). Option 5 is the more complete and accurate choice because it names Config's configuration recording capability explicitly and pairs it with S3 delivery for durable 7-year retention.
  • Option 3: Network Firewall alert logs capture traffic inspection results (allowed, dropped, or alerted packets based on stateful rules), not configuration changes to the firewall policy itself. Alert logs show the enforcement of rules against network traffic, not administrative modifications to rules. This option confuses traffic inspection logging with configuration change auditing-it would not capture when a security engineer adds or deletes a firewall rule, which is the core requirement.
  • Option 4: AWS GuardDuty analyzes VPC Flow Logs, CloudTrail logs, and DNS logs to detect threats and anomalous behavior, but it does not monitor configuration changes to Network Firewall resources. GuardDuty is a threat detection service, not a configuration management or change auditing service. It would not provide the detailed configuration history or change tracking required for regulatory compliance audits, and it doesn't record who made specific firewall rule modifications or provide before/after configuration comparisons.

Key Insight: Comprehensive audit requirements typically need both CloudTrail (API activity and identity) and AWS Config (resource configuration state and history). The trap is confusing Network Firewall alert logs (traffic inspection results) with configuration change logs, or thinking a single service provides complete audit coverage. Understanding the distinct roles of CloudTrail (who did what), Config (what the configuration state is), and service-specific logs (operational telemetry) is essential for compliance scenarios.

Case Study 7

A government agency operates a citizen services portal on AWS that must comply with strict security standards requiring all internet-bound traffic from application servers to be inspected for data exfiltration attempts and malware communication. The application consists of web servers in public subnets and application servers in private subnets across two Availability Zones. Application servers need to make outbound HTTPS calls to approved government APIs hosted on the internet and must be able to download security patches from approved repositories. The agency has implemented AWS Network Firewall in the VPC with stateful domain filtering rules. The network architecture includes an Internet Gateway attached to the VPC, and route tables in the public subnets route internet traffic (0.0.0.0/0) directly to the Internet Gateway. After implementing Network Firewall endpoints in dedicated firewall subnets, the network team modified the private subnet route tables to route internet-bound traffic through the Network Firewall endpoints. However, outbound traffic from application servers is still bypassing the firewall inspection entirely, going directly to the internet. VPC Flow Logs confirm that traffic from application servers is reaching the Internet Gateway without passing through the firewall endpoints.

What additional configuration is required to ensure all outbound traffic from the application servers is inspected by Network Firewall?

  1. Modify the route table associated with the Internet Gateway to route return traffic from the internet back through the Network Firewall endpoints before routing to the application subnets, ensuring symmetric routing for stateful inspection.
  2. Create a gateway route table and associate it with the Internet Gateway, adding routes that direct traffic from the application server subnets (source CIDR blocks) to the Network Firewall endpoints before allowing egress to the internet.
  3. Enable VPC Flow Logs on the Network Firewall endpoints with packet header inspection, and configure the firewall's stateful rule groups to explicitly block direct Internet Gateway routes using network layer filtering.
  4. Deploy NAT Gateways in the public subnets, modify the private subnet route tables to route internet traffic to the NAT Gateway instead of the Internet Gateway, and configure the public subnet route tables to route traffic from NAT Gateways through the Network Firewall endpoints before reaching the Internet Gateway.

Answer & Explanation

Correct Answer: 2 - Create a gateway route table and associate it with the Internet Gateway, adding routes that direct traffic from application server subnets to the Network Firewall endpoints

Why this is correct: The issue described is a common Network Firewall deployment mistake. Routing private subnet traffic to the Network Firewall endpoints controls only one leg of the path; unless the Internet Gateway itself is told to send traffic for the application subnets through the firewall, packets can bypass inspection. A gateway route table (ingress routing) is designed specifically to control routing at the Internet Gateway level. By creating a gateway route table, associating it with the Internet Gateway, and adding routes that direct traffic for the application subnet CIDR blocks to the Network Firewall endpoints, you ensure that traffic for those subnets must traverse the firewall, completing the inspection path: Application Servers → Network Firewall Endpoints → Internet Gateway → Internet. Without the gateway route table, the path is decided solely by the subnets' own route tables, and in this case the public subnets still held direct 0.0.0.0/0 routes to the Internet Gateway, allowing the firewall to be bypassed.
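The route selection behind this fix can be illustrated with a short longest-prefix-match sketch. The CIDRs and target names below are invented; the two dictionaries stand in for an application subnet route table and an Internet Gateway ingress (gateway) route table:

```python
import ipaddress

def next_hop(route_table, dest_ip):
    """Longest-prefix-match lookup, the way VPC route tables select a route."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [(net, target)
               for cidr, target in route_table.items()
               for net in [ipaddress.ip_network(cidr)]
               if dest in net]
    # The most specific (longest) matching prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Application (private) subnet: 0.0.0.0/0 already points at the firewall endpoint.
app_subnet_rt = {"10.0.0.0/16": "local", "0.0.0.0/0": "vpce-firewall"}

# Ingress (gateway) route table attached to the IGW: traffic for the application
# subnet (10.0.1.0/24 here) is forced through the firewall endpoint instead of
# going straight to the subnet, closing the bypass.
igw_ingress_rt = {"10.0.1.0/24": "vpce-firewall", "10.0.0.0/16": "local"}

print(next_hop(app_subnet_rt, "93.184.216.34"))  # vpce-firewall (egress leg)
print(next_hop(igw_ingress_rt, "10.0.1.25"))     # vpce-firewall (IGW leg)
```

The sketch shows why both route tables matter: the subnet table steers the egress leg, while the gateway route table steers the leg that passes through the Internet Gateway.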

Why the other options are wrong:

  • Option 1: While gateway route tables do handle return-traffic routing in some firewall architectures, the problem described here is outbound (egress) traffic from application servers bypassing the firewall entirely on the way out, not a return-traffic routing issue. The scenario explicitly states that VPC Flow Logs show traffic "reaching the Internet Gateway without passing through the firewall endpoints," meaning the request path itself is wrong. Focusing only on return traffic doesn't address the fundamental issue that outbound traffic isn't being routed through the firewall in the first place; once the outbound path is corrected, the same routing configuration carries the return path symmetrically, which is what Network Firewall's stateful inspection requires.
  • Option 3: VPC Flow Logs capture metadata about network traffic (source, destination, ports, protocol, accept/reject) but cannot modify routing behavior. Flow Logs are a monitoring tool, not a traffic control mechanism. Similarly, Network Firewall's stateful rules filter and inspect traffic that reaches the firewall, but they cannot control whether traffic is routed to the firewall in the first place; that is determined by VPC route tables. This option confuses traffic monitoring and inspection with traffic routing, which are separate layers of network architecture.
  • Option 4: While deploying NAT Gateways and routing private subnet traffic through them, then routing NAT Gateway traffic through Network Firewall endpoints, is a valid architecture pattern for outbound traffic inspection, it's unnecessarily complex and costly for this scenario. The agency already has Network Firewall deployed and the infrastructure in place; they simply need to correct the routing configuration using gateway route tables. Adding NAT Gateways introduces additional hourly charges and data processing fees (approximately $0.045 per hour per NAT Gateway plus $0.045 per GB processed) when the existing architecture can work correctly with proper routing. This option works but violates the principle of solving problems with the least additional resources and complexity.
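To put a rough number on the cost argument against Option 4, here is a back-of-envelope estimate using the us-east-1 list prices quoted above (the traffic volume is assumed; verify current regional pricing):

```python
# Approximate monthly cost of the extra NAT Gateways that Option 4 introduces.
# Rates are the us-east-1 figures cited in the explanation; confirm before use.
HOURLY_RATE = 0.045      # USD per NAT Gateway per hour
DATA_RATE = 0.045        # USD per GB processed
HOURS_PER_MONTH = 730    # common billing approximation

def nat_monthly_cost(gateway_count: int, gb_processed: float) -> float:
    """Hourly charges for each gateway plus per-GB data processing charges."""
    return gateway_count * HOURLY_RATE * HOURS_PER_MONTH + gb_processed * DATA_RATE

# Two AZs -> two NAT Gateways, assuming 1 TB of outbound traffic per month.
cost = nat_monthly_cost(2, 1024)
print(round(cost, 2))  # 111.78
```

Roughly a hundred dollars a month of avoidable spend, on top of the operational complexity, for a problem that a gateway route table fixes at no extra charge.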

Key Insight: Gateway route tables are essential for controlling traffic routing at the Internet Gateway or Virtual Private Gateway level when implementing centralized inspection architectures. Many candidates overlook gateway route tables because they're less commonly used than subnet route tables, but they're critical for preventing firewall bypass in Network Firewall deployments. The scenario deliberately includes the detail that "route tables in the public subnets route internet traffic directly to the Internet Gateway" to hint that the IGW-level routing hasn't been properly configured.

Case Study 8

A multinational logistics company operates a real-time package tracking system on AWS with application servers distributed across VPCs in three different AWS regions (us-east-1, eu-west-1, and ap-southeast-1) to serve customers with low latency. The company needs to integrate with a third-party customs declaration service that operates in a single AWS account and region (us-east-1) within the same logistics industry consortium. The third-party service is exposed via AWS PrivateLink as a VPC endpoint service, and the logistics company has been granted permission to create endpoints to this service. The company's security policy prohibits routing traffic over the public internet for inter-regional communication. The tracking system in eu-west-1 and ap-southeast-1 needs to access the customs service with private connectivity and without introducing significant latency for customer-facing operations. The architecture must minimize data transfer costs while maintaining private connectivity across regions.

Which solution provides private connectivity to the third-party PrivateLink service from all regions while minimizing cost?

  1. Create interface VPC endpoints to the third-party PrivateLink service in each of the three regional VPCs. Configure AWS PrivateLink to automatically establish inter-region PrivateLink peering connections that route traffic privately across regions over the AWS backbone.
  2. Deploy a proxy fleet on EC2 instances in the us-east-1 VPC where the third-party service is available. Create interface VPC endpoints in us-east-1 to the third-party service. Establish inter-region VPC peering between eu-west-1 and us-east-1, and between ap-southeast-1 and us-east-1. Route traffic from remote regions through the proxy fleet to the endpoint service.
  3. Create an interface VPC endpoint to the third-party PrivateLink service in the us-east-1 VPC. Provision AWS Transit Gateway in us-east-1, eu-west-1, and ap-southeast-1, and connect them using Transit Gateway inter-region peering. Route traffic from eu-west-1 and ap-southeast-1 through the Transit Gateway peering connections to reach the endpoint in us-east-1.
  4. Create an interface VPC endpoint to the third-party PrivateLink service in the us-east-1 VPC. Establish AWS Site-to-Site VPN connections between the VPCs in eu-west-1 and ap-southeast-1 to the us-east-1 VPC using VPN transit routing. Route traffic from remote regions through the encrypted VPN tunnels to the us-east-1 endpoint.

Answer & Explanation

Correct Answer: 3 - Create an interface VPC endpoint in us-east-1 and use Transit Gateway inter-region peering to route traffic from other regions

Why this is correct: VPC endpoint services (PrivateLink) are regional resources: an endpoint service in us-east-1 can only be accessed by interface endpoints created within us-east-1. To access the service from other regions privately, you need inter-region connectivity. Transit Gateway with inter-region peering provides private, encrypted connectivity over the AWS global network backbone without traversing the public internet, meeting the security requirement. By creating the interface endpoint only in us-east-1 (where the third-party service exists) and routing traffic from eu-west-1 and ap-southeast-1 through Transit Gateway peering connections, the company works around the fact that endpoints cannot be created in regions where the service doesn't exist. Transit Gateway inter-region peering data transfer is charged at approximately $0.02 per GB, which is cost-effective compared to the alternatives. The solution is operationally straightforward: create one endpoint in us-east-1, configure Transit Gateway in all three regions, establish peering, and configure the appropriate route tables. This provides the required private connectivity across regions with a predictable cost structure.
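The cost predictability claim is easy to sanity-check with a short estimate. The per-GB rate is the approximate figure quoted above and the traffic volumes are assumed, so treat this as a sketch rather than a quote:

```python
# Estimated monthly inter-region data-transfer cost for the Transit Gateway
# peering design. Rate is the ~$0.02/GB figure cited above (assumption: same
# rate applies to both region pairs; confirm current per-pair pricing).
PEERING_RATE = 0.02  # USD per GB across a TGW inter-region peering attachment

def monthly_transfer_cost(gb_by_region: dict) -> float:
    """Sum the per-GB charge for traffic each remote region sends to us-east-1."""
    return sum(gb * PEERING_RATE for gb in gb_by_region.values())

# Hypothetical monthly volumes from the two remote regions, in GB.
traffic = {"eu-west-1": 500, "ap-southeast-1": 300}
print(round(monthly_transfer_cost(traffic), 2))  # 16.0
```

The point is that cost scales linearly and predictably with traffic, with no proxy fleet or VPN connection charges layered on top.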

Why the other options are wrong:

  • Option 1: This option reflects a fundamental misunderstanding of how AWS PrivateLink works. PrivateLink endpoint services are regional resources tied to a specific region. You cannot create an interface VPC endpoint in eu-west-1 to a service that only exists in us-east-1; the interface endpoint must be created in the same region as the endpoint service. Additionally, "inter-region PrivateLink peering" is not a feature that exists; PrivateLink connections are intra-region only. To access a PrivateLink service from another region, you must use inter-region connectivity solutions (Transit Gateway, VPC peering, VPN, or Direct Connect) to route traffic to the region where the service exists.
  • Option 2: While this solution would work technically (creating a proxy in us-east-1 that connects to the endpoint service, then using inter-region VPC peering to reach the proxy), it introduces unnecessary operational complexity and a single point of failure. The proxy fleet requires management (patching, scaling, monitoring, high availability across AZs), adds latency due to the additional hop, and creates a potential bottleneck. Inter-region VPC peering data transfer costs roughly the same as Transit Gateway peering (about $0.02 per GB), so there is no cost benefit, but the operational overhead is significantly higher. This violates the architectural best practice of minimizing custom components when managed-service alternatives exist.
  • Option 4: AWS Site-to-Site VPN is designed for hybrid connectivity between on-premises networks and AWS, not for native inter-region VPC-to-VPC communication. While technically possible to establish VPN connections between regions, it's architecturally inappropriate, introduces encryption overhead that reduces throughput (VPN tunnels have bandwidth limitations), increases latency, and costs more than Transit Gateway peering when considering VPN hourly connection charges plus data transfer. AWS explicitly recommends Transit Gateway inter-region peering or inter-region VPC peering for connecting VPCs across regions, not VPN connections which are meant for different use cases.

Key Insight: AWS PrivateLink endpoint services are regional resources that can only be accessed within the same region. Accessing a PrivateLink service from another region requires establishing inter-region connectivity (Transit Gateway peering, VPC peering, or hybrid connectivity) to route traffic to the region where the service exists. Many candidates incorrectly assume PrivateLink automatically works across regions or that interface endpoints can be created in any region for any service, which is not how the service operates.

Case Study 9

A cybersecurity firm provides managed threat detection services to enterprise clients and hosts a centralized security monitoring platform on AWS. The platform receives security telemetry data (logs, network flow data, threat intelligence feeds) from client environments via API endpoints exposed through AWS PrivateLink. Each client has their own isolated AWS account and VPC. The firm currently operates 85 client accounts with plans to scale to 500+ clients over the next 18 months. The security platform runs in a shared services VPC with multiple microservices behind Application Load Balancers for different data ingestion endpoints (logs, metrics, alerts). The firm wants to expose these services to clients via PrivateLink but is concerned about the operational overhead of managing multiple VPC endpoint services and Network Load Balancers, as AWS PrivateLink endpoint services require Network Load Balancers, not Application Load Balancers. The architecture team wants to continue using ALBs for their Layer 7 routing capabilities, health checks, and request routing to backend services, while still exposing these through PrivateLink to clients. The solution must minimize infrastructure changes to the existing ALB-based architecture.

Which solution allows the company to expose the existing ALB-based services via PrivateLink with minimal architectural changes?

  1. Replace the Application Load Balancers with Network Load Balancers, migrate the Layer 7 routing logic to the application layer within the microservices, and create VPC endpoint services associated with the NLBs that clients can connect to via interface endpoints.
  2. Deploy a Gateway Load Balancer in front of the Application Load Balancers, configure the GWLB to forward traffic to the ALBs, create a VPC endpoint service associated with the GWLB, and have clients create interface endpoints to the GWLB service.
  3. Deploy Network Load Balancers in front of the existing Application Load Balancers using NLB target groups that point to the ALBs as targets. Create VPC endpoint services associated with the NLBs and configure clients to create interface endpoints to these services. Traffic flows from client endpoints → NLB → ALB → backend microservices.
  4. Configure AWS Global Accelerator with the Application Load Balancers as endpoints, enable PrivateLink integration for Global Accelerator, and create VPC endpoint services that automatically expose the Global Accelerator endpoints via PrivateLink to client accounts.

Answer & Explanation

Correct Answer: 3 - Deploy Network Load Balancers in front of the existing Application Load Balancers with the ALBs configured as NLB targets

Why this is correct: This solution leverages the fact that Network Load Balancers can use Application Load Balancers as targets in their target groups, creating an NLB-to-ALB chaining architecture. Since PrivateLink VPC endpoint services require NLBs as the frontend (ALBs are not supported), placing an NLB in front of the existing ALB preserves all the Layer 7 capabilities, routing logic, health checks, and microservices architecture currently built on the ALB; no application or routing changes are needed. The NLB acts as a PrivateLink-compatible entry point that forwards traffic to the ALB, which then applies its Layer 7 routing rules to distribute traffic to backend services. The company creates VPC endpoint services associated with these NLBs, and clients create interface endpoints in their VPCs that connect to these services. This architecture requires minimal changes: deploy NLBs pointing to existing ALBs, create endpoint services, and configure client connectivity. All existing ALB functionality (path-based routing, host-based routing, authentication, WAF integration) remains intact.
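The division of labor in the chain can be shown with a toy model. The service names and paths below are invented; the point is that the NLB is a Layer 4 pass-through while all Layer 7 decisions stay on the ALB:

```python
# Toy model of the PrivateLink chain: client endpoint -> NLB -> ALB -> service.
# All names and routing rules are illustrative, not the firm's actual config.

def alb_route(path: str) -> str:
    """Layer 7 path-based routing rules, unchanged from the existing ALB."""
    rules = {
        "/logs": "log-ingest-service",
        "/metrics": "metrics-service",
        "/alerts": "alert-service",
    }
    for prefix, service in rules.items():
        if path.startswith(prefix):
            return service
    return "default-service"

def nlb_forward(path: str) -> str:
    """The NLB targets the ALB itself, not the backend services: it makes no
    Layer 7 decision, it only provides the PrivateLink-compatible frontend."""
    return alb_route(path)

# A client request arriving through a PrivateLink interface endpoint:
print(nlb_forward("/logs/upload"))  # log-ingest-service
```

Because the NLB never inspects the request, every existing ALB rule, health check, and WAF association keeps working exactly as before.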

Why the other options are wrong:

  • Option 1: Replacing ALBs with NLBs would require significant architectural changes because NLBs operate at Layer 4 (TCP/UDP) and do not provide Layer 7 capabilities like HTTP header-based routing, path-based routing, host-based routing, or application-layer health checks. Migrating all Layer 7 routing logic to the application layer means re-engineering each microservice to handle routing decisions, implement health check endpoints at the application layer, and manage SSL termination differently. This violates the requirement to "minimize infrastructure changes to the existing ALB-based architecture" and creates substantial development and operational work. While it would enable PrivateLink, the cost in re-architecture is prohibitive.
  • Option 2: Gateway Load Balancer is specifically designed for deploying, scaling, and managing third-party virtual network appliances (firewalls, intrusion detection systems, deep packet inspection appliances) using GWLB endpoints. It is not designed for exposing application services via PrivateLink in the way described. GWLB forwards traffic transparently to appliances for inspection and then returns it; it is not a mechanism for routing to Application Load Balancers. Using GWLB in front of ALBs is architecturally incorrect for this use case: GWLB is for network security appliance insertion, not for creating a PrivateLink-compatible frontend for application services.
  • Option 4: AWS Global Accelerator is designed to improve global application availability and performance by routing traffic over the AWS global network to optimal endpoints. While Global Accelerator can use ALBs as endpoints, it does not have native "PrivateLink integration" that automatically exposes services privately across accounts. Global Accelerator provides public anycast IP addresses for internet-facing applications, which directly contradicts the requirement to expose services via PrivateLink for private client connectivity. This option misunderstands both what Global Accelerator does (global traffic acceleration for internet-facing apps) and how PrivateLink works (private service exposure within AWS networks).

Key Insight: Network Load Balancers can target Application Load Balancers, enabling an architecture where NLBs provide PrivateLink compatibility while ALBs provide Layer 7 capabilities. This chaining pattern is a legitimate AWS design for exposing ALB-based services via PrivateLink. Many candidates either don't know this capability exists or incorrectly choose to replace ALBs entirely, missing the opportunity to preserve existing architecture while adding PrivateLink functionality.

Case Study 10

A financial technology startup is migrating their payment processing platform from an on-premises datacenter to AWS. The platform consists of web-facing APIs that receive payment requests and backend processing services that interact with payment gateway providers. The startup has established an AWS Direct Connect connection with a private virtual interface to enable migration. During the transition period, certain backend services will remain on-premises while the web APIs are migrated to AWS. The on-premises services need to call AWS-hosted microservices for fraud detection and transaction validation using private IP connectivity. The security team requires that all traffic between on-premises and AWS be encrypted and that on-premises systems never communicate with AWS services over public IP addresses or the internet. The startup has deployed the fraud detection service in AWS behind an Application Load Balancer in a private subnet, and they want to enable on-premises services to call this service over the Direct Connect connection using private DNS names that resolve to private IP addresses. AWS services like S3 and DynamoDB are also used and must be accessible from on-premises via the Direct Connect connection without internet egress.

Which combination of configurations enables the required private connectivity for application services and AWS services over Direct Connect? (Select TWO)

  1. Create a VPC interface endpoint (PrivateLink) for the fraud detection service backed by a Network Load Balancer in front of the Application Load Balancer, and enable the private DNS name option on the endpoint service. Advertise the VPC CIDR blocks over the Direct Connect private virtual interface to allow on-premises systems to route to the endpoint private IPs.
  2. Create VPC gateway endpoints for S3 and DynamoDB in the VPC, and advertise the AWS service prefix list routes over the Direct Connect private virtual interface BGP session using route propagation on the Virtual Private Gateway.
  3. Create VPC interface endpoints for S3 and DynamoDB in the VPC with private DNS enabled, and configure Route 53 Resolver endpoints (inbound and outbound) to enable on-premises DNS resolution of the private endpoint DNS names over the Direct Connect connection.
  4. Enable AWS PrivateLink for Direct Connect by configuring a Direct Connect gateway associated with the private virtual interface, which automatically creates PrivateLink connections for application services and AWS service endpoints accessible from on-premises.
  5. Deploy AWS Transit Gateway and associate it with the Direct Connect gateway, create VPC attachments for the application VPCs, and use Transit Gateway routing to enable on-premises access to VPC resources and AWS services via the Direct Connect connection.

Answer & Explanation

Correct Answers: 1 and 3 - Create a VPC interface endpoint (PrivateLink) for the fraud detection service with private DNS enabled and advertise VPC CIDRs over Direct Connect, and Create interface endpoints for S3 and DynamoDB with Route 53 Resolver endpoints for DNS resolution

Why these are correct: This scenario requires two distinct solutions: accessing the custom application service and accessing AWS services (S3, DynamoDB) from on-premises over Direct Connect. Option 1 correctly addresses the application service access: to expose the fraud detection service to on-premises over Direct Connect, you create a VPC endpoint service (PrivateLink) in AWS. Since the existing service uses an ALB, you place an NLB in front of it (as ALBs aren't supported by PrivateLink endpoint services). The VPC endpoint service creates interface endpoints with private IPs in the VPC. By advertising the VPC CIDR blocks over the Direct Connect private virtual interface BGP session, on-premises routers learn routes to these private IPs, enabling direct connectivity. Enabling private DNS on the endpoint service allows DNS names to resolve to the endpoint private IPs. Option 3 addresses AWS service access: S3 and DynamoDB support interface endpoints (in addition to gateway endpoints) that provide private IPs for accessing these services. Interface endpoints are accessible over Direct Connect, while gateway endpoints are not (gateway endpoints only work within the VPC). By creating interface endpoints for S3 and DynamoDB and enabling private DNS, AWS service API calls resolve to private endpoint IPs. Route 53 Resolver endpoints enable on-premises DNS queries to resolve these private DNS names by forwarding queries from on-premises to AWS-hosted DNS resolution. Together, these configurations enable comprehensive private connectivity for both custom application services and AWS managed services over Direct Connect.
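The DNS leg of Option 3 can be sketched as a conditional-forwarding table: the on-premises resolver forwards queries for the AWS-hosted zones to the Route 53 Resolver inbound endpoint over Direct Connect, and everything else goes to public DNS. The zone and target names below are illustrative (the application's DNS name in particular is hypothetical):

```python
# Sketch of split DNS resolution for hybrid access over Direct Connect.
# Zones forwarded to the Route 53 Resolver inbound endpoint resolve to the
# private IPs of the interface endpoints; all names here are illustrative.
FORWARD_ZONES = {
    "s3.us-east-1.amazonaws.com": "route53-resolver-inbound",
    "dynamodb.us-east-1.amazonaws.com": "route53-resolver-inbound",
    "fraud.internal.example.com": "route53-resolver-inbound",  # hypothetical app name
}

def onprem_resolver_target(hostname: str) -> str:
    """Pick which resolver should answer a query for this hostname."""
    for zone, target in FORWARD_ZONES.items():
        if hostname == zone or hostname.endswith("." + zone):
            return target
    return "public-dns"

print(onprem_resolver_target("s3.us-east-1.amazonaws.com"))  # route53-resolver-inbound
print(onprem_resolver_target("example.org"))                 # public-dns
```

Without this forwarding step, on-premises hosts would resolve the S3 and DynamoDB names to public IPs and the traffic would attempt to leave via the internet, violating the security requirement.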

Why the other options are wrong:

  • Option 2: While gateway endpoints for S3 and DynamoDB provide private access to these services within the VPC, gateway endpoints are not accessible from outside the VPC-including from on-premises over Direct Connect. Gateway endpoints work by adding routes to VPC route tables with targets pointing to the gateway endpoint, but these routes are VPC-local and cannot be advertised over BGP to on-premises networks. To access S3 and DynamoDB from on-premises via Direct Connect privately, you must use interface endpoints (which have IP addresses and are routable), not gateway endpoints. This is a common point of confusion because gateway endpoints are simpler and free for in-VPC access, but they don't extend to hybrid connectivity scenarios.
  • Option 4: This option describes capabilities that don't exist. There is no "AWS PrivateLink for Direct Connect" feature that automatically creates PrivateLink connections for services accessible from on-premises. Direct Connect provides Layer 2/3 network connectivity between on-premises and AWS, while PrivateLink is a separate service for exposing VPC-hosted services privately. These services work together architecturally (you can access PrivateLink endpoints over Direct Connect), but there's no automatic integration or configuration option that sets this up. This option reflects a misunderstanding of how these services relate to each other.
  • Option 5: Transit Gateway with Direct Connect Gateway integration does enable on-premises access to multiple VPCs and can simplify routing in multi-VPC environments. However, this option doesn't specifically address how the application service behind an ALB is exposed to on-premises (which requires creating a PrivateLink endpoint service) or how AWS services like S3 and DynamoDB are made accessible (which requires interface endpoints). Transit Gateway is a networking connectivity layer; it doesn't automatically expose application services or AWS service endpoints, which still require the endpoint configurations described in Options 1 and 3. While Transit Gateway could be part of the overall architecture, this option alone is incomplete and doesn't solve the DNS resolution or service exposure requirements.

Key Insight: Accessing VPC resources and AWS services from on-premises over Direct Connect requires understanding the difference between gateway endpoints (VPC-local only, not accessible over Direct Connect) and interface endpoints (have private IPs, accessible over Direct Connect). For custom application services, creating a VPC endpoint service (PrivateLink) backed by NLB enables on-premises access. DNS resolution requires Route 53 Resolver endpoints to forward queries between on-premises and AWS. This is a complex hybrid architecture scenario testing knowledge of multiple services working together across the AWS/on-premises boundary.
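The gateway-versus-interface distinction at the heart of this scenario can be summarized as data, restating only the facts given above (pricing details deliberately omitted):

```python
# Summary of the endpoint-type distinction driving Case Study 10, as data.
# Restates the facts from the explanation above; not an exhaustive comparison.
ENDPOINT_TYPES = {
    "gateway": {
        "services": {"s3", "dynamodb"},        # gateway endpoints support only these
        "has_private_ip": False,               # implemented as route-table targets
        "reachable_over_direct_connect": False,  # VPC-local only
    },
    "interface": {
        "services": "many AWS services, including S3 and DynamoDB",
        "has_private_ip": True,                # an ENI with a private IP per subnet
        "reachable_over_direct_connect": True,  # routable from on-premises
    },
}

def usable_from_onprem(endpoint_type: str) -> bool:
    """Can on-premises systems reach this endpoint type over Direct Connect?"""
    return ENDPOINT_TYPES[endpoint_type]["reachable_over_direct_connect"]

print(usable_from_onprem("interface"), usable_from_onprem("gateway"))  # True False
```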
