
Nov 5, 2022

# AWS Wiki


# AWS Services

# Compute

# Serverless Workflows

# S3

# Resource Provisioning and Deployment Automation

# Cache

# Database

# Monitoring

# User Management & Security

# Security

# Networking

# Miscellaneous

# Public vs Private vs Multi vs Hybrid Cloud

# Security Best Practices

# Establishing a site-to-site VPN connection

# Auto Scaling

# Elastic Load Balancing

# Multi-Region support

# Application Load Balancers

# Network Load Balancers

# ALB vs NLB

# Alias records

In the response to a dig or nslookup query, an alias record is listed as the record type that you specified when you created the record, such as A or AAAA, rather than as a distinct ALIAS type.

# #sysops Scenarios

# Concepts I don’t understand:

Question: As part of the yearly AWS data cleanup, you need to delete all unused S3 buckets and their contents. The tutorialsdojo bucket, which contains several educational video files, has both the Versioning and MFA Delete features enabled. One of your Systems Engineers who has an Administrator account tried to delete the bucket using the aws s3 rb s3://tutorialsdojo command. However, the operation fails even after repeated attempts.

Answer: You can delete a bucket that contains objects using the AWS CLI only if the bucket does not have versioning enabled. If your bucket does not have versioning enabled, you can use the rb (remove bucket) AWS CLI command with the --force parameter to remove a non-empty bucket. An IAM Administrator account can suspend Versioning on an S3 bucket, but only the bucket owner (the root account) can enable or suspend MFA Delete.

Instead, you can configure a lifecycle rule on the bucket so that Amazon S3 expires and deletes the objects for you. You can add lifecycle configuration rules to expire all objects or a subset of objects with a specific key name prefix. For example, to remove all objects in a bucket, you can set a lifecycle rule to expire objects one day after creation. Because the bucket has versioning enabled, also configure the rule to expire noncurrent versions. After your objects expire, Amazon S3 deletes them. If you just want to empty the bucket and not delete it, remove the lifecycle rule you added afterward so that any new objects you create in the bucket will remain there.
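The lifecycle approach above can be sketched with the AWS CLI. The bucket name comes from the scenario; the rule ID is made up, the JSON is validated locally, and the aws calls are echoed so the sketch runs without an AWS account (drop the echo to execute):

```shell
# Expire current versions after 1 day and remove noncurrent versions, so
# Amazon S3 empties the versioned bucket for you.
BUCKET=tutorialsdojo

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "empty-bucket",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 1},
      "NoncurrentVersionExpiration": {"NoncurrentDays": 1}
    }
  ]
}
EOF

# Sanity-check the JSON before sending it to S3 (assumes python3 is available).
python3 -m json.tool lifecycle.json > /dev/null && echo "lifecycle.json OK"

# Echoed so the sketch runs without credentials; remove "echo" to execute.
echo aws s3api put-bucket-lifecycle-configuration \
  --bucket "$BUCKET" --lifecycle-configuration file://lifecycle.json
echo aws s3 rb "s3://$BUCKET"   # once S3 has expired everything
```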


Question: A leading energy company is trying to establish a static VPN connection between an on-premises network and their VPC in AWS. As their SysOps Administrator, you created the required virtual private gateway, customer gateway, and VPN connection, including the router configuration on the customer side. Although the VPN connection status looks fine in the console, you cannot reach an EC2 instance in the VPC from one of the on-premises virtual machines.

Answer: To enable instances in your VPC to reach your customer gateway, you must configure your route table to include the routes used by your VPN connection and point them to your virtual private gateway. You can enable route propagation for your route table to automatically propagate those routes to the table for you. For static routing, the static IP prefixes that you specify for your VPN configuration are propagated to the route table when the status of the VPN connection is UP. Similarly, for dynamic routing, the BGP-advertised routes from your customer gateway are propagated to the route table when the status of the VPN connection is UP.
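A hypothetical CLI sketch of the fix — all IDs and the on-premises CIDR are placeholders, and the commands are echoed so the sketch runs without an AWS account (drop the echo to execute):

```shell
ROUTE_TABLE_ID=rtb-0abc12345678
VGW_ID=vgw-0abc12345678

# Let the VPN's static or BGP-advertised routes propagate automatically.
echo aws ec2 enable-vgw-route-propagation \
  --route-table-id "$ROUTE_TABLE_ID" --gateway-id "$VGW_ID"

# Or add the on-premises prefix (placeholder CIDR) as a static route by hand.
echo aws ec2 create-route \
  --route-table-id "$ROUTE_TABLE_ID" \
  --destination-cidr-block 192.168.0.0/16 \
  --gateway-id "$VGW_ID"
```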


Question: A digital advertising company is planning to migrate its web-based data analytics application from its on-premises data center to AWS. You designed the architecture to use an Application Load Balancer and an Auto Scaling group of On-Demand EC2 Instances deployed in a private subnet. The instances will fetch data analytics from various API services over the Internet every 5 minutes. For security reasons, the EC2 instances should not allow any connections initiated from the Internet. What is the most scalable and highly available solution that should be implemented? Answer: You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances. To create a NAT gateway:

  1. You must specify the public subnet in which the NAT gateway should reside.
  2. You must also specify an Elastic IP address to associate with the NAT gateway when you create it.

After you’ve created a NAT gateway, you must update the route table associated with one or more of your private subnets to point Internet-bound traffic to the NAT gateway. This enables instances in your private subnets to communicate with the internet.
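The steps above, plus the route table update, as an echoed CLI sketch (all IDs are placeholders; drop the echo to execute):

```shell
PUBLIC_SUBNET_ID=subnet-0aaa12345678
PRIVATE_RT_ID=rtb-0bbb12345678

# Allocate an Elastic IP, then create the gateway in the PUBLIC subnet.
echo aws ec2 allocate-address --domain vpc
echo aws ec2 create-nat-gateway \
  --subnet-id "$PUBLIC_SUBNET_ID" --allocation-id eipalloc-0ccc12345678

# Then send the private subnet's Internet-bound traffic to the NAT gateway.
echo aws ec2 create-route \
  --route-table-id "$PRIVATE_RT_ID" \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0ddd12345678
```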

Question: A document management system of a legal firm is hosted in AWS Cloud with an S3 bucket as the primary storage service. To comply with the security requirements, you are instructed to ensure that the confidential documents and files stored in AWS are secured. Which features can be used to restrict access to data in S3?

Answer: By default, all Amazon S3 resources - buckets, objects, and related subresources (for example, lifecycle configuration and website configuration) - are private: only the resource owner, the AWS account that created the resource, can access it. The resource owner can optionally grant access permissions to others by writing an access policy. Amazon S3 offers access policy options broadly categorized as resource-based policies and user policies. Access policies you attach to your resources (buckets and objects) are referred to as resource-based policies. You can also attach access policies to users in your account; these are called user policies. You may choose resource-based policies, user policies, or some combination of the two to manage permissions to your Amazon S3 resources. Hence, configuring the S3 bucket policy to only allow access to authorized personnel and configuring S3 ACLs on the bucket and on each individual object are both correct answers.
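As a minimal sketch of the resource-based option, the bucket policy below denies every principal except one hypothetical IAM role — the bucket name, account ID, and role ARN are all made up; the JSON is validated locally and the aws call is echoed:

```shell
cat > bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAuthorizedRoleOnly",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::legal-docs-bucket",
        "arn:aws:s3:::legal-docs-bucket/*"
      ],
      "Condition": {
        "ArnNotEquals": {
          "aws:PrincipalArn": "arn:aws:iam::111122223333:role/LegalDocsRole"
        }
      }
    }
  ]
}
EOF

python3 -m json.tool bucket-policy.json > /dev/null && echo "policy OK"

# Echoed so the sketch runs without credentials; remove "echo" to execute.
echo aws s3api put-bucket-policy \
  --bucket legal-docs-bucket --policy file://bucket-policy.json
```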


Question: A company has a newly-hired DevOps Engineer that will assist the IT Manager in developing a fault-tolerant and highly available architecture, which is comprised of an Elastic Load Balancer and an Auto Scaling group of EC2 instances deployed on multiple AZ’s. This will be used by a forex trading application that requires WebSockets, host-based and path-based routing, and support for containerized applications.

Which of the following is the most suitable type of Elastic Load Balancer that the DevOps Engineer should recommend to the IT Manager?

Answer: Application Load Balancers support WebSockets, path-based routing, host-based routing, and containerized applications. A Network Load Balancer is incorrect because it does not support path-based or host-based routing.


Question: An organization hosts an application across multiple Amazon EC2 instances backed by an Amazon Elastic File System (Amazon EFS) file system. While monitoring the instances, the SysOps administrator noticed that the file system’s PercentIOLimit metric consistently hit 100% for 20 minutes or longer. This issue resulted in the poor performance of the application that reads and writes data into the file system. The SysOps admin needs to ensure high throughput and IOPS while accessing the file system.

What step should the SysOps administrator perform to resolve the high PercentIOLimit metric on the file system? Answer: PercentIOLimit shows how close a file system is to reaching the I/O limit of the General Purpose performance mode. If this metric is at 100 percent more often than not, consider moving your application to a file system that uses the Max I/O performance mode. If the PercentIOLimit percentage returned was at or near 100 percent for a significant amount of time, your application should use the Max I/O performance mode; otherwise, it should use the default General Purpose mode. To move to a different performance mode, migrate the data to a new file system created in the other mode - the performance mode cannot be changed on an existing file system. You can use AWS DataSync to transfer the files between the two EFS file systems.
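A hypothetical migration sketch — the creation token and DataSync location ARNs are placeholders, and the commands are echoed so the sketch runs without an AWS account:

```shell
TOKEN=migrate-to-maxio

# New file system in Max I/O mode (the mode cannot be changed after creation).
echo aws efs create-file-system \
  --performance-mode maxIO --creation-token "$TOKEN"

# DataSync task copying from the old EFS location to the new one
# (location ARNs must be created first and are placeholders here).
echo aws datasync create-task \
  --source-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-old-efs \
  --destination-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-new-efs
```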


Question: An IT solutions company offers a service that allows users to upload and download files when needed. The files are retrievable for one year and are stored in Amazon S3 Standard. The SysOps administrator noticed that users frequently access the files stored on the bucket for the first 30 days, and from then on, the files are rarely accessed.

The SysOps administrator needs to implement a cost-effective S3 Lifecycle policy that maintains the object availability for users.

Which action should the SysOps administrator perform to achieve the requirements?

Answer: Configure all buckets to transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days. You may think the answer is to configure an S3 Lifecycle policy that moves objects to the S3 One Zone-Infrequent Access (S3 One Zone-IA) class after 30 days, but this is incorrect because moving an object to S3 One Zone-IA will not maintain object availability for users. Amazon S3 Standard replicates data across a minimum of three AZs to protect against the loss of one entire AZ, while the Amazon S3 One Zone-IA storage class stores data within a single AZ only.
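The transition rule as a CLI sketch, with a hypothetical bucket name — the JSON is validated locally and the aws call is echoed so the sketch runs without an AWS account:

```shell
cat > transition.json <<'EOF'
{
  "Rules": [
    {
      "ID": "to-standard-ia",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"}
      ]
    }
  ]
}
EOF

python3 -m json.tool transition.json > /dev/null && echo "transition.json OK"

# Echoed so the sketch runs without credentials; remove "echo" to execute.
echo aws s3api put-bucket-lifecycle-configuration \
  --bucket file-sharing-bucket --lifecycle-configuration file://transition.json
```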


Question: A financial company is launching an online web portal that will be hosted in an Auto Scaling group of Amazon EC2 instances across multiple Availability Zones behind an Application Load Balancer (ALB). To allow HTTP and HTTPS traffic, the SysOps Administrator configured the Network ACL and the Security Group of both the ALB and EC2 instances to allow inbound traffic on ports 80 and 443. The EC2 cluster also connects to a third-party API that provides additional information on the site. However, the online portal is still unreachable over the public internet after the deployment.

How can the Administrator fix this issue?

Answer: Allow ephemeral ports in the Network ACL by adding a new rule to allow outbound traffic on port 1024-65535.

To enable the connection to a service running on an instance, the associated network ACL must allow both inbound traffic on the port that the service is listening on as well as allow outbound traffic from ephemeral ports. When a client connects to a service, a random port from the ephemeral port range (1024-65535) becomes the client’s source port. The designated ephemeral port then becomes the destination port for return traffic from the service, so outbound traffic from the ephemeral port must be allowed in the network ACL. By default, network ACLs allow all inbound and outbound traffic. If your network ACL is more restrictive then you need to explicitly allow traffic from the ephemeral port range.
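The missing outbound rule as an echoed CLI sketch (the NACL ID and rule number are placeholders; drop the echo to execute):

```shell
NACL_ID=acl-0abc12345678

# Outbound TCP to clients' ephemeral source ports (1024-65535).
echo aws ec2 create-network-acl-entry \
  --network-acl-id "$NACL_ID" \
  --rule-number 120 \
  --protocol tcp \
  --port-range From=1024,To=65535 \
  --cidr-block 0.0.0.0/0 \
  --egress \
  --rule-action allow
```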


Question: A live chat application is hosted in AWS which can be embedded as a widget in any website. It uses WebSockets to provide full-duplex communication between the users. The application is hosted on an Auto Scaling group of On-Demand EC2 instances across multiple Availability Zones with an Application Load Balancer in front to balance the incoming traffic. As part of the security audit of the company, there is a requirement that the client’s IP address, latencies, request paths, and server responses are properly logged.

How can you meet the given requirement in this scenario?

Answer: Enable access logging on the Application Load Balancer and store the logs in an S3 bucket.

Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.

Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your application load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify as compressed files. You can disable access logging at any time.

When you enable access logging, you must set up a standard S3 bucket where the load balancer will store the logs. The bucket must be located in the same region as the load balancer.
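Enabling ALB access logs can be sketched as below — the load balancer ARN and bucket name are placeholders, and the command is echoed so it runs without an AWS account:

```shell
ALB_ARN=arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/chat-alb/0123456789abcdef

# The bucket must be in the same Region as the load balancer and must grant
# the Elastic Load Balancing service permission to write to it.
echo aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn "$ALB_ARN" \
  --attributes Key=access_logs.s3.enabled,Value=true \
               Key=access_logs.s3.bucket,Value=chat-alb-access-logs
```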


Question: A leading tech consultancy firm has an AWS Virtual Private Cloud (VPC) with one public subnet and a new blockchain application that is deployed to an m3.large EC2 instance. After a month, your manager instructed you to ensure that the application can support IPv6 addresses.

Which of the following should you do to satisfy the requirement?

Answer:

  1. Associate an IPv6 CIDR Block with the VPC and Subnets - Associate an Amazon-provided IPv6 CIDR block with your VPC and with your subnets.
  2. Update the Route Tables - Update your route tables to route your IPv6 traffic. For a public subnet, create a route that sends all IPv6 traffic from the subnet to the Internet gateway. For a private subnet, create a route that sends all Internet-bound IPv6 traffic from the subnet to an egress-only Internet gateway.
  3. Update the Security Group Rules - Update your security group rules to include rules for IPv6 addresses. This enables IPv6 traffic to flow to and from your instances. If you’ve created custom network ACL rules to control the flow of traffic to and from your subnet, you must include rules for IPv6 traffic there as well.
  4. Change the Instance Type to m4.large - If your instance type does not support IPv6, resize the instance to a supported type. In this scenario, the instance is an m3.large, which does not support IPv6, so it must be resized to a supported instance type such as m4.large.
  5. Assign IPv6 Addresses to the EC2 Instance - Assign IPv6 addresses to your instances from the IPv6 address range of your subnet.
  6. (Optional) Configure IPv6 on your Instances - If your instance was launched from an AMI that is not configured to use DHCPv6, you must manually configure the instance to recognize the IPv6 address assigned to it.

Take note that the EC2 instance is an m3.large instance type, which does not support IPv6; you must resize it to a supported instance type, for example, m4.large. Remember that configuring IPv6 on the instance itself is just an optional step. If you have an existing VPC that supports IPv4 only, and resources in your subnet that are configured to use IPv4 only, you can enable IPv6 support for your VPC and resources. Your VPC can operate in dual-stack mode - your resources can communicate over IPv4, IPv6, or both. IPv4 and IPv6 communication are independent of each other. You cannot disable IPv4 support for your VPC and subnets; this is the default IP addressing system for Amazon VPC and Amazon EC2.
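Steps 1, 2, and 5 above can be sketched with the CLI — every ID and the IPv6 /64 are placeholders, and the commands are echoed so the sketch runs without an AWS account:

```shell
VPC_ID=vpc-0abc12345678
SUBNET_ID=subnet-0abc12345678
RT_ID=rtb-0abc12345678
IGW_ID=igw-0abc12345678

# 1. Amazon-provided IPv6 CIDR for the VPC, then a /64 slice for the subnet.
echo aws ec2 associate-vpc-cidr-block \
  --vpc-id "$VPC_ID" --amazon-provided-ipv6-cidr-block
echo aws ec2 associate-subnet-cidr-block \
  --subnet-id "$SUBNET_ID" --ipv6-cidr-block 2001:db8:1234:1a00::/64

# 2. Route all IPv6 traffic from the public subnet to the Internet gateway.
echo aws ec2 create-route \
  --route-table-id "$RT_ID" \
  --destination-ipv6-cidr-block ::/0 --gateway-id "$IGW_ID"

# 5. Assign an IPv6 address to the (resized m4.large) instance's interface.
echo aws ec2 assign-ipv6-addresses \
  --network-interface-id eni-0abc12345678 --ipv6-address-count 1
```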




Question: A SysOps Administrator is managing a web application hosted in an Amazon EC2 instance. The security groups and network ACLs are configured to allow HTTP and HTTPS traffic to the instance. A manager has received a report that a customer cannot access the application. The Administrator is instructed to investigate whether the traffic is reaching the instance. What is the best way to satisfy this requirement?

Answer: Use Amazon VPC Flow Logs.

VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs and Amazon S3. After you’ve created a flow log, you can retrieve and view its data in the chosen destination.

Flow logs can help you with a number of tasks, such as diagnosing overly restrictive security group rules and monitoring the traffic that is reaching your instance.

To determine whether the customer’s traffic is reaching the instance, create a flow log for the instance’s network interface. If you create a flow log for a subnet or VPC, each network interface in that subnet or VPC is monitored. Flow log data is collected outside of your network traffic path, and therefore does not affect network throughput or latency.
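A flow log scoped to the instance’s network interface, as an echoed CLI sketch (the ENI ID, log group name, and IAM role ARN are placeholders):

```shell
ENI_ID=eni-0abc12345678

# Capture both accepted and rejected traffic hitting this interface.
echo aws ec2 create-flow-logs \
  --resource-type NetworkInterface \
  --resource-ids "$ENI_ID" \
  --traffic-type ALL \
  --log-group-name vpc-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::111122223333:role/FlowLogsRole
```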


Question: An online stock trading application is extensively using an S3 bucket to store client data. To comply with the financial regulatory requirements, you need to generate a report on the replication and encryption status of all of the objects stored in your bucket. The report should show which type of server-side encryption is being used by each object.   

As the Systems Administrator of the company, how can you meet the above requirement with the least amount of effort?

Answer: Use S3 Inventory to generate the required report.

Amazon S3 inventory is one of the tools Amazon S3 provides to help manage your storage. You can use it to audit and report on the replication and encryption status of your objects for business, compliance, and regulatory needs. You can also simplify and speed up business workflows and big data jobs using Amazon S3 inventory, which provides a scheduled alternative to the Amazon S3 synchronous List API operation.

Do not use S3 Analytics, because S3 Analytics is primarily used to analyze storage access patterns to help you decide when to transition the right data to the right storage class. It does not provide a report containing the replication and encryption status of your objects.

Do not use S3 Select, because S3 Select is only used to retrieve specific data from the contents of an object using simple SQL expressions without having to retrieve the entire object. It does not generate a detailed report, unlike S3 Inventory.
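The inventory configuration for this report can be sketched as follows — the source and destination bucket names and the account ID are placeholders; the JSON is validated locally and the aws call is echoed:

```shell
cat > inventory.json <<'EOF'
{
  "Id": "compliance-report",
  "IsEnabled": true,
  "IncludedObjectVersions": "All",
  "Schedule": {"Frequency": "Daily"},
  "OptionalFields": ["ReplicationStatus", "EncryptionStatus"],
  "Destination": {
    "S3BucketDestination": {
      "Bucket": "arn:aws:s3:::inventory-reports",
      "Format": "CSV",
      "AccountId": "111122223333"
    }
  }
}
EOF

python3 -m json.tool inventory.json > /dev/null && echo "inventory.json OK"

# Echoed so the sketch runs without credentials; remove "echo" to execute.
echo aws s3api put-bucket-inventory-configuration \
  --bucket client-data --id compliance-report \
  --inventory-configuration file://inventory.json
```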


Question: A financial start-up has recently adopted a hybrid cloud infrastructure with AWS Cloud. They are planning to migrate their online payments system, which supports IPv6 and uses an Oracle database in a RAC configuration. As the AWS Consultant, you have to make sure that the application can initiate outgoing traffic to the Internet but block any incoming connection from the Internet.

Which of the following options would you do to properly migrate the application to AWS?

Answer: Migrate the Oracle database to an EC2 instance. Launch the application on a separate EC2 instance and then set up an egress-only Internet gateway.

An egress-only Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows outbound communication over IPv6 from instances in your VPC to the Internet, and prevents the Internet from initiating an IPv6 connection with your instances.

An instance in your public subnet can connect to the Internet through the Internet gateway if it has a public IPv4 address or an IPv6 address. Similarly, resources on the Internet can initiate a connection to your instance using its public IPv4 address or its IPv6 address; for example, when you connect to your instance using your local computer.

IPv6 addresses are globally unique, and are therefore public by default. If you want your instance to be able to access the Internet but want to prevent resources on the Internet from initiating communication with your instance, you can use an egress-only Internet gateway. To do this, create an egress-only Internet gateway in your VPC, and then add a route to your route table that points all IPv6 traffic (::/0) or a specific range of IPv6 address to the egress-only Internet gateway. IPv6 traffic in the subnet that’s associated with the route table is routed to the egress-only Internet gateway.

Remember that a NAT device in your private subnet does not support IPv6 traffic. As an alternative, create an egress-only Internet gateway for your private subnet to enable outbound communication to the internet over IPv6 and prevent inbound communication. An egress-only Internet gateway supports IPv6 traffic only.

Take note that the application that will be migrated is using an Oracle database on a RAC configuration which is not supported by RDS.
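The gateway and route from the answer above, as an echoed CLI sketch with placeholder IDs (drop the echo to execute):

```shell
VPC_ID=vpc-0abc12345678
RT_ID=rtb-0abc12345678

echo aws ec2 create-egress-only-internet-gateway --vpc-id "$VPC_ID"

# Send all IPv6 traffic (::/0) out through the egress-only gateway.
echo aws ec2 create-route \
  --route-table-id "$RT_ID" \
  --destination-ipv6-cidr-block ::/0 \
  --egress-only-internet-gateway-id eigw-0abc12345678
```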


Question: A company has several applications and workloads running on AWS that are managed by various teams. The SysOps Administrator has been instructed to configure alerts to notify the teams in the event that resource utilization exceeds the defined threshold.

Which of the following is the MOST suitable AWS service that the Administrator should use?

Answer: AWS Budgets.


Question: A leading national bank migrated its on-premises infrastructure to AWS. The SysOps Administrator noticed that the cache hit ratio of the CloudFront web distribution is less than 15%.

Answer:


Question: A microservice application is being hosted in the ap-southeast-1 and ap-northeast-1 regions. The ap-southeast-1 region accounts for 80% of traffic, with the rest from ap-northeast-1. As part of the company’s business continuity plan, all traffic must be rerouted to the other region if one of the regions’ servers fails.

Which solution can comply with the requirement?

Answer: Set up an 80/20 weighted routing policy in Amazon Route 53 and enable health checks on the records.

Do not set up a failover routing policy in Route 53. This routing policy does not let you control how much traffic is routed across your resources.
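An 80/20 weighted record pair might look like the sketch below — the hosted zone ID, domain, IP addresses, and health check IDs are all placeholders; the change batch is validated locally and the aws call is echoed:

```shell
cat > weighted.json <<'EOF'
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": "ap-southeast-1",
        "Weight": 80,
        "TTL": 60,
        "HealthCheckId": "hc-southeast-placeholder",
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": "ap-northeast-1",
        "Weight": 20,
        "TTL": 60,
        "HealthCheckId": "hc-northeast-placeholder",
        "ResourceRecords": [{"Value": "203.0.113.20"}]
      }
    }
  ]
}
EOF

python3 -m json.tool weighted.json > /dev/null && echo "weighted.json OK"

# Echoed so the sketch runs without credentials; remove "echo" to execute.
echo aws route53 change-resource-record-sets \
  --hosted-zone-id Z123PLACEHOLDER --change-batch file://weighted.json
```

With a health check attached to each record, Route 53 stops returning a record whose check fails, so all traffic shifts to the surviving region.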


Question: A leading media company plans to launch a data analytics application. The SysOps Administrator designed an architecture that uses On-Demand EC2 instances in an Auto Scaling group to read messages from an SQS queue. A month later, the new application was deployed to production, but the Operations team noticed that when the incoming message traffic increases, the EC2 instances fall behind and take too long to process the messages.

How can the SysOps Administrator configure the current cloud architecture to reduce the latency during traffic spikes?

Answer: Configure the Auto Scaling group to scale out based on the number of messages in the SQS queue.
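One common way to implement this is a target-tracking policy on a "backlog per instance" custom metric (queue depth divided by instance count, published to CloudWatch separately). The group name, metric names, and target value below are all assumptions; the JSON is validated locally and the aws call is echoed:

```shell
cat > sqs-scaling.json <<'EOF'
{
  "TargetValue": 100.0,
  "CustomizedMetricSpecification": {
    "MetricName": "BacklogPerInstance",
    "Namespace": "MyApp",
    "Dimensions": [{"Name": "QueueName", "Value": "analytics-queue"}],
    "Statistic": "Average"
  }
}
EOF

python3 -m json.tool sqs-scaling.json > /dev/null && echo "sqs-scaling.json OK"

# Echoed so the sketch runs without credentials; remove "echo" to execute.
echo aws autoscaling put-scaling-policy \
  --auto-scaling-group-name analytics-asg \
  --policy-name sqs-backlog-target \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration file://sqs-scaling.json
```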