Question 1: Incorrect
Your company is building a music sharing
platform on which users can upload their songs. As a solutions architect for
the platform, you have designed an architecture that will leverage a Network
Load Balancer linked to an Auto Scaling Group across multiple availability
zones. The songs live on an FTP server that your EC2 instances can easily access. You
are currently running with 4 EC2 instances in your ASG, but when a very popular
song is released, your Auto Scaling Group scales to 100 instances and you start
to incur high network and compute fees. You are looking for a way to
dramatically decrease the costs without changing any of the application code.
What do you recommend?
· Move the songs to Glacier
· Move the songs to S3 (Incorrect)
· Use a CloudFront distribution (Correct)
· Leverage Storage Gateway
Explanation
AWS Storage Gateway is a hybrid cloud storage
service that gives you on-premises access to virtually unlimited cloud storage.
The service provides three different types of gateways - Tape Gateway, File
Gateway, and Volume Gateway - that seamlessly connect on-premises
applications to cloud storage, caching data locally for low-latency
access.
Storage Gateway cannot be used for distributing files to end users, so
this option is ruled out.
Amazon CloudFront is a fast content delivery
network (CDN) service that securely delivers data, videos, applications,
and APIs to customers globally with low latency, high transfer speeds, all
within a developer-friendly environment. CloudFront points of presence
(POPs, or edge locations) make sure that popular content can be served quickly to
your viewers. CloudFront also has regional edge caches that bring more of
your content closer to your viewers, even when the content is not popular
enough to stay at a POP, to help improve performance for that content. Regional
edge caches help with all types of content, particularly content that tends to
become less popular over time. Examples include user-generated content, such as
video, photos, or artwork; e-commerce assets such as product photos and videos;
and news and event-related content that might suddenly find new popularity.
CloudFront is the right answer because we can
put it in front of our ASG and leverage its global edge caching to help
us distribute the content reliably at a dramatically reduced cost (the ASG
won't need to scale nearly as much).
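As an illustration only, here is a minimal boto3 sketch of putting a CloudFront distribution in front of the existing Network Load Balancer; the NLB DNS name, cache TTL, and other settings are assumptions for the example, not values from the question:

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Hypothetical DNS name of the NLB that fronts the Auto Scaling Group
NLB_DNS_NAME = "songs-nlb-1234567890.elb.us-east-1.amazonaws.com"

cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),  # must be unique per request
    "Comment": "Cache songs at the edge so the ASG does not have to scale out",
    "Enabled": True,
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "songs-nlb-origin",
            "DomainName": NLB_DNS_NAME,
            "CustomOriginConfig": {
                "HTTPPort": 80,
                "HTTPSPort": 443,
                "OriginProtocolPolicy": "http-only",
            },
        }],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "songs-nlb-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
        "MinTTL": 86400,  # keep popular songs cached at the edge for a day
    },
})
```

Because the distribution simply sits in front of the existing load balancer, no application code has to change.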
Amazon Simple Storage Service (Amazon S3) is an
object storage service that offers industry-leading scalability, data
availability, security, and performance.
Using S3 would imply changing the application
code (the application currently reads the songs from an FTP server), so this option is ruled out.
Amazon S3 Glacier and S3 Glacier Deep Archive
are secure, durable, and extremely low-cost Amazon S3 cloud storage classes for
data archiving and long-term backup. They are designed to deliver 99.999999999%
durability, and provide comprehensive security and compliance capabilities that
can help meet even the most stringent regulatory requirements.
Glacier is not applicable as the files are
frequently requested (Glacier has retrieval times ranging from a few minutes to
hours), so this option is also ruled out.
References:
https://aws.amazon.com/cloudfront/
https://aws.amazon.com/storagegateway/
Question 2: Incorrect
Your company has grown from
a small startup to now being a leading tech company employing over 1000 people.
As part of the scaling of your AWS team, you have observed some strange
behavior with S3 bucket settings regularly being changed. How can you figure
out what's happening without restricting the rights of your users?
· Implement a bucket policy requiring MFA for any operations
· Implement an IAM policy to forbid users to change S3 bucket settings (Incorrect)
· Use S3 access logs and analyze them using Athena
· Use CloudTrail to analyze API calls (Correct)
Explanation
You manage access in AWS by
creating policies and attaching them to IAM identities (users, groups of users,
or roles) or AWS resources. A policy is an object in AWS that, when associated
with an identity or resource, defines their permissions. AWS evaluates these
policies when an IAM principal (user or role) makes a request. Permissions in
the policies determine whether the request is allowed or denied. Most policies
are stored in AWS as JSON documents. AWS supports six types of policies:
identity-based policies, resource-based policies, permissions boundaries,
Organizations SCPs, ACLs, and session policies.
Implementing an IAM policy
to forbid users from changing S3 bucket settings would restrict the rights of your users, which the question explicitly asks you to avoid, and it still wouldn't tell you what is happening.
S3 server access logging
provides detailed records for the requests that are made to a bucket. Server
access logs are useful for many applications. For example, access log
information can be useful in security and access audits. It can also help you
learn about your customer base and understand your Amazon S3 bill.
S3 server access logs are
designed for analyzing object access patterns and your S3 bill; auditing who is changing
bucket settings is a job for CloudTrail, so this is not the correct choice for this use case.
Amazon S3 supports
MFA-protected API access, a feature that can enforce multi-factor
authentication (MFA) for access to your Amazon S3 resources. Multi-factor
authentication provides an extra level of security that you can apply to your
AWS environment. It is a security feature that requires users to prove physical
possession of an MFA device by providing a valid MFA code.
Requiring MFA for any operation
would restrict how your users can work with the buckets and still would not reveal who is changing the settings.
AWS CloudTrail is a service
that enables governance, compliance, operational auditing, and risk auditing of
your AWS account. With CloudTrail, you can log, continuously monitor, and
retain account activity related to actions across your AWS infrastructure.
CloudTrail provides event history of your AWS account activity, including
actions taken through the AWS Management Console, AWS SDKs, command line tools,
and other AWS services.
Here, and in general, to
analyze any API calls made within your AWS account, you should use CloudTrail.
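For example, here is a minimal boto3 sketch that queries the CloudTrail event history for recent S3 bucket-policy changes; the event name and look-back window are assumptions for illustration:

```python
from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client("cloudtrail")

# Look for bucket-policy changes over the last 7 days (adjust the event name
# to PutBucketAcl, PutBucketVersioning, etc. for other bucket settings)
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "PutBucketPolicy"}
    ],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    MaxResults=50,
)

for event in response["Events"]:
    # Username may be missing for some event types, hence .get()
    print(event["EventTime"], event.get("Username"), event["EventName"])
```

This reads the existing CloudTrail event history, so it answers "who changed what" without restricting anyone's permissions.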
References:
https://aws.amazon.com/cloudtrail/
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html#example-bucket-policies-use-case-7
Question 3: Correct
You have recently created a
new department in your company. Being the solutions architect, you have created
a custom VPC to isolate the resources created in this new department. You have
set up the public subnet and internet gateway (IGW). You are still not able to
ping the EC2 instance with an Elastic IP launched in this VPC. What should you
do? (select two)
· Create a secondary IGW to attach with public subnet and move the current IGW to private and write route tables
· Check if the security group allows ping from the source (Correct)
· Disable Source / Destination check on the EC2 instance
· Check if the route table is configured with IGW (Correct)
· Contact AWS support to map your VPC with subnet
Explanation
An internet gateway (IGW)
is a horizontally scaled, redundant, and highly available VPC component that
allows communication between instances in your VPC and the internet. An
internet gateway serves two purposes: to provide a target in your VPC route
tables for internet-routable traffic, and to perform network address
translation (NAT) for instances that have been assigned public IPv4 addresses.
An internet gateway supports IPv4 and IPv6 traffic. It does not cause
availability risks or bandwidth constraints on your network traffic. To enable
access to or from the internet for instances in a subnet in a VPC, you must do
the following: Attach an internet gateway to your VPC. Add a route to your
subnet's route table that directs internet-bound traffic to the internet
gateway. Ensure that instances in your subnet have a globally unique IP address.
Ensure that your network access control lists and security group rules allow
the relevant traffic to flow to and from your instance.
A security group acts as a
virtual firewall that controls the traffic for one or more instances. When you
launch an instance, you can specify one or more security groups; otherwise, we
use the default security group. You can add rules to each security group that
allow traffic to or from its associated instances. You can modify the rules for
a security group at any time; the new rules are automatically applied to all
instances that are associated with the security group. When we decide whether
to allow traffic to reach an instance, we evaluate all the rules from all the security
groups that are associated with the instance. The following are the
characteristics of security group rules: By default, security groups allow all
outbound traffic. Security group rules are always permissive; you can't create
rules that deny access. Security groups are stateful.
A route table contains a
set of rules, called routes, that are used to determine where network traffic
from your subnet or gateway is directed.
After creating an IGW, make
sure the route tables are updated. Additionally, ensure the security group
allows the ICMP protocol for ping requests.
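As a rough sketch (all resource IDs and the CIDR range below are hypothetical), the two checks from the correct options could be fixed with boto3 like this:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs for illustration only
SG_ID = "sg-0123456789abcdef0"
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"
IGW_ID = "igw-0123456789abcdef0"

# Allow inbound ICMP echo requests (ping) from a trusted source range
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "icmp",
        "FromPort": 8,   # ICMP type 8 = echo request
        "ToPort": -1,    # any ICMP code
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
    }],
)

# Send internet-bound traffic from the public subnet's route table to the IGW
ec2.create_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=IGW_ID,
)
```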
The Source/Destination
Check attribute controls whether source/destination checking is enabled on the
instance. Disabling this attribute enables an instance to handle network
traffic that isn't specifically destined for the instance. For example,
instances running services such as network address translation, routing, or a
firewall should set this value to disabled. The default value is enabled.
Source/Destination Check is
not relevant to the question and it has been added as a distractor.
There is no such thing as a
secondary IGW and the option "Create a secondary IGW to attach with public
subnet and move the current IGW to private and write route tables" has
been added as a distractor.
You cannot contact AWS
support to map your VPC with a subnet; this option is another distractor.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#change_source_dest_check
Question 4: Incorrect
One of the fastest growing
rideshare companies in the United States uses AWS Cloud for its IT
infrastructure. The rideshare service is available in more than 200 cities
facilitating millions of rides per month. The company uses AWS to move faster
and manage its exponential growth, leveraging AWS products to support more than
100 microservices that enhance every element of its customers’ experience. The
company wants to improve the ride-tracking system that stores GPS coordinates
for all rides. The engineering team at the company is looking for a database
that offers single-digit millisecond latency, can scale horizontally, and is serverless, so
that they can perform high frequency lookups reliably. As a Solutions
Architect, what database do you recommend to them?
· Neptune
· DynamoDB (Correct)
· ElastiCache
· RDS (Incorrect)
Explanation
Amazon DynamoDB is a
key-value and document database that delivers single-digit millisecond
performance at any scale. It's a fully managed, multiregion, multimaster,
durable database with built-in security, backup and restore, and in-memory caching
for internet-scale applications. DynamoDB can handle more than 10 trillion
requests per day and can support peaks of more than 20 million requests per
second.
DynamoDB is serverless, delivers
single-digit millisecond latency, and scales horizontally, so it meets all three requirements.
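To make the recommendation concrete, here is a minimal boto3 sketch of the kind of high-frequency write/lookup the ride-tracking system would perform; the table name and key schema are assumptions for the example:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("RideGpsCoordinates")  # hypothetical table name

# Store a GPS point (partition key: ride_id, sort key: timestamp)
table.put_item(Item={
    "ride_id": "ride-12345",
    "timestamp": "2024-06-01T12:00:00Z",
    "lat": "37.7749",   # stored as strings to avoid float/Decimal issues
    "lon": "-122.4194",
})

# Single-digit-millisecond lookup of that point by its key
response = table.get_item(
    Key={"ride_id": "ride-12345", "timestamp": "2024-06-01T12:00:00Z"}
)
print(response.get("Item"))
```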
Amazon ElastiCache allows
you to seamlessly set up, run, and scale popular open-source-compatible
in-memory data stores in the cloud. Build data-intensive apps or boost the
performance of your existing databases by retrieving data from high throughput
and low latency in-memory data stores. Amazon ElastiCache is a popular choice
for real-time use cases like Caching, Session Stores, Gaming, Geospatial
Services, Real-Time Analytics, and Queuing.
Amazon Relational Database
Service (Amazon RDS) makes it easy to set up, operate, and scale a relational
database in the cloud. It provides cost-efficient and resizable capacity while
automating time-consuming administration tasks such as hardware provisioning,
database setup, patching and backups.
Amazon Neptune is a fast,
reliable, fully managed graph database service that makes it easy to build and
run applications that work with highly connected datasets. The core of Amazon
Neptune is a purpose-built, high-performance graph database engine optimized
for storing billions of relationships and querying the graph with milliseconds
latency.
ElastiCache, RDS, and Neptune
are not serverless databases, so they do not meet the stated requirements.
References:
https://aws.amazon.com/dynamodb/
https://aws.amazon.com/elasticache/
https://aws.amazon.com/neptune/
Question 5: Incorrect
Your company is deploying a
website running on Elastic Beanstalk. That website takes over 45 minutes to
install and contains both static and dynamic files that must be generated
during the installation process. As a Solutions Architect, you would like to
bring the time it takes to create a new instance in your Elastic Beanstalk deployment
down to less than 2 minutes. What do you recommend? (select two)
· Use EC2 user data to customize the dynamic installation parts at boot time (Correct)
· Use Elastic Beanstalk deployment caching feature (Incorrect)
· Create a Golden AMI with the static installation components already setup (Correct)
· Store the installation files in S3 so they can be quickly retrieved
· Use EC2 User Data to install the application at boot time (Incorrect)
Explanation
AWS Elastic Beanstalk is an
easy-to-use service for deploying and scaling web applications and services
developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar
servers such as Apache, Nginx, Passenger, and IIS. You can simply upload your
code and Elastic Beanstalk automatically handles the deployment, from capacity
provisioning, load balancing, auto-scaling to application health monitoring. At
the same time, you retain full control over the AWS resources powering your
application and can access the underlying resources at any time.
When you create an AWS
Elastic Beanstalk environment, you can specify an Amazon Machine Image (AMI) to
use instead of the standard Elastic Beanstalk AMI included in your platform
version. A custom AMI can improve provisioning times when instances are
launched in your environment if you need to install a lot of software that
isn't included in the standard AMIs.
A golden AMI is an AMI that
you standardize through configuration, consistent security patching, and
hardening. It also contains agents you approve for logging, security,
performance monitoring, etc. For the given use-case, you can have the static
installation components already setup via the golden AMI.
EC2 instance user data is
data that you can pass to an instance, typically in the form of a configuration
script, when you launch it. You can use EC2 user data to customize the dynamic
installation parts at boot time, rather than installing the entire application
at boot time.
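As a sketch of how the two correct options fit together (the AMI ID, instance type, and boot commands below are placeholders, not values from the question), you would launch instances from the golden AMI and hand only the dynamic steps to user data:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical golden AMI with the static installation already baked in
GOLDEN_AMI_ID = "ami-0123456789abcdef0"

# Only the short, environment-specific steps run at boot via user data
user_data = """#!/bin/bash
/opt/app/bin/render-config --env production   # placeholder dynamic step
systemctl restart app
"""

ec2.run_instances(
    ImageId=GOLDEN_AMI_ID,
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,  # boto3 base64-encodes this for the API call
)
```

In the Elastic Beanstalk case you would reference the golden AMI as a custom AMI for the environment rather than calling run_instances yourself; the sketch only shows where the static and dynamic parts of the installation end up.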
Elastic Beanstalk
deployment caching is a made-up option; it has been added purely as a distractor.
References:
https://aws.amazon.com/elasticbeanstalk/
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.customenv.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-add-user-data.html
https://aws.amazon.com/blogs/awsmarketplace/announcing-the-golden-ami-pipeline/