We provide our customers with the finest SAA-C03 preparation material available in PDF format. Our Amazon SAA-C03 exam questions and answers are carefully analyzed and crafted around the latest exam patterns by our experts. This steadfast commitment to excellence has built lasting trust among the many people who use our material to advance their careers. Our learning resources are designed to help students score over 97% on the Amazon SAA-C03 exam. We value your time and investment, and we are committed to providing the best resources and leaving no room for error.
Friendly Support Available 24/7:
If you face any issues with our Amazon SAA-C03 exam dumps, our customer support specialists are ready to assist you promptly. Your success is our priority: we believe in quality, and our customers always come first. Our team is available 24/7 to offer guidance and support for your Amazon SAA-C03 exam preparation. Feel free to reach out with any questions or if you run into any difficulty or confusion. We are committed to ensuring you have the study materials you need to excel.
Verified and approved Dumps for Amazon SAA-C03:
Our team of IT experts delivers the most accurate and reliable SAA-C03 dumps for your Amazon SAA-C03 exam. All of the study material is verified and approved by our team. This meticulously verified material, endorsed by our IT experts, helps you excel with distinction in the SAA-C03 exam. The resource consists of SAA-C03 exam questions and answers that mirror the actual exam format, facilitating effective preparation. Our committed team works tirelessly to make sure our customers can confidently pass their exams on the first attempt, backed by the assurance that our SAA-C03 dumps are thoroughly vetted by our experts.
Amazon SAA-C03 Questions:
Embark on your certification journey with confidence, as we provide the most reliable Amazon SAA-C03 dumps available. Our commitment to your success comes with a 100% passing guarantee, ensuring that you pass your Amazon SAA-C03 exam on your first attempt. Our dedicated team of seasoned experts has carefully designed our Amazon SAA-C03 dumps PDF to align with the actual exam questions and answers. Trust our comprehensive SAA-C03 exam questions and answers to be your reliable companion for acing the SAA-C03 certification.
Amazon SAA-C03 Sample Questions
Question # 1
A company is developing a mobile game that streams score updates to a backend processor and then posts results on a leaderboard. A solutions architect needs to design a solution that can handle large traffic spikes, process the mobile game updates in order of receipt, and store the processed updates in a highly available database. The company also wants to minimize the management overhead required to maintain the solution. What should the solutions architect do to meet these requirements?
A. Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams with AWS Lambda. Store the processed updates in Amazon DynamoDB.
B. Push score updates to Amazon Kinesis Data Streams. Process the updates with a fleet of Amazon EC2 instances set up for Auto Scaling. Store the processed updates in Amazon Redshift.
C. Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS Lambda function to the SNS topic to process the updates. Store the processed updates in a SQL database running on Amazon EC2.
D. Push score updates to an Amazon Simple Queue Service (Amazon SQS) queue. Use a fleet of Amazon EC2 instances with Auto Scaling to process the updates in the SQS queue. Store the processed updates in an Amazon RDS Multi-AZ DB instance.
Answer: A
Explanation: Amazon Kinesis Data Streams is a scalable and reliable service that can
ingest, buffer, and process streaming data in real-time. It can handle large traffic spikes
and preserve the order of the incoming data records. AWS Lambda is a serverless
compute service that can process the data streams from Kinesis Data Streams without
requiring any infrastructure management. It can also scale automatically to match the
throughput of the data stream. Amazon DynamoDB is a fully managed, highly available,
and fast NoSQL database that can store the processed updates from Lambda. It can also
handle high write throughput and provide consistent performance. By using these services,
the solutions architect can design a solution that meets the requirements of the company
with the least operational overhead.
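To make the flow in option A concrete, here is a minimal sketch of a Lambda handler that consumes Kinesis records and writes them to DynamoDB. The table name, payload fields, and key schema are illustrative assumptions, not part of the question.

```python
import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Leaderboard")  # hypothetical table name


def handler(event, context):
    # Kinesis delivers records to Lambda in order per shard.
    for record in event["Records"]:
        update = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Store the processed score update; attribute names are assumptions.
        # Numeric values passed through the resource API must be int/Decimal.
        table.put_item(Item={
            "PlayerId": update["player_id"],
            "SequenceNumber": record["kinesis"]["sequenceNumber"],
            "Score": int(update["score"]),
        })
    return {"processed": len(event["Records"])}
```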
Question # 2
A company runs an SMB file server in its data center. The file server stores large files that the company frequently accesses for up to 7 days after the file creation date. After 7 days, the company needs to be able to access the files with a maximum retrieval time of 24 hours. Which solution will meet these requirements?
A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 File Gateway to increase the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
C. Create an Amazon FSx File Gateway to increase the company's storage space. Create an Amazon S3 Lifecycle policy to transition the data after 7 days.
D. Configure access to Amazon S3 for each user. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.
Answer: B
Explanation:
Amazon S3 File Gateway is a service that provides a file-based interface to Amazon S3,
which appears as a network file share. It enables you to store and retrieve Amazon S3
objects through standard file storage protocols such as SMB. S3 File Gateway can also
cache frequently accessed data locally for low-latency access. S3 Lifecycle policy is a
feature that allows you to define rules that automate the management of your objects
throughout their lifecycle. You can use S3 Lifecycle policy to transition objects to different
storage classes based on their age and access patterns. S3 Glacier Deep Archive is a
storage class that offers the lowest cost for long-term data archiving, with a retrieval time of
12 hours or 48 hours. This solution will meet the requirements, as it allows the company to
store large files in S3 with SMB file access, and to move the files to S3 Glacier Deep
Archive after 7 days for cost savings and compliance.
References:
1 provides an overview of Amazon S3 File Gateway and its benefits.
2 explains how to use S3 Lifecycle policy to manage object storage lifecycle.
3 describes the features and use cases of S3 Glacier Deep Archive storage class.
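As an illustration of the lifecycle portion of option B, the following boto3 sketch attaches a rule that transitions objects to S3 Glacier Deep Archive after 7 days. The bucket name is a placeholder; in practice it would be the bucket that backs the S3 File Gateway.

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="company-smb-file-share",  # placeholder for the gateway's backing bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-7-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```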
Question # 3
A company has an organization in AWS Organizations that has all features enabled. The company requires that all API calls and logins in any existing or new AWS account must be audited. The company needs a managed solution to prevent additional work and to minimize costs. The company also needs to know when any AWS account is not compliant with the AWS Foundational Security Best Practices (FSBP) standard. Which solution will meet these requirements with the LEAST operational overhead?
A. Deploy an AWS Control Tower environment in the Organizations management account. Enable AWS Security Hub and AWS Control Tower Account Factory in the environment.
B. Deploy an AWS Control Tower environment in a dedicated Organizations member account. Enable AWS Security Hub and AWS Control Tower Account Factory in the environment.
C. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC to self-service provision Amazon GuardDuty in the MALZ.
D. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC to self-service provision AWS Security Hub in the MALZ.
Answer: A
Explanation: AWS Control Tower is a fully managed service that simplifies the setup and
governance of a secure, compliant, multi-account AWS environment. It establishes a
landing zone that is based on best-practices blueprints, and it enables governance using
controls you can choose from a pre-packaged list. The landing zone is a well-architected,
multi-account baseline that follows AWS best practices. Controls implement governance
rules for security, compliance, and operations. AWS Security Hub is a service that provides
a comprehensive view of your security posture across your AWS accounts. It aggregates,
organizes, and prioritizes security alerts and findings from multiple AWS services, such as
IAM Access Analyzer, as well as from AWS Partner solutions. AWS Security Hub
continuously monitors your environment using automated compliance checks based on the
AWS best practices and industry standards, such as the AWS Foundational Security Best
Practices (FSBP) standard. AWS Control Tower Account Factory is a feature that
automates the provisioning of new AWS accounts that are preconfigured to meet your
business, security, and compliance requirements. By deploying an AWS Control Tower
environment in the Organizations management account, you can leverage the existing
organization structure and policies, and enable AWS Security Hub and AWS Control Tower
Account Factory in the environment. This way, you can audit all API calls and logins in any
existing or new AWS account, monitor the compliance status of each account with the FSBP standard, and provision new accounts with ease and consistency. This solution
meets the requirements with the least operational overhead, as you do not need to manage
any infrastructure, perform any data migration, or submit any requests for changes.
References:
AWS Control Tower
[AWS Security Hub]
[AWS Control Tower Account Factory]
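For reference, a minimal boto3 sketch of the Security Hub side of option A: enabling the service and subscribing to the FSBP standard in one Region. The Region and the standard ARN format are assumptions used to illustrate the API shape; AWS Control Tower can also enable Security Hub for enrolled accounts.

```python
import boto3

region = "us-east-1"  # assumed Region
securityhub = boto3.client("securityhub", region_name=region)

# Turn on Security Hub in the account (raises an error if it is already enabled).
securityhub.enable_security_hub(EnableDefaultStandards=False)

# Subscribe to the AWS Foundational Security Best Practices (FSBP) standard.
fsbp_arn = (
    f"arn:aws:securityhub:{region}::standards/"
    "aws-foundational-security-best-practices/v/1.0.0"
)
securityhub.batch_enable_standards(
    StandardsSubscriptionRequests=[{"StandardsArn": fsbp_arn}]
)
```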
Question # 4
A solutions architect is designing a user authentication solution for a company. The solution must invoke two-factor authentication for users that log in from inconsistent geographic locations, IP addresses, or devices. The solution must also be able to scale up to accommodate millions of users. Which solution will meet these requirements?
A. Configure Amazon Cognito user pools for user authentication. Enable the risk-based adaptive authentication feature with multi-factor authentication (MFA).
B. Configure Amazon Cognito identity pools for user authentication. Enable multi-factor authentication (MFA).
C. Configure AWS Identity and Access Management (IAM) users for user authentication. Attach an IAM policy that allows the AllowManageOwnUserMFA action.
D. Configure AWS IAM Identity Center (AWS Single Sign-On) authentication for user authentication. Configure the permission sets to require multi-factor authentication (MFA).
Answer: A
Explanation: Amazon Cognito user pools provide a secure and scalable user directory for
user authentication and management. User pools support various authentication methods,
such as username and password, email and password, phone number and password, and
social identity providers. User pools also support multi-factor authentication (MFA), which
adds an extra layer of security by requiring users to provide a verification code or a
biometric factor in addition to their credentials. User pools can also enable risk-based
adaptive authentication, which dynamically adjusts the authentication challenge based on
the risk level of the sign-in attempt. For example, if a user tries to sign in from an unfamiliar
device or location, the user pool can require a stronger authentication factor, such as SMS
or email verification code. This feature helps to protect user accounts from unauthorized
access and reduce the friction for legitimate users. User pools can scale up to millions of
users and integrate with other AWS services, such as Amazon SNS, Amazon SES, AWS
Lambda, and AWS KMS.
Amazon Cognito identity pools provide a way to federate identities from multiple identity
providers, such as user pools, social identity providers, and corporate identity providers.
Identity pools allow users to access AWS resources with temporary, limited-privilege
credentials. Identity pools do not provide user authentication or management features,
such as MFA or adaptive authentication. Therefore, option B is not correct.
AWS Identity and Access Management (IAM) is a service that helps to manage access to
AWS resources. IAM users are entities that represent people or applications that need to
interact with AWS. IAM users can be authenticated with a password or an access key. IAM
users can also enable MFA for their own accounts, by using the
AllowManageOwnUserMFA action in an IAM policy. However, IAM users are not suitable
for user authentication for web or mobile applications, as they are intended for
administrative purposes. IAM users also do not support adaptive authentication based on
risk factors. Therefore, option C is not correct.
AWS IAM Identity Center (AWS Single Sign-On) is a service that enables users to sign in
to multiple AWS accounts and applications with a single set of credentials. AWS SSO
supports various identity sources, such as AWS SSO directory, AWS Managed Microsoft
AD, and external identity providers. AWS SSO also supports MFA for user authentication,
which can be configured in the permission sets that define the level of access for each
user. However, AWS SSO does not support adaptive authentication based on risk factors.
Therefore, option D is not correct.
References:
Amazon Cognito User Pools
Adding Multi-Factor Authentication (MFA) to a User Pool
Risk-Based Adaptive Authentication
Amazon Cognito Identity Pools
IAM Users
Enabling MFA Devices
AWS Single Sign-On
How AWS SSO Works
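A minimal sketch of how option A could be wired up with boto3, assuming an existing user pool. The pool ID is a placeholder, and the exact action settings are illustrative; advanced security must be enforced for risk-based adaptive authentication to take effect.

```python
import boto3

cognito = boto3.client("cognito-idp")
user_pool_id = "us-east-1_EXAMPLE"  # placeholder user pool ID

# Enforce advanced security so sign-in risk is evaluated on every attempt.
cognito.update_user_pool(
    UserPoolId=user_pool_id,
    UserPoolAddOns={"AdvancedSecurityMode": "ENFORCED"},
)

# Escalate to an MFA challenge when the sign-in risk is medium or high.
cognito.set_risk_configuration(
    UserPoolId=user_pool_id,
    AccountTakeoverRiskConfiguration={
        "Actions": {
            "LowAction": {"Notify": False, "EventAction": "NO_ACTION"},
            "MediumAction": {"Notify": False, "EventAction": "MFA_IF_CONFIGURED"},
            "HighAction": {"Notify": False, "EventAction": "MFA_REQUIRED"},
        }
    },
)
```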
Question # 5
A solutions architect needs to design the architecture for an application that a vendor provides as a Docker container image. The container needs 50 GB of storage available for temporary files. The infrastructure must be serverless. Which solution meets these requirements with the LEAST operational overhead?
A. Create an AWS Lambda function that uses the Docker container image with an Amazon S3 mounted volume that has more than 50 GB of space.
B. Create an AWS Lambda function that uses the Docker container image with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume. Create a service with that task definition.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the Amazon EC2 launch type with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space. Create a task definition for the container image. Create a service with that task definition.
Answer: C
Explanation:
The AWS Fargate launch type is a serverless way to run containers on Amazon ECS,
without having to manage any underlying infrastructure. You only pay for the resources
required to run your containers, and AWS handles the provisioning, scaling, and security of
the cluster. Amazon EFS is a fully managed, elastic, and scalable file system that can be
mounted to multiple containers, and provides high availability and durability. By using AWS
Fargate and Amazon EFS, you can run your Docker container image with 50 GB of storage available for temporary files, with the least operational overhead. This solution meets the
requirements of the question.
References:
AWS Fargate
Amazon Elastic File System
Using Amazon EFS file systems with Amazon ECS
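The key piece of option C is a Fargate task definition that mounts an EFS volume for the temporary files. The sketch below registers such a task definition with boto3; the family, image, file system ID, and mount path are placeholders.

```python
import boto3

ecs = boto3.client("ecs")
ecs.register_task_definition(
    family="vendor-app",  # placeholder names and IDs throughout
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="2048",
    volumes=[{
        "name": "scratch",
        "efsVolumeConfiguration": {"fileSystemId": "fs-0123456789abcdef0"},
    }],
    containerDefinitions=[{
        "name": "vendor-app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/vendor-app:latest",
        "essential": True,
        # Mount the EFS volume where the application writes its temporary files.
        "mountPoints": [{"sourceVolume": "scratch", "containerPath": "/scratch"}],
    }],
)
```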
Question # 6
A company uses AWS Organizations to run workloads within multiple AWS accounts. A tagging policy adds department tags to AWS resources when the company creates tags. An accounting team needs to determine spending on Amazon EC2 consumption. The accounting team must determine which departments are responsible for the costs regardless of AWS account. The accounting team has access to AWS Cost Explorer for all AWS accounts within the organization and needs to access all reports from Cost Explorer. Which solution meets these requirements in the MOST operationally efficient way?
A. From the Organizations management account billing console, activate a user-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
B. From the Organizations management account billing console, activate an AWS-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
C. From the Organizations member account billing console, activate a user-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by the tag name, and filter by EC2.
D. From the Organizations member account billing console, activate an AWS-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
Answer: B
Explanation: This solution meets the following requirements:
It is operationally efficient, as it only requires one activation of the cost allocation
tag and one creation of the cost report from the management account, which has
access to all the member accounts’ data and billing preferences.
It is consistent, as it uses the AWS-defined cost allocation tag named department,
which is automatically applied to resources when the company creates tags using
the tagging policy enforced by AWS Organizations. This ensures that the tag name
and value are the same across all the resources and accounts, and avoids any
discrepancies or errors that might arise from user-defined tags.
It is informative, as it creates one cost report in Cost Explorer grouping by the tag
name, and filters by EC2. This allows the accounting team to see the breakdown
of EC2 consumption and costs by department, regardless of the AWS account.
The team can also use other features of Cost Explorer, such as charts, filters, and
forecasts, to analyze and optimize the spending.
References:
Using AWS cost allocation tags - AWS Billing
User-defined cost allocation tags - AWS Billing
Cost Tagging and Reporting with AWS Organizations
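Once the department cost allocation tag is active in the management account, the same report can also be produced programmatically. The sketch below is an assumed Cost Explorer query grouped by that tag and filtered to EC2; the time period is only an example.

```python
import boto3

ce = boto3.client("ce")
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "department"}],
    Filter={"Dimensions": {
        "Key": "SERVICE",
        "Values": ["Amazon Elastic Compute Cloud - Compute"],
    }},
)

# Print EC2 spend per department across all accounts in the organization.
for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```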
Question # 7
A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its workloads. All secrets that are stored in Amazon EKS must be encrypted in the Kubernetes etcd key-value store. Which solution will meet these requirements?
A. Create a new AWS Key Management Service (AWS KMS) key. Use AWS Secrets Manager to manage, rotate, and store all secrets in Amazon EKS.
B. Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption on the Amazon EKS cluster.
C. Create the Amazon EKS cluster with default options. Use the Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver as an add-on.
D. Create a new AWS Key Management Service (AWS KMS) key with the alias aws/ebs. Enable default Amazon Elastic Block Store (Amazon EBS) volume encryption for the account.
Answer: B
Explanation: This option is the most secure and simple way to encrypt the secrets that are
stored in Amazon EKS. AWS Key Management Service (AWS KMS) is a service that
allows you to create and manage encryption keys that can be used to encrypt your data.
Amazon EKS KMS secrets encryption is a feature that enables you to use a KMS key to
encrypt the secrets that are stored in the Kubernetes etcd key-value store. This provides an
additional layer of protection for your sensitive data, such as passwords, tokens, and keys.
You can create a new KMS key or use an existing one, and then enable the Amazon EKS
KMS secrets encryption on the Amazon EKS cluster. You can also use IAM policies to
control who can access or use the KMS key.
Option A is not correct because using AWS Secrets Manager to manage, rotate, and store
all secrets in Amazon EKS is not necessary or efficient. AWS Secrets Manager is a service
that helps you securely store, retrieve, and rotate your secrets, such as database
credentials, API keys, and passwords. You can use it to manage secrets that are used by
your applications or services outside of Amazon EKS, but it is not designed to encrypt the
secrets that are stored in the Kubernetes etcd key-value store. Moreover, using AWS
Secrets Manager would incur additional costs and complexity, and it would not leverage the native Amazon EKS KMS secrets encryption feature.
Option C is not correct because using the Amazon EBS Container Storage Interface (CSI)
driver as an add-on does not encrypt the secrets that are stored in Amazon EKS. The
Amazon EBS CSI driver is a plugin that allows you to use Amazon EBS volumes as
persistent storage for your Kubernetes pods. It is useful for providing durable and scalable
storage for your applications, but it does not affect the encryption of the secrets that are
stored in the Kubernetes etcd key-value store. Moreover, using the Amazon EBS CSI
driver would require additional configuration and resources, and it would not provide the
same level of security as using a KMS key.
Option D is not correct because creating a new AWS KMS key with the alias aws/ebs and
enabling default Amazon EBS volume encryption for the account does not encrypt the
secrets that are stored in Amazon EKS. The alias aws/ebs is a reserved alias that is used
by AWS to create a default KMS key for your account. This key is used to encrypt the
Amazon EBS volumes that are created in your account, unless you specify a different KMS
key. Enabling default Amazon EBS volume encryption for the account is a setting that ensures that all new Amazon EBS volumes are encrypted by default. However, these
features do not affect the encryption of the secrets that are stored in the Kubernetes etcd
key-value store. Moreover, using the default KMS key or the default encryption setting
would not provide the same level of control and security as using a custom KMS key and
enabling the Amazon EKS KMS secrets encryption feature.
References:
Encrypting secrets used in Amazon EKS
What Is AWS Key Management Service?
What Is AWS Secrets Manager?
Amazon EBS CSI driver
Encryption at rest
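A minimal sketch of option B with boto3: create a customer managed KMS key and associate it with an existing cluster so that Kubernetes secrets are envelope-encrypted in etcd. The cluster name is a placeholder; the same setting can also be supplied at cluster creation time.

```python
import boto3

kms = boto3.client("kms")
eks = boto3.client("eks")

# Create a customer managed key for envelope encryption of Kubernetes secrets.
key_arn = kms.create_key(Description="EKS secrets encryption")["KeyMetadata"]["Arn"]

# Enable secrets encryption on an existing cluster (name is a placeholder).
eks.associate_encryption_config(
    clusterName="workloads-cluster",
    encryptionConfig=[{"resources": ["secrets"], "provider": {"keyArn": key_arn}}],
)
```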
Question # 8
A retail company has several businesses. The IT team for each business manages its own AWS account. Each team account is part of an organization in AWS Organizations. Each team monitors its product inventory levels in an Amazon DynamoDB table in the team's own AWS account. The company is deploying a central inventory reporting application into a shared AWS account. The application must be able to read items from all the teams' DynamoDB tables. Which authentication option will meet these requirements MOST securely?
A. Integrate DynamoDB with AWS Secrets Manager in the inventory application account. Configure the application to use the correct secret from Secrets Manager to authenticate and read the DynamoDB table. Schedule secret rotation for every 30 days.
B. In every business account, create an IAM user that has programmatic access. Configure the application to use the correct IAM user access key ID and secret access key to authenticate and read the DynamoDB table. Manually rotate IAM access keys every 30 days.
C. In every business account, create an IAM role named BU_ROLE with a policy that gives the role access to the DynamoDB table and a trust policy to trust a specific role in the inventory application account. In the inventory account, create a role named APP_ROLE that allows access to the STS AssumeRole API operation. Configure the application to use APP_ROLE and assume the cross-account role BU_ROLE to read the DynamoDB table.
D. Integrate DynamoDB with AWS Certificate Manager (ACM). Generate identity certificates to authenticate DynamoDB. Configure the application to use the correct certificate to authenticate and read the DynamoDB table.
Answer: C
Explanation: This solution meets the requirements most securely because it uses IAM
roles and the STS AssumeRole API operation to authenticate and authorize the inventory
application to access the DynamoDB tables in different accounts. IAM roles are more
secure than IAM users or certificates because they do not require long-term credentials or
passwords. Instead, IAM roles provide temporary security credentials that are automatically
rotated and can be configured with a limited duration. The STS AssumeRole API operation
enables you to request temporary credentials for a role that you are allowed to assume. By
using this operation, you can delegate access to resources that are in different AWS
accounts that you own or that are owned by third parties. The trust policy of the role defines
which entities can assume the role, and the permissions policy of the role defines which
actions can be performed on the resources. By using this solution, you can avoid hardcoding
credentials or certificates in the inventory application, and you can also avoid
storing them in Secrets Manager or ACM. You can also leverage the built-in security
features of IAM and STS, such as MFA, access logging, and policy conditions.
References:
IAM Roles
STS AssumeRole
Tutorial: Delegate Access Across AWS Accounts Using IAM Roles
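To illustrate option C, the sketch below shows how the reporting application could assume BU_ROLE in one business account and read that team's table with the temporary credentials. The account ID and table name are placeholders.

```python
import boto3

sts = boto3.client("sts")

# Assume the cross-account role in a business-unit account (IDs are placeholders).
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/BU_ROLE",
    RoleSessionName="inventory-report",
)["Credentials"]

# Use the temporary credentials to read that team's DynamoDB table.
dynamodb = boto3.resource(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
items = dynamodb.Table("ProductInventory").scan()["Items"]  # placeholder table name
print(len(items), "inventory items read")
```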
Question # 9
A company built an application with Docker containers and needs to run the application in the AWS Cloud. The company wants to use a managed service to host the application. The solution must scale in and out appropriately according to demand on the individual container services. The solution also must not result in additional operational overhead or infrastructure to manage. Which solutions will meet these requirements? (Select TWO)
A. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.
C. Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers.
D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes.
Answer: A,B
Explanation: These options are the best solutions because they allow the company to run
the application with Docker containers in the AWS Cloud using a managed service that
scales automatically and does not require any infrastructure to manage. By using AWS
Fargate, the company can launch and run containers without having to provision, configure,
or scale clusters of EC2 instances. Fargate allocates the right amount of compute
resources for each container and scales them up or down as needed. By using Amazon
ECS or Amazon EKS, the company can choose the container orchestration platform that
suits its needs. Amazon ECS is a fully managed service that integrates with other AWS
services and simplifies the deployment and management of containers. Amazon EKS is a
managed service that runs Kubernetes on AWS and provides compatibility with existing
Kubernetes tools and plugins.
C. Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the
containers. This option is not feasible because AWS Lambda does not support running
Docker containers directly. Lambda functions are executed in a sandboxed environment
that is isolated from other functions and resources. To run Docker containers on Lambda,
the company would need to use a custom runtime or a wrapper library that emulates the
Docker API, which can introduce additional complexity and overhead.
D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
This option is not optimal because it requires the company to manage the EC2 instances
that host the containers. The company would need to provision, configure, scale, patch,
and monitor the EC2 instances, which can increase the operational overhead and
infrastructure costs.
E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker
nodes. This option is not ideal because it requires the company to manage the EC2
instances that host the containers. The company would need to provision, configure, scale,
patch, and monitor the EC2 instances, which can increase the operational overhead and
infrastructure costs.
References:
1 AWS Fargate - Amazon Web Services
2 Amazon Elastic Container Service - Amazon Web Services
3 Amazon Elastic Kubernetes Service - Amazon Web Services
4 AWS Lambda FAQs - Amazon Web Services
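For options A and B, scaling "in and out according to demand on the individual container services" is typically handled by Application Auto Scaling on the service's desired count. The sketch below assumes an existing ECS service on Fargate; the cluster and service names and the CPU target are illustrative.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")
resource_id = "service/app-cluster/web-service"  # placeholder cluster/service names

autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Track average CPU so the service scales out under load and back in when idle.
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```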
Question # 10
A company uses Amazon S3 as its data lake. The company has a new partner that must use SFTP to upload data files. A solutions architect needs to implement a highly available SFTP solution that minimizes operational overhead. Which solution will meet these requirements?
A. Use AWS Transfer Family to configure an SFTP-enabled server with a publicly accessible endpoint. Choose the S3 data lake as the destination.
B. Use Amazon S3 File Gateway as an SFTP server. Expose the S3 File Gateway endpoint URL to the new partner. Share the S3 File Gateway endpoint with the new partner.
C. Launch an Amazon EC2 instance in a private subnet in a VPC. Instruct the new partner to upload files to the EC2 instance by using a VPN. Run a cron job script on the EC2 instance to upload files to the S3 data lake.
D. Launch Amazon EC2 instances in a private subnet in a VPC. Place a Network Load Balancer (NLB) in front of the EC2 instances. Create an SFTP listener port for the NLB. Share the NLB hostname with the new partner. Run a cron job script on the EC2 instances to upload files to the S3 data lake.
Answer: A
Explanation: This option is the most cost-effective and simple way to enable SFTP access
to the S3 data lake. AWS Transfer Family is a fully managed service that supports secure
file transfers over SFTP, FTPS, and FTP protocols. You can create an SFTP-enabled
server with a public endpoint and associate it with your S3 bucket. You can also use AWS
Identity and Access Management (IAM) roles and policies to control access to your S3 data
lake. The service scales automatically to handle any volume of file transfers and provides
high availability and durability. You do not need to provision, manage, or patch any servers
or load balancers.
Option B is not correct because Amazon S3 File Gateway is not an SFTP server. It is a
hybrid cloud storage service that provides a local file system interface to S3. You can use it
to store and retrieve files as objects in S3 using standard file protocols such as NFS and
SMB. However, it does not support SFTP protocol, and it requires deploying a file gateway
appliance on-premises or on EC2.
Option C is not cost-effective or scalable because it requires launching and managing an
EC2 instance in a private subnet and setting up a VPN connection for the new partner. This
would incur additional costs for the EC2 instance, the VPN connection, and the data
transfer. It would also introduce complexity and security risks to the solution. Moreover, it
would require running a cron job script on the EC2 instance to upload files to the S3 data
lake, which is not efficient or reliable.
Option D is not cost-effective or scalable because it requires launching and managing
multiple EC2 instances in a private subnet and placing a NLB in front of them. This would
incur additional costs for the EC2 instances, the NLB, and the data transfer. It would also
introduce complexity and security risks to the solution. Moreover, it would require running a
cron job script on the EC2 instances to upload files to the S3 data lake, which is not
efficient or reliable.
References:
What Is AWS Transfer Family?
What Is Amazon S3 File Gateway?
What Is Amazon EC2?
[What Is Amazon Virtual Private Cloud?]
[What Is a Network Load Balancer?]
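A minimal sketch of option A with boto3: create a service-managed SFTP endpoint backed by Amazon S3 and map the partner to a prefix in the data lake bucket. The user name, role ARN, and home directory are placeholders.

```python
import boto3

transfer = boto3.client("transfer")

# Create a public, service-managed SFTP endpoint backed by Amazon S3.
server_id = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    EndpointType="PUBLIC",
    IdentityProviderType="SERVICE_MANAGED",
)["ServerId"]

# Map the partner to a prefix in the data lake bucket (names are placeholders).
transfer.create_user(
    ServerId=server_id,
    UserName="partner-upload",
    Role="arn:aws:iam::123456789012:role/TransferToDataLake",
    HomeDirectory="/company-data-lake/partner-uploads",
)
```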
Question # 11
A company hosts an application used to upload files to an Amazon S3 bucket. Once uploaded, the files are processed to extract metadata, which takes less than 5 seconds. The volume and frequency of the uploads varies from a few files each hour to hundreds of concurrent uploads. The company has asked a solutions architect to design a cost-effective architecture that will meet these requirements. What should the solutions architect recommend?
A. Configure AWS CloudTrail trails to log S3 API calls. Use AWS AppSync to process the files.
B. Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the files.
C. Configure Amazon Kinesis Data Streams to process and send data to Amazon S3. Invoke an AWS Lambda function to process the files.
D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to process the files uploaded to Amazon S3. Invoke an AWS Lambda function to process the files.
Answer: B
Explanation: This option is the most cost-effective and scalable way to process the files
uploaded to S3. AWS CloudTrail is used to log API calls, not to trigger actions based on
them. AWS AppSync is a service for building GraphQL APIs, not for processing files.
Amazon Kinesis Data Streams is used to ingest and process streaming data, not to send
data to S3. Amazon SNS is a pub/sub service that can be used to notify subscribers of
events, not to process files.
References:
Using AWS Lambda with Amazon S3
AWS CloudTrail FAQs
What Is AWS AppSync?
[What Is Amazon Kinesis Data Streams?]
[What Is Amazon Simple Notification Service?]
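The wiring behind option B is a single bucket notification that invokes the processing Lambda function on every object-created event. The sketch below assumes the bucket and function already exist and that the function's resource policy allows S3 to invoke it; all names are placeholders.

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="upload-bucket",  # placeholder bucket name
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": (
                "arn:aws:lambda:us-east-1:123456789012:function:extract-metadata"
            ),
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)
```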
Question # 12
A company runs analytics software on Amazon EC2 instances. The software accepts job requests from users to process data that has been uploaded to Amazon S3. Users report that some submitted data is not being processed. Amazon CloudWatch reveals that the EC2 instances have a consistent CPU utilization at or near 100%. The company wants to improve system performance and scale the system based on user load. What should a solutions architect do to meet these requirements?
A. Create a copy of the instance. Place all instances behind an Application Load Balancer.
B. Create an S3 VPC endpoint for Amazon S3. Update the software to reference the endpoint.
C. Stop the EC2 instances. Modify the instance type to one with a more powerful CPU and more memory. Restart the instances.
D. Route incoming requests to Amazon Simple Queue Service (Amazon SQS). Configure an EC2 Auto Scaling group based on queue size. Update the software to read from the queue.
Answer: D
Explanation: This option is the best solution because it allows the company to decouple
the analytics software from the user requests and scale the EC2 instances dynamically
based on the demand. By using Amazon SQS, the company can create a queue that
stores the user requests and acts as a buffer between the users and the analytics software.
This way, the software can process the requests at its own pace without losing any data or
overloading the EC2 instances. By using EC2 Auto Scaling, the company can create an
Auto Scaling group that launches or terminates EC2 instances automatically based on the
size of the queue. This way, the company can ensure that there are enough instances to
handle the load and optimize the cost and performance of the system. By updating the
software to read from the queue, the company can enable the analytics software to
consume the requests from the queue and process the data from Amazon S3.
A. Create a copy of the instance. Place all instances behind an Application Load Balancer.
This option is not optimal because it does not address the root cause of the problem, which
is the high CPU utilization of the EC2 instances. An Application Load Balancer can
distribute the incoming traffic across multiple instances, but it cannot scale the instances
based on the load or reduce the processing time of the analytics software. Moreover, this
option can incur additional costs for the load balancer and the extra instances.
B. Create an S3 VPC endpoint for Amazon S3. Update the software to reference the
endpoint. This option is not effective because it does not solve the issue of the high CPU
utilization of the EC2 instances. An S3 VPC endpoint can enable the EC2 instances to
access Amazon S3 without going through the internet, which can improve the network
performance and security. However, it cannot reduce the processing time of the analytics
software or scale the instances based on the load.
C. Stop the EC2 instances. Modify the instance type to one with a more powerful CPU and
more memory. Restart the instances. This option is not scalable because it does not
account for the variability of the user load. Changing the instance type to a more powerful
one can improve the performance of the analytics software, but it cannot adjust the number
of instances based on the demand. Moreover, this option can increase the cost of the
system and cause downtime during the instance modification.
References:
1 Using Amazon SQS queues with Amazon EC2 Auto Scaling - Amazon EC2 Auto
Scaling
2 Tutorial: Set up a scaled and load-balanced application - Amazon EC2 Auto
Scaling
3 Amazon EC2 Auto Scaling FAQs
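To show what "update the software to read from the queue" in option D might look like, here is a minimal worker loop, with an assumed queue name and a stub in place of the actual analytics processing.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="analytics-jobs")["QueueUrl"]  # placeholder


def process_job(body: str) -> None:
    """Stub for the analytics work performed on each job request."""
    print("processing", body)


while True:
    # Long-poll for job requests; the Auto Scaling group adds or removes
    # instances based on queue depth, so each worker simply drains messages.
    messages = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    ).get("Messages", [])
    for message in messages:
        process_job(message["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```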
Question # 13
A company is deploying an application that processes streaming data in near-real time. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to provide the lowest possible latency between nodes. Which combination of network solutions will meet these requirements? (Select TWO)
A. Enable and configure enhanced networking on each EC2 instance.
B. Group the EC2 instances in separate accounts.
C. Run the EC2 instances in a cluster placement group.
D. Attach multiple elastic network interfaces to each EC2 instance.
E. Use Amazon Elastic Block Store (Amazon EBS) optimized instance types.
Answer: A,C
Explanation: These options are the most suitable ways to configure the network
architecture to provide the lowest possible latency between nodes. Option A enables and
configures enhanced networking on each EC2 instance, which is a feature that improves
the network performance of the instance by providing higher bandwidth, lower latency, and
lower jitter. Enhanced networking uses single root I/O virtualization (SR-IOV) or Elastic
Fabric Adapter (EFA) to provide direct access to the network hardware. You can enable
and configure enhanced networking by choosing a supported instance type and a
compatible operating system, and installing the required drivers. Option C runs the EC2
instances in a cluster placement group, which is a logical grouping of instances within a
single Availability Zone that are placed close together on the same underlying hardware.
Cluster placement groups provide the lowest network latency and the highest network
throughput among the placement group options. You can run the EC2 instances in a
cluster placement group by creating a placement group and launching the instances into it.
Option B is not suitable because grouping the EC2 instances in separate accounts does
not provide the lowest possible latency between nodes. Separate accounts are used to
isolate and organize resources for different purposes, such as security, billing, or
compliance. However, they do not affect the network performance or proximity of the
instances. Moreover, grouping the EC2 instances in separate accounts would incur
additional costs and complexity, and it would require setting up cross-account networking
and permissions.
Option D is not suitable because attaching multiple elastic network interfaces to each EC2
instance does not provide the lowest possible latency between nodes. Elastic network
interfaces are virtual network interfaces that can be attached to EC2 instances to provide
additional network capabilities, such as multiple IP addresses, multiple subnets, or
enhanced security. However, they do not affect the network performance or proximity of the
instances. Moreover, attaching multiple elastic network interfaces to each EC2 instance
would consume additional resources and limit the instance type choices. Option E is not suitable because using Amazon EBS optimized instance types does not
provide the lowest possible latency between nodes. Amazon EBS optimized instance types
are instances that provide dedicated bandwidth for Amazon EBS volumes, which are block
storage volumes that can be attached to EC2 instances. EBS optimized instance types
improve the performance and consistency of the EBS volumes, but they do not affect the
network performance or proximity of the instances. Moreover, using EBS optimized
instance types would incur additional costs and may not be necessary for the streaming workload.
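As a sketch of options A and C together: create a cluster placement group and launch current-generation instances into it. The AMI and instance type are placeholders; enhanced networking (ENA) is active when a supported instance type is paired with an AMI that includes the ENA drivers.

```python
import boto3

ec2 = boto3.client("ec2")

# Instances in a cluster placement group are packed into one Availability Zone
# for the lowest possible inter-node latency.
ec2.create_placement_group(GroupName="streaming-nodes", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI with ENA drivers installed
    InstanceType="c5n.9xlarge",       # placeholder type that supports enhanced networking
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "streaming-nodes"},
)
```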