Amazon's AWS Certified Solutions Architect - Associate SAA-C02 (2021.10.28)

Sueeeeee

101. A company has an Amazon EC2 instance running on a private subnet that needs to access a public website to download patches and updates. The company does not want external websites to see the EC2 instance IP address or initiate connections to it.
How can a solutions architect achieve this objective?

  • A. Create a site-to-site VPN connection between the private subnet and the network in which the public site is deployed.
  • B. Create a NAT gateway in a public subnet. Route outbound traffic from the private subnet through the NAT gateway.
  • C. Create a network ACL for the private subnet where the EC2 instance deployed only allows access from the IP address range of the public website.
  • D. Create a security group that only allows connections from the IP address range of the public website. Attach the security group to the EC2 instance.

- The instance in the private subnet needs to reach a public website (to download patches and updates).

- But external websites must not see the EC2 instance's IP address or initiate connections to it.

=> A NAT gateway is a Network Address Translation (NAT) service. You can use a NAT gateway so that instances in a private subnet can connect to services outside your VPC but external services cannot initiate a connection with those instances.
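The routing side of option B can be sketched with boto3. All resource IDs below are hypothetical placeholders, and the actual API calls are left commented out:

```python
# Sketch of option B: route the private subnet's outbound traffic through a
# NAT gateway that lives in a public subnet. All resource IDs are hypothetical.
# import boto3
# ec2 = boto3.client("ec2")

# Route-table entry for the PRIVATE subnet: send all internet-bound traffic
# to the NAT gateway (which itself sits in a public subnet with an IGW route).
private_route = {
    "RouteTableId": "rtb-0priv1234567890ab",   # route table of the private subnet
    "DestinationCidrBlock": "0.0.0.0/0",        # all non-VPC traffic
    "NatGatewayId": "nat-0abc1234567890def",    # NAT gateway in the public subnet
}
# ec2.create_route(**private_route)
```

Because the NAT gateway performs source address translation, the website only ever sees the gateway's Elastic IP, and inbound connections to the instance cannot be initiated from outside.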

102. A company must migrate 20 TB of data from a data center to the AWS Cloud within 30 days. The company's network bandwidth is limited to 15 Mbps and cannot exceed 70% utilization. What should a solutions architect do to meet these requirements?

  • A. Use AWS Snowball.
  • B. Use AWS DataSync.
  • C. Use a secure VPN connection.
  • D. Use Amazon S3 Transfer Acceleration.

- Move 20 TB of data to the AWS Cloud within 30 days.

- Bandwidth is limited to 15 Mbps and must not exceed 70% utilization.

=> 1. If more than about 25 Mbps of bandwidth is available, S3 Transfer Acceleration is a good option. In this question, however, the company's network bandwidth is limited to 15 Mbps.

 2. Given the 30-day window, we can use Snowball: the data ships offline, so the transfer fits the deadline and network costs stay low.
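The arithmetic makes the network options a non-starter:

```python
# Why the network options fail: time to push 20 TB at 70% of a 15 Mbps link.
TB = 10**12                      # decimal terabyte, in bytes
data_bits = 20 * TB * 8          # 20 TB expressed in bits
usable_bps = 15_000_000 * 0.70   # 70% of 15 Mbps, in bits per second

seconds = data_bits / usable_bps
days = seconds / 86_400
print(round(days))               # roughly 176 days, far beyond the 30-day deadline
```

At ~176 days, no online transfer method (DataSync, VPN, or S3TA) can meet the deadline on this link, which is exactly the scenario Snowball exists for.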

103. A company has a website running on Amazon EC2 instances across two Availability Zones. The company is expecting spikes in traffic on specific holidays, and wants to provide a consistent user experience. How can a solutions architect meet this requirement?

  • A. Use step scaling.
  • B. Use simple scaling.
  • C. Use lifecycle hooks.
  • D. Use scheduled scaling.

- Key phrase: "on specific holidays" — the traffic spikes are known in advance, so scheduled scaling fits.
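A scheduled scaling action for option D might look like the sketch below. The group name, dates, and capacities are hypothetical, and the boto3 call itself is left commented:

```python
# Sketch of option D: a scheduled scaling action that raises capacity ahead of
# a known holiday spike. Group name, times, and sizes are hypothetical.
# import boto3
# autoscaling = boto3.client("autoscaling")

holiday_scale_out = {
    "AutoScalingGroupName": "web-asg",
    "ScheduledActionName": "black-friday-scale-out",
    "StartTime": "2021-11-26T00:00:00Z",  # capacity is raised BEFORE traffic arrives
    "MinSize": 4,
    "MaxSize": 12,
    "DesiredCapacity": 8,
}
# autoscaling.put_scheduled_update_group_action(**holiday_scale_out)
```

Unlike step or simple scaling, which only react after metrics have already breached a threshold, this pre-provisions capacity so the user experience stays consistent through the spike.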

104. An ecommerce company is running a multi-tier application on AWS. The front-end and backend tiers both run on Amazon EC2, and the database runs on Amazon
RDS for MySQL. The backend tier communicates with the RDS instance. There are frequent calls to return identical datasets from the database that are causing performance slowdowns.
Which action should be taken to improve the performance of the backend?

  • A. Implement Amazon SNS to store the database calls.
  • B. Implement Amazon ElastiCache to cache the large datasets.
  • C. Implement an RDS for MySQL read replica to cache database calls.
  • D. Implement Amazon Kinesis Data Firehose to stream the calls to the database.

=> ElastiCache: use AWS ElastiCache when the same read query is executed over and over again.

=> The key term is "identical datasets from the database" — caching the frequently requested dataset in front of RDS removes the repeated calls that cause the slowdown.
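The cache-aside pattern behind option B can be shown in a few lines. A plain dict stands in for ElastiCache (Redis/Memcached) here, so the sketch runs without any AWS resources:

```python
# Minimal cache-aside sketch for option B: check the cache first, fall back to
# the database only on a miss. A dict stands in for ElastiCache.
db_queries = {"count": 0}

def query_database(key):
    db_queries["count"] += 1            # stands in for a slow RDS round trip
    return f"rows-for-{key}"

cache = {}

def get_dataset(key):
    if key not in cache:                # cache miss: hit the database once
        cache[key] = query_database(key)
    return cache[key]                   # cache hit: no database call

# Identical requests now reach the database only once.
for _ in range(100):
    get_dataset("top-sellers")
print(db_queries["count"])  # 1
```

With ElastiCache the dict is replaced by a networked cache shared by all backend instances, so every tier benefits from the same hit.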

105. A company has an on-premises data center that is running out of storage capacity. The company wants to migrate its storage infrastructure to AWS while minimizing bandwidth costs. The solution must allow for immediate retrieval of data at no additional cost.
How can these requirements be met?

  • A. Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval capacity for the workload.
  • B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally.
  • C. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.
  • D. Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS Storage Gateway to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.

=> C is wrong because the on-premises data center is already running out of storage, so storing the full dataset locally again makes no sense.

=> The solution must allow for immediate retrieval of data at no additional cost. Cached volumes keep only frequently accessed data subsets on premises while the full dataset lives in Amazon S3. The answer is B.

 

106. A company is processing data on a daily basis. The results of the operations are stored in an Amazon S3 bucket, analyzed daily for one week, and then must remain immediately accessible for occasional analysis.
What is the MOST cost-effective storage solution alternative to the current configuration?

  • A. Configure a lifecycle policy to delete the objects after 30 days.
  • B. Configure a lifecycle policy to transition the objects to Amazon S3 Glacier after 30 days.
  • C. Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
  • D. Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.

=> The existing setup keeps the result files available for occasional analysis, and any alternative must not break that. In my opinion the answer is C, S3 Standard-IA; the problem with S3 One Zone-IA is that the result files become unavailable if the AZ goes down.

=> A counter-argument: the question never mentions high availability and stresses "MOST cost-effective storage". Per AWS: "S3 One Zone-IA stores data in a single AZ and costs 20% less than S3 Standard-IA. S3 One Zone-IA is ideal for customers who want a lower-cost option for infrequently accessed data but do not require the availability and resilience of S3 Standard or S3 Standard-IA." The data only has to be "immediately accessible" and is "analyzed daily for one week" — occasional analysis of operational results hardly justifies paying extra for multi-AZ resilience, especially multiplied across many systems. On that reading, D is the most cost-effective choice.
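Either way, the mechanism is the same lifecycle rule; only the storage class differs between C and D. A sketch of the configuration (bucket name and prefix are hypothetical, and the boto3 call is left commented):

```python
# Sketch of the lifecycle rule behind options C/D; the StorageClass value is
# the only thing that differs between the two answers.
lifecycle_rule = {
    "ID": "results-to-ia-after-30-days",
    "Status": "Enabled",
    "Filter": {"Prefix": "results/"},   # hypothetical prefix for the result objects
    "Transitions": [
        {
            "Days": 30,
            # "ONEZONE_IA" here would implement option D instead.
            "StorageClass": "STANDARD_IA",
        }
    ],
}
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="daily-results-bucket",
#     LifecycleConfiguration={"Rules": [lifecycle_rule]},
# )
```

Both IA classes keep objects immediately retrievable, unlike the Glacier transition in option B.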

 

107. A company delivers files in Amazon S3 to certain users who do not have AWS credentials. These users must be given access for a limited time. What should a solutions architect do to securely meet these requirements?

  • A. Enable public access on an Amazon S3 bucket.
  • B. Generate a presigned URL to share with the users.
  • C. Encrypt files using AWS KMS and provide keys to the users.
  • D. Create and assign IAM roles that will grant GetObject permissions to the users.
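Option B fits here: a presigned URL grants time-limited access without the users needing AWS credentials. In practice this is a single boto3 call, `s3.generate_presigned_url("get_object", Params={...}, ExpiresIn=3600)`, which signs with SigV4. As a stdlib-only sketch of the underlying idea, the URL carries an expiry plus an HMAC signature over it (bucket name and key below are hypothetical; real S3 uses SigV4, not this toy scheme):

```python
# Simplified sketch of how a presigned URL works: the URL embeds an expiry and
# a signature computed from a secret key, so it cannot be tampered with or
# reused after it expires. SECRET stands in for the AWS secret access key.
import hmac, hashlib, time
from urllib.parse import urlencode

SECRET = b"hypothetical-signing-key"

def presign(path, expires_in, now=None):
    """Return a time-limited URL: anyone holding it can fetch the object
    until the expiry, without needing credentials of their own."""
    expires = int(now if now is not None else time.time()) + expires_in
    msg = f"GET\n{path}\n{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"https://example-bucket.s3.amazonaws.com{path}?" + urlencode(
        {"Expires": expires, "Signature": sig}
    )

def is_valid(path, expires, signature, now):
    msg = f"GET\n{path}\n{expires}".encode()
    good = hmac.compare_digest(
        hmac.new(SECRET, msg, hashlib.sha256).hexdigest(), signature
    )
    return good and now < expires     # reject tampered or expired links

url = presign("/reports/q3.pdf", expires_in=3600, now=1_000_000)
print(url)
```

The bucket stays private the whole time; only holders of an unexpired URL can read the object.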

108. A company wants to run a hybrid workload for data processing. The data needs to be accessed by on-premises applications for local data processing using an
NFS protocol, and must also be accessible from the AWS Cloud for further analytics and batch processing.
Which solution will meet these requirements?

  • A. Use an AWS Storage Gateway file gateway to provide file storage to AWS, then perform analytics on this data in the AWS Cloud.
  • B. Use an AWS Storage Gateway tape gateway to copy the backup of the local data to AWS, then perform analytics on this data in the AWS cloud.
  • C. Use an AWS Storage Gateway volume gateway in a stored volume configuration to regularly take snapshots of the local data, then copy the data to AWS.
  • D. Use an AWS Storage Gateway volume gateway in a cached volume configuration to back up all the local storage in the AWS cloud, then perform analytics on this data in the cloud.

=> NFS → file gateway.

=> Since the question mentions the NFS protocol, use a Storage Gateway file gateway.

=> A Storage Gateway volume gateway exposes block storage over iSCSI; a file gateway exposes file shares over NFS (and SMB).

 

109. A company plans to store sensitive user data on Amazon S3. Internal security compliance requirements mandate encryption of data before sending it to Amazon S3.
What should a solutions architect recommend to satisfy these requirements?

  • A. Server-side encryption with customer-provided encryption keys
  • B. Client-side encryption with Amazon S3 managed encryption keys
  • C. Server-side encryption with keys stored in AWS key Management Service (AWS KMS)
  • D. Client-side encryption with a master key stored in AWS Key Management Service (AWS KMS)

- sensitive user data

=>Server-side:

S3 Managed Keys (SSE-S3)

KMS Managed Keys (SSE-KMS)

Customer Provided Keys (SSE-C)

Client-side:

KMS managed master encryption keys (CSE-KMS)

Customer managed master encryption keys (CSE-C)
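Option D (CSE-KMS) works by envelope encryption: KMS hands the client a fresh data key in both plaintext and encrypted form, the client encrypts locally, and only ciphertext ever reaches S3. A rough sketch of the flow, with the KMS calls commented out and the key alias hypothetical:

```python
# Envelope-encryption flow behind option D (CSE-KMS). KMS calls are shown as
# commented boto3 sketches; the key alias is hypothetical.
# import boto3
# kms = boto3.client("kms")

# 1. Ask KMS for a fresh data key: the response contains the key in plaintext
#    AND encrypted under the customer master key.
datakey_request = {"KeyId": "alias/user-data-cmk", "KeySpec": "AES_256"}
# resp = kms.generate_data_key(**datakey_request)
# plaintext_key, encrypted_key = resp["Plaintext"], resp["CiphertextBlob"]

# 2. Encrypt the object locally with plaintext_key (e.g. AES-GCM), then
#    discard the plaintext key from memory.
# 3. Upload the ciphertext to S3, storing encrypted_key alongside it as
#    metadata; S3 never sees plaintext data or a usable key.
# 4. To read: download, call kms.decrypt(CiphertextBlob=encrypted_key),
#    then decrypt the object locally.
print(datakey_request["KeySpec"])
```

This is why D satisfies "encryption of data before sending it to Amazon S3" while any server-side option (A, C) does not.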

 

110. A solutions architect is moving the static content from a public website hosted on Amazon EC2 instances to an Amazon S3 bucket. An Amazon CloudFront distribution will be used to deliver the static assets. The security group used by the EC2 instances restricts access to a limited set of IP ranges. Access to the static content should be similarly restricted.
Which combination of steps will meet these requirements? (Choose two.)

  • A. Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects.
  • B. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution.
  • C. Create a new security group that includes the same IP restrictions that exist in the current EC2 security group. Associate this new security group with the CloudFront distribution.
  • D. Create a new security group that includes the same IP restrictions that exist in the current EC2 security group. Associate this new security group with the S3 bucket hosting the static content.
  • E. Create a new IAM role and associate the role with the distribution. Change the permissions either on the S3 bucket or on the files within the S3 bucket so that only the newly created IAM role has read and download permissions.

- static content

- limited set of IP ranges

=> CloudFront offers several ways to restrict access to content:

- Use signed URLs or signed cookies

- Restrict access to content in Amazon S3 buckets (origin access identity) => A

- Use AWS WAF web ACLs => B

- Use geo restriction
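For option A, the bucket policy ends up allowing reads only to the distribution's OAI, so users cannot bypass CloudFront (and the WAF IP restrictions from option B) by hitting S3 directly. A sketch, with hypothetical OAI and bucket names:

```python
# Sketch of the bucket policy for option A: only the CloudFront OAI may read
# the objects, blocking direct S3 access. IDs and names are hypothetical.
import json

oai_id = "E2EXAMPLEOAI"   # hypothetical origin access identity ID
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::static-assets-bucket/*",
        }
    ],
}
print(json.dumps(bucket_policy, indent=2))
```

The IP allow-list itself lives in the WAF web ACL attached to the distribution (option B); the two steps together reproduce the old security-group behavior.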

 




