Amazon's AWS Certified Solutions Architect - Associate SAA-C02 (2021.10.21)
51. A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.
What should a solutions architect propose to ensure users see all of their documents at once?
- A. Copy the data so both EBS volumes contain all the documents.
- B. Configure the Application Load Balancer to direct a user to the server with the documents.
- C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.
- D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server.
- A web app is hosted on AWS using Amazon EC2 / user-uploaded documents are stored on an Amazon EBS volume
- For better scalability and availability, the architecture was duplicated: a second EC2 instance and EBS volume in another Availability Zone, both behind an ALB
- Afterwards, each refresh of the website shows one subset of the documents or the other, never all of them at once
- How can users see all of their documents at once?
=> EFS is better suited to sharing data across multiple instances
=> EBS is local to an instance; what one instance writes to its volume is not visible to the other. When two instances work on the same set of data, they need a shared file system, which EFS provides. However, synchronization has to be handled by the application.
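As a rough sketch of option C, the application would write documents to a shared EFS mount instead of an instance-local EBS path. The mount point and helper name below are assumptions for illustration, not from the question:

```python
import os

# Assumed mount point where the EFS file system is mounted on every
# EC2 instance (e.g. via /etc/fstab or amazon-efs-utils); hypothetical path.
EFS_DOCUMENT_ROOT = "/mnt/efs/documents"

def save_document(user_id: str, filename: str, data: bytes) -> str:
    """Persist an uploaded document to the shared EFS file system.

    Because every instance behind the ALB mounts the same EFS file
    system, a document saved by one instance is immediately visible
    to the others, unlike an instance-local EBS volume.
    """
    user_dir = os.path.join(EFS_DOCUMENT_ROOT, user_id)
    os.makedirs(user_dir, exist_ok=True)
    path = os.path.join(user_dir, filename)
    with open(path, "wb") as f:
        f.write(data)
    return path
```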
52. A company is planning to use Amazon S3 to store images uploaded by its users. The images must be encrypted at rest in Amazon S3. The company does not want to spend time managing and rotating the keys, but it does want to control who can access those keys.
What should a solutions architect use to accomplish this?
- A. Server-Side Encryption with keys stored in an S3 bucket
- B. Server-Side Encryption with Customer-Provided Keys (SSE-C)
- C. Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
- D. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
- The company plans to use Amazon S3 to store images uploaded by its users
- The images must be encrypted at rest in Amazon S3
- The company does not want to spend time managing and rotating keys, but does want to control who can access those keys
=> SSE-S3: AWS manages both the data key and the master key
=> SSE-KMS: AWS manages the data key and you manage the master key
=> SSE-C: You manage both the data key and the master key
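A minimal sketch of SSE-KMS at upload time with boto3; the bucket name, object key, and KMS key alias are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Upload an image encrypted with a customer-managed KMS key (SSE-KMS).
# Who can use the key is controlled through its KMS key policy / IAM,
# and KMS can rotate the key material automatically.
s3.put_object(
    Bucket="example-user-images",          # placeholder bucket name
    Key="uploads/avatar.png",              # placeholder object key
    Body=open("avatar.png", "rb"),
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/user-images-key",   # placeholder key alias
)
```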
53. A company is running an ecommerce application on Amazon EC2. The application consists of a stateless web tier that requires a minimum of 10 instances, and a peak of 250 instances to support the application's usage. The application requires 50 instances 80% of the time.
Which solution should be used to minimize costs?
- A. Purchase Reserved Instances to cover 250 instances.
- B. Purchase Reserved Instances to cover 80 instances. Use Spot Instances to cover the remaining instances.
- C. Purchase On-Demand Instances to cover 40 instances. Use Spot Instances to cover the remaining instances.
- D. Purchase Reserved Instances to cover 50 instances. Use On-Demand and Spot Instances to cover the remaining instances.
- An ecommerce application runs on Amazon EC2
- It consists of a stateless web tier requiring a minimum of 10 and a peak of 250 instances
- 80% of the time it requires 50 instances
- How should costs be minimized?
=> Reserving the 50-instance baseline (needed 80% of the time) is the most cost-effective; On-Demand and Spot Instances cover the bursts above it.
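A rough cost sketch of that reasoning. The hourly prices below are made up for illustration (the question gives none); only the usage profile comes from the question:

```python
# Hypothetical hourly prices -- illustrative only, not real AWS pricing.
ON_DEMAND = 0.10   # $/instance-hour
RESERVED  = 0.06   # $/instance-hour (effective rate)
SPOT      = 0.03   # $/instance-hour

HOURS = 730  # one month

# Usage profile from the question: 50 instances 80% of the time;
# assume the remaining 20% averages near the 250-instance peak.
peak_hours = 0.2 * HOURS

# Option A: reserve all 250 instances (pay for them even when idle).
cost_a = 250 * RESERVED * HOURS

# Option D: reserve the 50-instance baseline, cover bursts with
# On-Demand/Spot (assume a 50/50 split of the extra 200 instances).
cost_d = (50 * RESERVED * HOURS
          + 100 * ON_DEMAND * peak_hours
          + 100 * SPOT * peak_hours)

print(f"Reserve 250: ${cost_a:,.0f}/month")         # ~$10,950
print(f"Reserve 50 + burst: ${cost_d:,.0f}/month")  # ~$4,088
```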
54. A company has deployed an API in a VPC behind an internet-facing Application Load Balancer (ALB). An application that consumes the API as a client is deployed in a second account in private subnets behind a NAT gateway. When requests to the client application increase, the NAT gateway costs are higher than expected. A solutions architect has configured the ALB to be internal.
Which combination of architectural changes will reduce the NAT gateway costs? (Choose two.)
- A. Configure a VPC peering connection between the two VPCs. Access the API using the private address.
- B. Configure an AWS Direct Connect connection between the two VPCs. Access the API using the private address.
- C. Configure a ClassicLink connection for the API into the client VPC. Access the API using the ClassicLink address.
- D. Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address.
- E. Configure an AWS Resource Access Manager connection between the two accounts. Access the API using the private address.
- An API is deployed in a VPC behind an internet-facing ALB
- The client application that consumes the API is deployed in a second account, in private subnets behind a NAT gateway
- As requests to the client application increase, NAT gateway costs are higher than expected
- The ALB has been configured to be internal
- Which combination of architectural changes will reduce the NAT gateway costs?
=> E is incorrect: an API is not a resource that can be shared via AWS Resource Access Manager (RAM). RAM helps you securely share resources across AWS accounts, but API Gateway is not among them.
=> In practice you would rarely combine PrivateLink and VPC peering, but the question asks for two changes that would each achieve the desired outcome.
=> PrivateLink requires an NLB, not an ALB, but that is not a blocker here: configuring a PrivateLink endpoint service implies provisioning an NLB along with it, and the client can also reach the API privately through VPC peering, so the NLB requirement is not an issue.
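A rough sketch of option D with boto3: the provider account exposes the API behind an NLB as an endpoint service, and the client account creates an interface endpoint in its private subnets so API calls no longer traverse the NAT gateway. All ARNs and IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Provider account: expose the API's NLB as a PrivateLink endpoint service.
service = ec2.create_vpc_endpoint_service_configuration(
    AcceptanceRequired=False,
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:"
        "loadbalancer/net/api-nlb/abc123"      # placeholder NLB ARN
    ],
)
service_name = service["ServiceConfiguration"]["ServiceName"]

# Client account: create an interface endpoint in the private subnets,
# so the client reaches the API via the PrivateLink (private) address.
ec2_client_acct = boto3.client("ec2")  # assume client-account credentials
ec2_client_acct.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",             # placeholder client VPC
    ServiceName=service_name,
    SubnetIds=["subnet-0aaa", "subnet-0bbb"],  # placeholder private subnets
    SecurityGroupIds=["sg-0ccc"],              # placeholder security group
)
```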
55. A solutions architect is tasked with transferring 750 TB of data from an on-premises network-attached file system located at a branch office to Amazon S3 Glacier.
The migration must not saturate the on-premises 1 Mbps internet connection.
Which solution will meet these requirements?
- A. Create an AWS site-to-site VPN tunnel to an Amazon S3 bucket and transfer the files directly. Transfer the files directly by using the AWS CLI.
- B. Order 10 AWS Snowball Edge Storage Optimized devices, and select an S3 Glacier vault as the destination.
- C. Mount the network-attached file system to an S3 bucket, and copy the files directly. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
- D. Order 10 AWS Snowball Edge Storage Optimized devices, and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
- 750 TB of data on an on-premises network-attached file system at a branch office needs to be transferred to Amazon S3 Glacier
- The migration must not saturate the on-premises 1 Mbps internet connection
=> To upload existing data to Amazon S3 Glacier (S3 Glacier), you might consider using one of the AWS Snowball device types to import data into Amazon S3, and then move it to the S3 Glacier storage class for archival using lifecycle rules. When you transition Amazon S3 objects to the S3 Glacier storage class, Amazon S3 internally uses S3 Glacier for durable storage at lower cost. Although the objects are stored in S3 Glacier, they remain Amazon S3 objects that you manage in Amazon S3, and you cannot access them directly through S3 Glacier.
=> Snowball cannot import to Glacier directly; you must land the data in Amazon S3 first, in combination with an S3 lifecycle policy.
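A minimal sketch of the lifecycle part of option D, with a placeholder bucket name:

```python
import boto3

s3 = boto3.client("s3")

# After the Snowball Edge import lands the data in S3, transition the
# objects to the Glacier storage class with a lifecycle rule.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-migration-bucket",   # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [
                    # Days=0: transition as soon as possible.
                    {"Days": 0, "StorageClass": "GLACIER"}
                ],
            }
        ]
    },
)
```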
56. A company has a two-tier application architecture that runs in public and private subnets. Amazon EC2 instances running the web application are in the public subnet and a database runs on the private subnet. The web application instances and the database are running in a single Availability Zone (AZ).
Which combination of steps should a solutions architect take to provide high availability for this architecture? (Choose two.)
- A. Create new public and private subnets in the same AZ for high availability.
- B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs.
- C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer.
- D. Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ.
- E. Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment.
- The company has a two-tier application architecture running in public and private subnets
- The Amazon EC2 instances running the web application are in the public subnet; the database is in the private subnet
- The web application instances and the database all run in a single Availability Zone
- How can high availability be provided for this architecture?
=> INCORRECT: "Create new public and private subnets in the same AZ for high availability" is incorrect as this would not add high availability.
INCORRECT: "Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer (ALB)" is incorrect because the existing servers are in a single subnet; for HA we need instances in multiple AZs.
INCORRECT: "Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ" is incorrect because we also need HA for the database layer.
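A minimal sketch of the database part of option E, assuming a MySQL engine and placeholder identifiers; `MultiAZ=True` is what makes RDS keep a synchronous standby in a second AZ with automatic failover:

```python
import boto3

rds = boto3.client("rds")

# Database tier with high availability: RDS provisions a synchronous
# standby replica in a second AZ and fails over automatically.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",     # placeholder identifier
    Engine="mysql",                    # assumed engine
    DBInstanceClass="db.m5.large",     # assumed instance class
    MasterUsername="admin",
    MasterUserPassword="change-me",    # placeholder credential
    AllocatedStorage=100,              # GiB, assumed size
    MultiAZ=True,                      # the high-availability switch
)
```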
57. A solutions architect is implementing a document review application using an Amazon S3 bucket for storage. The solution must prevent an accidental deletion of the documents and ensure that all versions of the documents are available. Users must be able to download, modify, and upload documents.
Which combination of actions should be taken to meet these requirements? (Choose two.)
- A. Enable a read-only bucket ACL.
- B. Enable versioning on the bucket.
- C. Attach an IAM policy to the bucket.
- D. Enable MFA Delete on the bucket.
- E. Encrypt the bucket using AWS KMS.
=> To prevent or mitigate future accidental deletions, consider the following features:
Enable versioning to keep historical versions of an object.
Enable cross-region replication of objects.
Enable MFA Delete to require multi-factor authentication (MFA) when deleting an object version.
=> C cannot be correct as you cannot attach IAM policies to S3 buckets.
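A minimal sketch of options B and D together, assuming root-account credentials (MFA Delete can only be enabled by the bucket owner's root user) and placeholder bucket/MFA values:

```python
import boto3

s3 = boto3.client("s3")

# Enable versioning so every overwrite and delete keeps prior versions,
# and require MFA to delete a version. The MFA argument is the device
# serial plus a current token code (placeholders below).
s3.put_bucket_versioning(
    Bucket="example-doc-review-bucket",  # placeholder bucket name
    VersioningConfiguration={
        "Status": "Enabled",
        "MFADelete": "Enabled",
    },
    MFA="arn:aws:iam::111111111111:mfa/root-device 123456",  # placeholder
)
```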
58. An application hosted on AWS is experiencing performance problems, and the application vendor wants to perform an analysis of the log file to troubleshoot further. The log file is stored on Amazon S3 and is 10 GB in size. The application owner will make the log file available to the vendor for a limited time.
What is the MOST secure way to do this?
- A. Enable public read on the S3 object and provide the link to the vendor.
- B. Upload the file to Amazon WorkDocs and share the public link with the vendor.
- C. Generate a presigned URL and have the vendor download the log file before it expires.
- D. Create an IAM user for the vendor to provide access to the S3 bucket and the application. Enforce multi-factor authentication.
=> A and B expose a public link, which is a security concern. D is not suitable because the vendor only needs to download one log file for a limited time; the vendor has no reason to access the AWS account itself, so creating an IAM user with MFA is unnecessary overhead.
=> All objects by default are private. Only the object owner has permission to access these objects. However, the object owner can optionally share objects with others by creating a presigned URL, using their own security credentials, to grant time-limited permission to download the objects.
When you create a presigned URL for your object, you must provide your security credentials and specify a bucket name, an object key, the HTTP method (GET to download the object), and an expiration date and time. The presigned URLs are valid only for the specified duration.
Anyone who receives the presigned URL can then access the object. For example, if you have a video in your bucket and both the bucket and the object are private, you can share the video with others by generating a presigned URL.
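A minimal sketch of option C with boto3, using placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Generate a time-limited download link for the vendor; the URL stops
# working after ExpiresIn seconds.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-app-logs",   # placeholder bucket
            "Key": "logs/app.log"},         # placeholder object key
    ExpiresIn=3600,  # valid for one hour
)
print(url)  # share this URL with the vendor
```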
59. A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the company.
How should security groups be configured in this situation? (Choose two.)
- A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0.
- B. Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier.
- C. Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier.
- D. Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier.
=> Security groups are stateful
=> 0.0.0.0/0 :443 -> web tier SG
=> web tier SG :1433 -> SQL database tier SG
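A sketch of answers A and B with boto3; the security group IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Web tier: allow HTTPS from anywhere (answer A).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder web tier SG
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Database tier: allow SQL Server (1433) only from the web tier SG,
# by referencing the source security group instead of a CIDR (answer B).
ec2.authorize_security_group_ingress(
    GroupId="sg-0fedcba9876543210",  # placeholder database tier SG
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,
        "UserIdGroupPairs": [{"GroupId": "sg-0123456789abcdef0"}],
    }],
)
```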
60. A company allows its developers to attach existing IAM policies to existing IAM roles to enable faster experimentation and agility. However, the security operations team is concerned that the developers could attach the existing administrator policy, which would allow the developers to circumvent any other security policies.
How should a solutions architect address this issue?
- A. Create an Amazon SNS topic to send an alert every time a developer creates a new policy.
- B. Use service control policies to disable IAM activity across all account in the organizational unit.
- C. Prevent the developers from attaching any policies and assign all IAM duties to the security operations team.
- D. Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy.
=> Permissions boundaries are for this use case. Be aware that you can assign boundaries only to users and roles, not groups.
=> A: INCORRECT - an SNS alert only notifies after the fact and prevents nothing; policy creation is already recorded by CloudTrail.
B/C: INCORRECT - the company wants the developers to keep making policy changes, so disabling IAM activity or centralizing it with the security team defeats that.
D: CORRECT
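A sketch of option D, assuming a hypothetical DeveloperBoundary policy and developer-role name; the boundary allows the developers' normal actions but explicitly denies attaching AdministratorAccess:

```python
import json

import boto3

iam = boto3.client("iam")

# Boundary policy: allow everything, except attaching the
# AdministratorAccess managed policy to any role or user.
boundary_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
        {
            "Effect": "Deny",
            "Action": ["iam:AttachRolePolicy", "iam:AttachUserPolicy"],
            "Resource": "*",
            "Condition": {"ArnEquals": {
                "iam:PolicyARN":
                    "arn:aws:iam::aws:policy/AdministratorAccess"
            }},
        },
    ],
}

policy = iam.create_policy(
    PolicyName="DeveloperBoundary",          # hypothetical policy name
    PolicyDocument=json.dumps(boundary_doc),
)

# Set the managed policy as the permissions boundary of the role.
iam.put_role_permissions_boundary(
    RoleName="developer-role",               # placeholder role name
    PermissionsBoundary=policy["Policy"]["Arn"],
)
```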