Amazon's AWS Certified Solutions Architect - Associate SAA-C02 (2021.10.25)
71. A solutions architect is optimizing a website for an upcoming musical event. Videos of the performances will be streamed in real time and then will be available on demand. The event is expected to attract a global online audience.
Which service will improve the performance of both the real-time and on-demand streaming?
- A. Amazon CloudFront
- B. AWS Global Accelerator
- C. Amazon Route 53
- D. Amazon S3 Transfer Acceleration
- real time
- global online audience
=> You can use CloudFront to deliver video on demand (VOD) or live streaming video using any HTTP origin. One way you can set up video workflows in the cloud is by using CloudFront together with AWS Media Services.
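For reference, a minimal boto3 sketch of putting CloudFront in front of an HTTP origin. The origin domain is hypothetical; the cache policy ID is AWS's documented managed "CachingOptimized" policy:

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

resp = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # must be unique per request
        "Comment": "Live + VOD streaming distribution (sketch)",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "video-origin",
                "DomainName": "videos.example.com",  # hypothetical HTTP origin
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "video-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # AWS managed "CachingOptimized" cache policy
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    },
)
print(resp["Distribution"]["DomainName"])  # served from global edge locations
```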
72. A company has a three-tier image-sharing application. It uses an Amazon EC2 instance for the front-end layer, another for the backend tier, and a third for the MySQL database. A solutions architect has been tasked with designing a solution that is highly available and requires the least amount of changes to the application.
Which solution meets these requirements?
- A. Use Amazon S3 to host the front-end layer and AWS Lambda functions for the backend layer. Move the database to an Amazon DynamoDB table and use Amazon S3 to store and serve users' images.
- B. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend layers. Move the database to an Amazon RDS instance with multiple read replicas to store and serve users' images.
- C. Use Amazon S3 to host the front-end layer and a fleet of Amazon EC2 instances in an Auto Scaling group for the backend layer. Move the database to a memory optimized instance type to store and serve users' images.
- D. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend layers. Move the database to an Amazon RDS instance with a Multi-AZ deployment. Use Amazon S3 to store and serve users' images.
- Three tiers (front end, backend, database)
- High availability, but with minimal changes to the application
=> "Highly available": Multi-AZ
"least amount of changes to the application": Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring
73. A solutions architect is designing a system to analyze the performance of financial markets while the markets are closed. The system will run a series of compute- intensive jobs for 4 hours every night. The time to complete the compute jobs is expected to remain constant, and jobs cannot be interrupted once started. Once completed, the system is expected to run for a minimum of 1 year.
Which type of Amazon EC2 instances should be used to reduce the cost of the system?
- A. Spot Instances
- B. On-Demand Instances
- C. Standard Reserved Instances
- D. Scheduled Reserved Instances
=> INCORRECT: "Standard Reserved Instances" is incorrect as the workload only runs for 4 hours a day this would be more expensive.
INCORRECT: "On-Demand Instances" is incorrect as this would be much more expensive as there is no discount applied.
INCORRECT: "Spot Instances" is incorrect as the workload cannot be interrupted once started. With Spot instances workloads can be terminated if the Spot price changes or capacity is required.
=> BUT!! There is no more capacity for Scheduled Reserved Instances.
=> The question is old. It's either On-Demand or Standard Reserved Instances, since Scheduled ones are not available anymore; since the app will run for at least 1 year, C seems to be the correct option presently.
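A back-of-envelope comparison, with made-up prices purely to illustrate the trade-off being argued here:

```python
# Hypothetical hourly rates - NOT real AWS prices.
on_demand_hourly = 0.10        # pay only for the 4 hours actually used
ri_effective_hourly = 0.06     # 1-yr Standard RI effective rate

hours_used = 4 * 365           # jobs run 4 hours every night for a year
on_demand_cost = on_demand_hourly * hours_used
ri_cost = ri_effective_hourly * 24 * 365  # RIs bill 24/7, used or not

print(f"On-Demand:   ${on_demand_cost:,.0f}/yr")
print(f"Standard RI: ${ri_cost:,.0f}/yr")
# At ~17% utilization, the RI discount usually cannot make up for paying
# around the clock, which is exactly why Scheduled RIs were the intended answer.
```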
74. A company built a food ordering application that captures user data and stores it for future analysis. The application's static front end is deployed on an Amazon EC2 instance. The front-end application sends the requests to the backend application running on a separate EC2 instance. The backend application then stores the data in Amazon RDS.
What should a solutions architect do to decouple the architecture and make it scalable?
- A. Use Amazon S3 to serve the front-end application, which sends requests to Amazon EC2 to execute the backend application. The backend application will process and store the data in Amazon RDS.
- B. Use Amazon S3 to serve the front-end application and write requests to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe Amazon EC2 instances to the HTTP/HTTPS endpoint of the topic, and process and store the data in Amazon RDS.
- C. Use an EC2 instance to serve the front end and write requests to an Amazon SQS queue. Place the backend instance in an Auto Scaling group, and scale based on the queue depth to process and store the data in Amazon RDS.
- D. Use Amazon S3 to serve the static front-end application and send requests to Amazon API Gateway, which writes the requests to an Amazon SQS queue. Place the backend instances in an Auto Scaling group, and scale based on the queue depth to process and store the data in Amazon RDS.
- static front end - Amazon EC2 instance
- Data is stored in Amazon RDS
- decouple and scalable
=> Static front end, so you want to use S3 for that, which automatically rules out C. The key word here is "decouple"; any time you see that, look for SQS.
=> API gateway is also serverless and therefore scalable.
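A minimal sketch of the SQS decoupling in option D (queue name and payload are hypothetical; the API Gateway -> SQS integration is omitted):

```python
import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="food-orders")["QueueUrl"]

# Producer side: the front end (via API Gateway) enqueues the request.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"user_id": "u123", "order": ["bibimbap"]}),
)

# Consumer side: a backend instance in the Auto Scaling group polls the queue.
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=10
).get("Messages", [])
for msg in messages:
    record = json.loads(msg["Body"])
    # ... process `record` and store it in Amazon RDS here ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```

Scaling "based on the queue depth" then means a CloudWatch alarm on the queue's ApproximateNumberOfMessagesVisible metric driving the Auto Scaling group.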
75. A solutions architect needs to design a managed storage solution for a company's application that includes high-performance machine learning. This application runs on AWS Fargate, and the connected storage needs to have concurrent access to files and deliver high performance.
Which storage option should the solutions architect recommend?
- A. Create an Amazon S3 bucket for the application and establish an IAM role for Fargate to communicate with Amazon S3.
- B. Create an Amazon FSx for Lustre file share and establish an IAM role that allows Fargate to communicate with FSx for Lustre.
- C. Create an Amazon Elastic File System (Amazon EFS) file share and establish an IAM role that allows Fargate to communicate with Amazon EFS.
- D. Create an Amazon Elastic Block Store (Amazon EBS) volume for the application and establish an IAM role that allows Fargate to communicate with Amazon EBS.
=> B or C? Very controversial; the official answer is B.
- high performance machine learning
- AWS Fargate (serverless compute engine) - compatible with ECS, EKS, and EFS
- concurrent access to files
=> high-performance machine learning -> FSx for Lustre, but FSx for Lustre isn't supported by Fargate; yet the option says "establish an IAM role that allows Fargate to communicate with FSx for Lustre"?
=> Amazon EFS enables customers to persist data and state from their containers and serverless functions, providing fully managed, elastic, highly-available, scalable, and "HIGH-PERFORMANCE", cloud-native shared file systems. These same attributes are shared by Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate, and AWS Lambda, so developers don't need to design for these traits; the services are simply ready for modern application development with data persistence. Amazon EFS allows data to be persisted separately from compute, and enables applications to have cross-AZ availability and durability. Amazon EFS provides a shared persistence layer that allows stateful applications to elastically scale up and down, such as for DevOps, web serving, web content systems, media processing, MACHINE LEARNING, analytics, search index, and stateful microservices applications. ?
=> EFS is supported by Fargate.
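A sketch of how a Fargate task definition mounts EFS (answer C); the file system ID and container image below are hypothetical:

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="ml-training",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",          # required for Fargate
    cpu="1024",
    memory="4096",
    containerDefinitions=[{
        "name": "trainer",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/ml:latest",  # hypothetical
        "mountPoints": [{
            "sourceVolume": "shared-files",
            "containerPath": "/mnt/data",   # shared, concurrent access across tasks
        }],
    }],
    volumes=[{
        "name": "shared-files",
        "efsVolumeConfiguration": {"fileSystemId": "fs-0123456789abcdef0"},  # hypothetical
    }],
)
```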
76. A bicycle sharing company is developing a multi-tier architecture to track the location of its bicycles during peak operating hours. The company wants to use these data points in its existing analytics platform. A solutions architect must determine the most viable multi-tier option to support this architecture. The data points must be accessible from the REST API.
Which action meets these requirements for storing and retrieving location data?
- A. Use Amazon Athena with Amazon S3.
- B. Use Amazon API Gateway with AWS Lambda.
- C. Use Amazon QuickSight with Amazon Redshift.
- D. Use Amazon API Gateway with Amazon Kinesis Data Analytics.
=> very controversial between A and D
- The company wants to use the data points in its existing analytics platform (existing analytics platform!!!)
- accessible from the REST API
- storing and retrieving location data
=> Not A - Athena provides a REST API to run queries but does not expose data points as asked in the question.
Not B because you cannot ingest and store data points using Lambda.
Not C because QuickSight is an analytics service and needs data as input.
D is the correct answer because it can ingest data, and it can not only store the data points but also expose them via a REST API.
=> But to meet the requirement of storing the data: Kinesis Data Analytics cannot store data!!! So the answer is A, not D.
=> Need to store data points for analysis, and only Ans:A (S3) and Ans:C (Redshift) have a storage component mentioned, so that rules out B and D. Also, the company wants to use these data points in its own existing analytics platform - meaning no analytics is required in the AWS solution, so D is wrong. C mentions QuickSight, which is used for creating dashboards and is not required here, so rule that out as well. We need to query the data stored in S3 so that it can be analyzed by the company's own analytics tool, which is what Athena does from S3 using SQL. Athena is an interactive query service, not an analytics platform, so it extracts data from S3 ready for post-analysis. Finally, the data points should be accessible from a REST API, which is possible with Athena.
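For what it's worth, the "accessible from the REST API" part of answer A looks like this with boto3 against Athena (database, table, and result bucket are hypothetical):

```python
import time
import boto3

athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT bike_id, lat, lon, ts FROM bike_locations LIMIT 100",
    QueryExecutionContext={"Database": "bikeshare"},            # hypothetical
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes, then read the rows over the same API.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)[
        "QueryExecution"]["Status"]["State"]
    if state not in ("QUEUED", "RUNNING"):
        break
    time.sleep(1)

rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
```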
77. A solutions architect is designing a web application that will run on Amazon EC2 instances behind an Application Load Balancer (ALB). The company strictly requires that the application be resilient against malicious internet activity and attacks, and protect against new common vulnerabilities and exposures.
What should the solutions architect recommend?
- A. Leverage Amazon CloudFront with the ALB endpoint as the origin.
- B. Deploy an appropriate managed rule for AWS WAF and associate it with the ALB.
- C. Subscribe to AWS Shield Advanced and ensure common vulnerabilities and exposures are blocked.
- D. Configure network ACLs and security groups to allow only ports 80 and 443 to access the EC2 instances.
- very controversial between B and C, but I believe the answer is C.
- resilient against malicious internet activity and attacks
- Protection against security vulnerabilities and exposures -> AWS WAF (a web application firewall), but
WAF alone doesn't cover the whole requirement:
"resilient against malicious internet activity and attacks" = AWS Shield ( for DDoS protection)
"protect against new common vulnerabilities and exposures" = AWS WAF
So answer is WAF+Shield = Shield Advanced : AWS Shield Advanced includes AWS WAF in its priced subscription (Shield Standard doesn't)
=> AWS Shield Advanced includes AWS WAF at no additional cost allowing you to customize any application layer mitigation.
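For comparison, option B's setup (a managed rule group on a regional web ACL attached to the ALB) would look roughly like this with boto3; the ALB ARN is hypothetical. Shield Advanced would add DDoS protection on top:

```python
import boto3

wafv2 = boto3.client("wafv2")

# AWSManagedRulesCommonRuleSet targets common vulnerabilities and exposures.
acl = wafv2.create_web_acl(
    Name="alb-protection",
    Scope="REGIONAL",                    # ALBs require REGIONAL scope
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "albProtection",
    },
    Rules=[{
        "Name": "common-rule-set",
        "Priority": 0,
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesCommonRuleSet",
        }},
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "commonRuleSet",
        },
    }],
)["Summary"]

wafv2.associate_web_acl(
    WebACLArn=acl["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",  # hypothetical ALB
)
```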
78. A company has an application that calls AWS Lambda functions. A code review shows that database credentials are stored in a Lambda function's source code, which violates the company's security policy. The credentials must be securely stored and must be automatically rotated on an ongoing basis to meet security policy requirements.
What should a solutions architect recommend to meet these requirements in the MOST secure manner?
- A. Store the password in AWS CloudHSM. Associate the Lambda function with a role that can use the key ID to retrieve the password from CloudHSM. Use CloudHSM to automatically rotate the password.
- B. Store the password in AWS Secrets Manager. Associate the Lambda function with a role that can use the secret ID to retrieve the password from Secrets Manager. Use Secrets Manager to automatically rotate the password.
- C. Store the password in AWS Key Management Service (AWS KMS). Associate the Lambda function with a role that can use the key ID to retrieve the password from AWS KMS. Use AWS KMS to automatically rotate the uploaded password.
- D. Move the database password to an environment variable that is associated with the Lambda function. Retrieve the password from the environment variable by invoking the function. Create a deployment script to automatically rotate the password.
- AWS Lambda function
- Database credentials are stored in the Lambda function's source code -> violates the company's security policy
- most secure manner
=> You can now use AWS Secrets Manager to rotate credentials for Oracle, Microsoft SQL Server, or MariaDB databases hosted on Amazon Relational Database Service (Amazon RDS) automatically
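A minimal sketch of answer B inside the Lambda function (the secret ID and JSON keys are hypothetical; rotation itself is configured on the secret, not in this code):

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

def handler(event, context):
    # Fetch the current version of the secret at invocation time, so the
    # function keeps working after Secrets Manager rotates the password.
    secret = json.loads(
        secrets.get_secret_value(SecretId="prod/app/db")["SecretString"])
    user, password = secret["username"], secret["password"]
    # ... open the database connection with `user` / `password` here ...
```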
79. A company is managing health records on-premises. The company must keep these records indefinitely, disable any modifications to the records once they are stored, and granularly audit access at all levels. The chief technology officer (CTO) is concerned because there are already millions of records not being used by any application, and the current infrastructure is running out of space. The CTO has requested a solutions architect design a solution to move existing data and support future records.
Which services can the solutions architect recommend to meet these requirements?
- A. Use AWS DataSync to move existing data to AWS. Use Amazon S3 to store existing and new data. Enable Amazon S3 object lock and enable AWS CloudTrail with data events.
- B. Use AWS Storage Gateway to move existing data to AWS. Use Amazon S3 to store existing and new data. Enable Amazon S3 object lock and enable AWS CloudTrail with management events.
- C. Use AWS DataSync to move existing data to AWS. Use Amazon S3 to store existing and new data. Enable Amazon S3 object lock and enable AWS CloudTrail with management events.
- D. Use AWS Storage Gateway to move existing data to AWS. Use Amazon Elastic Block Store (Amazon EBS) to store existing and new data. Enable Amazon S3 object lock and enable Amazon S3 server access logging.
- Health records are managed on-premises
- Need to move existing data and also support future records
=> Need a solution to move existing data and support future records -> so, AWS DataSync should be used for migration.
Need granular audit access at all levels -> so, data events should be used in CloudTrail; management events are enabled by default.
=> Eliminate Storage Gateway since there are millions of records and the company wants to store all the data in AWS. D is also wrong because it mixes EBS with S3 Object Lock. That leaves the S3 options. AWS CloudTrail now supports Amazon S3 data events: you can record all API actions on S3 objects and receive detailed information such as the AWS account of the caller, the IAM user or role of the caller, the time of the API call, the IP address of the caller, and other details. This is exactly what the solution requires. Hence, A is the correct answer.
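A sketch of the Object Lock and data events pieces of answer A (bucket, trail name, and retention period are hypothetical; note Object Lock must be enabled when the bucket is created):

```python
import boto3

s3 = boto3.client("s3")
cloudtrail = boto3.client("cloudtrail")

s3.create_bucket(Bucket="health-records-example",       # hypothetical
                 ObjectLockEnabledForBucket=True)

# COMPLIANCE mode: nobody, not even the root user, can shorten or remove it.
s3.put_object_lock_configuration(
    Bucket="health-records-example",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
    },
)

# Data events log object-level calls (GetObject, PutObject, DeleteObject, ...).
cloudtrail.put_event_selectors(
    TrailName="records-trail",          # hypothetical existing trail
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::health-records-example/"],
        }],
    }],
)
```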
80. A company wants to use Amazon S3 for the secondary copy of its on-premises dataset. The company would rarely need to access this copy. The storage solution's cost should be minimal.
Which storage solution meets these requirements?
- A. S3 Standard
- B. S3 Intelligent-Tiering
- C. S3 Standard-Infrequent Access (S3 Standard-IA)
- D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
- The company wants to use Amazon S3 for a secondary copy of its on-premises dataset
- This copy will rarely be accessed
- Costs should be minimized
=> Least cost. There is no mention of resilience requirements, so there is no need for S3 Standard-IA.
=> S3 One Zone-IA is intended for use cases with infrequently accessed data that is re-creatable, such as storing secondary backup copies of on-premises data or for storage that is already replicated in another AWS Region for compliance or disaster recovery purposes.
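Uploading the secondary copy straight into One Zone-IA is a one-liner with boto3 (bucket and key are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

with open("dataset-backup.tar.gz", "rb") as f:
    s3.put_object(
        Bucket="onprem-secondary-copy",      # hypothetical bucket
        Key="backups/dataset-backup.tar.gz",
        Body=f,
        StorageClass="ONEZONE_IA",           # single-AZ, lowest-cost IA tier
    )
```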