Amazon's AWS Certified Solutions Architect - Associate SAA-C02 (2021.10.20)
41. A financial services company has a web application that serves users in the United States and Europe. The application consists of a database tier and a web server tier. The database tier consists of a MySQL database hosted in us-east-1. Amazon Route 53 geoproximity routing is used to direct traffic to instances in the closest Region. A performance review of the system reveals that European users are not receiving the same level of query performance as those in the United States.
Which changes should be made to the database tier to improve performance?
- A. Migrate the database to Amazon RDS for MySQL. Configure Multi-AZ in one of the European Regions.
- B. Migrate the database to Amazon DynamoDB. Use DynamoDB global tables to enable replication to additional Regions.
- C. Deploy MySQL instances in each Region. Deploy an Application Load Balancer in front of MySQL to reduce the load on the primary instance.
- D. Migrate the database to an Amazon Aurora global database in MySQL compatibility mode. Configure read replicas in one of the European Regions.
- A financial services company with users in the United States and Europe
- The application consists of a database tier and a web server tier
- The database tier is a MySQL database hosted in us-east-1
- Amazon Route 53 geoproximity routing is used to direct traffic to instances in the closest Region
- There is a query-performance gap between US and European users
=> A: Amazon RDS Multi-AZ is regional, so it is ruled out for a multi-Region requirement.
B: Changing the baseline DB from MySQL to DynamoDB is a drastic design change.
C: Two independent databases would mean inconsistent data.
D: The only option that solves the multi-Region performance issue, via Aurora global database read replicas.
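The option D setup can be sketched as two boto3 RDS calls: one promoting the existing us-east-1 cluster into a global database, one adding a read-only secondary cluster in Europe. This is a minimal sketch; the identifiers, account ID, and eu-west-1 Region are illustrative assumptions, not from the question.

```python
# Sketch (assumed identifiers): parameters for an Aurora global database in
# MySQL compatibility mode with a secondary (read-only) cluster in Europe.
# With boto3 these would be passed to:
#   boto3.client("rds", region_name="us-east-1").create_global_cluster(**global_db)
#   boto3.client("rds", region_name="eu-west-1").create_db_cluster(**secondary)

global_db = {
    "GlobalClusterIdentifier": "finance-global",   # hypothetical name
    "SourceDBClusterIdentifier": (
        # assumed ARN of the existing primary cluster in us-east-1
        "arn:aws:rds:us-east-1:123456789012:cluster:finance-primary"
    ),
}

secondary = {
    "DBClusterIdentifier": "finance-eu",           # hypothetical name
    "Engine": "aurora-mysql",                      # MySQL compatibility mode
    "GlobalClusterIdentifier": "finance-global",   # joins the global database
}
```

European read traffic then goes to the secondary cluster's reader endpoint, closing the latency gap without splitting writes across Regions.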
42. A company hosts a static website on-premises and wants to migrate the website to AWS. The website should load as quickly as possible for users around the world. The company also wants the most cost-effective solution.
What should a solutions architect do to accomplish this?
- A. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Replicate the S3 bucket to multiple AWS Regions.
- B. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Configure Amazon CloudFront with the S3 bucket as the origin.
- C. Copy the website content to an Amazon EBS-backed Amazon EC2 instance running Apache HTTP Server. Configure Amazon Route 53 geolocation routing policies to select the closest origin.
- D. Copy the website content to multiple Amazon EBS-backed Amazon EC2 instances running Apache HTTP Server in multiple AWS Regions. Configure Amazon CloudFront geolocation routing policies to select the closest origin.
- The company hosts a static website on-premises and wants to migrate it to AWS
- The website should load as quickly as possible for users around the world
- The solution should also be as cost-effective as possible
=> The key phrases in the question are "static website" and "load as quickly as possible around the world". S3 + CloudFront (a content delivery network) is a good fit; it also provides DDoS protection (AWS Shield), AWS WAF integration, and enhanced security via a CloudFront OAI (Origin Access Identity).
=> The most cost-effective option is to migrate the website to an Amazon S3 bucket and configure that bucket for static website hosting. To enable good performance for global users, the solutions architect should then configure a CloudFront distribution with the S3 bucket as the origin. This caches the static content around the world, closer to users.
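The two pieces of option B can be sketched as configuration payloads: a static-website configuration for the bucket and a distribution config with the bucket as origin. The bucket name, caller reference, and IDs are assumptions, and the distribution config is trimmed to the fields that matter here, not the full shape `create_distribution` requires.

```python
# Sketch (bucket name and IDs are assumptions): enable static website hosting
# on an S3 bucket, then front it with a CloudFront distribution.
# With boto3:
#   s3.put_bucket_website(Bucket="example-site", WebsiteConfiguration=website_config)
#   cloudfront.create_distribution(DistributionConfig=distribution_config)

website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},
}

distribution_config = {
    "CallerReference": "site-2021-10-20",   # must be unique per request
    "Comment": "static site",
    "Enabled": True,
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "s3-origin",
            # hypothetical bucket domain used as the origin
            "DomainName": "example-site.s3.amazonaws.com",
        }],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "s3-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
    },
}
```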
43. A solutions architect is designing storage for a high performance computing (HPC) environment based on Amazon Linux. The workload stores and processes a large amount of engineering drawings that require shared storage and heavy computing.
Which storage option would be the optimal solution?
- A. Amazon Elastic File System (Amazon EFS)
- B. Amazon FSx for Lustre
- C. Amazon EC2 instance store
- D. Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1)
- Storage for an HPC environment based on Amazon Linux
- The workload stores and processes a large amount of engineering drawings that require shared storage and heavy computing
=> HPC + Linux = FSx for Lustre
=> Lustre is actually a portmanteau of Linux and cluster, hence a perfect fit for HPC.
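An FSx for Lustre file system for this workload can be sketched as the parameters for a single boto3 call; the subnet ID is a hypothetical placeholder and the capacity is simply Lustre's smallest increment.

```python
# Sketch (subnet is an assumption): an FSx for Lustre file system for the
# shared HPC storage. With boto3:
#   fsx.create_file_system(**lustre_fs)
lustre_fs = {
    "FileSystemType": "LUSTRE",
    "StorageCapacity": 1200,                     # GiB; the minimum increment
    "SubnetIds": ["subnet-0123456789abcdef0"],   # hypothetical subnet
    "LustreConfiguration": {
        "DeploymentType": "SCRATCH_2",           # short-lived, high-throughput scratch
    },
}
```

Every Amazon Linux compute node then mounts the same Lustre file system, giving the shared, high-throughput storage the drawings workload needs.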
44. A company is performing an AWS Well-Architected Framework review of an existing workload deployed on AWS. The review identified a public-facing website running on the same Amazon EC2 instance as a Microsoft Active Directory domain controller that was installed recently to support other AWS services. A solutions architect needs to recommend a new design that would improve the security of the architecture and minimize the administrative demand on IT staff.
What should the solutions architect recommend?
- A. Use AWS Directory Service to create a managed Active Directory. Uninstall Active Directory on the current EC2 instance.
- B. Create another EC2 instance in the same subnet and reinstall Active Directory on it. Uninstall Active Directory.
- C. Use AWS Directory Service to create an Active Directory connector. Proxy Active Directory requests to the Active domain controller running on the current EC2 instance.
- D. Enable AWS Single Sign-On (AWS SSO) with Security Assertion Markup Language (SAML) 2.0 federation with the current Active Directory controller. Modify the EC2 instance's security group to deny public access to Active Directory.
- Improve security while minimizing the administrative burden on IT staff
=> Reduce risk = remove AD from that EC2 instance. Minimize admin = do not run AD on any EC2 instance -> use AWS Directory Service (managed Microsoft AD).
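The managed directory from option A can be sketched as the parameters for one Directory Service call; the domain name, VPC, and subnets are hypothetical placeholders.

```python
# Sketch (domain name, VPC, and subnets are assumptions): a managed
# Microsoft AD via AWS Directory Service. With boto3:
#   ds.create_microsoft_ad(**managed_ad)
managed_ad = {
    "Name": "corp.example.com",           # hypothetical directory DNS name
    "Password": "<admin-password>",       # placeholder; never hard-code secrets
    "Edition": "Standard",
    "VpcSettings": {
        "VpcId": "vpc-0123456789abcdef0",             # hypothetical VPC
        "SubnetIds": ["subnet-aaaa", "subnet-bbbb"],  # two subnets in different AZs
    },
}
```

Once the managed directory is up, Active Directory is uninstalled from the web server's EC2 instance: AWS handles patching, replication, and monitoring of the domain controllers, which is what minimizes the IT staff's administrative load.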
45. A company hosts a static website within an Amazon S3 bucket. A solutions architect needs to ensure that data can be recovered in case of accidental deletion.
Which action will accomplish this?
- A. Enable Amazon S3 versioning.
- B. Enable Amazon S3 Intelligent-Tiering.
- C. Enable an Amazon S3 lifecycle policy.
- D. Enable Amazon S3 cross-Region replication.
- A static website is hosted in an Amazon S3 bucket
- The data must be recoverable in case of accidental deletion
=> Object versioning is a means of keeping multiple variants of an object in the same Amazon S3 bucket. Versioning provides the ability to recover from both unintended user actions and application failures. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket.
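Enabling versioning is a single configuration payload; the bucket name below is an assumption.

```python
# Sketch (bucket name is an assumption): enable versioning so deleted
# objects can be recovered. With boto3:
#   s3.put_bucket_versioning(Bucket="example-site",
#                            VersioningConfiguration=versioning)
versioning = {"Status": "Enabled"}

# With versioning on, a DELETE only inserts a delete marker; removing the
# marker (or fetching an older VersionId) restores the object.
```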
46. A company's production application runs online transaction processing (OLTP) transactions on an Amazon RDS MySQL DB instance. The company is launching a new reporting tool that will access the same data. The reporting tool must be highly available and not impact the performance of the production application.
How can this be achieved?
- A. Create hourly snapshots of the production RDS DB instance.
- B. Create a Multi-AZ RDS Read Replica of the production RDS DB instance.
- C. Create multiple RDS Read Replicas of the production RDS DB instance. Place the Read Replicas in an Auto Scaling group.
- D. Create a Single-AZ RDS Read Replica of the production RDS DB instance. Create a second Single-AZ RDS Read Replica from the replica.
- The company's production application runs OLTP transactions on an Amazon RDS MySQL DB instance
- The company is launching a new reporting tool that will access the same data
- The reporting tool must be highly available and must not impact the performance of the production application
=> If answer C confuses you, note that RDS is a managed service and AWS handles the scaling; you do not place RDS instances in an Auto Scaling group. A Multi-AZ read replica is both highly available and isolated from the production workload.
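Option B can be sketched as the parameters for one boto3 call; the instance identifiers are hypothetical.

```python
# Sketch (instance identifiers are assumptions): a Multi-AZ read replica
# for the reporting tool. With boto3:
#   rds.create_db_instance_read_replica(**replica)
replica = {
    "DBInstanceIdentifier": "reporting-replica",   # hypothetical name
    "SourceDBInstanceIdentifier": "prod-mysql",    # the production instance
    "MultiAZ": True,   # a standby in another AZ makes the replica highly available
}
# The reporting tool connects to the replica's endpoint, so its queries
# never touch the production OLTP instance.
```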
47. A company runs an application in a branch office within a small data closet with no virtualized compute resources. The application data is stored on an NFS volume. Compliance standards require a daily offsite backup of the NFS volume.
Which solution meets these requirements?
- A. Install an AWS Storage Gateway file gateway on premises to replicate the data to Amazon S3.
- B. Install an AWS Storage Gateway file gateway hardware appliance on premises to replicate the data to Amazon S3.
- C. Install an AWS Storage Gateway volume gateway with stored volumes on premises to replicate the data to Amazon S3.
- D. Install an AWS Storage Gateway volume gateway with cached volumes on premises to replicate the data to Amazon S3.
- The branch office runs the application in a small data closet with no virtualized compute resources
- The application data is stored on an NFS volume
- Compliance standards require a daily offsite backup of the NFS volume
=> An offsite backup copies data to a remote server or to media that is transported offsite.
- The two most common forms of offsite backup are cloud backup and tape backup.
=> "small data closet with no virtualized compute resources" - this implies the need for an onsite hardware appliance.
=> See https://blog.naver.com/rlagpwlsq789/222189202401 for reference.
48. A company's web application is using multiple Linux Amazon EC2 instances and storing data on Amazon EBS volumes. The company is looking for a solution to increase the resiliency of the application in case of a failure and to provide storage that complies with atomicity, consistency, isolation, and durability (ACID).
What should a solutions architect do to meet these requirements?
- A. Launch the application on EC2 instances in each Availability Zone. Attach EBS volumes to each EC2 instance.
- B. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Mount an instance store on each EC2 instance.
- C. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance.
- D. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).
- The web application uses multiple Linux Amazon EC2 instances and stores data on Amazon EBS volumes
- The solution must increase resiliency in case of a failure and provide storage that complies with ACID
=> Storage that complies with atomicity, consistency, isolation, and durability (ACID) points directly to EFS: when ACID is mentioned, think EFS.
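The shared-storage half of option C can be sketched as an EFS file system plus one mount target per Availability Zone, so every instance behind the load balancer sees the same data. The token and subnet IDs are hypothetical placeholders.

```python
# Sketch (IDs are assumptions): an EFS file system and a mount target per
# AZ, shared by all EC2 instances in the Auto Scaling group. With boto3:
#   efs.create_file_system(**file_system)
#   efs.create_mount_target(FileSystemId=..., **mt)  # once per AZ's subnet
file_system = {
    "CreationToken": "webapp-shared",   # idempotency token (hypothetical)
    "PerformanceMode": "generalPurpose",
}
mount_targets = [
    {"SubnetId": "subnet-az-a"},        # hypothetical subnet in AZ a
    {"SubnetId": "subnet-az-b"},        # hypothetical subnet in AZ b
]
# Each instance then NFS-mounts the file system, e.g. (with amazon-efs-utils):
#   sudo mount -t efs fs-12345678:/ /mnt/shared
```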
49. A security team wants to limit access to specific services or actions in all of the team's AWS accounts. All accounts belong to a large organization in AWS Organizations.
The solution must be scalable and there must be a single point where permissions can be maintained.
What should a solutions architect do to accomplish this?
- A. Create an ACL to provide access to the services or actions.
- B. Create a security group to allow accounts and attach it to user groups.
- C. Create cross-account roles in each account to deny access to the services or actions.
- D. Create a service control policy in the root organizational unit to deny access to the services or actions.
- The security team wants to restrict access to specific services or actions across all of the team's AWS accounts
- All accounts belong to a large organization in AWS Organizations
- The solution must be scalable, with a single point where permissions can be maintained
=> Service control policies (SCPs) are one type of policy that you can use to manage your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization's access control guidelines.
=> A service control policy can restrict access in child accounts in a scalable way, with a single point where permissions are maintained.
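A deny SCP attached at the root OU can be sketched as follows; the denied service (DynamoDB) is an arbitrary example, not from the question.

```python
import json

# Sketch (the denied service is an example): a service control policy that,
# attached to the root OU, denies the listed actions in every account of the
# organization. With boto3:
#   org.create_policy(Content=json.dumps(scp),
#                     Type="SERVICE_CONTROL_POLICY",
#                     Name="deny-restricted-services",
#                     Description="single point of permission control")
#   org.attach_policy(PolicyId=..., TargetId=<root id>)
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["dynamodb:*"],   # example of a blocked service
        "Resource": "*",
    }],
}
print(json.dumps(scp, indent=2))
```

Because the SCP lives at the organization root, adding or removing a denied action is a single edit that immediately applies to every member account, which is what makes it scalable with one point of maintenance.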
50. A data science team requires storage for nightly log processing. The size and number of logs is unknown and will persist for 24 hours only.
What is the MOST cost-effective solution?
- A. Amazon S3 Glacier
- B. Amazon S3 Standard
- C. Amazon S3 Intelligent-Tiering
- D. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
- The data science team requires storage for nightly log processing
- The size and number of logs are unknown
- The logs persist for 24 hours only
- Which option is the most cost-effective?
=> It could be Intelligent-Tiering, since files can be sent directly to that tier, but its minimum billable storage duration is 30 days: even if the files are kept for only 24 hours, you are charged for 30 days. S3 Standard has no minimum billable duration, so you pay only for the actual usage (24 hours).
=> The size and number of logs are unknown and the logs are rotated out after 24 hours. Standard-IA assumes objects larger than 128 KB kept for at least 30 days, and neither is guaranteed here, so S3 Standard it is.
=> The S3 Standard-IA and S3 One Zone-IA storage classes are suitable for objects larger than 128 KB that you plan to store for at least 30 days. If you delete an object before the end of the 30-day minimum storage duration period, you are charged for 30 days.
=> S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA storage are charged for a minimum storage duration of 30 days.
=> Objects archived to S3 Glacier and S3 Glacier Deep Archive have minimums of 90 days and 180 days of storage, respectively. Objects deleted before 90 or 180 days incur a pro-rated charge equal to the storage charge for the remaining days.
=> A: Glacier is cheaper per GB but has a 90-day minimum retention, so even though the data is deleted within 24 hours you end up paying for 90 days, making it costlier than S3 Standard. B: Seems costly, but for this situation it is the most cost-effective. C: Intelligent-Tiering is still subject to the 30-day minimum storage duration. D: Seems cheaper at first, but because of the 30-day minimum storage fee for IA and One Zone-IA you pay for 30 days even though the data is deleted within 24 hours, so it is actually costlier than S3 Standard.
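Since the logs only live for 24 hours, a lifecycle rule can expire them automatically from the S3 Standard bucket. The bucket prefix and rule ID below are assumptions.

```python
# Sketch (prefix and rule ID are assumptions): a lifecycle rule that expires
# the nightly logs one day after creation, so storage is only ever paid for
# the 24 hours the logs are needed. With boto3:
#   s3.put_bucket_lifecycle_configuration(Bucket="log-bucket",
#                                         LifecycleConfiguration=lifecycle)
lifecycle = {
    "Rules": [{
        "ID": "expire-nightly-logs",
        "Filter": {"Prefix": "nightly/"},   # hypothetical key prefix
        "Status": "Enabled",
        "Expiration": {"Days": 1},          # delete objects a day after creation
    }],
}
```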