Amazon's AWS Certified Solutions Architect - Associate SAA-C02 (2021.10.22)
61. A company has a multi-tier application that runs six front-end web servers in an Amazon EC2 Auto Scaling group in a single Availability Zone behind an
Application Load Balancer (ALB). A solutions architect needs to modify the infrastructure to be highly available without modifying the application.
Which architecture should the solutions architect choose that provides high availability?
- A. Create an Auto Scaling group that uses three instances across each of two Regions.
- B. Modify the Auto Scaling group to use three instances across each of two Availability Zones.
- C. Create an Auto Scaling template that can be used to quickly create more instances in another Region.
- D. Change the ALB in front of the Amazon EC2 instances in a round-robin configuration to balance traffic to the web tier.
- high availability
=> An Auto Scaling group can contain Amazon EC2 instances from multiple Availability Zones within the same Region. However, an Auto Scaling group can't contain instances from multiple Regions.
=> High availability can be enabled for this architecture quite simply by modifying the existing Auto Scaling group to use multiple availability zones. The ASG will automatically balance the load so you don't actually need to specify the instances per AZ.
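As a minimal sketch, answer B amounts to one update of the existing group. The group name and AZ names below are hypothetical placeholders, not values from the question:

```python
# Hypothetical request parameters for spanning the existing ASG across two
# Availability Zones (answer B). The group name and AZs are placeholders.
asg_update = {
    "AutoScalingGroupName": "web-asg",
    "AvailabilityZones": ["us-east-1a", "us-east-1b"],
    # Keep six instances total; the ASG balances them across AZs on its own,
    # so there is no per-AZ instance count to specify.
    "MinSize": 6,
    "MaxSize": 6,
    "DesiredCapacity": 6,
}
# With boto3 this would be passed as:
# boto3.client("autoscaling").update_auto_scaling_group(**asg_update)
```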
62. A company runs an application on a group of Amazon Linux EC2 instances. For compliance reasons, the company must retain all application log files for 7 years.
The log files will be analyzed by a reporting tool that must access all files concurrently.
Which storage solution meets these requirements MOST cost-effectively?
- A. Amazon Elastic Block Store (Amazon EBS)
- B. Amazon Elastic File System (Amazon EFS)
- C. Amazon EC2 instance store
- D. Amazon S3
- An app is running on a group of Amazon Linux EC2 instances
- Application log files must be retained for 7 years
- The log files are analyzed by a reporting tool that accesses all files concurrently
- What is the most cost-effective solution?
=> "retain all application log files for 7 years" — S3 is the MOST cost-effective
=> The question mentions concurrent access, which EFS supports (as does Linux), so some argue the answer should be B
That opinion exists, but the requirement is cost-effectiveness over 7 years!!! So the answer is D
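As a sketch of answer D, a 7-year retention could be expressed as an S3 lifecycle rule. The bucket prefix and transition day counts below are hypothetical; the transition stays on Standard-IA (not Glacier) so the reporting tool can still read all files directly:

```python
# Hypothetical S3 lifecycle rule: retain log files 7 years, moving them to a
# cheaper but still instantly-readable storage class along the way.
# The "logs/" prefix and the 90-day transition are placeholders.
SEVEN_YEARS_DAYS = 7 * 365  # 2555

lifecycle_rule = {
    "ID": "retain-app-logs-7y",
    "Filter": {"Prefix": "logs/"},
    "Status": "Enabled",
    "Transitions": [
        # Standard-IA keeps millisecond access for the concurrent reporting tool
        {"Days": 90, "StorageClass": "STANDARD_IA"},
    ],
    "Expiration": {"Days": SEVEN_YEARS_DAYS},
}
# Applied with boto3 via put_bucket_lifecycle_configuration on the log bucket.
```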
63. A media streaming company collects real-time data and stores it in a disk-optimized database system. The company is not getting the expected throughput and wants an in-memory database storage solution that performs faster and provides high availability using data replication.
Which database should a solutions architect recommend?
- A. Amazon RDS for MySQL
- B. Amazon RDS for PostgreSQL.
- C. Amazon ElastiCache for Redis
- D. Amazon ElastiCache for Memcached
- A media streaming company collects real-time data and stores it in a disk-optimized database system
- It is not getting the expected throughput
- It wants an in-memory database storage solution that performs faster and provides high availability through data replication
- Which database?
=> Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides submillisecond latency to power internet-scale, real-time applications. Developers can use ElastiCache for Redis as an in-memory nonrelational database.
=> Redis lets you create multiple replicas of a Redis primary. This allows you to scale database reads and to have highly available clusters. Memcached does not.
=> Redis - real-time apps across versatile use cases like gaming, geospatial service, caching, session stores, or queuing, with advanced data structures, replication, and point-in-time snapshot support.
=> REDIS
• Multi AZ with Auto-Failover
• Read Replicas to scale reads and have high availability
• Data Durability using AOF persistence
• Backup and restore features
MEMCACHED
• Multi-node for partitioning of data (sharding)
• No high availability (replication)
• Non persistent
• No backup and restore
• Multi-threaded architecture
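The Redis-side properties above (replication, Multi-AZ, auto-failover) map to creation-time settings. As a hedged sketch, the group ID, node type, and cluster count below are placeholders:

```python
# Hypothetical parameters for an ElastiCache for Redis replication group
# (answer C) with read replicas and automatic failover for high availability.
# All names and sizes are placeholders.
redis_group = {
    "ReplicationGroupId": "media-realtime",
    "ReplicationGroupDescription": "in-memory store for real-time data",
    "Engine": "redis",
    "CacheNodeType": "cache.r6g.large",
    "NumCacheClusters": 3,            # 1 primary + 2 read replicas
    "AutomaticFailoverEnabled": True, # promote a replica if the primary fails
    "MultiAZEnabled": True,
}
# With boto3:
# boto3.client("elasticache").create_replication_group(**redis_group)
```

Memcached has no equivalent of these replication settings, which is why it fails the high-availability requirement.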
64. A company hosts its product information webpages on AWS. The existing solution uses multiple Amazon EC2 instances behind an Application Load Balancer in an
Auto Scaling group. The website also uses a custom DNS name and communicates with HTTPS only using a dedicated SSL certificate. The company is planning a new product launch and wants to be sure that users from around the world have the best possible experience on the new website.
What should a solutions architect do to meet these requirements?
- A. Redesign the application to use Amazon CloudFront.
- B. Redesign the application to use AWS Elastic Beanstalk.
- C. Redesign the application to use a Network Load Balancer.
- D. Redesign the application to use Amazon S3 static website hosting.
=> CloudFront can help provide the best experience for global users. CloudFront integrates seamlessly with ALB and provides an option to use a custom DNS name and SSL certs.
=> A static content + accessed by users around the world = CloudFront
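As a simplified sketch (not the exact `CreateDistribution` schema), answer A keeps the existing ALB as the CloudFront origin with the custom domain and dedicated certificate attached. The domain, ALB hostname, and certificate ARN are placeholders:

```python
# Hypothetical, simplified CloudFront settings for answer A: the ALB stays
# the origin, HTTPS is kept end to end, and the custom DNS name and dedicated
# cert move to the distribution. All identifiers are placeholders.
distribution = {
    "Aliases": ["www.example.com"],  # the custom DNS name
    "Origin": {
        "DomainName": "product-alb-123.us-east-1.elb.amazonaws.com",
        "Protocol": "https-only",    # HTTPS-only, matching the question
    },
    "ViewerCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/abc",
}
```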
65. A solutions architect is designing the cloud architecture for a new application being deployed on AWS. The process should run in parallel while adding and removing application nodes as needed based on the number of jobs to be processed. The processor application is stateless. The solutions architect must ensure that the application is loosely coupled and the job items are durably stored.
Which design should the solutions architect use?
- A. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage.
- B. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage.
- C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.
- D. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic.
=> In this case we need a durable and loosely coupled solution for storing jobs. Amazon SQS is ideal for this use case and can be configured to use dynamic scaling based on the number of jobs waiting in the queue. To configure this scaling, you can use the backlog-per-instance metric, with the target value being the acceptable backlog per instance to maintain. To calculate your backlog per instance, start with the ApproximateNumberOfMessages queue attribute to determine the length of the SQS queue.
=> SQS helps design a loosely coupled architecture with an ability to store the message durably. Auto Scaling Group with target scaling policy based on the number of items in SQS would help scale the application nodes as per the demand.
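The backlog-per-instance arithmetic described above can be sketched as two small helpers. The example numbers in the usage note are illustrative only:

```python
def backlog_per_instance(approx_number_of_messages: int,
                         running_instances: int) -> float:
    """Backlog metric from the explanation above: queue length (the SQS
    ApproximateNumberOfMessages attribute) divided by instance count."""
    return approx_number_of_messages / max(running_instances, 1)


def desired_capacity(approx_number_of_messages: int,
                     acceptable_backlog: int) -> int:
    """Instances needed to keep backlog per instance at the target value
    (ceiling division, so a partial backlog still adds an instance)."""
    return -(-approx_number_of_messages // acceptable_backlog)
```

For example, with 300 queued messages and an acceptable backlog of 50 per instance, the group should scale to 6 nodes; at 10 running instances the current backlog per instance is 30.0.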
66. A marketing company is storing CSV files in an Amazon S3 bucket for statistical analysis. An application on an Amazon EC2 instance needs permission to efficiently process the CSV data stored in the S3 bucket.
Which action will MOST securely grant the EC2 instance access to the S3 bucket?
- A. Attach a resource-based policy to the S3 bucket.
- B. Create an IAM user for the application with specific permissions to the S3 bucket.
- C. Associate an IAM role with least privilege permissions to the EC2 instance profile.
- D. Store AWS credentials directly on the EC2 instance for applications on the instance to use for API calls.
=> Keyword: least-privilege permissions + IAM role. AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. IAM is a feature of your AWS account offered at no additional charge; you are charged only for your users' use of other AWS services.
=> If I see IAM Role, I already know that's the correct answer without a doubt!
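A least-privilege policy for the instance role (answer C) might look like the following; the bucket name is a hypothetical placeholder, and only the read actions the CSV-processing app needs are granted:

```python
# Hypothetical least-privilege policy to attach to the IAM role in the EC2
# instance profile: read-only access to the one bucket holding the CSV files.
# The bucket name is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::marketing-csv-bucket",    # for ListBucket
            "arn:aws:s3:::marketing-csv-bucket/*",  # for GetObject
        ],
    }],
}
```

Because the role is assumed through the instance profile, no long-lived credentials ever sit on the instance (which is what makes option D the worst choice).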
67. A company has on-premises servers running a relational database. The current database serves high read traffic for users in different locations. The company wants to migrate to AWS with the least amount of effort. The database solution should support disaster recovery and not affect the company's current traffic flow.
Which solution meets these requirements?
- A. Use a database in Amazon RDS with Multi-AZ and at least one read replica.
- B. Use a database in Amazon RDS with Multi-AZ and at least one standby replica.
- C. Use databases hosted on multiple Amazon EC2 instances in different AWS Regions.
- D. Use databases hosted on Amazon EC2 instances behind an Application Load Balancer in different Availability Zones.
=> AWS Read Replica can support multi-region...
=> The question asks for both read traffic and disaster recovery. The "standby" instance just stands by and does nothing, so it doesn't help with read traffic. When the primary is down, a read replica can be promoted to the primary instance, so it helps somewhat with disaster recovery; in that case, though, the read replica becomes its own database.
68. A company's application is running on Amazon EC2 instances within an Auto Scaling group behind an Elastic Load Balancer. Based on the application's history, the company anticipates a spike in traffic during a holiday each year. A solutions architect must design a strategy to ensure that the Auto Scaling group proactively increases capacity to minimize any performance impact on application users.
Which solution will meet these requirements?
- A. Create an Amazon CloudWatch alarm to scale up the EC2 instances when CPU utilization exceeds 90%.
- B. Create a recurring scheduled action to scale up the Auto Scaling group before the expected period of peak demand.
- C. Increase the minimum and maximum number of EC2 instances in the Auto Scaling group during the peak demand period.
- D. Configure an Amazon Simple Notification Service (Amazon SNS) notification to send alerts when there are autoscaling EC2_INSTANCE_LAUNCH events.
=> Key line: "anticipates a spike in traffic". Anticipating = scheduled action.
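Answer B as a sketch: a recurring scheduled action that raises capacity before the holiday peak. The group name, sizes, date, and cron expression below are all hypothetical placeholders:

```python
# Hypothetical recurring scheduled action (answer B): scale the group up
# before the expected holiday peak each year. Names, sizes, and the date
# are placeholders.
scheduled_action = {
    "AutoScalingGroupName": "app-asg",
    "ScheduledActionName": "holiday-peak",
    "MinSize": 10,
    "MaxSize": 30,
    "DesiredCapacity": 20,
    "Recurrence": "0 0 24 12 *",  # cron: every Dec 24 at 00:00 UTC
}
# With boto3:
# boto3.client("autoscaling").put_scheduled_update_group_action(**scheduled_action)
```

Unlike the CloudWatch alarm in option A, this adds capacity *before* load arrives rather than reacting after CPU is already saturated.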
69. A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to an Amazon RDS table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not contain any duplicate messages.
What should a solutions architect do to ensure messages are being processed once only?
- A. Use the CreateQueue API call to create a new queue.
- B. Use the AddPermission API call to add appropriate permissions.
- C. Use the ReceiveMessage API call to set an appropriate wait time.
- D. Use the ChangeMessageVisibility API call to increase the visibility timeout.
=> Increasing the visibility timeout will allow more time for the message to be processed and deleted before it has a chance to become visible again to other consumers of the SQS queue.
=> The visibility timeout begins when Amazon SQS returns a message. During this time, the consumer processes and deletes the message. However, if the consumer fails before deleting the message and your system doesn't call the DeleteMessage action for that message before the visibility timeout expires, the message becomes visible to other consumers and the message is received again. If a message must be received only once, your consumer should delete it within the duration of the visibility timeout.
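Answer D boils down to sizing the timeout against processing time. A hedged sketch, where the 45-second processing time and the 2x headroom factor are assumptions for illustration, not values from the question:

```python
# Sketch for answer D: set the visibility timeout comfortably above the
# worst-case processing time so a message cannot reappear (and be processed
# twice) while a consumer is still working on it. The 45s processing time
# and 2x headroom factor are illustrative assumptions.
max_processing_seconds = 45
visibility_timeout = max_processing_seconds * 2  # headroom for slow runs

# Per received message, with boto3, this would be applied as:
# sqs.change_message_visibility(QueueUrl=queue_url,
#                               ReceiptHandle=receipt_handle,
#                               VisibilityTimeout=visibility_timeout)
```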
70.
- A. Users can terminate an EC2 instance in any AWS Region except us-east-1.
- B. Users can terminate an EC2 instance with the IP address 10.100.100.1 in the us-east-1 Region.
- C. Users can terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.
- D. Users cannot terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.
=> Statement 1: allows terminating instances (any) only if the source IP matches
Statement 2: denies all EC2 actions if the request comes from any Region other than us-east-1
=> The second part of the policy denies everything coming from any Region except us-east-1. That means only us-east-1 is allowed, plus there is a check on the source IP.
=> Since 10.100.100.1 is a reserved IP that is unavailable in the CIDR block, option B becomes invalid.
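The policy itself is not reproduced above (it was an image in the original question). A policy consistent with the two statements the commentary describes might look like the following reconstruction; the source-IP condition value in particular is a hypothetical placeholder, not the exam's actual CIDR:

```python
# Reconstruction (not the exam's actual policy) of the two statements the
# commentary describes. The aws:SourceIp value is a placeholder assumption.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Statement 1: allow terminating instances only from a matching
            # source IP
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {"IpAddress": {"aws:SourceIp": "10.100.100.254/32"}},
        },
        {   # Statement 2: deny all EC2 actions outside us-east-1
            "Effect": "Deny",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {"StringNotEquals": {"ec2:Region": "us-east-1"}},
        },
    ],
}
```

Since an explicit Deny always wins, termination is possible only in us-east-1 and only from the matching source IP, which is what makes answer C correct.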