Amazon's AWS Certified Solutions Architect - Associate SAA-C02 (2021.10.26)


81. A company's operations team has an existing Amazon S3 bucket configured to notify an Amazon SQS queue when a new object is created within the bucket. The development team also wants to receive events when new objects are created. The existing operations team workflow must remain intact.
Which solution would satisfy these requirements?

  • A. Create another SQS queue. Update the S3 events in the bucket to also update the new queue when a new object is created.
  • B. Create a new SQS queue that only allows Amazon S3 to access the queue. Update Amazon S3 to update this queue when a new object is created.
  • C. Create an Amazon SNS topic and SQS queue for the updates. Update the bucket to send events to the new topic. Update both queues to poll Amazon SNS.
  • D. Create an Amazon SNS topic and SQS queue for the bucket updates. Update the bucket to send events to the new topic. Add subscriptions for both queues to the topic.

=> "configured to notify an Amazon SQS queue when new object is created within the bucket" means SNS via Email, SMS

=>SNS push message to subscription, consumer not poll from SNS
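A minimal boto3 sketch of option D, with hypothetical topic, queue, and bucket names. The SNS topic policy that allows S3 to publish, and the SQS queue policies that allow SNS to deliver, are omitted for brevity.

```python
import boto3

sns = boto3.client("sns")
s3 = boto3.client("s3")

# 1. Create the topic and subscribe both queues to it (fan-out).
topic_arn = sns.create_topic(Name="bucket-object-created")["TopicArn"]
for queue_arn in [
    "arn:aws:sqs:us-east-1:123456789012:ops-queue",  # existing ops queue
    "arn:aws:sqs:us-east-1:123456789012:dev-queue",  # new dev queue
]:
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# 2. Point the bucket's event notifications at the topic instead of a queue.
s3.put_bucket_notification_configuration(
    Bucket="example-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {"TopicArn": topic_arn, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)
```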

 

82. An application runs on Amazon EC2 instances in private subnets. The application needs to access an Amazon DynamoDB table. What is the MOST secure way to access the table while ensuring that the traffic does not leave the AWS network?

  • A. Use a VPC endpoint for DynamoDB.
  • B. Use a NAT gateway in a public subnet.
  • C. Use a NAT instance in a private subnet.
  • D. Use the internet gateway attached to the VPC.

- The most secure method, given that traffic must not leave the AWS network

=> A VPC endpoint ensures the traffic does not leave the AWS network.

=> VPC endpoints: an interface endpoint uses AWS PrivateLink and is an elastic network interface (ENI) with a private IP address that serves as an entry point for traffic destined to a supported service. Using PrivateLink you can connect your VPC to supported AWS services, services hosted by other AWS accounts (VPC endpoint services), and supported AWS Marketplace partner services, and PrivateLink access works over inter-Region VPC peering. Note that DynamoDB is reached through a gateway endpoint, which adds a route-table entry rather than an ENI.
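A minimal boto3 sketch of option A, assuming hypothetical VPC and route-table IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for DynamoDB: requests from the private subnets are routed
# to the service through the VPC's route tables and never leave the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],  # route tables of the private subnets
)
```

An endpoint policy can additionally restrict which tables and actions are reachable through the endpoint.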

83. A company built an application that lets users check in to places they visit, rank the places, and add reviews about their experiences. The application is successful with a rapid increase in the number of users every month.
The chief technology officer fears the database supporting the current infrastructure may not handle the new load the following month because the single Amazon RDS for MySQL instance has triggered alarms related to resource exhaustion due to read requests.
What can a solutions architect recommend to prevent service interruptions at the database layer with minimal changes to code?

  • A. Create RDS read replicas and redirect read-only traffic to the read replica endpoints. Enable a Multi-AZ deployment.
  • B. Create an Amazon EMR cluster and migrate the data to a Hadoop Distributed File System (HDFS) with a replication factor of 3.
  • C. Create an Amazon ElastiCache cluster and redirect all read-only traffic to the cluster. Set up the cluster to be deployed in three Availability Zones.
  • D. Create an Amazon DynamoDB table to replace the RDS instance and redirect all read-only traffic to the DynamoDB table. Enable DynamoDB Accelerator to offload traffic from the main table.

=> The reason for A over C: read replicas serve dynamic content, while ElastiCache suits relatively static content. The location information is constantly updated, so choose A over C.

=> A for sure; the other options don't satisfy the requirements while keeping code changes minimal.
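A minimal boto3 sketch of option A, with hypothetical instance identifiers; the application then redirects read-only traffic to the replica endpoint.

```python
import boto3

rds = boto3.client("rds")

# Offload read traffic by adding a read replica of the overloaded primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",
    SourceDBInstanceIdentifier="app-db",
)

# Enable Multi-AZ on the primary for a synchronous standby (availability,
# not extra read capacity).
rds.modify_db_instance(
    DBInstanceIdentifier="app-db",
    MultiAZ=True,
    ApplyImmediately=True,
)
```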

84. A company is looking for a solution that can store video archives in AWS from old news footage. The company needs to minimize costs and will rarely need to restore these files. When the files are needed, they must be available in a maximum of five minutes.
What is the MOST cost-effective solution?

  • A. Store the video archives in Amazon S3 Glacier and use Expedited retrievals.
  • B. Store the video archives in Amazon S3 Glacier and use Standard retrievals.
  • C. Store the video archives in Amazon S3 Standard-Infrequent Access (S3 Standard-IA).
  • D. Store the video archives in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).

- Minimize costs

- The files will rarely need to be restored

- When needed, the data must be available within a maximum of five minutes

=> A for sure. You can use Expedited retrievals to access data in 1-5 minutes for a flat rate of $0.03 per GB retrieved. Expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required.

=> Answer is A: S3 Glacier is the cheapest option that also satisfies the retrieval requirement.

S3 One Zone-Infrequent Access: for re-creatable, infrequently accessed data that needs millisecond access; $0.01 per GB per month.

S3 Glacier: for long-term backups and archives with retrieval options from 1 minute to 12 hours; $0.004 per GB per month.

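A minimal boto3 sketch of an Expedited retrieval from S3 Glacier, with a hypothetical bucket and key:

```python
import boto3

s3 = boto3.client("s3")

# Temporarily restore an archived object using the Expedited tier (1-5 minutes).
s3.restore_object(
    Bucket="news-footage-archive",
    Key="1998/broadcast-0001.mxf",
    RestoreRequest={
        "Days": 1,  # keep the restored copy available for one day
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)
```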

 

85. A company has created a VPC with multiple private subnets in multiple Availability Zones (AZs) and one public subnet in one of the AZs. The public subnet is used to launch a NAT gateway. There are instances in the private subnets that use a NAT gateway to connect to the internet. In case of an AZ failure, the company wants to ensure that the instances are not all experiencing internet connectivity issues and that there is a backup plan ready.
Which solution should a solutions architect recommend that is MOST highly available?

  • A. Create a new public subnet with a NAT gateway in the same AZ. Distribute the traffic between the two NAT gateways.
  • B. Create an Amazon EC2 NAT instance in a new public subnet. Distribute the traffic between the NAT gateway and the NAT instance.
  • C. Create public subnets in each AZ and launch a NAT gateway in each subnet. Configure the traffic from the private subnets in each AZ to the respective NAT gateway.
  • D. Create an Amazon EC2 NAT instance in the same public subnet. Replace the NAT gateway with the NAT instance and associate the instance with an Auto Scaling group with an appropriate scaling policy.

- highly available

=> NAT gateways are more highly available than NAT instances, so not B and D. Between A and C: A places both NAT gateways in the same AZ, so an AZ failure still takes out all internet connectivity, while C gives each AZ its own NAT gateway. C is what "MOST highly available" asks for.
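A minimal boto3 sketch of option C, with hypothetical subnet, Elastic IP allocation, and route-table IDs (one set per AZ):

```python
import boto3

ec2 = boto3.client("ec2")

# One NAT gateway per AZ; each AZ's private route table points at its own gateway.
az_resources = [
    {"public_subnet": "subnet-aaa111", "eip_alloc": "eipalloc-aaa111", "private_rtb": "rtb-aaa111"},
    {"public_subnet": "subnet-bbb222", "eip_alloc": "eipalloc-bbb222", "private_rtb": "rtb-bbb222"},
]

for az in az_resources:
    nat_id = ec2.create_nat_gateway(
        SubnetId=az["public_subnet"],
        AllocationId=az["eip_alloc"],
    )["NatGateway"]["NatGatewayId"]
    # Wait until the gateway is usable before routing traffic to it.
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])
    ec2.create_route(
        RouteTableId=az["private_rtb"],
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )
```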

 

86. A healthcare company stores highly sensitive patient records. Compliance requires that multiple copies be stored in different locations. Each record must be stored for 7 years. The company has a service level agreement (SLA) to provide records to government agencies immediately for the first 30 days and then within 4 hours of a request thereafter.
What should a solutions architect recommend?

  • A. Use Amazon S3 with cross-Region replication enabled. After 30 days, transition the data to Amazon S3 Glacier using lifecycle policy.
  • B. Use Amazon S3 with cross-origin resource sharing (CORS) enabled. After 30 days, transition the data to Amazon S3 Glacier using a lifecycle policy.
  • C. Use Amazon S3 with cross-Region replication enabled. After 30 days, transition the data to Amazon S3 Glacier Deep Archive using a lifecycle policy.
  • D. Use Amazon S3 with cross-origin resource sharing (CORS) enabled. After 30 days, transition the data to Amazon S3 Glacier Deep Archive using a lifecycle policy.

- Stores highly sensitive information

- Multiple copies must be stored in different locations

- Each record must be stored for about 7 years

- Must be provided to government agencies quickly

=> If the question allowed a retrieval time of 12 hours or more (instead of 4 hours), the answer would be Deep Archive; with a 4-hour SLA, S3 Glacier is required, so the answer is A.

=> B & D - Irrelevant. CORS is about resource sharing in the context web hosting. CORS is a mechanism that allow you to override 'same-origin-policy' of browsers which prevent scripts of one origin to access private data hosted on anohter origin. 

87. A company recently deployed a new auditing system to centralize information about operating system versions, patching, and installed software for Amazon EC2 instances. A solutions architect must ensure all instances provisioned through EC2 Auto Scaling groups successfully send reports to the auditing system as soon as they are launched and terminated.
Which solution achieves these goals MOST efficiently?

  • A. Use a scheduled AWS Lambda function and execute a script remotely on all EC2 instances to send data to the audit system.
  • B. Use EC2 Auto Scaling lifecycle hooks to execute a custom script to send data to the audit system when instances are launched and terminated.
  • C. Use an EC2 Auto Scaling launch configuration to execute a custom script through user data to send data to the audit system when instances are launched and terminated.
  • D. Execute a custom script on the instance operating system to send data to the audit system. Configure the script to be executed by the EC2 Auto Scaling group when the instance starts and is terminated.

- centralize information

=> B: You can add a lifecycle hook to your Auto Scaling group so that you can perform custom actions when instances launch or terminate.

=> A. Use a scheduled AWS Lambda function and execute a script remotely on all EC2 instances to send data to the audit system. (Note: you can't use Lambda as a configuration management tool.)

C. Use an EC2 Auto Scaling launch configuration to execute a custom script through user data to send data to the audit system when instances are launched and terminated. (With user data we can execute commands only at instance creation time; we can't execute commands through user data while the instance is being terminated.)

D. Execute a custom script on the instance operating system to send data to the audit system. Configure the script to be executed by the EC2 Auto Scaling group when the instance starts and is terminated. (You can't tell the Auto Scaling group to execute a script on the instance.)
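A minimal boto3 sketch of option B, with hypothetical group, topic, and role names; a consumer of the notification target (for example, a Lambda function subscribed to the topic) runs the script that reports to the auditing system.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hooks fire on both transitions the question cares about: launch and terminate.
for hook_name, transition in [
    ("report-on-launch", "autoscaling:EC2_INSTANCE_LAUNCHING"),
    ("report-on-terminate", "autoscaling:EC2_INSTANCE_TERMINATING"),
]:
    autoscaling.put_lifecycle_hook(
        AutoScalingGroupName="app-asg",
        LifecycleHookName=hook_name,
        LifecycleTransition=transition,
        NotificationTargetARN="arn:aws:sns:us-east-1:123456789012:audit-events",
        RoleARN="arn:aws:iam::123456789012:role/asg-notification-role",
        HeartbeatTimeout=300,  # seconds allowed to complete the custom action
    )
```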

 

88. A company recently implemented hybrid cloud connectivity using AWS Direct Connect and is migrating data to Amazon S3. The company is looking for a fully managed solution that will automate and accelerate the replication of data between the on-premises storage systems and AWS storage services.
Which solution should a solutions architect recommend to keep the data private?

  • A. Deploy an AWS DataSync agent for the on-premises environment. Configure a sync job to replicate the data and connect it with an AWS service endpoint.
  • B. Deploy an AWS DataSync agent for the on-premises environment. Schedule a batch job to replicate point-in-time snapshots to AWS.
  • C. Deploy an AWS Storage Gateway volume gateway for the on-premises environment. Configure it to store data locally, and asynchronously back up point-in- time snapshots to AWS.
  • D. Deploy an AWS Storage Gateway file gateway for the on-premises environment. Configure it to store data locally, and asynchronously back up point-in-time snapshots to AWS.

=> You can use AWS DataSync with your Direct Connect link to access public service endpoints or private VPC endpoints. When using VPC endpoints, data transferred between the DataSync agent and AWS services does not traverse the public internet or need public IP addresses, increasing the security of data as it is copied over the network. DataSync is also the fully managed option that automates and accelerates the replication, so the answer is A; Storage Gateway (C, D) provides hybrid storage access rather than bulk replication.
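A minimal boto3 sketch of option A, assuming an already-activated DataSync agent and hypothetical ARNs, hostnames, and bucket names:

```python
import boto3

datasync = boto3.client("datasync")

# Source: on-premises NFS share reached through the deployed DataSync agent.
source_arn = datasync.create_location_nfs(
    ServerHostname="nas.example.internal",
    Subdirectory="/exports/archive",
    OnPremConfig={
        "AgentArns": ["arn:aws:datasync:us-east-1:123456789012:agent/agent-0123456789abcdef0"]
    },
)["LocationArn"]

# Destination: S3 bucket, written through an IAM role DataSync can assume.
dest_arn = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::migrated-archive",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/datasync-s3-access"},
)["LocationArn"]

# The sync task replicates data between the two locations.
task_arn = datasync.create_task(
    SourceLocationArn=source_arn,
    DestinationLocationArn=dest_arn,
    Name="onprem-to-s3-replication",
)["TaskArn"]
datasync.start_task_execution(TaskArn=task_arn)
```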

89. A company has 150 TB of archived image data stored on-premises that needs to be moved to the AWS Cloud within the next month. The company's current network connection allows up to 100 Mbps uploads for this purpose during the night only.
What is the MOST cost-effective mechanism to move this data and meet the migration deadline?

  • A. Use AWS Snowmobile to ship the data to AWS.
  • B. Order multiple AWS Snowball devices to ship the data to AWS.
  • C. Enable Amazon S3 Transfer Acceleration and securely upload the data.
  • D. Create an Amazon S3 VPC endpoint and establish a VPN to upload the data.

=> Snowmobile is for migrating large datasets of 10 PB or more in a single location; 150 TB is far below that, so multiple Snowball devices (B) are the cost-effective choice.

=> A 1 Gbps connection can transfer about 10 TB of data in one day; here we have a 100 Mbps line that can be used for data transfer during the night only, so the network options (C, D) cannot meet the one-month deadline.
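A back-of-the-envelope check, assuming roughly 8 usable hours per night:

```python
# 150 TB over a 100 Mbps uplink, nights only.
data_bits = 150e12 * 8        # 150 TB expressed in bits
rate_bps = 100e6              # 100 Mbps
night_seconds = 8 * 3600      # ~8 usable hours per night

print(f"24/7 upload:  {data_bits / rate_bps / 86400:.0f} days")            # ~139 days
print(f"nights only:  {data_bits / rate_bps / night_seconds:.0f} nights")  # ~417 nights
```

Either way the transfer takes far longer than a month, which is why shipping the data on Snowball devices wins.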

90. A public-facing web application queries a database hosted on an Amazon EC2 instance in a private subnet. A large number of queries involve multiple table joins, and the application performance has been degrading due to an increase in complex queries. The application team will be performing updates to improve performance.
What should a solutions architect recommend to the application team? (Choose two.)

  • A. Cache query data in Amazon SQS
  • B. Create a read replica to offload queries
  • C. Migrate the database to Amazon Athena
  • D. Implement Amazon DynamoDB Accelerator to cache data.
  • E. Migrate the database to Amazon RDS

- Improve performance

=> It's already mentioned that the queries join multiple tables, which is only the case with a structured database using unique keys. With DynamoDB you don't need a fixed table structure, and records don't all have to share the same attributes, so this workload undoubtedly belongs on RDS.

=> A is obviously wrong since SQS doesn't cache data. C is wrong because Athena is used to query data in S3. D is wrong because the question mentions table joins, implying a relational database; DynamoDB is non-relational. That leaves B and E.

 
