1. Which DNS name can only be resolved within Amazon EC2?
- D. Private DNS name
- http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-dns.html
- An Amazon-provided private (internal) DNS hostname resolves to the private IPv4 address of the instance, and takes the form ip-private-ipv4-address.ec2.internal for the us-east-1 region, and ip-private-ipv4-address.region.compute.internal for other regions (where private-ipv4-address is the reverse lookup IP address). You can use the private DNS hostname for communication between instances in the same network, but the DNS hostname can't be resolved outside the network that the instance is in.
2. Is it possible to access your EBS snapshots?
- B. Yes, through the Amazon EC2 APIs.
- https://aws.amazon.com/ebs/faqs/?nc1=h_ls
- Q: Will I be able to access my snapshots using the regular Amazon S3 API? No, snapshots are only available through the Amazon EC2 API.
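A minimal boto3 sketch of listing your snapshots through the EC2 API (the region is a placeholder):

```python
import boto3

# The EC2 API, not the S3 API: snapshots are only reachable this way.
ec2 = boto3.client("ec2", region_name="us-east-1")  # region is a placeholder
for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
    print(snap["SnapshotId"], snap["VolumeId"], snap["State"])
```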
3. Can the string value of ‘Key’ be prefixed with “aws:”?
- A. No
- http://docs.aws.amazon.com/cli/latest/reference/rds/list-tags-for-resource.html
- Key -> (string)
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and cannot be prefixed with “aws:” or “rds:”. The string can contain only the set of Unicode letters, digits, white-space, ‘_’, ‘.’, ‘/’, ‘=’, ‘+’, ‘-‘ (Java regex: “^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$”).
Value -> (string)
A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and cannot be prefixed with “aws:” or “rds:”. The string can contain only the set of Unicode letters, digits, white-space, ‘_’, ‘.’, ‘/’, ‘=’, ‘+’, ‘-‘ (Java regex: “^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$”).
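A short boto3 sketch illustrating the rule; the DB instance ARN and tag names are hypothetical, and the service is expected to reject the “aws:”-prefixed key:

```python
import boto3
from botocore.exceptions import ClientError

rds = boto3.client("rds")
ARN = "arn:aws:rds:us-east-1:123456789012:db:mydb"  # hypothetical ARN

# A normal key is fine; a key prefixed with "aws:" is reserved and rejected.
rds.add_tags_to_resource(ResourceName=ARN, Tags=[{"Key": "env", "Value": "prod"}])
try:
    rds.add_tags_to_resource(ResourceName=ARN, Tags=[{"Key": "aws:env", "Value": "prod"}])
except ClientError as err:
    print("rejected as expected:", err.response["Error"]["Code"])
```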
4. In the context of MySQL, version numbers are organized as MySQL version = X.Y.Z. What does X denote here?
- D. major version
- https://aws.amazon.com/rds/mysql/faqs/
- MySQL version = X.Y.Z, where X = major version, Y = release level, and Z = version number within the release series.
5. Is decreasing the storage size of a DB Instance permitted?
- C. No
- You cannot decrease the storage allocated for a DB instance.
6. Select the correct set of steps for exposing the snapshot only to specific AWS accounts
- B. Select Private, enter the IDs of those AWS accounts, and click Save.
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html
- To expose the snapshot to only specific AWS accounts, choose Private, enter the ID of the AWS account (without hyphens) in the AWS Account Number field, and choose Add Permission. Repeat until you’ve added all the required AWS accounts.
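The console steps above map to a single API call; a boto3 sketch (the snapshot ID and account IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")
# Grant createVolumePermission to specific accounts instead of making it public.
ec2.modify_snapshot_attribute(
    SnapshotId="snap-0123456789abcdef0",       # placeholder
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=["111122223333", "444455556666"],  # target AWS account IDs
)
```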
7. Which Amazon storage option is the best for database-style applications that frequently encounter many random reads and writes across the dataset?
- D. Amazon EBS
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html
- Amazon EBS is recommended when data must be quickly accessible and requires long-term persistence. EBS volumes are particularly well-suited for use as the primary storage for file systems, databases, or for any applications that require fine granular updates and access to raw, unformatted, block-level storage. Amazon EBS is well suited to both database-style applications that rely on random reads and writes, and to throughput-intensive applications that perform long, continuous reads and writes.
8. Does Route 53 support MX records (Mail Exchange Records)?
- A. Yes.
- http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/ResourceRecordTypes.html#MXFormat
- Each value for an MX resource record set actually contains two values:
- An integer that represents the priority for an email server
- The domain name of the email server
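A boto3 sketch of creating an MX record set; the hosted zone ID and domain are placeholders, and the record value carries both the priority and the mail server name:

```python
import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",
                "Type": "MX",
                "TTL": 300,
                # "<priority> <mail server domain name>"
                "ResourceRecords": [{"Value": "10 mail.example.com"}],
            },
        }]
    },
)
```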
9. Does Amazon RDS for SQL Server currently support importing data into the msdb database?
- A. No
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html#SQLServer.Procedural.Importing.Procedure
- Amazon RDS for Microsoft SQL Server does not support importing data into the msdb database.
10. If your DB instance runs out of storage space or file system resources, its status will change to _____ and your DB Instance will no longer be available.
- B. storage-full
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Troubleshooting.html
- If your database instance runs out of storage, its status will change to storage-full.
11. Is it possible to access your EBS snapshots?
- B. Yes, through the Amazon EC2 APIs.
- https://aws.amazon.com/ebs/faqs/?nc1=h_ls
- Q: Will I be able to access my snapshots using the regular Amazon S3 API? No, snapshots are only available through the Amazon EC2 API.
12. It is advised that you watch the Amazon CloudWatch “_____” metric (available via the AWS Management Console or Amazon CloudWatch APIs) carefully and recreate the Read Replica should it fall behind due to replication errors.
- C. Replica Lag
- https://aws.amazon.com/rds/mysql/faqs/
- Q: Which storage engines are supported for use with Amazon RDS for MySQL Read Replicas? Amazon RDS for MySQL Read Replicas require a transactional storage engine and are only supported for the InnoDB storage engine. Non-transactional MySQL storage engines such as MyISAM might prevent Read Replicas from working as intended. However, if you still choose to use MyISAM with Read Replicas, we advise you to watch the Amazon CloudWatch “Replica Lag” metric (available via the AWS Management Console or Amazon CloudWatch APIs) carefully and recreate the Read Replica should it fall behind due to replication errors. The same considerations apply to the use of temporary tables and any other non-transactional engines.
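A sketch of reading the Replica Lag metric with boto3 (the replica identifier is a placeholder):

```python
import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch")
stats = cw.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="ReplicaLag",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-read-replica"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], "seconds behind")
```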
13. By default, what are ENIs (Elastic Network Interfaces) that are automatically created and attached to instances using the EC2 console set to do when the attached instance terminates?
- B. Terminate
- http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ElasticNetworkInterfaces.html
- Changing Termination Behavior: By default, network interfaces that are automatically created and attached to instances using the console are set to terminate when the instance terminates. However, network interfaces created using the command line interface aren’t set to terminate when the instance terminates.
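If you want a console-created ENI to survive instance termination, the flag can be flipped on the attachment; a boto3 sketch with placeholder IDs:

```python
import boto3

ec2 = boto3.client("ec2")
ec2.modify_network_interface_attribute(
    NetworkInterfaceId="eni-0123456789abcdef0",          # placeholder
    Attachment={
        "AttachmentId": "eni-attach-0123456789abcdef0",  # placeholder
        "DeleteOnTermination": False,  # keep the ENI when the instance terminates
    },
)
```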
14. You can use _____ and _____ to help secure the instances in your VPC.
- D. security groups and network ACLs
15. _____ is a durable, block-level storage volume that you can attach to a single, running Amazon EC2 instance.
- B. Amazon EBS
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html
- An Amazon EBS volume is a durable, block-level storage device. By default, EBS volumes that are attached to a running instance automatically detach from the instance with their data intact when that instance is terminated.
16. If I want my instance to run on a single-tenant hardware, which value do I have to set the instance’s tenancy attribute to?
- A. dedicated
- https://aws.amazon.com/ec2/purchasing-options/dedicated-instances/
- Dedicated Instances are Amazon EC2 instances that run in a VPC on hardware that’s dedicated to a single customer. Your Dedicated instances are physically isolated at the host hardware level from instances that belong to other AWS accounts. Dedicated instances may share hardware with other instances from the same AWS account that are not Dedicated instances. Pay for Dedicated Instances On-Demand, save up to 70% by purchasing Reserved Instances, or save up to 90% by purchasing Spot Instances.
17. What does Amazon RDS stand for?
- B. Relational Database Service.
18. What does Amazon ELB stand for?
- D. Elastic Load Balancing.
19. Is there a limit to the number of groups you can have?
- C. Yes, unless special permission is granted
- http://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html
- There’s a limit to the number of groups you can have, and a limit to how many groups a user can be in. For more information, see Limitations on IAM Entities and Objects.
- http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-limits.html
Groups in an AWS account: 100. You can request to increase some of these quotas for your AWS account on the IAM Limit Increase Contact Us Form. Currently you can request to increase the limit on users per AWS account, groups per AWS account, roles per AWS account, instance profiles per AWS account, and server certificates per AWS account.
20. The location of instances is ____________
- B. based on Availability Zone
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-regions-availability-zones
- When you launch an instance, you can select an Availability Zone or let us choose one for you. If you distribute your instances across multiple Availability Zones and one instance fails, you can design your application so that an instance in another Availability Zone can handle requests.
21. Is there any way to own a direct connection to Amazon Web Services?
- D. Yes, it’s called Direct Connect.
- http://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html
- Create a connection in an AWS Direct Connect location to establish a network connection from your premises to an AWS region
- https://aws.amazon.com/directconnect/
- AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS
22. You must assign each server to at least _____________ security group
- C. 1
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html
- Your AWS account automatically has a default security group per VPC and per region for EC2-Classic. If you don’t specify a security group when you launch an instance, the instance is automatically associated with the default security group.
23. Does DynamoDB support in-place atomic updates?
- C. Yes
- https://aws.amazon.com/dynamodb/faqs/
- Amazon DynamoDB supports fast in-place updates. You can increment or decrement a numeric attribute in a row using a single API call. Similarly, you can atomically add or remove to sets, lists, or maps. View our documentation for more information on atomic updates.
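A boto3 sketch of an in-place atomic increment (the table name and key schema are hypothetical):

```python
import boto3

table = boto3.resource("dynamodb").Table("Counters")  # hypothetical table
# Atomically increment a numeric attribute in place with one API call.
table.update_item(
    Key={"pk": "page#home"},                          # hypothetical key schema
    UpdateExpression="ADD view_count :inc",
    ExpressionAttributeValues={":inc": 1},
)
```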
24. Is there a method in the IAM system to allow or deny access to a specific instance?
- C. No
- http://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UseCases.html
- There’s no method in the IAM system to allow or deny access to the operating system of a specific instance.
25. What does Amazon SES stand for?
- B. Simple Email Service
- https://aws.amazon.com/ses/
- Amazon Simple Email Service (Amazon SES) is a cost-effective email service
26. Amazon S3 doesn’t automatically give a user who creates _____ permission to perform other actions on that bucket or object.
- B. a bucket or object
- http://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UseCases.html
- Amazon S3 doesn’t automatically give a user who creates a bucket or object permission to perform other actions on that bucket or object. Therefore, in your IAM policies, you must explicitly give users permission to use the Amazon S3 resources they create.
27. Can I attach more than one policy to a particular entity?
- A. Yes always
- http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
- You can attach more than one policy to an entity
28. Fill in the blanks: A _____ is a storage device that moves data in sequences of bytes or bits (blocks). Hint: These devices support random access and generally use buffered I/O.
- D. block device
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html
- A block device is a storage device that moves data in sequences of bytes or bits (blocks). These devices support random access and generally use buffered I/O.
29. Can I detach the primary (eth0) network interface when the instance is running or stopped?
- B. No. You cannot
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
- You cannot detach a primary network interface from an instance.
30. What’s an ECU?
- D. Elastic Compute Unit.
- https://aws.amazon.com/ec2/faqs/
- EC2 Compute Unit (ECU)
31. REST or Query requests are HTTP or HTTPS requests that use an HTTP verb (such as GET or POST) and a parameter named Action or Operation that specifies the API you are calling.
- A. FALSE
- http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Query-Requests.html
- Query requests are HTTP or HTTPS requests that use the HTTP verb GET or POST and a Query parameter named Action.
32. What is the charge for the data transfer incurred in replicating data between your primary and standby?
- A. No charge. It is free.
- https://aws.amazon.com/rds/faqs/?nc1=h_ls
- Data transfer – You are not charged for the data transfer incurred in replicating data between your primary and standby. Internet data transfer in and out of your DB instance is charged the same as with a standard deployment.
33. Does AWS Direct Connect allow you access to all Availability Zones within a Region?
- C. Yes
- https://aws.amazon.com/directconnect/faqs/
- Q. What Availability Zone(s) can I connect to via this connection?
Each AWS Direct Connect location enables connectivity to all Availability Zones within the geographically nearest AWS region.
34. How many types of block devices does Amazon EC2 support?
- 2 (per the documentation quoted below)
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html
- Amazon EC2 supports two types of block devices:
- Instance store volumes (virtual devices whose underlying hardware is physically attached to the host computer for the instance)
- EBS volumes (remote storage devices)
35. What does the “Server Side Encryption” option on Amazon S3 provide?
- C. It encrypts the files that you send to Amazon S3, on the server side.
- https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
- Amazon S3 supports bucket policies that you can use if you require server-side encryption for all objects that are stored in your bucket.
36. Making your snapshot public shares all snapshot data with everyone. Can snapshots with AWS Marketplace product codes be made public?
- No. Sharing a snapshot publicly is NOT POSSIBLE for encrypted snapshots or snapshots with AWS Marketplace product codes.
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html#d0e80912
37. What does Amazon EBS stand for?
- D. Elastic Block Store
- https://aws.amazon.com/ebs/
- Amazon Elastic Block Store (EBS)
38. Within the IAM service a GROUP is regarded as a:
- D. A collection of users
- http://docs.aws.amazon.com/IAM/latest/UserGuide/id.html
- An IAM group is a collection of IAM users. You can use groups to specify permissions for a collection of users, which can make those permissions easier to manage for those users
39. A __________ is the concept of allowing (or disallowing) an entity such as a user, group, or role some type of access to one or more resources.
- D. permission
- http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_access-management.html
- Permissions are granted through policies that are created and then attached to users, groups, or roles.
40. Do the system resources on the Micro instance meet the recommended configuration for Oracle?
- B. Yes but only for certain situations
- https://acloud.guru/forums/aws-certified-solutions-architect-associate/discussion/-KGlCEGaR-JvcEQbg_g_/micro-instance-oracle-rds
- “We recommend that you use db.t1.micro instances with Oracle to test setup and connectivity only.”
41. Will I be charged if the DB instance is idle?
- B. Yes
- https://aws.amazon.com/rds/faqs/
- Q: When does billing of my Amazon RDS DB instances begin and end? Billing commences for a DB instance as soon as the DB instance is available. Billing continues until the DB instance terminates, which would occur upon deletion or in the event of instance failure.
42. To help you manage your Amazon EC2 instances, images, and other Amazon EC2 resources, you can assign your own metadata to each resource in the form of ____________
- C. tags
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html
- To help you manage your instances, images, and other Amazon EC2 resources, you can optionally assign your own metadata to each resource in the form of tags
43. True or False: When you add a rule to a DB security group, you do not need to specify port number or protocol.
- B. TRUE
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html
| DB Security Group | VPC Security Group |
| --- | --- |
| Controls access to DB instances outside a VPC. | Controls access to DB instances in a VPC. |
| Uses Amazon RDS APIs or the Amazon RDS page of the AWS Management Console to create and manage groups/rules. | Uses Amazon EC2 APIs or the Amazon VPC page of the AWS Management Console to create and manage groups/rules. |
| When you add a rule to a group, you do not need to specify port number or protocol. | When you add a rule to a group, you should specify the protocol as TCP, and specify the same port number that you used to create the DB instances (or Options) you plan to add as members to the group. |
| Groups allow access from EC2 security groups in your AWS account or other accounts. | Groups allow access from other VPC security groups in your VPC only. |
44. Can I initiate a “forced failover” for my Oracle Multi-AZ DB Instance deployment?
- A. Yes
- https://aws.amazon.com/rds/faqs/#46
- Q: Can I initiate a “forced failover” for my Multi-AZ DB instance deployment? Amazon RDS will automatically failover without user intervention under a variety of failure conditions. In addition, Amazon RDS provides an option to initiate a failover when rebooting your instance. You can access this feature via the AWS Management Console or when using the RebootDBInstance API call.
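A minimal sketch of the reboot-with-failover option via boto3 (the instance identifier is a placeholder):

```python
import boto3

rds = boto3.client("rds")
# Reboot the Multi-AZ deployment and force a failover to the standby.
rds.reboot_db_instance(DBInstanceIdentifier="my-oracle-db", ForceFailover=True)
```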
45. Amazon EC2 provides a repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. What is the monthly charge for using the public data sets?
- D. There is no charge for using the public data sets
- https://aws.amazon.com/public-datasets/
- AWS hosts a variety of public datasets that anyone can access for free.
46. In the Amazon RDS Oracle DB engine, the Database Diagnostic Pack and the Database Tuning Pack are only available with ______________
- C. Oracle Enterprise Edition
- https://aws.amazon.com/rds/faqs/#46
- Q: Which Enterprise Edition Options are supported on Amazon RDS? The following Enterprise Edition Options are currently supported under the BYOL model:
- Advanced Security (Transparent Data Encryption, Native Network Encryption)
- Partitioning
- Management Packs (Diagnostic, Tuning)
- Advanced Compression
- Total Recall
47. Amazon RDS supports SOAP only through __________.
- D. HTTPS
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/using-soap-api.html
- Amazon RDS supports SOAP only through HTTPS.
48. Without _____, you must either create multiple AWS accounts-each with its own billing and subscriptions to AWS products-or your employees must share the security credentials of a single AWS account.
- D. Amazon IAM
- http://docs.aws.amazon.com/IAM/latest/UserGuide/getting-setup.html
- Without IAM, however, you must either create multiple AWS accounts—each with its own billing and subscriptions to AWS products—or your employees must share the security credentials of a single AWS account.
49. The Amazon EC2 web service can be accessed using the _____ web services messaging protocol. This interface is described by a Web Services Description Language (WSDL) document.
- A. SOAP
50. HTTP Query-based requests are HTTP requests that use the HTTP verb GET or POST and a Query parameter named _____________.
- A. Action
51. Amazon RDS creates an SSL certificate and installs the certificate on the DB Instance when Amazon RDS provisions the instance. These certificates are signed by a certificate authority. The _____ is stored at https://rds.amazonaws.com/doc/rds-ssl-ca-cert.pem.
- C. public key
- https://aws.amazon.com/blogs/aws/amazon-rds-sql-server-ssl-support/
- Enabling SSL Support
Here’s all you need to do to enable SSL Support: Download a public certificate key from RDS at https://rds.amazonaws.com/doc/rds-ssl-ca-cert.pem
52. What is the name of the licensing model in which you can use your existing Oracle Database licenses to run Oracle deployments on Amazon RDS?
- A. Bring Your Own License
- https://aws.amazon.com/oracle/
- Oracle customers can now license Oracle Database 12c, Oracle Fusion Middleware, and Oracle Enterprise Manager to run in the AWS cloud computing environment. Oracle customers can also use their existing Oracle software licenses on Amazon EC2 with no additional license fees. So, whether you’re a long-time Oracle customer or a new user, AWS can get you started quickly.
53. ____________ embodies the “share-nothing” architecture and essentially involves breaking a large database into several smaller databases. Common ways to split a database include 1) splitting tables that are not joined in the same query onto different hosts or 2) duplicating a table across multiple hosts and then using a hashing algorithm to determine which host receives a given update.
- A. Sharding
- https://forums.aws.amazon.com/thread.jspa?messageID=203052
- Sharding embodies the “share-nothing” architecture and essentially just involves breaking a larger database up into smaller databases.
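A toy sketch of the hashing approach described in the question (the shard host names are hypothetical):

```python
import hashlib

SHARDS = ["db-host-1", "db-host-2", "db-host-3", "db-host-4"]  # hypothetical hosts

def shard_for(key: str) -> str:
    """Hash the row key and map it to one of the shard hosts."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("user#42"))  # the same key always routes to the same shard
```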
54. When you resize the Amazon RDS DB instance, Amazon RDS will perform the upgrade during the next maintenance window. If you want the upgrade to be performed now, rather than waiting for the maintenance window, specify the _____ option.
- D. ApplyImmediately
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html
- To apply changes immediately, you select the Apply Immediately option in the AWS Management Console
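The console’s Apply Immediately checkbox corresponds to the ApplyImmediately parameter; a boto3 sketch (the identifier and instance class are placeholders):

```python
import boto3

rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",    # placeholder
    DBInstanceClass="db.m4.large",  # the new size
    ApplyImmediately=True,          # don't wait for the maintenance window
)
```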
55. Does Amazon Route 53 support NS Records?
- A. Yes, it supports Name Service records.
- https://aws.amazon.com/route53/faqs/
- Q. Which DNS record types does Amazon Route 53 support? Amazon Route 53 currently supports the following DNS record types:
- A (address record)
- AAAA (IPv6 address record)
- CNAME (canonical name record)
- MX (mail exchange record)
- NAPTR (name authority pointer record)
- NS (name server record)
- PTR (pointer record)
- SOA (start of authority record)
- SPF (sender policy framework)
- SRV (service locator)
- TXT (text record)
56. The SQL Server _____ feature is an efficient means of copying data from a source database to your DB Instance. It writes the data that you specify to a data file, such as an ASCII file.
- A. bulk copy
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html
Bulk Copy
The SQL Server bulk copy feature is an efficient means of copying data from a source database to your DB instance. Bulk copy writes the data that you specify to a data file, such as an ASCII file. You can then run bulk copy again to write the contents of the file to the destination DB instance.
57. When using consolidated billing there are two account types. What are they?
- A. Paying account and Linked account
- http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidated-billing.html
- You sign up for Consolidated Billing in the AWS Billing and Cost Management console, and designate your account as a payer account. Now your account can pay the charges of the other accounts, which are called linked accounts. The payer account and the accounts linked to it are called a Consolidated Billing account family.
58. A __________ is a document that provides a formal statement of one or more permissions.
- A. policy
- http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
- A policy consists of one or more statements, each of which describes one set of permissions
59. In the Amazon RDS which uses the SQL Server engine, what is the maximum size for a Microsoft SQL Server DB Instance with SQL Server Express edition?
- 300 GB (per the documentation quoted below; the 4 TB maximum applies to the Enterprise, Standard, and Web editions)
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SQLServer.html
- The minimum storage size for a SQL Server DB instance is 20 GB for the Express and Web Editions, and 200 GB for the Standard and Enterprise Editions.The maximum storage size for a SQL Server DB instance is 4 TB for the Enterprise, Standard, and Web editions, and 300 GB for the Express edition.
60. Regarding the attaching of ENI to an instance, what does ‘warm attach’ refer to?
- A. Attaching an ENI to an instance when it is stopped.
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#best-practices-for-configuring-network-interfaces
- You can attach a network interface to an instance
- when it’s running (hot attach),
- when it’s stopped (warm attach), or
- when the instance is being launched (cold attach).
61. If I scale the storage capacity provisioned to my DB Instance in the middle of a billing month, how will I be charged?
- B. On a proration basis
- https://aws.amazon.com/rds/faqs/#15
- Q: How will I be charged and billed for my use of Amazon RDS? You pay only for what you use, and there are no minimum or setup fees. You are billed based on:
- Storage (per GB per month) – Storage capacity you have provisioned to your DB instance. If you scale your provisioned storage capacity within the month, your bill will be pro-rated.
62. You can modify the backup retention period; valid values are 0 (for no backup retention) to a maximum of ___________ days.
- B. 35
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html
- You can set the backup retention period to between 1 and 35 days.
63. A Provisioned IOPS volume must be at least __________ GB in size
- D. 10
- This question is deprecated; the current limits are 4 GB – 16 TB:
https://aws.amazon.com/ebs/details/
The answer at the time was D (10 GB).
64. Will I be alerted when automatic failover occurs?
- C. Yes (For RDS)
- A. Only if SNS is configured (for others)
- https://aws.amazon.com/rds/faqs/
- Q: Will I be alerted when automatic failover occurs?
- Yes, Amazon RDS will emit a DB instance event to inform you that automatic failover occurred. You can click the “Events” section of the Amazon RDS Console or use the DescribeEvents API to return information about events related to your DB instance. You can also use Amazon RDS Event Notifications to be notified when specific DB events occur.
65. How can an EBS volume that is currently attached to an EC2 instance be migrated from one Availability Zone to another?
- C. Create a snapshot of the volume, and create a new volume from the snapshot in the other AZ.
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html
- These snapshots can be used to create multiple new EBS volumes or move volumes across Availability Zones.
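A boto3 sketch of the snapshot-then-restore flow (the IDs and AZ names are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",  # placeholder
                           Description="AZ migration")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])
# Create the new volume from the snapshot in the target Availability Zone.
new_vol = ec2.create_volume(SnapshotId=snap["SnapshotId"],
                            AvailabilityZone="us-east-1b")    # target AZ
print(new_vol["VolumeId"])
```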
66. If you’re unable to connect via SSH to your EC2 instance, which of the following should you check and possibly correct to restore connectivity?
- D. Adjust the instance’s Security Group to permit ingress traffic over port 22 from your IP.
- http://docs.aws.amazon.com/cli/latest/reference/ec2/authorize-security-group-ingress.html
- To add a rule that allows inbound SSH traffic: this example enables inbound traffic on TCP port 22 (SSH). If the command succeeds, no output is returned.
- [EC2-VPC] To add a rule that allows inbound SSH traffic: this example enables inbound traffic on TCP port 22 (SSH). Note that you can’t reference a security group for EC2-VPC by name. If the command succeeds, no output is returned.
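A boto3 sketch of adding the missing port-22 ingress rule (the group ID and CIDR are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.25/32"}],  # your IP only
    }],
)
```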
67. Which of the following features ensures even distribution of traffic to Amazon EC2 instances in multiple Availability Zones registered with a load balancer?
- A. Elastic Load Balancing request routing
- B. An Amazon Route 53 weighted routing policy
- C. Elastic Load Balancing cross-zone load balancing (correct)
- D. An Amazon Route 53 latency routing policy
- http://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/how-elastic-load-balancing-works.html
- Cross-zone load balancing is always enabled for an Application Load Balancer and is disabled by default for a Classic Load Balancer. If cross-zone load balancing is enabled, the load balancer distributes traffic evenly across all registered instances in all enabled Availability Zones.
68. You are using an m1.small EC2 Instance with one 300 GB EBS volume to host a relational database. You determined that write throughput to the database needs to be increased. Which of the following approaches can help achieve this? Choose 2 answers
- A. Use an array of EBS volumes. (correct)
- B. Enable Multi-AZ mode.
- C. Place the instance in an Auto Scaling Group.
- D. Add an EBS volume and place into RAID 5.
- E. Increase the size of the EC2 Instance. (correct)
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html
- Creating a RAID 0 array allows you to achieve a higher level of performance for a file system than you can provision on a single Amazon EBS volume. A RAID 1 array offers a “mirror” of your data for extra redundancy. Before you perform this procedure, you need to decide how large your RAID array should be and how many IOPS you want to provision.
69. After launching an instance that you intend to serve as a NAT (Network Address Translation) device in a public subnet you modify your route tables to have the NAT device be the target of internet bound traffic of your private subnet. When you try and make an outbound connection to the internet from an instance in the private subnet, you are not successful. Which of the following steps could resolve the issue?
- A. Disabling the Source/Destination Check attribute on the NAT instance
- http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html#EIP_Disable_SrcDestCheck
- Each EC2 instance performs source/destination checks by default. This means that the instance must be the source or destination of any traffic it sends or receives. However, a NAT instance must be able to send and receive traffic when the source or destination is not itself. Therefore, you must disable source/destination checks on the NAT instance.
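Disabling the check is one attribute call; a boto3 sketch with a placeholder instance ID:

```python
import boto3

ec2 = boto3.client("ec2")
# A NAT instance forwards traffic it neither originated nor terminates,
# so source/destination checking must be turned off.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",  # the NAT instance (placeholder)
    SourceDestCheck={"Value": False},
)
```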
70. You are building a solution for a customer to extend their on-premises data center to AWS. The customer requires a 50-Mbps dedicated and private connection to their VPC. Which AWS product or feature satisfies this requirement?
- C. AWS Direct Connect
- https://aws.amazon.com/directconnect/faqs/
- Q. What connection speeds are supported by AWS Direct Connect?
1 Gbps and 10 Gbps ports are available. Speeds of 50 Mbps, 100 Mbps, 200 Mbps, 300 Mbps, 400 Mbps, and 500 Mbps can be ordered from any APN partners supporting AWS Direct Connect. Read more about APN Partners supporting AWS Direct Connect.
71. When using the following AWS services, which should be implemented in multiple Availability Zones for high availability solutions? Choose 2 answers
- B. Amazon Elastic Compute Cloud (EC2)
- C. Amazon Elastic Load Balancing
72. You have a video transcoding application running on Amazon EC2. Each instance polls a queue to find out which video should be transcoded, and then runs a transcoding process. If this process is interrupted, the video will be transcoded by another instance based on the queuing system. You have a large backlog of videos which need to be transcoded and would like to reduce this backlog by adding more instances. You will need these instances only until the backlog is reduced. Which type of Amazon EC2 instances should you use to reduce the backlog in the most cost efficient way?
- B. Spot instances
- https://aws.amazon.com/ec2/faqs/
- https://aws.amazon.com/ec2/pricing/
- Pricing:
- On-Demand instances: you pay for compute capacity by the hour with no long-term commitments or upfront payments.
- Spot instances: you bid on spare Amazon EC2 computing capacity for up to 90% off the On-Demand price.
- Reserved Instances: provide you with a significant discount (up to 75%) compared to On-Demand instance pricing.
- Dedicated Hosts: can be purchased On-Demand (hourly), or as a Reservation for up to 70% off the On-Demand price.
- On-Demand is for:
- users that prefer the low cost and flexibility of Amazon EC2 without any up-front payment or long-term commitment;
- applications with short-term, spiky, or unpredictable workloads that cannot be interrupted;
- applications being developed or tested on Amazon EC2 for the first time.
- Spot is for:
- workloads with flexible start and end times;
- workloads that are only feasible at very low compute prices;
- users with urgent computing needs for large amounts of additional capacity.
- Reserved is for:
- steady-state usage;
- applications that require reserved capacity;
- customers that can commit to using EC2 over a 1- or 3-year term to reduce their total computing costs.
73. You have an EC2 Security Group with several running EC2 instances. You change the Security Group rules to allow inbound traffic on a new port and protocol, and launch several new instances in the same Security Group. The new rules apply:
- A. Immediately to all instances in the security group.
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#security-group-rules
- You can add and remove rules at any time. Your changes are automatically applied to the instances associated with the security group after a short period.
74. Which services allow the customer to retain full administrative privileges of the underlying EC2 instances? Choose 2 answers
- B. Amazon Elastic Map Reduce
- E. AWS Elastic Beanstalk
- This question comes down to whether the underlying EC2 instances are visible and administrable in each AWS service:
- RDS: there is no underlying EC2 instance you administer; you get DB instances instead.
- Amazon ElastiCache: no administrable underlying EC2.
- Amazon DynamoDB: no underlying EC2; EC2 instances may be used for replicas or other purposes, but without any need for administrative permissions.
- EMR: Q: What is Amazon EMR? Amazon EMR is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data. It utilizes a hosted Hadoop framework running on the web-scale infrastructure of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3).
- Elastic Beanstalk: Q: What are the Cloud resources powering my AWS Elastic Beanstalk application? AWS Elastic Beanstalk uses proven AWS features and services, such as Amazon EC2, Amazon RDS, Elastic Load Balancing, Auto Scaling, Amazon S3, and Amazon SNS, to create an environment that runs your application. The current version of AWS Elastic Beanstalk uses the Amazon Linux AMI or the Windows Server 2012 R2 AMI.
75. A company is building a two-tier web application to serve dynamic transaction-based content. The data tier is leveraging an Online Transactional Processing (OLTP) database. What services should you leverage to enable an elastic and scalable web tier?
- A. Elastic Load Balancing, Amazon EC2, and Auto Scaling
- This is about scaling the web tier and no other tier. Exam-taking tip: when two answers both look correct, something in the question points to one of them; here, the answer is in the question itself (“elastic and scalable web tier”).
76. Your application provides data transformation services. Files containing data to be transformed are first uploaded to Amazon S3 and then transformed by a fleet of spot EC2 instances. Files submitted by your premium customers must be transformed with the highest priority. How should you implement such a system?
- C. Use two SQS queues, one for high priority messages, the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue.
- https://aws.amazon.com/sqs/details/
- Priority: Use separate queues to provide prioritization of work.
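A sketch of the two-queue worker loop with boto3; the queue URLs are hypothetical:

```python
import boto3

sqs = boto3.client("sqs")
HIGH = "https://sqs.us-east-1.amazonaws.com/123456789012/transform-high"        # hypothetical
DEFAULT = "https://sqs.us-east-1.amazonaws.com/123456789012/transform-default"  # hypothetical

def next_job():
    """Poll the high-priority queue first; fall back to the default queue."""
    for url in (HIGH, DEFAULT):
        resp = sqs.receive_message(QueueUrl=url, MaxNumberOfMessages=1, WaitTimeSeconds=1)
        if resp.get("Messages"):
            return url, resp["Messages"][0]
    return None, None

url, msg = next_job()
if msg:
    # ... transform the file named in msg["Body"] ...
    sqs.delete_message(QueueUrl=url, ReceiptHandle=msg["ReceiptHandle"])
```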
77. Which technique can be used to integrate AWS IAM (Identity and Access Management) with an on-premise LDAP (Lightweight Directory Access Protocol) directory service?
- B. Use SAML (Security Assertion Markup Language) to enable single sign-on between AWS and LDAP.
- C. Use AWS Security Token Service from an identity broker to issue short-lived AWS credentials.
78. Which of the following are characteristics of Amazon VPC subnets? Choose 2 answers
- B. Each subnet maps to a single Availability Zone.
- D. By default, all subnets can route between each other, whether they are private or public.
- B: RIGHT. Each subnet must reside entirely within one Availability Zone and cannot span zones. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
- C (“CIDR block mask of /25 is the smallest range supported”): WRONG. You can assign a single CIDR block to a VPC; the allowed block size is between a /16 netmask and a /28 netmask. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
79. A customer is leveraging Amazon Simple Storage Service in eu-west-1 to store static content for a web-based property. The customer is storing objects using the Standard Storage class. Where are the customer’s objects replicated?
- C. Multiple facilities in eu-west-1
- “facilities” here means equipment, i.e., physical data centers
- http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions
- Amazon S3 achieves high availability by replicating data across multiple servers within Amazon’s data centers.
80. Your web application front end consists of multiple EC2 instances behind an Elastic Load Balancer. You configured ELB to perform health checks on these EC2 instances, if an instance fails to pass health checks, which statement will be true?
- D. The ELB stops sending traffic to the instance that failed its health check.
81. In AWS, which security aspects are the customer’s responsibility? Choose 4 answers
- A. Security Group and ACL (Access Control List) settings
- C. Patch management on the EC2 instance’s operating system
- D. Life-cycle management of IAM credentials
- F. Encryption of EBS (Elastic Block Storage) volumes
- https://aws.amazon.com/compliance/shared-responsibility-model/
82. You have a web application running on six Amazon EC2 instances, consuming about 45% of resources on each instance. You are using auto-scaling to make sure that six instances are running at all times. The number of requests this application processes is consistent and does not experience spikes. The application is critical to your business and you want high availability at all times. You want the load to be distributed evenly between all instances. You also want to use the same Amazon Machine Image (AMI) for all instances. Which of the following architectural choices should you make?
- C. Deploy 3 EC2 instances in one availability zone and 3 in another availability zone and use Amazon Elastic Load Balancer.
- C is correct. If one AZ fails, you have 3 instances left in the other AZ, which translates to 50% capacity. Since the current utilisation is at 45%, in the event of an AZ failure you are still able to serve the full load required by your web application. Auto Scaling then replaces the failed instances to bring the site back to its normal capacity.
89. You have decided to change the instance type for instances running in your application tier that is using Auto Scaling. In which area below would you change the instance type definition?
- D. Auto Scaling launch configuration
- http://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html
- If you want to change the launch configuration for your Auto Scaling group, you must create a launch configuration and then update your Auto Scaling group with the new launch configuration. When you change the launch configuration for your Auto Scaling group, any new instances are launched using the new configuration parameters, but existing instances are not affected.
90. When an EC2 EBS-backed (EBS root) instance is stopped, what happens to the data on any ephemeral store volumes?
- C. Data will be deleted and will no longer be accessible.
- http://www.n2ws.com/how-to-guides/ephemeral-storage-on-ebs-volume.html
- About ephemeral storage: ephemeral storage is the volatile temporary storage attached to your instances which is only present during the running lifetime of the instance. If the instance is stopped or terminated, or the underlying hardware faces an issue, any data stored on ephemeral storage is lost. This storage is part of the disk attached to the instance. It is a fast-performing storage solution, but a non-persistent one when compared to EBS volumes.
- To remember: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html
- If you are using an EBS-backed instance, you can stop and restart that instance without affecting the data stored in the attached volume. The volume remains attached throughout the stop-start cycle. This enables you to process and store the data on your volume indefinitely, only using the processing and storage resources when required. The data persists on the volume until the volume is deleted explicitly. The physical block storage used by deleted EBS volumes is overwritten with zeroes before it is allocated to another account.
91. Which of the following items are required to allow an application deployed on an EC2 instance to write data to a DynamoDB table? Assume that no security keys are allowed to be stored on the EC2 instance. (Choose 2 answers)
- A. Create an IAM Role that allows write access to the DynamoDB table.
- E. Launch an EC2 Instance with the IAM Role included in the launch configuration.
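With the role attached at launch, application code needs no stored keys; boto3 resolves the role’s temporary credentials from the instance metadata service automatically. A sketch (the table name and key schema are hypothetical):

```python
import boto3

# No access keys anywhere in code or config: the SDK resolves credentials
# from the instance profile (IAM role) via the EC2 metadata service.
table = boto3.resource("dynamodb").Table("Orders")  # hypothetical table
table.put_item(Item={"order_id": "1001", "status": "shipped"})
```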
92. When you put objects in Amazon S3, what is the indication that an object was successfully stored?
- A. A HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful.
- http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html
- if you receive a successful response, you can be confident the entire object was stored.
93. What is one key difference between an Amazon EBS-backed and an instance-store backed instance?
- A. Amazon EBS-backed instances can be stopped and restarted.
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html
- You can start and stop your Amazon EBS-backed instance using the console or the command line.
- When you stop an instance, the data on any instance store volumes is erased. Therefore, if you have any data on instance store volumes that you want to keep, be sure to back it up to persistent storage.
94. A company wants to implement their website in a virtual private cloud (VPC). The web tier will use an Auto Scaling group across multiple Availability Zones (AZs). The database will use Multi-AZ RDS MySQL and should not be publicly accessible. What is the minimum number of subnets that need to be configured in the VPC?
- B. 2
- D. 4
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html
- When you launch a DB instance inside a VPC, you can designate whether the DB instance you create has a DNS that resolves to a public IP address by using the PubliclyAccessible parameter. This parameter lets you designate whether there is public access to the DB instance. Note that access to the DB instance is ultimately controlled by the security group it uses, and that public access is not permitted if the security group assigned to the DB instance does not permit it.
- In my opinion, 2 public subnets in the VPC could suffice, but the security group of the DB instance would have to be restricted to a private CIDR.
95. You have launched an Amazon Elastic Compute Cloud (EC2) instance into a public subnet with a primary private IP address assigned, an internet gateway is attached to the VPC, and the public route table is configured to send all Internet-based traffic to the Internet gateway. The instance security group is set to allow all outbound traffic but cannot access the internet. Why is the Internet unreachable from this instance?
- A. The instance does not have a public IP address.
- https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html
- To enable communication over the Internet for IPv4, your instance must have a public IPv4 address or an Elastic IP address that’s associated with a private IPv4 address on your instance. Your instance is only aware of the private (internal) IP address space defined within the VPC and subnet. The Internet gateway logically provides the one-to-one NAT on behalf of your instance, so that when traffic leaves your VPC subnet and goes to the Internet, the reply address field is set to the public IPv4 address or Elastic IP address of your instance, and not its private IP address. Conversely, traffic that’s destined for the public IPv4 address or Elastic IP address of your instance has its destination address translated into the instance’s private IPv4 address before the traffic is delivered to the VPC.
96. You launch an Amazon EC2 instance without an assigned AWS Identity and Access Management (IAM) role. Later, you decide that the instance should be running with an IAM role. Which action must you take in order to have a running Amazon EC2 instance with an IAM role assigned to it?
- D. Create an image of the instance, and use this image to launch a new instance with the desired IAM role assigned.
- Creating an image preserves the context of the instance. Note, however, that AWS now allows you to attach a role to a running EC2 instance:
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
- Specify the role when you launch your instance, or attach the role to a running or stopped instance
97. How can the domain’s zone apex, for example, “myzoneapexdomain.com”, be pointed towards an Elastic Load Balancer?
- A. By using an Amazon Route 53 Alias record
- http://docs.aws.amazon.com/govcloud-us/latest/UserGuide/setting-up-route53-zoneapex-elb.html
- Amazon Route 53 supports the alias resource record set, which lets you map your zone apex (e.g. example.com) DNS name to your load balancer DNS name.
- The record A specifies IPv4 address for given host.
- The record AAAA (also quad-A record) specifies IPv6 address
- The CNAME record specifies a domain name that has to be queried in order to resolve the original DNS query. Therefore CNAME records are used for creating aliases of domain names. CNAME records are truly useful when we want to alias our domain to an external domain. In other cases we can remove CNAME records and replace them with A records, which can even decrease performance overhead.
- More info : http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html
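A boto3 sketch of the alias record; the zone IDs and ELB DNS name are placeholders (note the AliasTarget hosted zone ID is the load balancer’s canonical zone, not your own):

```python
import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # your hosted zone (placeholder)
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "myzoneapexdomain.com.",
                "Type": "A",  # alias records use type A at the zone apex
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",  # ELB's canonical zone (placeholder)
                    "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```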
98. An instance is launched into a VPC subnet with the network ACL configured to allow all inbound traffic and deny all outbound traffic. The instance’s security group is configured to allow SSH from any IP address and deny all outbound traffic. What changes need to be made to allow SSH access to the instance?
- B. The outbound network ACL needs to be modified to allow outbound traffic.
- http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
- ACL is stateless.
- Network ACLs are stateless; responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).
99. For which of the following use cases are Simple Workflow Service (SWF) and Amazon EC2 an appropriate solution? Choose 2 answers
- B. Managing a multi-step and multi-decision checkout process of an e-commerce website
- C. Orchestrating the execution of distributed and auditable business processes
- http://www.aiotestking.com/amazon/which-of-the-following-use-cases-are-simple-workflow-service-swf-and-amazon-ec2-an-appropriate-solution/
- A. Using as an endpoint to collect thousands of data points per hour from a distributed fleet of sensors
- This is far more applicable scenario for a Kinesis stream. Have the sensors send data into the stream, then process out of the stream (e.g. with a Lambda function to upload to DynamoDb for further analysis, or into CloudWatch if you just wanted to plot the data from the sensors as a time series).
- B. Managing a multi-step and multi-decision checkout process of an e-commerce website
- Ideal scenario for SWF. Track the progress of the checkout process as it proceeds through the multiple steps.
- C. Orchestrating the execution of distributed and auditable business processes
- Also good for SWF. The key words in the question are “process” and “distributed”. If you’ve got multiple components involved the process, and you need to keep them all appraised of what the current state/stage in the process is, SWF can help.
- D. Using as an SNS (Simple Notification Service) endpoint to trigger execution of video transcoding jobs
- This is a potential scenario for Lambda, which can take an SNS notification as a triggering event. Lambda kicks off the transcoding job (or drops the piece of work into an SQS queue that workers pull from to kick off the transcoding job)
- E. Using as a distributed session store for your web application
- Not applicable for SWF at all. As for how you might want to do this, the key word here is “distributed”. If you wanted to store session state data for a web session on a single web server, just throw it into scratch space on the instance (e.g. an ephemeral/instance-store drive mounted to the instance). But this is “distributed”, meaning multiple web instances are in play. If one instance fails, you want session state to still be maintained when the user’s traffic traverses a different web server. (It wouldn’t be acceptable for them to have two items in their shopping cart, be ready to check out, have the instance they were on fail, their traffic go to another web instance, and their shopping cart suddenly show up as empty.) So you save their session state off to an external session store. If the session state only needs to be maintained for, say, 24 hours, ElastiCache is a good solution. If the session state needs to be maintained for a long period of time, store it in DynamoDB.
- https://aws.amazon.com/swf/faqs/
- Amazon SWF enables applications for a range of use cases, including media processing, web application back-ends, business process workflows, and analytics pipelines, to be designed as a coordination of tasks.
100. A customer wants to leverage Amazon Simple Storage Service (S3) and Amazon Glacier as part of their backup and archive infrastructure. The customer plans to use third-party software to support this integration. Which approach will limit the access of the third party software to only the Amazon S3 bucket named “company-backup”?
- D. A custom IAM user policy limited to the Amazon S3 API in “company-backup”.
- https://aws.amazon.com/blogs/security/iam-policies-and-bucket-policies-and-acls-oh-my-controlling-access-to-s3-resources/
- If you’re still unsure of which to use, consider which audit question is most important to you:
- If you’re more interested in “What can this user do in AWS?” then IAM policies are probably the way to go. You can easily answer this by looking up an IAM user and then examining their IAM policies to see what rights they have.
- If you’re more interested in “Who can access this S3 bucket?” then S3 bucket policies will likely suit you better. You can easily answer this by looking up a bucket and examining the bucket policy.
101. A client application requires operating system privileges on a relational database server. What is an appropriate configuration for a highly available database architecture?
- D. Amazon EC2 instances in a replication configuration utilizing two different Availability Zones
- The question never asks for RDS. It just says a relational database server, which can be MySQL running on EC2 and managed by the customer rather than hosted by RDS.
102. What is a placement group?
- B. A feature that enables EC2 instances to interact with each other via high bandwidth, low latency connections
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
- A placement group is a logical grouping of instances within a single Availability Zone. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. To provide the lowest latency, and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking.
103. A company has a workflow that sends video files from their on-premise system to AWS for transcoding. They use EC2 worker instances that pull transcoding jobs from SQS. Why is SQS an appropriate service for this scenario?
- D. SQS helps to facilitate horizontal scaling of encoding tasks.
- http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/throughput.html
- Because you access Amazon SQS through an HTTP request-response protocol, the request latency (the time interval between initiating a request and receiving a response) limits the throughput that you can achieve from a single thread over a single connection. For example, if the latency from an Amazon Elastic Compute Cloud (Amazon EC2) based client to Amazon SQS in the same region averages around 20 ms, the maximum throughput from a single thread over a single connection will average 50 operations per second.
104. When creation of an EBS snapshot is initiated, but not completed, the EBS volume:
- A. Can be used while the snapshot is in progress.
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html
- Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume.
105. What are characteristics of Amazon S3? Choose 2 answers
- C. S3 allows you to store unlimited amounts of data.
- E. Objects are directly accessible via a URL.
- https://aws.amazon.com/s3/faqs/
- Q: How much data can I store? The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
- Object Size Limit Now 5 TB
- http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html
- Amazon S3 supports both virtual-hosted–style and path-style URLs to access a bucket.
106. Per the AWS Acceptable Use Policy, penetration testing of EC2 instances:
- D. May be performed by the customer on their own instances with prior authorization from AWS.
- https://aws.amazon.com/security/penetration-testing/
- Our Acceptable Use Policy describes permitted and prohibited behavior on AWS and includes descriptions of prohibited security violations and network abuse. However, because penetration testing and other simulated events are frequently indistinguishable from these activities, we have established a policy for customers to request permission to conduct penetration tests and vulnerability scans to or originating from the AWS environment.
107. You are working with a customer who has 10 TB of archival data that they want to migrate to Amazon Glacier. The customer has a 1-Mbps connection to the Internet. Which service or feature provides the fastest method of getting the data into Amazon Glacier?
- D. AWS Import/Export
- http://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-an-archive.html
- AWS Import/Export accelerates moving large amounts of data into and out of AWS using portable storage devices for transport. AWS transfers your data directly onto and off of storage devices using Amazon’s high-speed internal network, bypassing the Internet.
108. How can you secure data at rest on an EBS volume?
- E. Use an encrypted file system on top of the EBS volume.
- https://d0.awsstatic.com/whitepapers/AWS_Securing_Data_at_Rest_with_Encryption.pdf
- Encryption in EBS:
- Each of these operates below the file system layer using kernel space device drivers to perform encryption and decryption of data.
- Another option would be to use file system-level encryption, which works by stacking an encrypted file system on top of an existing file system.
109. Which approach below provides the least impact to provisioned throughput on the “Product” table?
- D. Store the images in Amazon S3 and add an S3 URL pointer to the “Product” table item for each image
- Why: DynamoDB read and write capacity is consumed in proportion to item size, so storing the images themselves in the table would consume far more provisioned throughput than storing a small S3 URL pointer per image.
110. A customer needs to capture all client connection information from their load balancer every five minutes. The company wants to use this data for analyzing traffic patterns and troubleshooting their applications. Which of the following options meets the customer requirements?
- B. Enable access logs on the load balancer.
- http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html
- Elastic Load Balancing access logs,
The access logs for Elastic Load Balancing capture detailed information for requests made to your load balancer and stores them as log files in the Amazon S3 bucket that you specify. Each log contains details such as the time a request was received, the client’s IP address, latencies, request path, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot your back-end applications.
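A minimal boto3 sketch of enabling this on a Classic Load Balancer with the five-minute interval the question calls for (load balancer and bucket names are hypothetical; the bucket needs a policy allowing ELB log delivery):

```python
import boto3

elb = boto3.client('elb')

# Enable access logs, delivered to S3 every 5 minutes (the other option is 60).
elb.modify_load_balancer_attributes(
    LoadBalancerName='my-load-balancer',           # hypothetical name
    LoadBalancerAttributes={
        'AccessLog': {
            'Enabled': True,
            'S3BucketName': 'my-elb-access-logs',  # hypothetical bucket
            'EmitInterval': 5,
            'S3BucketPrefix': 'my-app',
        }
    },
)
```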
111. If you want to launch Amazon Elastic Compute Cloud (EC2) instances and assign each instance a predetermined private IP address you should:
- C. Launch the instances in the Amazon Virtual Private Cloud (VPC).
- http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-ip-addressing.html
112. You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application. Which methods ensure that all objects uploaded to the bucket are set to public read? Choose 2 answers
- A. Set permissions on the object to public read during upload.
- C. Configure the bucket policy to set all objects to public read.
- http://docs.aws.amazon.com/AmazonS3/latest/UG/UploadingObjectsintoAmazonS3.html
- Set permission during upload
- You can use ACLs to grant permissions to individual AWS accounts; however, it is strongly recommended that you do not grant public access to your bucket using an ACL.
- http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
- Configure bucket policy to set permissions
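A minimal boto3 sketch of both methods (bucket name, key, and policy are hypothetical):

```python
import json
import boto3

s3 = boto3.client('s3')

# Method A: set the object ACL to public-read during upload.
s3.put_object(Bucket='my-static-assets', Key='css/site.css',
              Body=b'body { margin: 0; }', ACL='public-read')

# Method C: a bucket policy that makes every object publicly readable.
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Sid': 'PublicReadGetObject',
        'Effect': 'Allow',
        'Principal': '*',
        'Action': 's3:GetObject',
        'Resource': 'arn:aws:s3:::my-static-assets/*',
    }],
}
s3.put_bucket_policy(Bucket='my-static-assets', Policy=json.dumps(policy))
```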
113. Can I use Provisioned IOPS with RDS?
- D. Yes for all RDS instances
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
- Amazon RDS provides three storage types: magnetic, General Purpose (SSD), and Provisioned IOPS (input/output operations per second).
114. A company is storing data on Amazon Simple Storage Service (S3). The company’s security policy mandates that data is encrypted at rest. Which of the following methods can achieve this? Choose 3 answers
- A. Use Amazon S3 server-side encryption with AWS Key Management Service managed keys.
- B. Use Amazon S3 server-side encryption with customer-provided keys.
- E. Encrypt the data on the client-side before ingesting to Amazon S3 using their own master key.
- http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
- Use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
- Use Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
- Use Server-Side Encryption with Customer-Provided Keys (SSE-C)
- http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
- The Amazon S3 encryption client locally generates a one-time-use symmetric key (also known as a data encryption key or data key). It uses this data key to encrypt the data of a single S3 object (for each object, the client generates a separate data key).
- The client encrypts the data encryption key using the master key you provide. The client uploads the encrypted data key and its material description as part of the object metadata. The material description helps the client later determine which client-side master key to use for decryption (when you download the object, the client decrypts it).
- The client then uploads the encrypted data to Amazon S3 and also saves the encrypted data key as object metadata (x-amz-meta-x-amz-key) in Amazon S3 by default.
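A minimal boto3 sketch of the three server-side options (bucket and keys hypothetical; for SSE-C you must retain the key yourself to read the object back):

```python
import os
import boto3

s3 = boto3.client('s3')
bucket, data = 'my-secure-bucket', b'sensitive payload'

# SSE-S3: Amazon S3 manages the encryption keys.
s3.put_object(Bucket=bucket, Key='sse-s3.txt', Body=data,
              ServerSideEncryption='AES256')

# SSE-KMS: encryption with an AWS KMS-managed key.
s3.put_object(Bucket=bucket, Key='sse-kms.txt', Body=data,
              ServerSideEncryption='aws:kms')

# SSE-C: you supply (and must keep) your own 256-bit key.
key = os.urandom(32)
s3.put_object(Bucket=bucket, Key='sse-c.txt', Body=data,
              SSECustomerAlgorithm='AES256', SSECustomerKey=key)
```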
115. Which procedure for backing up a relational database on EC2 that is using a set of RAIDed EBS volumes for storage minimizes the time during which the database cannot be written to and results in a consistent backup?
A. 1. Detach EBS volumes, 2. Start EBS snapshot of volumes, 3. Re-attach EBS volumes
B. 1. Stop the EC2 Instance. 2. Snapshot the EBS volumes
C. 1. Suspend disk I/O, 2. Create an image of the EC2 Instance, 3. Resume disk I/O
D. 1. Suspend disk I/O, 2. Start EBS snapshot of volumes, 3. Resume disk I/O
E. 1. Suspend disk I/O, 2. Start EBS snapshot of volumes, 3. Wait for snapshots to complete, 4. Resume disk I/O
The consensus answer is D, per A Cloud Guru: because the point-in-time snapshot is created as soon as it is initiated, disk I/O only needs to be suspended while the snapshots are started, not until they complete.
116. After creating a new IAM user which of the following must be done before they can successfully make API calls?
- D. Create a set of Access Keys for the user.
- http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_SettingUpUser.html
Programmatic access: If the user needs to make API calls or use the AWS CLI or the Tools for Windows PowerShell, create an access key (an access key ID and a secret access key) for that user.
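A minimal boto3 sketch (user name hypothetical); note the secret access key is returned only once, at creation time:

```python
import boto3

iam = boto3.client('iam')

# An IAM user cannot make API calls until an access key pair exists.
resp = iam.create_access_key(UserName='new-developer')   # hypothetical user
print(resp['AccessKey']['AccessKeyId'])
# The secret is only returned here, at creation -- store it securely.
print(resp['AccessKey']['SecretAccessKey'])
```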
117. Which of the following are valid statements about Amazon S3? Choose 2 answers
- C. A successful response to a PUT request only occurs when a complete object is saved.
- E. S3 provides eventual consistency for overwrite PUTS and DELETES.
- http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyModel
118. You are configuring your company’s application to use Auto Scaling and need to move user state information. Which of the following AWS services provides a shared data store with durability and low latency?
- D. Amazon DynamoDB
- https://aws.amazon.com/dynamodb/faqs/
- Q: When should I use Amazon DynamoDB vs Amazon S3?
- Amazon DynamoDB stores structured data, indexed by primary key, and allows low latency read and write access to items ranging from 1 byte up to 400KB. Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB. In order to optimize your costs across AWS services, large objects or infrequently accessed data sets should be stored in Amazon S3, while smaller data elements or file pointers (possibly to Amazon S3 objects) are best saved in Amazon DynamoDB.
- Q: How does Amazon DynamoDB achieve high uptime and durability?
- To achieve high uptime and durability, Amazon DynamoDB synchronously replicates data across three facilities within an AWS Region.
119. Which features can be used to restrict access to data in S3? Choose 2 answers
- A. Set an S3 ACL on the bucket or the object.
- C. Set an S3 bucket policy.
- http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#private-content-origin-access-identity-signature-version-4
- Updating Amazon S3 Bucket Policies
- Updating Amazon S3 ACLs
120. Which of the following are characteristics of a reserved instance? Choose 3 answers
- A. It can be migrated across Availability Zones
- B. It is specific to an Amazon Machine Image (AMI)
- C. It can be applied to instances launched by Auto Scaling
- D. It is specific to an instance type (when you purchase, you must select an instance type)
- E. It can be used to lower Total Cost of Ownership (TCO) of a system
- You can use Auto Scaling or other AWS services to launch the On-Demand Instances that use your Reserved Instance benefits.
- https://media.amazonwebservices.com/AWS_TCO_Web_Applications.pdf
- When you are comparing TCO, we highly recommend that you use the Reserved Instance (RI) pricing option in your calculations.
121. Which Amazon Elastic Compute Cloud feature can you query from within the instance to access instance properties?
- C. Instance metadata
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-retrieval
- Instance Metadata and User Data
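A minimal sketch of querying the metadata service from on the instance (the paths shown are a small sample; newer instances may require an IMDSv2 session token first):

```python
# Run this on the instance itself; the metadata service listens on a
# link-local address that is only reachable from within the instance.
from urllib.request import urlopen

BASE = 'http://169.254.169.254/latest/meta-data/'

for path in ('instance-id', 'instance-type', 'local-ipv4'):
    value = urlopen(BASE + path).read().decode()
    print(path, '=', value)
```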
122. Which of the following requires a custom CloudWatch metric to monitor?
- A. Memory Utilization of an EC2 instance
- http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ec2-metricscollected.html
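Memory utilization is not collected natively, so the instance must publish it itself. A minimal boto3 sketch (namespace, instance ID, and value are hypothetical):

```python
import boto3

cloudwatch = boto3.client('cloudwatch')

# Publish a custom memory metric; the value would be read from
# /proc/meminfo (or similar) by an agent running on the instance.
cloudwatch.put_metric_data(
    Namespace='Custom/EC2',    # hypothetical namespace
    MetricData=[{
        'MetricName': 'MemoryUtilization',
        'Dimensions': [{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],
        'Value': 63.5,
        'Unit': 'Percent',
    }],
)
```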
123. You are tasked with setting up a Linux bastion host for access to Amazon EC2 instances running in your VPC. Only clients connecting from the corporate external public IP address 72.34.51.100 should have SSH access to the host. Which option will meet the customer requirement?
- A. Security Group Inbound Rule: Protocol – TCP. Port Range – 22, Source 72.34.51.100/32
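A minimal boto3 sketch of adding that rule (the security group ID is hypothetical):

```python
import boto3

ec2 = boto3.client('ec2')

# Allow SSH (TCP 22) only from the corporate public IP, as a /32.
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',   # hypothetical bastion security group
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 22,
        'ToPort': 22,
        'IpRanges': [{'CidrIp': '72.34.51.100/32'}],
    }],
)
```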
124. A customer needs corporate IT governance and cost oversight of all AWS resources consumed by its divisions. The divisions want to maintain administrative control of the discrete AWS resources they consume and keep those resources separate from the resources of other divisions. Which of the following options, when used together will support the autonomy/control of divisions while enabling corporate IT to maintain governance and cost oversight?
Choose 2 answers
- D. Use AWS Consolidated Billing to link the divisions’ accounts to a parent corporate account.
- E. Write all child AWS CloudTrail and Amazon CloudWatch logs to each child account’s Amazon S3 ‘Log’ bucket.
- The Consolidated Billing feature consolidates payment for multiple Amazon Web Services (AWS) accounts or multiple Amazon International Services Pvt. Ltd (AISPL) accounts within your organization by designating one of them to be the payer account. With Consolidated Billing, you can see a combined view of AWS charges incurred by all accounts, as well as get a cost report for each individual account associated with your payer account. Consolidated Billing is offered at no additional charge. AWS and AISPL accounts cannot be consolidated together.
- Cross-account IAM access: this tutorial teaches you how to use a role to delegate access to resources that are in different AWS accounts that you own (Production and Development). You share resources in one account with users in a different account. By setting up cross-account access in this way, you don’t need to create individual IAM users in each account. In addition, users don’t have to sign out of one account and sign in to another in order to access resources in different AWS accounts. After configuring the role, you see how to use the role from the AWS Management Console, the AWS CLI, and the API.
- C (create separate VPCs for each division within the corporate IT AWS account) does not work because all of the VPCs would sit under a single AWS account, so the divisions would not have separate administrative control.
125. You run an ad-supported photo sharing website using S3 to serve photos to visitors of your site. At some point you find out that other sites have been linking to the photos on your site, causing loss to your business. What is an effective method to mitigate this?
- A. Remove public read access and use signed URLs with expiry dates.
- http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html
- A signed URL includes additional information, for example, an expiration date and time, that gives you more control over access to your content. This additional information appears in a policy statement, which is based on either a canned policy or a custom policy. The differences between canned and custom policies are explained in the next two sections.
- Note
- You can create some signed URLs using canned policies and create some signed URLs using custom policies for the same distribution.
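The linked page describes CloudFront signed URLs; for assets served straight from S3, the analogous boto3 call is a presigned URL (bucket and key are hypothetical):

```python
import boto3

s3 = boto3.client('s3')

# With public read removed, hand out short-lived signed URLs instead.
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-photo-site', 'Key': 'photos/cat.jpg'},
    ExpiresIn=300,   # link expires after 5 minutes
)
print(url)
```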
126. You are working with a customer who is using Chef configuration management in their data center. Which service is designed to let the customer leverage existing Chef recipes in AWS?
- D. AWS OpsWorks
- https://aws.amazon.com/opsworks/
- AWS OpsWorks is a configuration management service that uses Chef, an automation platform that treats server configurations as code. OpsWorks uses Chef to automate how servers are configured, deployed, and managed across your Amazon Elastic Compute Cloud (Amazon EC2) instances or on-premises compute environments. OpsWorks has two offerings: AWS OpsWorks for Chef Automate and AWS OpsWorks Stacks.
127. An Auto-Scaling group spans 3 AZs and currently has 4 running EC2 instances. When Auto Scaling needs to terminate an EC2 instance by default, AutoScaling will: Choose 2 answers
- C. Send an SNS notification, if configured to do so.
- D. Terminate an instance in the AZ which currently has 2 running EC2 instances.
- http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html
- You can use Amazon SNS to set up a notification target to receive notifications when a lifecycle action occurs.
- http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html
- Default Termination policy
- Auto Scaling determines whether there are instances in multiple Availability Zones. If so, it selects the Availability Zone with the most instances and at least one instance that is not protected from scale in. If there is more than one Availability Zone with this number of instances, Auto Scaling selects the Availability Zone with the instances that use the oldest launch configuration.
- Auto Scaling determines which unprotected instances in the selected Availability Zone use the oldest launch configuration. If there is one such instance, it terminates it.
- If there are multiple instances that use the oldest launch configuration, Auto Scaling determines which unprotected instances are closest to the next billing hour. (This helps you maximize the use of your EC2 instances while minimizing the number of hours you are billed for Amazon EC2 usage.) If there is one such instance, Auto Scaling terminates it.
- If there is more than one unprotected instance closest to the next billing hour, Auto Scaling selects one of these instances at random.
128. When an EC2 instance that is backed by an S3-based AMI is terminated, what happens to the data on the root volume?
- D. Data is automatically deleted.
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html
- Any data on the instance store volumes persists as long as the instance is running, but this data is deleted when the instance is terminated (instance store-backed instances do not support the Stop action) or if it fails (such as if an underlying drive has issues).
129. In order to optimize performance for a compute cluster that requires low inter-node latency, which of the following feature should you use?
- D. Placement Groups
130. You have an environment that consists of a public subnet using Amazon VPC and 3 instances that are running in this subnet. These three instances can successfully communicate with other hosts on the Internet. You launch a fourth instance in the same subnet, using the same AMI and security group configuration you used for the others, but find that this instance cannot be accessed from the Internet. What should you do to enable Internet access?
- B. Assign an Elastic IP address to the fourth instance.
131. You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 Instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most cost-effective way. Which of the following will meet your requirements?
- A. Spot Instances
- Two key phrases in the question point to the answer: “most cost-effective way” and “recover gracefully.” “Most cost-effective” suggests Spot Instances; then confirm the fit by checking that the workload can recover gracefully, since Spot Instances can be interrupted at any time.
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html
- The following are the possible reasons that Amazon EC2 will terminate your Spot instances:
- Price—The Spot price is greater than your bid price.
- Capacity—If there are not enough unused EC2 instances to meet the demand for Spot instances, Amazon EC2 terminates Spot instances, starting with those instances with the lowest bid prices. If there are several Spot instances with the same bid price, the order in which the instances are terminated is determined at random.
- Constraints—If your request includes a constraint such as a launch group or an Availability Zone group, these Spot instances are terminated as a group when the constraint can no longer be met.
132. Which of the following are true regarding AWS CloudTrail? Choose 3 answers
- A. CloudTrail is enabled globally
- http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html: A trail can be applied to all regions or a single region. As a best practice, create a trail that applies to all regions in the AWS partition in which you are working. This is the default setting when you create a trail in the CloudTrail console.
- B. CloudTrail is enabled by default
- C. CloudTrail is enabled on a per-region basis
- C: you can have multiple single-region trails.
- D. CloudTrail is enabled on a per-service basis
- D is false: you cannot enable CloudTrail per service or choose which services to trail.
- E. Logs can be delivered to a single Amazon S3 bucket for aggregation.
- FAQ: You can configure one S3 bucket as the destination for multiple accounts. For detailed instructions, refer to aggregating log files to a single Amazon S3 bucket section of the AWS CloudTrail User Guide
- F. CloudTrail is enabled for all available services within a region.
- FAQ: Q: What services are supported by CloudTrail?
- AWS CloudTrail records API activity and service events from most AWS services. For the list of supported services, see CloudTrail Supported Services in the CloudTrail User Guide.
- G. Logs can only be processed and delivered to the region in which they are generated.
133. You have a content management system running on an Amazon EC2 instance that is approaching 100% CPU utilization. Which option will reduce load on the Amazon EC2 instance?
- B. Create a CloudFront distribution, and configure the Amazon EC2 instance as the origin
- https://aws.amazon.com/cloudfront/faqs/
- Q. How does Amazon CloudFront lower my costs to distribute content over the Internet?
- Like other AWS services, Amazon CloudFront has no minimum commitments and charges you only for what you use. Compared to self-hosting, Amazon CloudFront spares you from the expense and complexity of operating a network of cache servers in multiple sites across the internet and eliminates the need to over-provision capacity in order to serve potential spikes in traffic. Amazon CloudFront also uses techniques such as collapsing simultaneous viewer requests at an edge location for the same file into a single request to your origin server. This reduces the load on your origin servers reducing the need to scale your origin infrastructure, which can bring you further cost savings.
- Additionally, if you are using an AWS origin (e.g., Amazon S3, Amazon EC2, etc.), effective December 1, 2014, we are no longer charging for AWS data transfer out to Amazon CloudFront. This applies to data transfer from all AWS regions to all global CloudFront edge locations.
134. You have a load balancer configured for VPC, and all back-end Amazon EC2 instances are in service. However, your web browser times out when connecting to the load balancer’s DNS name. Which options are probable causes of this behavior? Choose 2 answers
- A. The load balancer was not configured to use a public subnet with an Internet gateway configured
- C. The security groups or network ACLs are not properly configured for web traffic.
135. A company needs to deploy services to an AWS region which they have not previously used. The company currently has an AWS Identity and Access Management (IAM) role for the Amazon EC2 instances, which permits the instance to have access to Amazon DynamoDB. The company wants their EC2 instances in the new region to have the same privileges. How should the company achieve this?
- B. Assign the existing IAM role to the Amazon EC2 instances in the new region
136. Which of the following notification endpoints or clients are supported by Amazon Simple Notification Service? Choose 2 answers
- A. Email
- D. Short Message Service
- FAQ Q: What are the different delivery formats/transports for receiving notifications?
In order for customers to have broad flexibility of delivery mechanisms, Amazon SNS supports notifications over multiple transport protocols. Customers can select one of the following transports as part of the subscription requests:
- “HTTP”, “HTTPS” – Subscribers specify a URL as part of the subscription registration; notifications will be delivered through an HTTP POST to the specified URL.
- ”Email”, “Email-JSON” – Messages are sent to registered addresses as email. Email-JSON sends notifications as a JSON object, while Email sends text-based email.
- “SQS” – Users can specify an SQS standard queue as the endpoint; Amazon SNS will enqueue a notification message to the specified queue (which subscribers can then process using SQS APIs such as ReceiveMessage, DeleteMessage, etc.). Note that FIFO queues are not currently supported.
- “SMS” – Messages are sent to registered phone numbers as SMS text messages.
137. Which set of Amazon S3 features helps to prevent and recover from accidental data loss?
- B. Object versioning and Multi-factor authentication
- https://aws.amazon.com/s3/faqs/
- Q: How does Versioning protect me from accidental deletion of my objects? When a user performs a DELETE operation on an object, subsequent simple (un-versioned) requests will no longer retrieve the object. However, all versions of that object will continue to be preserved in your Amazon S3 bucket and can be retrieved or restored. Only the owner of an Amazon S3 bucket can permanently delete a version. You can set Lifecycle rules to manage the lifetime and the cost of storing multiple versions of your objects.
138. A company needs to monitor the read and write IOPs metrics for their AWS MySQL RDS instance and send real-time alerts to their operations team. Which AWS services can accomplish this? Choose 2 answers
- B. Amazon CloudWatch
- E. Amazon Simple Notification Service
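A minimal boto3 sketch wiring a CloudWatch alarm on ReadIOPS to an SNS topic (names, endpoint, and threshold are hypothetical; WriteIOPS would get a second alarm):

```python
import boto3

sns = boto3.client('sns')
cloudwatch = boto3.client('cloudwatch')

# SNS topic the operations team subscribes to (email, SMS, etc.).
topic_arn = sns.create_topic(Name='rds-iops-alerts')['TopicArn']
sns.subscribe(TopicArn=topic_arn, Protocol='email', Endpoint='ops@example.com')

# Alarm on the RDS ReadIOPS metric for the hypothetical 'prod-mysql' instance.
cloudwatch.put_metric_alarm(
    AlarmName='mysql-read-iops-high',
    Namespace='AWS/RDS',
    MetricName='ReadIOPS',
    Dimensions=[{'Name': 'DBInstanceIdentifier', 'Value': 'prod-mysql'}],
    Statistic='Average',
    Period=60,
    EvaluationPeriods=1,
    Threshold=1000.0,            # hypothetical threshold
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=[topic_arn],    # fires a real-time notification to the team
)
```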
139. A company is preparing to give AWS Management Console access to developers. Company policy mandates identity federation and role-based access control. Roles are currently assigned using groups in the corporate Active Directory. What combination of the following will give developers access to the AWS console? Choose 2 answers
- A. AWS Directory Service AD Connector
- C. AWS Identity and Access Management groups
- D. AWS Identity and Access Management roles
- https://aws.amazon.com/blogs/security/how-to-connect-your-on-premises-active-directory-to-aws-using-ad-connector/
- AD Connector is designed to give you an easy way to establish a trusted relationship between your Active Directory and AWS. When AD Connector is configured, the trust allows you to:
- Sign in to AWS applications such as Amazon WorkSpaces, Amazon WorkDocs, and Amazon WorkMail by using your Active Directory credentials.
- Seamlessly join Windows instances to your Active Directory domain either through the Amazon EC2 launch wizard or programmatically through the EC2 Simple System Manager (SSM) API.
- Provide federated sign-in to the AWS Management Console by mapping Active Directory identities to AWS Identity and Access Management (IAM) roles.
140. You are deploying an application to collect votes for a very popular television show. Millions of users will submit votes using mobile devices. The votes must be collected into a durable, scalable, and highly available data store for real-time public tabulation. Which service should you use?
- A. Amazon DynamoDB (for data store purpose)
141. The Trusted Advisor service provides insight regarding which four categories of an AWS account?
- C. Performance, cost optimization, security, and fault tolerance
142. You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?
- A. Amazon Kinesis
- https://aws.amazon.com/streaming-data/
- Streaming Data is data that is generated continuously by thousands of data sources, which typically send in the data records simultaneously, and in small sizes (order of Kilobytes). Streaming data includes a wide variety of data such as log files generated by customers using your mobile or web applications, ecommerce purchases, in-game player activity, information from social networks, financial trading floors, or geospatial services, and telemetry from connected devices or instrumentation in data centers.
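A minimal boto3 sketch of the ingestion side (stream name and record fields are hypothetical):

```python
import json
import boto3

kinesis = boto3.client('kinesis')

# One record per GPS reading; using the truck ID as partition key spreads
# trucks across shards while keeping each truck's readings ordered.
reading = {'truck_id': 'truck-0042', 'lat': 38.89, 'lon': -77.03, 'ts': 1700000000}
kinesis.put_record(
    StreamName='truck-gps',                 # hypothetical stream
    Data=json.dumps(reading).encode(),
    PartitionKey=reading['truck_id'],
)
```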
143. A photo-sharing service stores pictures in Amazon Simple Storage Service (S3) and allows application sign-in using an OpenID Connect-compatible identity provider. Which AWS Security Token Service approach to temporary access should you use for the Amazon S3 operations?
- D. Web Identity Federation
- http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
- You can use the AWS Security Token Service (AWS STS) to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use, with the following differences:
- Temporary security credentials are short-term, as the name implies. They can be configured to last for anywhere from a few minutes to several hours. After the credentials expire, AWS no longer recognizes them or allows any kind of access from API requests made with them.
- Temporary security credentials are not stored with the user but are generated dynamically and provided to the user when requested. When (or even before) the temporary security credentials expire, the user can request new credentials, as long as the user requesting them still has permissions to do so.
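A minimal boto3 sketch of the web identity flow (the role ARN is hypothetical; the token comes from your OIDC provider):

```python
import boto3

sts = boto3.client('sts')

oidc_token = '<JWT returned by the OpenID Connect provider>'  # placeholder

# Exchange the provider's token for temporary AWS credentials.
resp = sts.assume_role_with_web_identity(
    RoleArn='arn:aws:iam::123456789012:role/photo-app-user',  # hypothetical role
    RoleSessionName='photo-app-session',
    WebIdentityToken=oidc_token,
    DurationSeconds=3600,
)
creds = resp['Credentials']

# Use the temporary credentials for the S3 operations.
s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
```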
144. You have an application running on an Amazon Elastic Compute Cloud instance that uploads 5 GB video objects to Amazon Simple Storage Service (S3). Video uploads are taking longer than expected, resulting in poor application performance. Which method will help improve performance of your application?
- B. Use Amazon S3 multipart upload
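A minimal boto3 sketch using the transfer manager so large uploads go multipart (file, bucket, and key names are hypothetical):

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client('s3')

# Split the 5 GB video into 100 MB parts uploaded in parallel;
# a failed part is retried individually instead of restarting the upload.
config = TransferConfig(multipart_threshold=100 * 1024 * 1024,
                        multipart_chunksize=100 * 1024 * 1024,
                        max_concurrency=10)
s3.upload_file('video.mp4', 'my-video-bucket', 'uploads/video.mp4',
               Config=config)
```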
145. A customer wants to track access to their Amazon Simple Storage Service (S3) buckets and also use this information for their internal security and access audits. Which of the following will meet the Customer requirement?
- B. Enable server access logging for all required Amazon S3 buckets.
- CloudTrail cannot track S3 bucket access.
- CloudTrail provides visibility into user activity by recording API calls made on your account. CloudTrail records important information about each API call, including the name of the API, the identity of the caller, the time of the API call, the request parameters, and the response elements returned by the AWS service. This information helps you to track changes made to your AWS resources and to troubleshoot operational issues. CloudTrail makes it easier to ensure compliance with internal policies and regulatory standards. For more details, refer to the AWS compliance white paper “Security at scale: Logging in AWS”.
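A minimal boto3 sketch of enabling server access logging (bucket names are hypothetical; the target bucket must grant the S3 log delivery group write permission):

```python
import boto3

s3 = boto3.client('s3')

# Deliver access logs for the audited bucket into a separate log bucket.
s3.put_bucket_logging(
    Bucket='my-audited-bucket',
    BucketLoggingStatus={
        'LoggingEnabled': {
            'TargetBucket': 'my-s3-access-logs',
            'TargetPrefix': 'my-audited-bucket/',
        }
    },
)
```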
146. A company is deploying a two-tier, highly available web application to AWS. Which service provides durable storage for static content while utilizing lower Overall CPU resources for the web tier?
- A. Amazon EBS volume
- B. Amazon S3
- C. Amazon EC2 instance store
- D. Amazon RDS instance
- B is correct: Amazon S3 durably stores and serves the static content directly, removing that load from the web tier’s CPU.
147. You are designing a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket. You expect this bucket to immediately receive over 150 PUT requests per second. What should you do to ensure optimal performance?
- A. Use multi-part upload.
- B. Add a random prefix to the key names.
- C. Amazon S3 will automatically manage performance at this scale.
- D. Use a predictable naming scheme, such as sequential numbers or date time sequences, in the key names
- http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
- If you anticipate that your workload will consistently exceed 100 requests per second, you should avoid sequential key names. If you must use sequential numbers or date and time patterns in key names, add a random prefix to the key name. The randomness of the prefix more evenly distributes key names across multiple index partitions. Examples of introducing randomness are provided later in this topic.
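A tiny sketch of the random-prefix idea from the quoted guidance, option B (the helper name is hypothetical):

```python
import hashlib

def randomized_key(original_key: str, prefix_len: int = 4) -> str:
    """Prepend a short hash so keys spread across S3 index partitions."""
    digest = hashlib.md5(original_key.encode()).hexdigest()[:prefix_len]
    return f'{digest}-{original_key}'

# Sequential upload names no longer share a common leading prefix:
for key in ('2016-01-01-0001.jpg', '2016-01-01-0002.jpg'):
    print(randomized_key(key))   # e.g. '5e1b-2016-01-01-0001.jpg'
```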
148. When will you incur costs with an Elastic IP address (EIP)?
- A. When an EIP is allocated.
- B. When it is allocated and associated with a running instance.
- C. When it is allocated and associated with a stopped instance.
- D. Costs are incurred regardless of whether the EIP is associated with a running instance.
- C is correct: one Elastic IP associated with a running instance is free; charges accrue while an EIP is associated with a stopped instance, or allocated but not associated at all.
149. A company has an AWS account that contains three VPCs (Dev, Test, and Prod) in the same region. Test is peered to both Prod and Dev. All VPCs have non-overlapping CIDR blocks. The company wants to push minor code releases from Dev to Prod to speed up time to market. Which of the following options helps the company accomplish this?
- A. Create a new peering connection Between Prod and Dev along with appropriate routes.
- B. Create a new entry to Prod in the Dev route table using the peering connection as the target.
- C. Attach a second gateway to Dev. Add a new entry in the Prod route table identifying the gateway as the target.
- D. The VPCs have non-overlapping CIDR blocks in the same account. The route tables contain local routes for all VPCs.
- A is correct: VPC peering is not transitive, so traffic cannot flow from Dev to Prod through Test; a direct Dev-Prod peering connection with appropriate routes is required.
- http://jayendra-patil.blogspot.fr/2016/03/aws-vpc-peering.html
- VPC peering connection cannot be created between VPCs that have matching or overlapping CIDR blocks
- VPC peering connection cannot be created between VPCs in different regions.
- VPC peering connections are limited in the number of active and pending VPC peering connections that you can have per VPC.
- VPC peering does not support transitive peering relationships
- VPC peering does not support Edge to Edge Routing Through a Gateway or Private Connection
- Only one VPC peering connection can be established between the same two VPCs at the same time
- Maximum Transmission Unit (MTU) across a VPC peering connection is 1500 bytes.
- A placement group can span peered VPCs; however, you do not get full-bisection bandwidth between instances in peered VPCs.
- Unicast reverse path forwarding in VPC peering connections is not supported.
- Instance’s public DNS hostname does not resolve to its private IP address across peered VPCs.
150. A customer is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The customer also uses Amazon Route 53 to manage their public DNS. How should the customer configure the DNS zone apex record to point to the load balancer?
- A. Create an A record pointing to the IP address of the load balancer
- B. Create a CNAME record pointing to the load balancer DNS name.
- C. Create a CNAME record aliased to the load balancer DNS name.
- D. Create an A record aliased to the load balancer DNS name
- http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/ResourceRecordTypes.html#CNAMEFormat
- The DNS protocol does not allow you to create a CNAME record for the top node of a DNS namespace, also known as the zone apex. For example, if you register the DNS name example.com, the zone apex is example.com. You cannot create a CNAME record for example.com, but you can create CNAME records for http://www.example.com, newproduct.example.com, and so on.
- D is correct: at the zone apex, use an alias A record pointed at the load balancer.
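A minimal boto3 sketch of answer D (hosted zone ID, ELB hosted zone ID, and DNS name are hypothetical placeholders):

```python
import boto3

route53 = boto3.client('route53')

# UPSERT an alias A record at the zone apex pointing at the load balancer.
route53.change_resource_record_sets(
    HostedZoneId='Z1D633PJN98FT9',   # hypothetical hosted zone
    ChangeBatch={'Changes': [{
        'Action': 'UPSERT',
        'ResourceRecordSet': {
            'Name': 'example.com.',
            'Type': 'A',
            'AliasTarget': {
                # The load balancer's own hosted zone ID and DNS name.
                'HostedZoneId': 'Z35SXDOTRQ7X7K',
                'DNSName': 'my-elb-1234567890.us-east-1.elb.amazonaws.com.',
                'EvaluateTargetHealth': False,
            },
        },
    }]},
)
```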
151. You try to connect via SSH to a newly created Amazon EC2 instance and get one of the following error messages: “Network error: Connection timed out” or “Error connecting to [instance], reason: -> Connection timed out: connect.” You have confirmed that the network and security group rules are configured correctly and the instance is passing status checks. What steps should you take to identify the source of the behavior? Choose 2 answers
- A. Verify that the private key file corresponds to the Amazon EC2 key pair assigned at launch.
- B. Verify that your IAM user policy has permission to launch Amazon EC2 instances.
- C. Verify that you are connecting with the appropriate user name for your AMI.
- D. Verify that the Amazon EC2 Instance was launched with the proper IAM role.
- E. Verify that your federation trust to AWS has been established
- A and C are correct: with the network rules confirmed and status checks passing, the likely causes are using the wrong private key for the assigned key pair or the wrong user name for the AMI.
152. A customer is running a multi-tier web application farm in a virtual private cloud (VPC) that is not connected to their corporate network. They are connecting to the VPC over the Internet to manage all of their Amazon EC2 instances running in both the public and private subnets. They have only authorized the bastion-security-group with Microsoft Remote Desktop Protocol (RDP) access to the application instance security groups, but the company wants to further limit administrative access to all of the instances in the VPC. Which of the following Bastion deployment scenarios will meet this requirement?
- A. Deploy a Windows Bastion host on the corporate network that has RDP access to all instances in the VPC.
- B. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow SSH access to the bastion from anywhere.
- C. Deploy a Windows Bastion host with an Elastic IP address in the private subnet, and restrict RDP access to the bastion from only the corporate public IP addresses.
- D. Deploy a Windows Bastion host with an auto-assigned Public IP address in the public subnet, and allow RDP access to the bastion from only the corporate public IP addresses.
- D is correct: the bastion belongs in the public subnet, with RDP access restricted to the corporate public IP addresses; B allows SSH from anywhere, and C places the bastion in a private subnet where it would be unreachable.
153. A customer has a single 3-TB volume on-premises that is used to hold a large repository of images and print layout files. This repository is growing at 500 GB a year and must be presented as a single logical volume. The customer is becoming increasingly constrained with their local storage capacity and wants an off-site backup of this data, while maintaining low-latency access to their frequently accessed data. Which AWS Storage Gateway configuration meets the customer requirements?
- A. Gateway-Cached volumes with snapshots scheduled to Amazon S3
- B. Gateway-Stored volumes with snapshots scheduled to Amazon S3
- C. Gateway-Virtual Tape Library with snapshots to Amazon S3
- D. Gateway-Virtual Tape Library with snapshots to Amazon Glacier
- Q: What is the relation between the volume gateway and previously available gateway-cached and gateway-stored modes? The volume gateway represents the family of gateways that support block-based volumes, previously referred to as gateway-cached and gateway-stored modes. In the cached volume mode, your data is stored in Amazon S3 and a cache of the frequently accessed data is maintained locally by the gateway. With this mode, you can achieve cost savings on primary storage, and minimize the need to scale your storage on-premises, while retaining low-latency access to your most used data. In the stored volume mode, data is stored on your local storage with volumes backed up asynchronously as Amazon EBS snapshots stored in Amazon S3. This provides durable and inexpensive off-site backups. You can recover these backups locally to your gateway or in-cloud to Amazon EC2, for example, if you need replacement capacity for disaster recovery.
- A is correct: gateway-cached volumes keep the complete data set in Amazon S3 (the off-site copy) while retaining low-latency local access to frequently accessed data, with snapshots scheduled to S3.
154. You are building an automated transcription service in which Amazon EC2 worker instances process an uploaded audio file and generate a text file. You must store both of these files in the same durable storage until the text file is retrieved. You do not know what the storage capacity requirements are. Which storage option is both cost-efficient and scalable?
- A. Multiple Amazon EBS volume with snapshots
- B. A single Amazon Glacier vault
- C. A single Amazon S3 bucket
- D. Multiple instance stores
- C is correct. It cannot be B because Glacier retrieval times are too long for the text file to be retrieved on demand; a single S3 bucket is durable, scalable, and cost-efficient with no capacity planning required.
155. You need to pass a custom script to new Amazon Linux instances created in your Auto Scaling group. Which feature allows you to accomplish this?
- A. User data
- B. EC2Config service
- C. IAM roles
- D. AWS Config
- User Data and Shell Scripts: If you are familiar with shell scripting, this is the easiest and most complete way to send instructions to an instance at launch, and the cloud-init output log file (/var/log/cloud-init-output.log) captures console output so it is easy to debug your scripts following a launch if the instance does not behave the way you intended.
- Important: User data scripts and cloud-init directives only run during the first boot cycle when an instance is launched.
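A minimal boto3 sketch of attaching a user-data script to an Auto Scaling launch configuration (names and AMI ID are hypothetical):

```python
import boto3

autoscaling = boto3.client('autoscaling')

# The custom script goes in the launch configuration's user data;
# every instance the group launches runs it on first boot.
user_data = """#!/bin/bash
yum update -y
echo 'bootstrapped by user data' > /etc/motd
"""

autoscaling.create_launch_configuration(
    LaunchConfigurationName='web-lc-v2',      # hypothetical name
    ImageId='ami-0123456789abcdef0',          # hypothetical AMI
    InstanceType='t2.micro',
    UserData=user_data,    # boto3 handles the base64 encoding for this call
)
```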
156. Which of the following services natively encrypts data at rest within an AWS region? Choose 2 answers
- A. AWS Storage Gateway
- B. Amazon DynamoDB
- C. Amazon CloudFront
- D. Amazon Glacier
- E. Amazon Simple Queue Service
- AWS services that support encryption of data at rest: S3, Glacier, RDS, EBS, EMR, Redshift, and Storage Gateway.
- A and D are correct: of the options listed, only AWS Storage Gateway and Amazon Glacier natively encrypt data at rest (CloudFront, SQS, and, at the time this question was written, DynamoDB did not).
157. A company is building software on AWS that requires access to various AWS services. Which configuration should be used to ensure that AWS credentials (i.e., Access Key ID/Secret Access Key combination) are not compromised?
- A. Enable Multi-Factor Authentication for your AWS root account.
- B. Assign an IAM role to the Amazon EC2 instance.
- C. Store the AWS Access Key ID/Secret Access Key combination in software comments.
- D. Assign an IAM user to the Amazon EC2 Instance.
- http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#use-roles-with-ec2
- Applications that run on an Amazon EC2 instance need credentials in order to access other AWS services. To provide credentials to the application in a secure way, use IAM roles. A role is an entity that has its own set of permissions, but that isn’t a user or group. Roles also don’t have their own permanent set of credentials the way IAM users do. In the case of Amazon EC2, IAM dynamically provides temporary credentials to the EC2 instance, and these credentials are automatically rotated for you.
158. Which of the following are true regarding encrypted Amazon Elastic Block Store (EBS) volumes? Choose 2 answers
- A. Supported on all Amazon EBS volume types
- B. Snapshots are automatically encrypted
- C. Available to all instance types
- D. Existing volumes can be encrypted
- E. Shared volumes can be encrypted
- https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
- This feature is supported with all EBS volume types
- Snapshots that are taken from encrypted volumes are automatically encrypted
- Amazon EBS encryption is only available on certain instance types
- You cannot change the CMK that is associated with an existing snapshot or encrypted volume
- Public snapshots of encrypted volumes are not supported, but you can share an encrypted snapshot with specific accounts
159. A company is deploying a new two-tier web application in AWS. The company has limited staff and requires high availability, and the application requires complex queries and table joins. Which configuration provides the solution for the company’s requirements?
- A. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone
- B. Amazon RDS for MySQL with Multi-AZ
- C. Amazon ElastiCache
- D. Amazon DynamoDB
- B: the requirements call for a relational database (complex queries and table joins) with Multi-AZ for high availability and low operational overhead.
160. A t2.medium EC2 instance type must be launched with what type of Amazon Machine Image (AMI)?
- A. An Instance store Hardware Virtual Machine AMI
- B. An Instance store Paravirtual AMI
- C. An Amazon EBS-backed Hardware Virtual Machine AMI
- D. An Amazon EBS-backed Paravirtual AMI
- https://aws.amazon.com/amazon-linux-ami/instance-type-matrix/
| Instance Family | HVM EBS-Backed 64-bit | HVM Instance Store 64-bit | PV EBS-Backed 64-bit | PV Instance Store 64-bit |
|---|---|---|---|---|
| T2 | ✓ | | | |
| M4 | ✓ | | | |
| M3 | ✓ | ✓ | ✓ | ✓ |
| C4 | ✓ | | | |
| C3 | ✓ | ✓ | ✓ | ✓ |
| X1 | ✓ | ✓ | | |
| R4 | ✓ | | | |
| R3 | ✓ | ✓ | | |
| P2 | ✓ | | | |
| G2 | ✓ | | | |
| I2 | ✓ | ✓ | | |
| D2 | ✓ | ✓ | | |

T2 instances support only HVM EBS-backed AMIs, so the answer is C.
161. You manually launch a NAT AMI in a public subnet. The network is properly configured. Security groups and network access control lists are properly configured. Instances in a private subnet can access the NAT. The NAT can access the Internet. However, private instances cannot access the Internet. What additional step is required to allow access from the private instances?
- A. Enable Source/Destination Check on the private Instances.
- B. Enable Source/Destination Check on the NAT instance.
- C. Disable Source/Destination Check on the private instances.
- D. Disable Source/Destination Check on the NAT instance.
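D is the required step: a NAT instance forwards traffic that is neither from nor to itself, which the source/destination check would otherwise drop. A minimal boto3 sketch (instance ID hypothetical):

```python
import boto3

ec2 = boto3.client('ec2')

# A NAT instance forwards other instances' traffic, so the
# source/destination check must be disabled on it.
ec2.modify_instance_attribute(
    InstanceId='i-0123456789abcdef0',
    SourceDestCheck={'Value': False},
)
```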
162. Which of the following approaches provides the lowest cost for Amazon Elastic Block Store snapshots while giving you the ability to fully restore data?
- A. Maintain two snapshots: the original snapshot and the latest incremental snapshot.
- B. Maintain a volume snapshot; subsequent snapshots will overwrite one another
- C. Maintain a single snapshot; the latest snapshot is both incremental and complete.
- D. Maintain the most current snapshot, archive the original and incremental to Amazon Glacier.
- “If you make periodic snapshots of a volume, the snapshots are incremental so that only the blocks on the device that have changed after your last snapshot are saved in the new snapshot. Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore the volume.”
“When you delete a snapshot, only the data exclusive to that snapshot is removed. Deleting previous snapshots of a volume does not affect your ability to restore volumes from later snapshots of that volume.”
163. An existing application stores sensitive information on a non-boot Amazon EBS data volume attached to an Amazon Elastic Compute Cloud instance. Which of the following approaches would protect the sensitive data on an Amazon EBS volume?
- A. Upload your customer keys to AWS CloudHSM. Associate the Amazon EBS volume with AWS CloudHSM. Remount the Amazon EBS volume.
- B. Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume.
- C. Unmount the EBS volume. Toggle the encryption attribute to True. Re-mount the Amazon EBS volume.
- D. Snapshot the current Amazon EBS volume. Restore the snapshot to a new, encrypted Amazon EBS volume. Mount the Amazon EBS volume
- Changing the Encryption State of Your Data
- There is no direct way to encrypt an existing unencrypted volume, or to remove encryption from an encrypted volume. However, you can migrate data between encrypted and unencrypted volumes. You can also apply a new encryption status while copying a snapshot:
- While copying an unencrypted snapshot of an unencrypted volume, you can encrypt the copy. Volumes restored from this encrypted copy will also be encrypted.
- While copying an encrypted snapshot of an encrypted volume, you can re-encrypt the copy using a different CMK. Volumes restored from the encrypted copy will only be accessible using the newly applied CMK.
- Steps:
- Create your destination volume (encrypted or unencrypted, depending on your need) by following the procedures in Creating an Amazon EBS Volume.
- Attach the destination volume to the instance that hosts the data to migrate. For more information, see Attaching an Amazon EBS Volume to an Instance.
- Make the destination volume available by following the procedures in Making an Amazon EBS Volume Available for Use. For Linux instances, you can create a mount point at /mnt/destination and mount the destination volume there.
- Copy the data from your source directory to the destination volume. It may be most convenient to use a bulk-copy utility for this.
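A minimal boto3 sketch of the snapshot-based path in option D (volume ID, region, and AZ are hypothetical):

```python
import boto3

ec2 = boto3.client('ec2')

# 1. Snapshot the existing unencrypted data volume.
snap = ec2.create_snapshot(VolumeId='vol-0123456789abcdef0')
ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[snap['SnapshotId']])

# 2. Copy the snapshot, turning encryption on for the copy.
copy = ec2.copy_snapshot(SourceSnapshotId=snap['SnapshotId'],
                         SourceRegion='us-east-1', Encrypted=True)
ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[copy['SnapshotId']])

# 3. Restore the encrypted copy to a new volume, then attach and mount it.
ec2.create_volume(SnapshotId=copy['SnapshotId'],
                  AvailabilityZone='us-east-1a')
```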
164. A US-based company is expanding their web presence into Europe. The company wants to extend their AWS infrastructure from Northern Virginia (us-east-1) into the Dublin (eu-west-1) region. Which of the following options would enable an equivalent experience for users on both continents?
- A. Use a public-facing load balancer per region to load-balance web traffic, and enable HTTP health checks.
- B. Use a public-facing load balancer per region to load-balance web traffic, and enable sticky sessions.
- C. Use Amazon Route 53, and apply a geolocation routing policy to distribute traffic across both regions.
- D. Use Amazon Route 53, and apply a weighted routing policy to distribute traffic across both regions.
- https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
- Routing Policy :
- Simple Routing Policy
Use a simple routing policy when you have a single resource that performs a given function for your domain
- Weighted Routing Policy
Use the weighted routing policy when you have multiple resources that perform the same function (for example, web servers that serve the same website) and you want Amazon Route 53 to route traffic to those resources in proportions that you specify (for example, one quarter to one server and three quarters to the other).
- Latency Routing Policy
Use the latency routing policy when you have resources in multiple Amazon EC2 data centers that perform the same function and you want Amazon Route 53 to respond to DNS queries with the resources that provide the best latency
- Failover Routing Policy
Use the failover routing policy when you want to configure active-passive failover, in which one resource takes all traffic when it’s available and the other resource takes all traffic when the first resource isn’t available.
- Geolocation Routing Policy
Use the geolocation routing policy when you want Amazon Route 53 to respond to DNS queries based on the location of your users. For more information about geolocation resource record sets
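If geolocation routing (option C) is the chosen policy, European users resolve to eu-west-1 and everyone else falls through to the default record. A minimal boto3 sketch (zone ID, record name, and ELB DNS names are hypothetical):

```python
import boto3

route53 = boto3.client('route53')

def geo_record(set_id, continent, elb_dns):
    """Build a geolocation CNAME; continent=None means the default record."""
    return {
        'Action': 'UPSERT',
        'ResourceRecordSet': {
            'Name': 'www.example.com.',
            'Type': 'CNAME',
            'SetIdentifier': set_id,
            'GeoLocation': ({'ContinentCode': continent} if continent
                            else {'CountryCode': '*'}),   # default location
            'TTL': 60,
            'ResourceRecords': [{'Value': elb_dns}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId='Z1D633PJN98FT9',   # hypothetical hosted zone
    ChangeBatch={'Changes': [
        geo_record('europe', 'EU', 'eu-elb.eu-west-1.elb.amazonaws.com'),
        # Default record catches users outside Europe.
        geo_record('default', None, 'us-elb.us-east-1.elb.amazonaws.com'),
    ]},
)
```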
165. Which of the following are use cases for Amazon DynamoDB? Choose 3 answers
- A. Storing BLOB data.
- B. Managing web sessions.
- C. Storing JSON documents.
- D. Storing metadata for Amazon S3 objects.
- E. Running relational joins and complex updates.
- F. Storing large amounts of infrequently accessed data.
- Q: When should I use Amazon DynamoDB vs Amazon S3?
- Amazon DynamoDB stores structured data, indexed by primary key, and allows low latency read and write access to items ranging from 1 byte up to 400KB. Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB. In order to optimize your costs across AWS services,
- Amazon S3: large objects or infrequently accessed data sets should be stored in S3.
- DynamoDB: smaller data elements or file pointers (possibly to Amazon S3 objects) are best saved in DynamoDB.
- Q: What is a document store?
- A document store provides support for storing, querying and updating items in a document format such as JSON, XML, and HTML.
- B, C, and D are correct: DynamoDB fits web session management, JSON documents, and S3 object metadata; BLOBs and large, infrequently accessed data belong in S3, and DynamoDB does not support relational joins or complex updates.
166. A customer implemented AWS Storage Gateway with a gateway-cached volume at their main office. An event takes the link between the main and branch office offline. Which methods will enable the branch office to access their data? Choose 3 answers
- A. Use a HTTPS GET to the Amazon S3 bucket where the files are located.
- B. Restore by implementing a lifecycle policy on the Amazon S3 bucket.
- C. Make an Amazon Glacier Restore API call to load the files into another Amazon S3 bucket within four to six hours.
- D. Launch a new AWS Storage Gateway instance AMI in Amazon EC2, and restore from a gateway snapshot.
- E. Create an Amazon EBS volume from a gateway snapshot, and mount it to an Amazon EC2 instance.
- F. Launch an AWS Storage Gateway virtual iSCSI device at the branch office, and restore from a gateway snapshot.
- A: FAQ: When I look in Amazon S3, why can’t I see my volume data? Your volumes are stored in Amazon S3 and accessible through AWS Storage Gateway. You cannot directly access them by using Amazon S3 API actions. You can take point-in-time snapshots of gateway volumes that are made available in the form of Amazon EBS snapshots. Use the file interface to work with your data natively in S3. (So A does not work.)
- B: applies to File Gateway. FAQ: Can I use versioning, lifecycle, cross-region replication and S3 event notification? Yes. Your bucket policies for lifecycle management and cross-region replication apply directly to objects stored in your bucket through AWS Storage Gateway. You can use S3 lifecycle policies to change an object’s storage tier or delete old objects or object versions. Note that, in the case of objects deleted by lifecycle policy, you will still need to delete these objects from the gateway itself using standard filesystem tools (such as the rm command). An object that needs to be accessed by using a file share should only be managed by the gateway. If you directly overwrite or update an object previously written by file gateway, it results in undefined behavior when the object is accessed through the file share.
- Q: Why would I use snapshots? You can take point-in-time snapshots of your volume gateway volumes in the form of Amazon EBS snapshots. You can use a snapshot of your volume as the starting point for a new Amazon EBS volume, which you can then attach to an Amazon EC2 instance. Using this approach, you can easily supply data from your on-premises applications to your applications running on Amazon EC2 if you require additional on-demand compute capacity for data processing or replacement capacity for disaster recovery purposes. For cached volumes, where your volume data is already stored in Amazon S3, you can use snapshots to preserve versions of your data. Using this approach, you can revert to a prior version when required or repurpose a point-in-time version as a new volume. You can initiate snapshots on a scheduled or ad hoc basis. When taking a new snapshot, only the data that has changed since your last snapshot is stored. If you have a volume with 100 GB of data, but only 5 GB of data have changed since your last snapshot, only the 5 additional GB of snapshot data will be stored in Amazon S3. When you delete a snapshot, only the data not needed for any other snapshot is removed. For stored volumes, where your volume data is stored on-premises, snapshots provide durable, off-site backups in Amazon S3. You can create a new volume from a snapshot if you need to recover a backup. You can also use a snapshot of your volume as the starting point for a new Amazon EBS volume which you can then attach to an Amazon EC2 instance.
- F: Q: How do I restore a snapshot to a gateway? Each snapshot is given a unique identifier that you can view using the AWS Management Console. You can create AWS Storage Gateway or Amazon EBS volumes based on any of your existing snapshots by specifying this unique identifier. Using the AWS Management Console, you can create a new volume from a snapshot you’ve stored in Amazon S3. You can then mount this volume as an iSCSI device to your on-premises application server. Because cached volumes store your primary data in Amazon S3, when creating a new volume from a snapshot, your gateway keeps the snapshot data in Amazon S3 where it becomes the primary data for your new volume. Because stored volumes store your primary data locally, when creating a new volume from a snapshot, your gateway downloads the data contained within the snapshot to your local hardware. There it becomes the primary data for your new volume.
- D, E, and F are the correct answers: all three restore from a gateway snapshot, either to a Storage Gateway AMI running in EC2, to an EBS volume attached to an EC2 instance, or to a Storage Gateway virtual appliance at the branch office.
167. A company has configured and peered two VPCs: VPC-1 and VPC-2. VPC-1 contains only private subnets, and VPC-2 contains only public subnets. The company uses a single AWS Direct Connect connection and private virtual interface to connect their on-premises network with VPC-1. Which two methods increases the fault tolerance of the connection to VPC-1? Choose 2 answers
- A. Establish a hardware VPN over the internet between VPC-2 ana the on-premises network.
- B. Establish a hardware VPN over the internet between VPC-1 and the on-premises network.
- C. Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.
- D. Establish a new AWS Direct Connect connection and private virtual interface in a different AWS region than VPC-1.
- E. Establish a new AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1
- B and E are correct: each adds an independent path to VPC-1 (a hardware VPN over the internet, or a second Direct Connect connection and private virtual interface in VPC-1’s region); routing through peered VPC-2 would be transitive, which VPC peering does not support.
- VPC Peering Basics :
- Establish: Initiating-request -> Pending-acceptance -> Provisioning -> Active
- Pricing: The charges for transferring data within a VPC peering connection are the same as the charges for transferring data across Availability Zones.
- Limitations:
- You cannot create a VPC peering connection between VPCs that have matching or overlapping IPv4 or IPv6 CIDR blocks.
- You cannot create a VPC peering connection between VPCs in different regions.
- You have a limit on the number of active and pending VPC peering connections that you can have per VPC.
- VPC peering does not support transitive peering relationships
- You cannot have more than one VPC peering connection between the same two VPCs at the same time.
- A placement group can span peered VPCs; however, you do not get full-bisection bandwidth between instances in peered VPCs.
- Unicast reverse path forwarding in VPC peering connections is not supported.
- You can enable resources on either side of a VPC peering connection to communicate with each other over IPv6; however, IPv6 communication is not automatic. You must associate an IPv6 CIDR block with each VPC, enable the instances in the VPCs for IPv6 communication, and add routes to your route tables that route IPv6 traffic intended for the peer VPC to the VPC peering connection.
168. What is the minimum time interval for the data that Amazon CloudWatch receives and aggregates?
- A. One second
- B. Five seconds
- C. One minute
- D. Three minutes
- E. Five minutes
- Q: What is the minimum granularity for the data that Amazon CloudWatch receives and aggregates? The minimum granularity supported by CloudWatch is 1-minute data points. Many metrics are received and aggregated at 1-minute intervals. Some are received at 3-minute or 5-minute intervals. Depending on the age of the data requested, metrics will be available at the granularity defined in the retention schedules described above. For example, if you request 1-minute data for a day from 10 days ago, you will receive the 1440 data points. However, if you request 1-minute data from 5 months back, the UI will automatically change the granularity to 1 hour and the GetMetricStatistics API will not return any output.
169. Which of the following statements are true about Amazon Route 53 resource records? Choose 2 answers
- A. An Alias record can map one DNS name to another Amazon Route 53 DNS name.
- B. A CNAME record can be created for your zone apex.
- C. An Amazon Route 53 CNAME record can point to any DNS record hosted anywhere.
- D. TTL can be set for an Alias record in Amazon Route 53.
- E. An Amazon Route 53 Alias record can point to any DNS record hosted anywhere.
- http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html
- An alias resource record set contains a pointer to a CloudFront distribution, an Elastic Beanstalk environment, an ELB Classic or Application Load Balancer, an Amazon S3 bucket that is configured as a static website, or another Amazon Route 53 resource record set in the same hosted zone. When Amazon Route 53 receives a DNS query that matches the name and type in an alias resource record set, Amazon Route 53 follows the pointer and responds with the applicable value:
- A CNAME record can point to any DNS record hosted anywhere, including to the resource record set that Amazon Route 53 automatically creates when you create a policy record.
- If an alias resource record set points to another resource record set in the same hosted zone, Amazon Route 53 uses the TTL of the resource record set that the alias resource record set points to.
- Alias records work like a CNAME record in that you can map one DNS name (example.com) to another ‘target’ DNS name (elb1234.elb.amazonaws.com).
- A and C are correct: B is false (no CNAME at the zone apex), D is false (an alias record inherits the TTL of its target), and E is false (an alias record can only point to select AWS resources or to records in the same hosted zone).
170. A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS for greater scalability and elasticity. The web server currently shares read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The database tier uses shared-storage clustering to provide database failover capability and uses several read slaves for scaling. Data on all servers and the distributed file system directory is backed up weekly to off-site tapes. Which AWS storage and database architecture meets the requirements of the application?
- A. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
- B. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots.
- C. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
- D. Web servers: store read-only data in an EC2 NFS server, mount to each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
- A is correct: it replaces multicast with unicast (IP multicast is not supported in a VPC), keeps Read Replicas for read scaling, and backs up via AMIs and DB snapshots; B relies on Glacier for restores, C drops the read-scaling tier, and D keeps IP multicast.
171. Your customer wishes to deploy an enterprise application to AWS that will consist of several web servers, several application servers, and a small (50GB) Oracle database. Information is stored both in the database and the file systems of the various servers. The backup system must support database recovery, whole server and whole disk restores, and individual file restores with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database. Which backup architecture will meet these requirements?
- A. Backup RDS using automated daily DB backups. Backup the EC2 instances using AMIs, and supplement with file-level backups to S3 using traditional enterprise backup software to provide file-level restore.
- B. Backup RDS using a Multi-AZ deployment. Backup the EC2 instances using AMIs, and supplement by copying file system data to S3 to provide file-level restore.
- C. Backup RDS using automated daily DB backups. Backup the EC2 instances using EBS snapshots, and supplement with file-level backups to Amazon Glacier using traditional enterprise backup software to provide file-level restore.
- D. Backup the RDS database to S3 using Oracle RMAN. Backup the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore.
- A Glacier restore typically completes in 3-5 hours, which rules out option C against the two-hour recovery time requirement.
172. Your company has HQ in Tokyo and branch offices all over the world, and is using logistics software with a multi-regional deployment on AWS in Japan, Europe, and the US. The logistics software has a 3-tier architecture and currently uses MySQL 5.6 for data persistence. Each region has deployed its own database. In the HQ region you run an hourly batch process reading data from every region to compute cross-regional reports that are sent by email to all offices; this batch process must be completed as fast as possible to quickly optimize logistics. How do you build the database architecture in order to meet the requirements?
- A. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region
- B. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region
- C. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region
- D. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region
- E. Use Direct Connect to connect all regional MySQL deployments to the HQ region and reduce network latency for the batch process
- https://aws.amazon.com/blogs/aws/cross-region-read-replicas-for-amazon-rds-for-mysql/
- You can now create cross-region read replicas for Amazon RDS database instances!
173. A customer has a 10 Gbps AWS Direct Connect connection to an AWS region where they have a web application hosted on Amazon Elastic Compute Cloud (EC2). The application has dependencies on an on-premises mainframe database that uses a BASE (Basically Available, Soft state, Eventual consistency) rather than an ACID (Atomicity, Consistency, Isolation, Durability) consistency model. The application is exhibiting undesirable behavior because the database is not able to handle the volume of writes. How can you reduce the load on your on-premises database resources in the most cost-effective way?
- A. Use Amazon Elastic MapReduce (EMR) S3DistCp as a synchronization mechanism between the on-premises database and a Hadoop cluster on AWS.
- B. Modify the application to write to an Amazon SQS queue and develop a worker process to flush the queue to the on-premises database.
- C. Modify the application to use DynamoDB to feed an EMR cluster which uses a map function to write to the on-premises database.
- D. Provision an RDS read-replica database on AWS to handle the writes and synchronize the two databases using Data Pipeline
- Using S3DistCp, you can efficiently copy large amounts of data from Amazon S3 into HDFS where it can be processed by subsequent steps in your Amazon EMR cluster. You can also use S3DistCp to copy data between Amazon S3 buckets or from HDFS to Amazon S3. S3DistCp is more scalable and efficient for parallel copying large numbers of objects across buckets and across AWS accounts.
- Using DynamoDB and EMR (option C) still writes to the on-premises database and simply costs 3-4 times as much, so it does not reduce the load. Option B decouples the writes instead; a sketch of that pattern follows.
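- A minimal sketch of the option B pattern, assuming a hypothetical queue name and a stub in place of the real database write: the app enqueues each write into SQS, and a separate worker drains the queue to the on-premises database at a rate the database can handle.

```python
import boto3

def write_to_onprem_db(body):
    # Stub standing in for the actual write to the mainframe database.
    print("writing:", body)

sqs = boto3.resource("sqs")
queue = sqs.get_queue_by_name(QueueName="db-write-buffer")  # hypothetical queue

# Application side: enqueue the write instead of hitting the database directly.
queue.send_message(MessageBody='{"order_id": 42, "status": "paid"}')

# Worker side: long-poll the queue and flush each message to the database.
for message in queue.receive_messages(WaitTimeSeconds=20, MaxNumberOfMessages=10):
    write_to_onprem_db(message.body)
    message.delete()  # remove only after a successful write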
174. Company B is launching a new game app for mobile devices. Users will log into the game using their existing social media account to streamline data capture. Company B would like to directly save player data and scoring information from the mobile app to a DynamoDB table named Score Data. When a user saves their game, the progress data will be stored in the Game State S3 bucket. What is the best approach for storing data to DynamoDB and S3?
- A. Use an EC2 Instance that is launched with an EC2 role providing access to the Score Data DynamoDB table and the GameState S3 bucket that communicates with the mobile app via web services.
- B. Use temporary security credentials that assume a role providing access to the Score Data DynamoDB table and the Game State S3 bucket using web identity federation.
- C. Use Login with Amazon allowing users to sign in with an Amazon account providing the mobile app with access to the Score Data DynamoDB table and the Game State S3 bucket.
- D. Use an IAM user with access credentials assigned a role providing access to the Score Data DynamoDB table and the Game State S3 bucket for distribution with the mobile app.
- Because the users' "existing social media account" corresponds to web identity federation (see the sketch below).
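- A sketch of the option B flow, with a hypothetical role ARN and a placeholder token: the mobile app exchanges the token from the social login for temporary AWS credentials, which are then used to reach DynamoDB and S3.

```python
import boto3

social_provider_token = "<token returned by the social identity provider>"  # placeholder

sts = boto3.client("sts")
resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/GameScoreAccess",  # hypothetical role
    RoleSessionName="player-session",
    WebIdentityToken=social_provider_token,
)
creds = resp["Credentials"]

# Temporary credentials scope access to the Score Data table / Game State bucket.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```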
175. Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate a large and undetermined amount of traffic that will create many database writes. To be certain that you do not drop any writes to a database hosted on AWS, which service should you use?
- A. Amazon RDS with provisioned IOPS up to the anticipated peak write throughput.
- B. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database.
- C. Amazon ElastiCache to store the writes until the writes are committed to the database.
- D. Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput.
- http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ProvisionedThroughput.html
- If your application’s read or write requests exceed the provisioned throughput for a table, DynamoDB might throttle that request. When this happens, the request fails with an HTTP 400 code (Bad Request), accompanied by a ProvisionedThroughputExceededException. The AWS SDKs have built-in support for retrying throttled requests. For more information, see Error Retries and Exponential Backoff.
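- An illustrative sketch of that behavior, with a hypothetical table name: the SDKs already retry throttled requests with exponential backoff, but you can also catch the error explicitly.

```python
import time
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Donations")  # hypothetical table

for attempt in range(5):
    try:
        table.put_item(Item={"id": "w-1", "amount": 50})
        break
    except ClientError as e:
        if e.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
            raise
        time.sleep(2 ** attempt)  # exponential backoff before retrying
```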
176. You have launched an EC2 instance with four (4) 500 GB EBS Provisioned IOPS volumes attached. The EC2 instance is EBS-optimized and supports 500 Mbps throughput between EC2 and EBS. The four EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is provisioned with 4,000 IOPS (4,000 16KB reads or writes), for a total of 16,000 random IOPS on the instance. The EC2 instance initially delivers the expected 16,000 IOPS of random read and write performance. Sometime later, in order to increase the total random I/O performance of the instance, you add an additional two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume is provisioned to 4,000 IOPS like the original four, for a total of 24,000 IOPS on the EC2 instance. Monitoring shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total random IOPS measured at the instance level does not increase at all.
What is the problem and a valid solution?
- A. Larger storage volumes support higher Provisioned IOPS rates: increase the provisioned volume storage of each of the 6 EBS volumes to 1 TB.
- B. The EBS-optimized throughput limits the total IOPS that can be utilized: use an EBS-optimized instance that provides larger throughput.
- C. Small block sizes cause performance degradation, limiting the I/O throughput: configure the instance device driver and file system to use 64KB blocks to increase throughput.
- D. RAID 0 only scales linearly to about 4 devices: use RAID 0 with 4 EBS Provisioned IOPS volumes, but increase each Provisioned IOPS EBS volume to 6,000 IOPS.
- E. The standard EBS instance root volume limits the total IOPS rate: change the instance root volume to also be a 500GB 4,000 Provisioned IOPS volume.
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html
- An Amazon EBS–optimized instance uses an optimized configuration stack and provides additional, dedicated capacity for Amazon EBS I/O. This optimization provides the best performance for your EBS volumes by minimizing contention between Amazon EBS I/O and other traffic from your instance.
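- A rough bandwidth check of why answer B fits; note the question's stated figures are not fully self-consistent, so treat this as illustrative of the principle (total throughput = IOPS x I/O size, capped by the instance's EBS-optimized link) rather than exact.

```python
io_kb = 16                          # 16 KB reads/writes, per the question
iops_before, iops_after = 16_000, 24_000

def mb_per_s(iops):
    return iops * io_kb / 1024      # convert KB/s to MB/s

print(mb_per_s(iops_before), mb_per_s(iops_after))  # 250.0 vs 375.0 MB/s
# Once IOPS x I/O size saturates the EBS-optimized bandwidth, adding more
# volumes cannot raise the IOPS measured at the instance level.
```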
177. You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements?
- A. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
- B. Ingest data into a DynamoDB table and move old data to a Redshift cluster
- C. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage
- D. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS
- Amazon RDS is a managed relational database service that provides you six familiar database engines to choose from, including Amazon Aurora, MySQL, MariaDB, Oracle, Microsoft SQL Server, and PostgreSQL. This means that the code, applications, and tools you already use today with your existing databases can be used with Amazon RDS. Amazon RDS handles routine database tasks such as provisioning, patching, backup, recovery, failure detection, and repair.
- Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution. Most results come back in seconds.
- An Amazon Redshift data warehouse cluster can contain from 1-128 compute nodes, depending on the node type
- Q: When would I use Amazon Redshift vs. Amazon EMR?
You should use Amazon EMR if you use custom code to process and analyze extremely large datasets with big data processing frameworks such as Apache Spark, Hadoop, Presto, or HBase. Amazon EMR gives you full control over the configuration of your clusters and the software you install on them. Data warehouses like Amazon Redshift are designed for a different type of analytics altogether. Data warehouses are designed to pull together data from lots of different sources, like inventory, financial, and retail sales systems. In order to ensure that reporting is consistently accurate across the entire company, data warehouses store data in a highly structured fashion. This structure builds data consistency rules directly into the tables of the database. Amazon Redshift is the best service to use when you need to perform complex queries on massive collections of structured data and get superfast performance.
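- A back-of-the-envelope storage projection using the pilot figures from the question (100 sensors producing 3 GB/month), showing why option C's 96TB cluster is sized appropriately:

```python
pilot_sensors, gb_per_month = 100, 3
target_sensors, months = 100_000, 24

projected_tb = gb_per_month * (target_sensors / pilot_sensors) * months / 1000
print(projected_tb)  # 72.0 TB over two years, which fits within 96 TB
```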
178. Your company is in the process of developing a next-generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for their pets. Each collar will push 30KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform, ensuring the following requirements are met: provide the ability for real-time analytics of the inbound biometric data; ensure processing of the biometric data is highly durable, elastic, and parallel; the results of the analytic processing should be persisted for data mining. Which architecture outlined below will meet the initial requirements for the collection platform?
- A. Utilize S3 to collect the inbound sensor data, analyze the data from S3 with a daily scheduled Data Pipeline, and save the results to a Redshift cluster.
- B. Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to a Redshift cluster using EMR.
- C. Utilize SQS to collect the inbound sensor data, analyze the data from SQS with Amazon Kinesis, and save the results to a Microsoft SQL Server RDS instance.
- D. Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis, and save the results to DynamoDB.
- https://aws.amazon.com/about-aws/whats-new/2014/02/20/analyze-streaming-data-from-amazon-kinesis-with-amazon-elastic-mapreduce/
- Amazon Elastic MapReduce (Amazon EMR) Connector to Amazon Kinesis.
- Kinesis can collect data from hundreds of thousands of sources, such as web site click-streams, marketing and financial information, manufacturing instrumentation, social media and more. This connector enables batch processing of data in Kinesis streams with familiar Hadoop ecosystem tools such as Hive, Pig, Cascading, and standard MapReduce. You can now analyze data in Kinesis streams without having to write, deploy and maintain any independent stream processing applications.
- You can use this connector, for example, to write a SQL query using Hive against a Kinesis stream, or to build reports that join and process Kinesis stream data with multiple data sources such as Amazon DynamoDB, Amazon S3, and HDFS. You can build reliable and scalable ETL processes that filter and archive Kinesis data into permanent data stores including Amazon S3, Amazon DynamoDB, or Amazon Redshift. To facilitate end-to-end log processing scenarios using Kinesis and EMR, AWS created a Log4J Appender that streams log events directly into a Kinesis stream, making the log entries available for processing in EMR.
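- A sketch of the ingestion side of option B, assuming a hypothetical stream name: each collar pushes its JSON reading into a Kinesis stream, partitioned by collar ID so readings spread across shards.

```python
import json
import boto3

kinesis = boto3.client("kinesis")
reading = {"collar_id": "c-123", "heart_rate": 92, "ts": 1700000000}

kinesis.put_record(
    StreamName="pet-biometrics",        # hypothetical stream
    Data=json.dumps(reading).encode(),
    PartitionKey=reading["collar_id"],  # spreads collars across shards
)
```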
179. You need persistent and durable storage to trace call activity of an IVR (Interactive Voice Response) system. Call duration is mostly in the 2-3 minute timeframe. Each traced call can be either active or terminated. An external application needs to know, each minute, the list of currently active calls, which are usually a few calls/second. But once per month there is a periodic peak of up to 1000 calls/second for a few hours. The system is open 24/7 and any downtime should be avoided. Historical data is periodically archived to files. Cost saving is a priority for this project. What database implementation would better fit this scenario, keeping costs as low as possible?
- A. Use RDS Multi-AZ with two tables, one for "Active calls" and one for "Terminated calls". In this way the "Active calls" table is always small and efficient to access.
- B. Use DynamoDB with a "Calls" table and a Global Secondary Index on an "IsActive" attribute that is present for active calls only. In this way the Global Secondary Index is sparse and more effective.
- C. Use DynamoDB with a "Calls" table and a Global Secondary Index on a "State" attribute that can equal "active" or "terminated". In this way the Global Secondary Index can be used for all items in the table.
- D. Use RDS Multi-AZ with a "CALLS" table and an indexed "STATE" field that can be equal to "ACTIVE" or "TERMINATED". In this way the SQL query is optimized by the use of the index.
- Sparse index : http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForGSI.html
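- A sketch of option B's sparse index, with illustrative table and index names: only items carrying the "IsActive" attribute appear in the GSI, so querying it returns just the active calls while terminated calls (which omit the attribute) stay out of the index entirely.

```python
import boto3

dynamodb = boto3.client("dynamodb")
dynamodb.create_table(
    TableName="Calls",
    AttributeDefinitions=[
        {"AttributeName": "CallId", "AttributeType": "S"},
        {"AttributeName": "IsActive", "AttributeType": "S"},
    ],
    KeySchema=[{"AttributeName": "CallId", "KeyType": "HASH"}],
    GlobalSecondaryIndexes=[{
        "IndexName": "ActiveCalls",  # sparse: indexes only items with IsActive set
        "KeySchema": [{"AttributeName": "IsActive", "KeyType": "HASH"}],
        "Projection": {"ProjectionType": "ALL"},
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```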
180. A web design company currently runs several FTP servers that their 250 customers use to upload and download large graphic files. They wish to move this system to AWS to make it more scalable, but they wish to maintain customer privacy and keep costs to a minimum. What AWS architecture would you recommend?
- A. Ask their customers to use an S3 client instead of an FTP client. Create a single S3 bucket. Create an IAM user for each customer. Put the IAM users in a group that has an IAM policy that permits access to subdirectories within the bucket via use of the 'username' policy variable.
- B. Create a single S3 bucket with Reduced Redundancy Storage turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a bucket policy that permits access only to that one customer.
- C. Create an auto-scaling group of FTP servers with a scaling policy to automatically scale in when minimum network traffic on the auto-scaling group is below a given threshold. Load a central list of FTP users from S3 as part of the user data startup script on each instance.
- D. Create a single S3 bucket with Requester Pays turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a bucket policy that permits access only to that one customer.
- IAM policies are limited to about 1,000 per account, but option A needs only a single group policy shared by all 250 users (see the sketch below).
- For B, you would create a bucket for each customer, but accounts are limited to 100 S3 buckets by default.
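- A sketch of option A's group policy, using the ${aws:username} policy variable so each IAM user can only reach their own prefix. The bucket name is illustrative.

```python
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        # Each user is confined to the prefix matching their IAM username.
        "Resource": "arn:aws:s3:::designco-files/${aws:username}/*",
    }],
}
print(json.dumps(policy, indent=2))  # attach this to the customers' IAM group
```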
181. You have been asked to design the storage layer for an application. The application requires disk performance of at least 100,000 IOPS. In addition, the storage layer must be able to survive the loss of an individual disk, EC2 instance, or Availability Zone without any data loss. The volume you provide must have a capacity of at least 3 TB. Which of the following designs will meet these objectives?
- A. Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a second RAID 0 volume. Configure synchronous, block-level replication from the ephemeral-backed volume to the EBS-backed volume.
- B. Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance. Configure synchronous, block-level replication to an identically configured instance in us-east-1b.
- C. Instantiate a c3.8xlarge instance in us-east-1. Provision an AWS Storage Gateway and configure it for 3 TB of storage and 100,000 IOPS. Attach the volume to the instance.
- D. Instantiate a c3.8xlarge instance in us-east-1. Provision 4x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 5 volume. Ensure that EBS snapshots are performed every 15 minutes.
- E. Instantiate a c3.8xlarge instance in us-east-1. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 0 volume. Ensure that EBS snapshots are performed every 15 minutes.
- https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html
- RAID 0: use when I/O performance is more important than fault tolerance, as in a heavily used database (where data replication is already set up separately). I/O is distributed across the volumes in a stripe; if you add a volume, you get the straight addition of throughput. Performance of the stripe is limited to the worst-performing volume in the set, and loss of a single volume results in complete data loss for the array.
- RAID 1: use when fault tolerance is more important than I/O performance, as in a critical application. Safer from the standpoint of data durability, but it does not provide a write performance improvement and requires more Amazon EC2 to Amazon EBS bandwidth than non-RAID configurations, because the data is written to multiple volumes simultaneously.
- RAID 5 and RAID 6 are not recommended for Amazon EBS because the parity write operations consume some of the IOPS available to your volumes; they provide 20-30% fewer usable IOPS than a RAID 0 configuration, at increased cost.
- So RAID 0 alone doesn't satisfy "survive the loss of an individual disk".
- Instance store (ephemeral disk): data is lost when the underlying disk drive fails, the instance stops, or the instance terminates.
- If you create an AMI from an instance, the data on its instance store volumes isn't preserved and isn't present on the instance store volumes of the instances that you launch from the AMI.
- For at least 100,000 IOPS, EBS is not enough: EBS Provisioned IOPS SSD maxes out at 20,000 IOPS per volume and 75,000 IOPS per instance, with 1,750 MB/s of throughput (see link).
- Answer B: instantiate an i2.8xlarge instance in us-east-1a, create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance, and configure synchronous block-level replication to an identically configured instance in us-east-1b.
182. You would like to create a mirror image of your production environment in another region for disaster recovery purposes. Which of the following AWS resources do not need to be recreated in the second region? (Choose 2 answers)
- A. Route 53 Record Sets
- B. IAM Roles
- C. Elastic IP Addresses (EIP)
- D. EC2 Key Pairs
- E. Launch configurations
- F. Security Groups
183. Your company runs a customer-facing event registration site. This site is built with a 3-tier architecture, with web and application tier servers and a MySQL database. The application requires 6 web tier servers and 6 application tier servers for normal operation, but can run on a minimum of 65% server capacity and a single MySQL database. When deploying this application in a region with three Availability Zones (AZs), which architecture provides high availability?
- A. A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling group behind an ELB (Elastic Load Balancer), an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, and one RDS (Relational Database Service) instance deployed with a read replica in the other AZ.
- B. A web tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, and one RDS instance deployed with read replicas in the two other AZs.
- C. A web tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, and a Multi-AZ RDS deployment.
- D. A web tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, and a Multi-AZ RDS deployment.
- The problem with B is that when you promote a Read Replica, the DB instance is rebooted before it becomes available. This causes downtime, so it does not satisfy the high availability requirement. See link.
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.Promote
- You can promote a MySQL, MariaDB, or PostgreSQL Read Replica into a stand-alone, single-AZ DB instance. When you promote a Read Replica, the DB instance will be rebooted before it becomes available.
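- A quick check of the 65% capacity requirement, using the figures from the question: the 3-AZ layouts (options B and D) keep 4 of 6 servers per tier when an AZ is lost, while the 2-AZ layouts drop to 50%.

```python
servers, per_az = 6, 2                 # 3 AZs with 2 servers each
remaining = servers - per_az           # one AZ lost -> 4 servers remain
print(remaining / servers >= 0.65)     # True: 66.7% meets the 65% minimum
# With 2 AZs (3 servers each), losing one AZ leaves 3/6 = 50%, below minimum.
```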
184. Your application is using an ELB in front of an Auto Scaling group of web/application servers deployed across two AZs, and a Multi-AZ RDS instance for data persistence. The database CPU is often above 80% usage and 90% of I/O operations on the database are reads. To improve performance you recently added a single-node Memcached ElastiCache cluster to cache frequent DB query results. In the next weeks the overall workload is expected to grow by 30%. Do you need to change anything in the architecture to maintain the high availability of the application with the anticipated additional load? Why?
- A. Yes, you should deploy two Memcached ElastiCache clusters in different AZs, because the RDS instance will not be able to handle the load if one cache node fails.
- B. No, if the cache node fails the automated ElastiCache node recovery feature will prevent any availability impact.
- C. Yes, you should deploy the Memcached ElastiCache cluster with two nodes in the same AZ as the RDS DB master instance to handle the load if one cache node fails.
- D. No, if the cache node fails you can always get the same data from the DB without having any availability impact.
- https://aws.amazon.com/elasticache/faqs/
- http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/BestPractices.AOF.html
- Enabling Redis Multi-AZ as a Better Approach to Fault Tolerance
185. You are responsible for a legacy web application whose server environment is approaching end of life. You would like to migrate this application to AWS as quickly as possible, since the application environment currently has the following limitations: the VM's single 10GB VMDK is almost full; the virtual network interface still uses the 10Mbps driver, which leaves your 100Mbps WAN connection completely underutilized; it is currently running on a highly customized Windows VM within a VMware environment; and you do not have the installation media. This is a mission-critical application with an RTO (Recovery Time Objective) of 8 hours and an RPO (Recovery Point Objective) of 1 hour. How could you best migrate this application to AWS while meeting your business continuity requirements?
- A. Use the EC2 VM Import Connector for vCenter to import the VM into EC2.
- B. Use Import/Export to import the VM as an EBS snapshot and attach to EC2.
- C. Use S3 to create a backup of the VM and restore the data into EC2.
- D. Use the ec2-bundle-instance API to import an image of the VM into EC2.
- https://aws.amazon.com/ec2/vm-import/
- If you use the VMware vSphere virtualization platform, you can also use the AWS Management Portal for vCenter to import your VM.
186. An international company has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery capability in a separate region with a Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for the synchronization of data, and synchronize only the modified elements. Which design would you choose to meet these requirements?
- A. Use AWS Data Pipeline to schedule a DynamoDB cross-region copy once a day. Create a 'LastUpdated' attribute in your DynamoDB table that would represent the timestamp of the last update, and use it as a filter.
- B. Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region.
- C. Use AWS Data Pipeline to schedule an export of the DynamoDB table to S3 in the current region once a day, then schedule another task immediately after it that will import data from S3 to DynamoDB in the other region.
- D. Also send each write into an SQS queue in the second region; use an Auto Scaling group behind the SQS queue to replay the writes in the second region.
- Q: What is a DynamoDB cross-region replication?
DynamoDB cross-region replication allows you to maintain identical copies (called replicas) of a DynamoDB table (called master table) in one or more AWS regions. After you enable cross-region replication for a table, identical copies of the table are created in other AWS regions. Writes to the table will be automatically propagated to all replicas.
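- A sketch of option A's incremental filter, with illustrative names and a stub for the cross-region copy: scan only the items modified since the last sync, using the "LastUpdated" timestamp attribute, so only changed elements are transferred.

```python
import boto3
from boto3.dynamodb.conditions import Attr

last_sync_timestamp = 1700000000  # placeholder: epoch seconds of the last run

def replicate_to_dr_region(item):
    # Stub standing in for the put into the DR region's table.
    print("replicating:", item)

table = boto3.resource("dynamodb").Table("AppData")  # hypothetical table
resp = table.scan(FilterExpression=Attr("LastUpdated").gt(last_sync_timestamp))
for item in resp["Items"]:
    replicate_to_dr_region(item)
```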
188. Refer to the architecture diagram (not reproduced here) of a batch processing solution using Simple Queue Service (SQS) that adds or deletes batch servers automatically based on parameters set in CloudWatch alarms. You can use this architecture to implement which of the following features in a cost-effective and efficient manner?
- A. Reduce the overall time for executing jobs through parallel processing by allowing a busy EC2 instance that receives a message to pass it to the next instance in a daisy-chain setup.
- B. Implement fault tolerance against EC2 instance failure, since messages would remain in SQS and work can continue with recovery of EC2 instances; implement fault tolerance against SQS failure by backing up messages to S3.
- C. Implement message passing between EC2 instances within a batch by exchanging messages through SQS.
- D. Coordinate the number of EC2 instances with the number of job requests automatically, thus improving cost effectiveness.
- E. Handle high-priority jobs before lower-priority jobs by assigning a priority metadata field to SQS messages.
- For B, it is not necessary to back up messages to S3, as SQS is already reliable and highly scalable.
- http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/general-recommendations.html
- D is cited as correct by many sites, usually without explanation: CloudWatch alarms on the queue depth scale the worker fleet up and down to match the number of job requests, which is exactly what this architecture provides.
189. Your company currently has a 2-tier web application running in an on-premises data center. You have experienced several infrastructure failures in the past two months resulting in significant financial losses. Your CIO is strongly considering moving the application to AWS. While working on achieving buy-in from the other company executives, he asks you to develop a disaster recovery plan to help improve business continuity in the short term. He specifies a target Recovery Time Objective (RTO) of 4 hours and a Recovery Point Objective (RPO) of 1 hour or less. He also asks you to implement the solution within 2 weeks. Your database is 200GB in size and you have a 20Mbps Internet connection. How would you do this while minimizing costs?
- A. Create an EBS-backed private AMI which includes a fresh install of your application. Set up a script in your data center to back up the local database every 1 hour and to encrypt and copy the resulting file to an S3 bucket using multi-part upload.
- B. Install your application on a compute-optimized EC2 instance capable of supporting the application's average load. Synchronously replicate transactions from your on-premises database to a database instance in AWS across a secure Direct Connect connection.
- C. Deploy your application on EC2 instances within an Auto Scaling group across multiple Availability Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection.
- D. Create an EBS-backed private AMI that includes a fresh install of your application. Develop a CloudFormation template which includes your AMI and the required EC2, Auto Scaling, and ELB resources to support deploying the application across multiple Availability Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection.
- Direct Connect is very expensive
- S3 is more expensive than EBS
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-ebs.html
- Overview of Creating Amazon EBS-Backed AMIs: First, launch an instance from an AMI that's similar to the AMI that you'd like to create. You can connect to your instance and customize it. When the instance is configured correctly, ensure data integrity by stopping the instance before you create an AMI, then create the image. When you create an Amazon EBS-backed AMI, we automatically register it for you.
- Amazon EC2 powers down the instance before creating the AMI to ensure that everything on the instance is stopped and in a consistent state during the creation process. If you're confident that your instance is in a consistent state appropriate for AMI creation, you can tell Amazon EC2 not to power down and reboot the instance. Some file systems, such as XFS, can freeze and unfreeze activity, making it safe to create the image without rebooting the instance. During the AMI-creation process, Amazon EC2 creates snapshots of your instance's root volume and any other EBS volumes attached to your instance. If any volumes attached to the instance are encrypted, the new AMI only launches successfully on instances that support Amazon EBS encryption. For more information, see Amazon EBS Encryption. Depending on the size of the volumes, it can take several minutes for the AMI-creation process to complete (sometimes up to 24 hours). You may find it more efficient to create snapshots of your volumes prior to creating your AMI. This way, only small, incremental snapshots need to be created when the AMI is created, and the process completes more quickly (the total time for snapshot creation remains the same).
- After the process completes, you have a new AMI and snapshot created from the root volume of the instance. When you launch an instance using the new AMI, we create a new EBS volume for its root volume using the snapshot. Both the AMI and the snapshot incur charges to your account until you delete them. For more information, see Deregistering Your AMI. If you add instance-store volumes or EBS volumes to your instance in addition to the root device volume, the block device mapping for the new AMI contains information for these volumes, and the block device mappings for instances that you launch from the new AMI automatically contain information for these volumes. The instance-store volumes specified in the block device mapping for the new instance are new and don't contain any data from the instance store volumes of the instance you used to create the AMI. The data on EBS volumes persists. For more information, see Block Device Mapping.
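- A link-capacity sanity check with the figures from the question (200GB database, 20Mbps connection), showing why hourly incremental backups are feasible while a full copy per cycle would not be:

```python
db_gb, link_mbps = 200, 20

full_copy_hours = db_gb * 8 * 1024 / link_mbps / 3600   # ~22.8 hours
hourly_budget_gb = link_mbps / 8 * 3600 / 1024          # ~8.8 GB/hour of changes

print(full_copy_hours, hourly_budget_gb)
# A full 200 GB copy takes roughly a day over this link, but hourly
# incremental backups fit the 1-hour RPO if changes stay under ~9 GB/hour.
```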
190. An ERP application is deployed across multiple AZs in a single region. In the event of failure, the Recovery Time Objective (RTO) must be less than 3 hours, and the Recovery Point Objective (RPO) must be 15 minutes. The customer realizes that data corruption occurred roughly 1.5 hours ago. What DR strategy could be used to achieve this RTO and RPO in the event of this kind of failure?
- A. Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes.
- B. Use synchronous database master-slave replication between two availability zones.
- C. Take hourly DB backups to EC2 instance store volumes, with transaction logs stored in S3 every 5 minutes.
- D. Take 15-minute DB backups stored in Glacier, with transaction logs stored in S3 every 5 minutes.
- https://aws.amazon.com/blogs/aws/new-whitepaper-use-aws-for-disaster-recovery/
- For B, synchronous replication immediately propagates the corruption and cannot be rolled back in time. Option A lets you restore the last hourly backup taken before the corruption and replay the 5-minute transaction logs up to a point just before it occurred.
191. Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months, and 10,000 orders after 12 months. Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment, and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders, such as payment failure. Your current architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders. How can you implement the order fulfillment process while making sure that the emails are delivered reliably?
- A. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status. Use one of the Elastic Beanstalk instances to send emails to customers.
- B. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails to customers.
- C. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use SES to send emails to customers.
- D. Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them. Use SES to send emails to customers.
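- A sketch of option C's notification step, assuming an SWF activity worker sends the status email through SES; the addresses are placeholders, and the sender must be verified in SES.

```python
import boto3

ses = boto3.client("ses")
ses.send_email(
    Source="orders@example.com",  # must be SES-verified
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Your order has shipped"},
        "Body": {"Text": {"Data": "Order #42 left our plant today."}},
    },
)
```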
192. You have deployed a web application targeting a global audience across multiple AWS regions under the domain name example.com. You decide to use Route 53 latency-based routing to serve web requests to users from the region closest to the user. To provide business continuity in the event of server downtime, you configure weighted record sets associated with two web servers in separate Availability Zones per region. During a DR test you notice that when you disable all web servers in one of the regions, Route 53 does not automatically direct all users to the other region. What could be happening? (Choose 2 answers)
- A. Latency resource record sets cannot be used in combination with weighted resource record sets.
- B. You did not set up an HTTP health check for one or more of the weighted resource record sets associated with the disabled web servers.
- C. The value of the weight associated with the latency alias resource record set in the region with the disabled servers is higher than the weight for the other region.
- D. One of the two working web servers in the other region did not pass its HTTP health check.
- E. You did not set "Evaluate Target Health" to "Yes" on the latency alias resource record set associated with example.com in the region where you disabled the servers.
- http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-complex-configs.html
193. Your company hosts a social media site supporting users in multiple countries. You have been asked to provide a highly available design for the application that leverages multiple regions for the most recently accessed content and latency-sensitive portions of the web site. The most latency-sensitive component of the application involves reading user preferences to support web site personalization and ad selection. In addition to running your application in multiple regions, which option will support this application's requirements?
- A. Serve user content from S3 and CloudFront, and use Route 53 latency-based routing between ELBs in each region. Retrieve user preferences from a local DynamoDB table in each region and leverage SQS to capture changes to user preferences, with SQS workers propagating updates to each table.
- B. Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3 and CloudFront with dynamic content, with an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage SNS notifications to propagate user preference changes to a worker node in each region.
- C. Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3, CloudFront, and Route 53 latency-based routing between ELBs in each region. Retrieve user preferences from a DynamoDB table and leverage SQS to capture changes to user preferences, with SQS workers propagating DynamoDB updates.
- D. Serve user content from S3 and CloudFront with dynamic content, with an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage Simple Workflow (SWF) to manage the propagation of user preferences from a centralized DB to each ElastiCache cluster.
- http://docs.aws.amazon.com/AmazonS3/latest/dev/CopyingObjectsExamples.html
- The copy operation creates a copy of an object that is already stored in Amazon S3. You can create a copy of your object up to 5 GB in a single atomic operation; for an object greater than 5 GB, you must use the multipart upload API.
- The S3 Copy API does not fit this use case, since the content users access differs between regions.
194. Your system recently experienced downtime. During the troubleshooting process you found that a new administrator mistakenly terminated several production EC2 instances. Which of the following strategies will help prevent a similar situation in the future? The administrator still must be able to:
– launch, start, stop, and terminate development resources.
– launch and start production instances.
- A. Create an IAM user, which is not allowed to terminate instances by leveraging production EC2 termination protection.
- B. Leverage resource-based tagging along with an IAM user, which can prevent specific users from terminating production EC2 resources.
- C. Leverage EC2 termination protection and multi-factor authentication, which together require users to authenticate before terminating EC2 instances
- D. Create an IAM user and apply an IAM role which prevents users from terminating production EC2 instances.
- For A, termination protection is an option on the EC2 instance itself, not something an IAM user configuration provides.
- https://aws.amazon.com/blogs/aws/resource-permissions-for-ec2-and-rds-resources/
- IAM policies support a number of important EC2 use cases. Here are just a few of the things that you can do:
- Allow users to act on a limited set of resources within a larger, multi-user EC2 environment.
- Set different permissions for “development” and “test” resources.
- Control which users can terminate which instances.
- Require additional security measures, such as MFA authentication, when acting on certain resources.
- The following actions on the indicated resources now support resource-level permissions:
- Instances – Reboot, Start, Stop, Terminate.
- EBS Volumes – Attach, Delete, Detach.
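- A sketch of the option B policy, using a deny statement keyed on a resource tag so that instances tagged as production cannot be terminated; the tag key and value are illustrative.

```python
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:TerminateInstances",
        "Resource": "*",
        "Condition": {
            # Blocks termination only for instances carrying this tag.
            "StringEquals": {"ec2:ResourceTag/Environment": "production"}
        },
    }],
}
print(json.dumps(policy, indent=2))
```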
195. A customer has established an AWS Direct Connect connection to AWS. The link is up and routes are being advertised from the customer's end; however, the customer is unable to connect from EC2 instances inside its VPC to servers residing in its data center.
Which of the following options provide a viable solution to remedy this situation? (Choose 2 answers)
- A. Add a route to the route table with an IPsec VPN connection as the target.
- B. Enable route propagation to the virtual private gateway (VGW).
- C. Enable route propagation to the customer gateway (CGW).
- D. Modify the route table of all instances using the 'route' command.
- E. Modify the instances' VPC subnet route table by adding a route back to the customer's on-premises environment.
- http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html
- By default, instances that you launch into a virtual private cloud (VPC) can't communicate with your own network. You can enable access to your network from your VPC by attaching a virtual private gateway to the VPC, creating a custom route table, and updating your security group rules.
- AWS supports Internet Protocol security (IPsec) VPN connections
- For C, route propagation is enabled on the virtual private gateway (the AWS side of the connection), not the customer gateway, which sits in the on-premises data center.
196. Your company previously configured a heavily used, dynamically routed VPN connection between your on-premises data center and AWS. You recently provisioned a Direct Connect connection and would like to start using the new connection. After configuring Direct Connect settings in the AWS Console, which of the following options will provide the most seamless transition for your users?
- A. Delete your existing VPN connection to avoid routing loops, configure your Direct Connect router with the appropriate settings, and verify network traffic is leveraging Direct Connect.
- B. Configure your Direct Connect router with a higher BGP priority than your VPN router, verify network traffic is leveraging Direct Connect, and then delete your existing VPN connection.
- C. Update your VPC route tables to point to the Direct Connect connection, configure your Direct Connect router with the appropriate settings, verify network traffic is leveraging Direct Connect, and then delete the VPN connection.
- D. Configure your Direct Connect router, update your VPC route tables to point to the Direct Connect connection, configure your VPN connection with a higher BGP priority, and verify network traffic is leveraging the Direct Connect connection.
- FAQ. Can I use AWS Direct Connect and a VPN Connection to the same VPC simultaneously?
Yes. However, only in fail-over scenarios. The Direct Connect path will always be preferred, when established, regardless of AS path prepending.
197. A web company is looking to implement an external payment service into their highly available application deployed in a VPC. Their application EC2 instances are behind a public-facing ELB. Auto Scaling is used to add additional instances as traffic increases; under normal load the application runs 2 instances in the Auto Scaling group, but at peak it can scale to 3x in size. The application instances need to communicate with the payment service over the Internet, which requires whitelisting of all public IP addresses used to communicate with it. A maximum of 4 whitelisted IP addresses are allowed at a time and can be added through an API. How should they architect their solution?
- A. Route payment requests through two NAT instances setup for High Availability and whitelist the Elastic IP addresses attached to the NAT instances.
- B. Whitelist the VPC Internet Gateway Public IP and route payment requests through the Internet Gateway.
- C. Whitelist the ELB IP addresses and route payment requests from the Application servers through the ELB.
- D. Automatically assign public IP addresses to the application instances in the Auto Scaling group and run a script on boot that adds each instance's public IP address to the payment validation whitelist API.
Enabling Internet Access
To enable access to or from the Internet for instances in a VPC subnet, you must do the following:
- Attach an Internet gateway to your VPC.
- Ensure that your subnet’s route table points to the Internet gateway.
- Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
- Ensure that your network access control and security group rules allow the relevant traffic to flow to and from your instance.
198. You are designing the network infrastructure for an application server in Amazon VPC. Users will access all the application instances from the Internet as well as from an on-premises network. The on-premises network is connected to your VPC over an AWS Direct Connect link. How would you design routing to meet the above requirements?
- A. Configure a single route table with a default route via the Internet gateway. Propagate a default route via BGP on the AWS Direct Connect customer router. Associate the route table with all VPC subnets.
- B. Configure a single route table with a default route via the Internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router. Associate the route table with all VPC subnets.
- C. Configure a single route table with two default routes: one to the Internet via an Internet gateway, the other to the on-premises network via the VPN gateway. Use this route table across all subnets in your VPC.
- D. Configure two route tables, one that has a default route via the Internet gateway and another that has a default route via the VPN gateway. Associate both route tables with each VPC subnet.
- Two default routes (0.0.0.0/0) in one route table would make it ambiguous where traffic should be directed, which rules out the configurations in C and D.
199. You are implementing AWS Direct Connect. You intend to use AWS public service endpoints, such as Amazon S3, across the AWS Direct Connect link. You want other Internet traffic to use your existing link to an Internet Service Provider. What is the correct way to configure AWS Direct Connect for access to services such as Amazon S3?
- A. Configure a public interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Advertise a default route to AWS using BGP.
- B. Create a private interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Configure specific routes to your network in your VPC.
- C. Create a public interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure; advertise specific routes for your network to AWS.
- D. Create a private interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure and advertise a default route to AWS.
- https://aws.amazon.com/premiumsupport/knowledge-center/public-private-interface-dx/
- Public Virtual Interface: to connect to AWS public endpoints (for example, Amazon EC2 or Amazon S3) with dedicated network performance, use a public virtual interface. A public virtual interface allows you to connect to all AWS public IP spaces for your respective region. AWS Direct Connect is a regional service, with the exception that in North America you can reach Amazon public services in other North America regions. For a list of AWS public IP ranges available by region for the public virtual interface, see AWS IP Address Ranges.
- Private Virtual Interface: to connect to private services such as an Amazon VPC with dedicated network performance, use a private virtual interface. A private virtual interface allows you to connect to your VPC resources (for example, EC2 instances, load balancers, RDS DB instances, etc.) on your private IP address or endpoint. Each private virtual interface can only be associated with one virtual private gateway (VGW), and a virtual private gateway is associated with a single VPC. For a private virtual interface, AWS only advertises the entire VPC CIDR over the BGP neighbor. Note: AWS cannot advertise or suppress specific subnet blocks in the VPC for a private virtual interface.
- FAQ. What IP prefixes should I advertise over BGP for virtual interfaces to public AWS services?
You should advertise appropriate public IP prefixes that you own over BGP. Traffic from AWS services destined for these prefixes will be routed over your AWS Direct Connect connection.
200. You have deployed a three-tier web application in a VPC with a CIDR block of 10.0.0.0/28. You initially deploy two web servers, two application servers, two database servers, and one NAT instance, for a total of seven EC2 instances. The web, application, and database servers are deployed across two Availability Zones (AZs). You also deploy an ELB in front of the two web servers, and use Route 53 for DNS. Web traffic gradually increases in the first few days following the deployment, so you attempt to double the number of instances in each tier of the application to handle the new load. Unfortunately some of these new instances fail to launch. Which of the following could be the root cause? (Choose 2 answers)
- A. The Internet Gateway (IGW) of your VPC has scaled up, adding more instances to handle the traffic spike, reducing the number of available private IP addresses for new instance launches.
- B. AWS reserves one IP address in each subnet's CIDR block for Route 53, so you do not have enough addresses left to launch all of the new EC2 instances.
- C. AWS reserves the first and the last private IP address in each subnet's CIDR block, so you do not have enough addresses left to launch all of the new EC2 instances.
- D. The ELB has scaled up, adding more instances to handle the traffic, reducing the number of available private IP addresses for new instance launches.
- E. AWS reserves the first four and the last IP address in each subnet's CIDR block, so you do not have enough addresses left to launch all of the new EC2 instances.
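- The subnet arithmetic behind the failure, as a quick sketch (AWS reserves the first four and the last address in each subnet):

```python
total = 2 ** (32 - 28)   # a /28 provides 16 addresses
usable = total - 5       # 11 usable after AWS's reserved addresses
in_use = 7               # web, app, and DB servers plus the NAT instance
print(usable - in_use)   # only 4 addresses remain (ELB nodes consume some too),
                         # so doubling to 14 instances cannot succeed
```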
201. You've been brought in as solutions architect to assist an enterprise customer with their migration of an e-commerce platform to Amazon Virtual Private Cloud (VPC). The previous architect has already deployed a 3-tier VPC.
The configuration is as follows:
VPC: vpc-2f8bc447
IGW: igw-2d8bc445
NACL: acl-2080c448
Subnets and Route Tables:
Web servers: subnet-258bc44d
Application servers: subnet-248bc44c
Database servers: subnet-9189c6f9
Route Tables:
rtb-218bc449
rtb-238bc44b
Associations:
subnet-258bc44d: rtb-218bc449 (public)
subnet-248bc44c: rtb-238bc44b (private)
subnet-9189c6f9: rtb-238bc44b (private)
You are now ready to begin deploying EC2 instances into the VPC. Web servers must have direct access to the Internet; application and database servers cannot have direct access to the Internet. Which configuration below will allow you the ability to remotely administer your application and database servers, as well as allow these servers to retrieve updates from the Internet?
- A. Create a bastion and NAT instance in subnet-248bc44c and add a route from rtb-238bc44b to subnet-258bc44d.
- B. Add a route from rtb-238bc44b to igw-2d8bc445 and add a bastion and NAT instance within subnet-248bc44c.
- C. Create a bastion and NAT instance in subnet-258bc44d. Add a route from rtb-238bc44b to igw-2d8bc445, and a new NACL that allows access between subnet-258bc44d and subnet-248bc44c. (The route should point to the NAT instance and not the Internet gateway, else the private subnets would be Internet-accessible.)
- D. Create a bastion and NAT instance in subnet-258bc44d and add a route from rtb-238bc44b to the NAT instance.
- http://jayendrapatil.com/tag/bastion-host/
- The bastion and NAT should be in the public subnet. As the web servers have direct access to the Internet, subnet-258bc44d must be public, with route table rtb-218bc449 pointing to the IGW. Route table rtb-238bc44b for the private subnets should point to the NAT instance for outgoing Internet access.
202. You are designing Internet connectivity for your VPC. The Web servers must be available on the Internet. The application must have a highly available architecture.
Which alternatives should you consider? (Choose 2 answers)
- A. Configure a NAT instance in your VPC. Create a default route via the NAT instance and associate it with all subnets. Configure a DNS A record that points to the NAT instance's public IP address.
- B. Configure a CloudFront distribution and configure the origin to point to the private IP addresses of your web servers. Configure a Route 53 CNAME record to your CloudFront distribution.
- C. Place all your web servers behind an ELB. Configure a Route 53 CNAME to point to the ELB DNS name.
- D. Assign EIPs to all web servers. Configure a Route 53 record set with all EIPs, with health checks and DNS failover.
- E. Configure an ELB with an EIP. Place all your web servers behind the ELB. Configure a Route 53 A record that points to the EIP.
203. You are tasked with moving a legacy application from a virtual machine running inside your data center to an Amazon VPC. Unfortunately this app requires access to a number of on-premises services, and no one who configured the app still works for your company. Even worse, there's no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? (Choose 3 answers)
- A. An AWS Direct Connect link between the VPC and the network housing the internal services.
- B. An Internet gateway to allow a VPN connection. (A virtual private gateway and customer gateway are what is needed, not an Internet gateway.)
- C. An Elastic IP address on the VPC instance. (An EIP isn't needed, as instances in private subnets can also reach the on-premises network.)
- D. An IP address space that does not conflict with the one on-premises
- E. Entries in Amazon Route 53 that allow the instance to resolve its dependencies' IP addresses. (Route 53 is not required.)
- F. A VM Import of the current virtual machine. (VM Import copies the VM to AWS as-is; since there is no documentation, it can't be configured from scratch.)
- http://jayendrapatil.com/aws-direct-connect-dx/
204. You are migrating a legacy client-server application to AWS. The application responds to a specific DNS domain (e.g. www.example.com) and has a 2-tier architecture, with multiple application servers and a database server. Remote clients use TCP to connect to the application servers. The application servers need to know the IP address of the clients in order to function properly and are currently taking that information from the TCP socket. A Multi-AZ RDS MySQL instance will be used for the database. During the migration you can change the application code, but you have to file a change request. How would you implement the architecture on AWS in order to maximize scalability and high availability?
- A. File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP listener and Proxy Protocol enabled to distribute load on two application servers in different AZs.
- B. File a change request to implement Cross-Zone support in the application. Use an ELB with a TCP listener and Cross-Zone Load Balancing enabled, two application servers in different AZs.
- C. File a change request to implement Latency Based Routing support in the application. Use Route 53 with Latency Based Routing enabled to distribute load on two application servers in different AZs.
- D. File a change request to implement Alias Resource support in the application. Use a Route 53 Alias Resource Record to distribute load on two application servers in different AZs.
- https://aws.amazon.com/blogs/aws/elastic-load-balancing-adds-support-for-proxy-protocol/
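- The Proxy Protocol setup in option A can be scripted. Below is a minimal sketch using boto3’s Classic ELB client, assuming a TCP load balancer already exists; the ELB name and backend port are placeholders.

```python
# Minimal sketch: enable Proxy Protocol on a Classic ELB backend port so the
# application servers can read the client IP from the Proxy Protocol header.
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Create a ProxyProtocol policy on the load balancer (name is a placeholder).
elb.create_load_balancer_policy(
    LoadBalancerName="my-tcp-elb",
    PolicyName="EnableProxyProtocol",
    PolicyTypeName="ProxyProtocolPolicyType",
    PolicyAttributes=[{"AttributeName": "ProxyProtocol", "AttributeValue": "true"}],
)

# Attach the policy to the backend port the TCP listener forwards to.
elb.set_load_balancer_policies_for_backend_server(
    LoadBalancerName="my-tcp-elb",
    InstancePort=80,
    PolicyNames=["EnableProxyProtocol"],
)
```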
205. A newspaper organization has an on-premises application which allows the public to search its back catalogue and retrieve individual newspaper pages via a website written in Java. They have scanned the old newspapers into JPEGs (approx 17TB) and used Optical Character Recognition (OCR) to populate a commercial search product. The hosting platform and software are now end of life, and the organization wants to migrate its archive to AWS with a cost-efficient architecture that is still designed for availability and durability. Which is the most appropriate?
- A. Use S3 with reduced redundancy to store and serve the scanned files; install the commercial search application on EC2 instances and configure it with auto scaling and an Elastic Load Balancer. (Reusing the commercial search application, which is nearing end of life, is not a good option for cost.)
- B. Model the environment using CloudFormation; use an EC2 instance running an Apache web server and an open-source search application; stripe multiple standard EBS volumes together to store the JPEGs and search index.
- C. Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple Availability Zones.
- D. Use a single-AZ RDS MySQL instance to store the search index and the JPEG images; use an EC2 instance to serve the website and translate user queries into SQL.
- E. Use a CloudFront download distribution to serve the JPEGs to the end users and install the current commercial search product, along with a Java container for the website, on EC2 instances; use Route 53 with DNS round-robin.
- https://acloud.guru/forums/aws-certified-solutions-architect-associate/discussion/-KDYBvdBGdgy_F3JMGNb/?answer=-KDoS13Cw7TDXg6QqfyY
- Amazon CloudSearch is a fully managed service in the cloud that makes it easy to set up, manage, and scale a search solution for your website. Amazon CloudSearch enables you to search large collections of data such as web pages, document files, forum posts, or product information. With Amazon CloudSearch, you can quickly add search capabilities to your website without having to become a search expert or worry about hardware provisioning, setup, and maintenance. As your volume of data and traffic fluctuates, Amazon CloudSearch automatically scales to meet your needs.
206. A corporate web application is deployed within an Amazon Virtual Private Cloud (VPC) and is connected to the corporate data center via an IPsec VPN. The application must authenticate against the on-premises LDAP server. After authentication, each logged-in user can only access an Amazon Simple Storage Service (S3) keyspace specific to that user. Which two approaches can satisfy these objectives? (Choose 2 answers)
- A. Develop an identity broker that authenticates against IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the appropriate S3 bucket.
- B. The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service (STS AssumeRoleWithSAML; the response to the client app includes temporary security credentials) to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket.
- C. Develop an identity broker that authenticates against LDAP and then calls IAM Security Token Service (GetFederationToken) to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket. (The GetFederationToken call must include an IAM policy and a duration of 1 to 36 hours, with the policy specifying the permissions to be granted to the temporary security credentials. It returns four values to the application: an access key, a secret access key, a token, and a duration, i.e. the token’s lifetime. See the sketch below.)
- D. The application authenticates against LDAP. The application then calls the AWS Identity and Access Management (IAM) security service to log in to IAM using the LDAP credentials. The application can use the IAM temporary credentials to access the appropriate S3 bucket.
- E. The application authenticates against IAM Security Token Service using the LDAP credentials. The application uses those temporary AWS security credentials to access the appropriate S3 bucket.
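- A minimal sketch of the identity-broker call in option C, assuming the broker has already authenticated the user against LDAP; the bucket name and username are placeholders, and the inline policy scopes the federated user to their own keyspace.

```python
# Minimal sketch: an identity broker requesting federated credentials via
# GetFederationToken, scoped to the user's own S3 prefix.
import json
import boto3

sts = boto3.client("sts")

def credentials_for_user(username):
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::corp-user-data/%s/*" % username,  # placeholder bucket
        }],
    }
    resp = sts.get_federation_token(
        Name=username[:32],       # federated user name, max 32 characters
        Policy=json.dumps(policy),
        DurationSeconds=3600,     # 1 hour, within the 1-to-36-hour range noted above
    )
    return resp["Credentials"]    # AccessKeyId, SecretAccessKey, SessionToken, Expiration
```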
207. You are designing a multi-platform web application for AWS. The application will run on EC2 instances and will be accessed from PCs, tablets, and smartphones. Supported access platforms are Windows, macOS, iOS, and Android. Separate sticky session and SSL certificate setups are required for the different platform types. Which of the following describes the most cost-effective and performance-efficient architecture setup?
- A. Set up a hybrid architecture to handle session state and SSL certificates on-premises, and separate EC2 instance groups running web applications for the different platform types in a VPC.
- B. Set up one ELB for all platforms to distribute load among multiple instances under it. Each EC2 instance implements all functionality for a particular platform.
- C. Set up two ELBs. The first ELB handles SSL certificates for all platforms, and the second ELB handles session stickiness for all platforms. For each ELB, run separate EC2 instance groups to handle the web application for each platform.
- D. Assign multiple ELBs to an EC2 instance or group of EC2 instances running the common components of the web application, one ELB for each platform type. Session stickiness and SSL termination are done at the ELBs. (Session stickiness requires an HTTPS listener with SSL termination on the ELB, and an ELB does not support multiple SSL certificates, so one ELB is required per certificate.)
- Sticky sessions are also known as session affinity.
208. Your company has an on-premises multi-tier PHP web application which recently experienced downtime due to a large burst in web traffic following a company announcement. Over the coming days, you are expecting similar announcements to drive similarly unpredictable bursts, and are looking to find ways to quickly improve your infrastructure’s ability to handle unexpected increases in traffic. The application currently consists of 2 tiers: a web tier, which consists of a load balancer and several Linux Apache web servers, and a database tier, which hosts a Linux server hosting a MySQL database. Which scenario below will provide full site functionality while helping to improve the ability of your application to handle traffic in the short timeframe required?
- A. Offload traffic from the on-premises environment: set up a CloudFront distribution and configure CloudFront to cache objects from a custom origin. Customize the object cache behavior and select a TTL for how long objects should exist in the cache.
- B. Migrate to AWS: use VM Import/Export to quickly convert an on-premises web server to an AMI. Create an Auto Scaling group which uses the imported AMI to scale the web tier based on incoming traffic. Create an RDS read replica and set up replication between the RDS instance and the on-premises MySQL server to migrate the database. (A migration cannot be done in the short timeframe.)
- C. Failover environment: create an S3 bucket and configure it for website hosting. Migrate your DNS to Route 53 using zone file import, and leverage Route 53 DNS failover to fail over to the S3-hosted website.
- D. Hybrid environment: create an AMI which can be used to launch web servers in EC2. Create an Auto Scaling group which uses the AMI to scale the web tier based on incoming traffic. Leverage Elastic Load Balancing to balance traffic between on-premises web servers and those hosted in AWS.
- Since the burst in web traffic is driven by a company announcement, the requests will mostly be for the same content, so caching matters most; CloudFront is therefore the better choice. (A sketch of the cache settings follows.)
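- A minimal sketch of the cache settings option A describes, written as the relevant fragment of a CloudFront distribution config in Python; the origin ID and TTL values are placeholders, not prescriptions.

```python
# Fragment of a CloudFront DistributionConfig showing a customized cache
# behavior with an explicit TTL for objects fetched from the custom origin.
cache_behavior = {
    "TargetOriginId": "onprem-web",      # custom origin: the on-premises load balancer
    "ViewerProtocolPolicy": "allow-all",
    "MinTTL": 0,
    "DefaultTTL": 300,                   # cache objects for 5 minutes by default
    "MaxTTL": 3600,
    "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
}
```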
209. Your company produces customer-commissioned one-of-a-kind skiing helmets, combining high fashion with custom technical enhancements. Customers can show off their individuality on the ski slopes and have access to heads-up displays, GPS rear-view cams, and any other technical innovation they wish to embed in the helmet. The current manufacturing process is data rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are to the highest standards. Assessments are a mixture of human and automated assessments. You need to add a new set of assessments to model the failure modes of the custom electronics, using GPUs with CUDA across a cluster of servers with low-latency networking. What architecture would allow you to automate the existing process using a hybrid approach and ensure that the architecture can support the evolution of processes over time?
- A. Use AWS Data Pipeline to manage movement of data & metadata and assessments. Use an Auto Scaling group of G2 instances in a placement group.
- B. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & metadata. Use an Auto Scaling group of G2 instances in a placement group.
- C. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & metadata. Use an Auto Scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).
- D. Use AWS Data Pipeline to manage movement of data & metadata and assessments. Use an Auto Scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).
- For B, from https://acloud.guru/forums/aws-certified-solutions-architect-associate/discussion/-KJ_y36i7L_ibcTjYQzS/?answer=-KM81FEE-u0tN4hv_TYw
- “Assessments are a mixture of human and automated assessments” – clearly this can be done by SWF. SWF has workers and deciders: workers can do the automated checks, and deciders can notify the human owners to take care of the human assessments. “Custom electronics using GPUs with CUDA” – this needs the G2 type of EC2 instances. https://aws.amazon.com/ec2/instance-types/
210. You’re running an application on-premises due to its dependency on non-x86 hardware and want to use AWS for data backup. Your backup application is only able to write to POSIX-compatible, block-based storage. You have 140TB of data and would like to mount it as a single folder on your file server. Users must be able to access portions of this data while the backups are taking place. What backup solution would be most appropriate for this use case?
- A. Use Storage Gateway and configure it to use Gateway Cached volumes.
- B. Configure your backup software to use S3 as the target for your data backups.
- C. Configure your backup software to use Glacier as the target for your data backups.
- D. Use Storage Gateway and configure it to use Gateway Stored volumes.
- FAQ: How much volume data can I manage per gateway? What is the maximum size of a volume? Each volume gateway can support up to 32 volumes. In cached mode, each volume can be up to 32 TB for a maximum of 1 PB of data per gateway (32 volumes, each 32 TB in size). In stored mode, each volume can be up to 16 TB for a maximum of 512 TB of data per gateway (32 volumes, each 16 TB in size). Volume gateways compress data before that data is transferred to AWS and while stored in AWS. This compression can reduce both data transfer and storage charges. Volume storage is not pre-provisioned; you will be billed for only the amount of data stored on the volume, not the size of the volume you create.
- (The data is hosted on the on-premises server as well; the 140TB figure refers to the on-premises file server, mentioned to confuse, and not to data in AWS. Only a backup solution is needed, hence stored volumes instead of cached volumes.)
211. You require the ability to analyze a large amount of data, which is stored on Amazon S3 using Amazon Elastic MapReduce. You are using the cc2.8xlarge instance type, whose CPUs are mostly idle during processing. Which of the below would be the most cost-efficient way to reduce the runtime of the job?
- A. Create more, smaller files on Amazon S3.
- B. Add additional cc2.8xlarge instances by introducing a task group.
- C. Use smaller instances that have higher aggregate I/O performance.
- D. Create fewer, larger files on Amazon S3.
- FAQ: How do I select the right number of instances for my cluster? The number of instances to use in your cluster is application-dependent and should be based on both the amount of resources required to store and process your data and the acceptable amount of time for your job to complete. As a general guideline, we recommend that you limit 60% of your disk space to storing the data you will be processing, leaving the rest for intermediate output. Hence, given 3x replication on HDFS, if you were looking to process 5 TB on m1.xlarge instances, which have 1,690 GB of disk space, we recommend your cluster contains at least (5 TB * 3) / (1,690 GB * .6) = 15 m1.xlarge core nodes. You may want to increase this number if your job generates a high amount of intermediate data or has significant I/O requirements. You may also want to include additional task nodes to improve processing performance. See Amazon EC2 Instance Types for details on local instance storage for each instance type configuration.
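- The FAQ’s sizing rule can be checked with a quick calculation; the numbers below are the ones from the FAQ itself.

```python
# With 3x HDFS replication and 60% of disk reserved for data, processing
# 5 TB on m1.xlarge nodes (1,690 GB of disk each) needs at least 15 core nodes.
import math

data_gb = 5 * 1000       # 5 TB of input data
replication = 3          # HDFS replication factor
disk_gb_per_node = 1690  # m1.xlarge local disk
usable_fraction = 0.6    # keep 40% free for intermediate output

nodes = math.ceil((data_gb * replication) / (disk_gb_per_node * usable_fraction))
print(nodes)  # -> 15
```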
212. Your department creates regular analytics reports from your company’s log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic MapReduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data?
- A. Use reduced redundancy storage (RRS) for PDF and CSV data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.
- B. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot Instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.
- C. Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.
- D. Use reduced redundancy storage (RRS) for PDF and CSV data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift.
- https://aws.amazon.com/ec2/pricing/
- https://acloud.guru/forums/aws-certified-solutions-architect-professional/discussion/-KgwThe-_l1QZAX0hh9e/emr_question_-_s3_rrs_data_int
213. You are the new IT architect in a company that operates a mobile sleep-tracking application. When activated at night, the mobile app sends collected data points of 1 kilobyte every 5 minutes to your backend. The backend takes care of authenticating the user and writing the data points into an Amazon DynamoDB table. Every morning, you scan the table to extract and aggregate last night’s data on a per-user basis, and store the results in Amazon S3. Users are notified via Amazon SNS mobile push notifications that new data is available, which is parsed and visualized by the mobile app. Currently you have around 100k users, who are mostly based out of North America. You have been tasked to optimize the architecture of the backend system to lower cost. What would you recommend? (Choose 2 answers)
- A. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.
- B. Have the mobile app access Amazon DynamoDB directly instead of JSON files stored on Amazon S3.
- C. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.
- D. Introduce Amazon ElastiCache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.
- E. Write data directly into an Amazon Redshift cluster replacing both Amazon DynamoDB and Amazon S3.
214. Your website is serving on-demand training videos to your workforce. Videos are uploaded monthly in high-resolution MP4 format. Your workforce is distributed globally, often on the move, and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise, and if transcoding is required you may need to pay for a consultant. How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?
- A. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos, with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3. (See the sketch below.)
- B. A video transcoding pipeline running on EC2, using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos, with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier.
- C. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. EBS volumes to host videos, and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
- D. A video transcoding pipeline running on EC2, using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS volumes to host videos, and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
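- A minimal sketch of the transcoding step in option A using boto3; the pipeline ID, S3 keys, and preset ID are placeholders (the preset stands in for one of the system HLS presets).

```python
# Minimal sketch: submit an Elastic Transcoder job that segments an MP4
# into HLS output and writes an HLS playlist alongside the segments.
import boto3

et = boto3.client("elastictranscoder", region_name="us-east-1")

et.create_job(
    PipelineId="1111111111111-abcde1",        # placeholder pipeline ID
    Input={"Key": "uploads/training-video.mp4"},
    OutputKeyPrefix="hls/training-video/",
    Outputs=[{
        "Key": "video-2m",
        "PresetId": "1351620000001-200010",   # placeholder HLS system preset
        "SegmentDuration": "10",              # seconds per HLS segment
    }],
    Playlists=[{
        "Name": "index",
        "Format": "HLSv3",
        "OutputKeys": ["video-2m"],
    }],
)
```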
215. You’ve been hired to enhance the overall security posture for a very large e-commerce site. They have a well-architected multi-tier application running in a VPC that uses ELBs in front of both the web and the app tier, with static assets served directly from S3. They are using a combination of RDS and DynamoDB for their dynamic data, and then archiving nightly into S3 for further processing with EMR. They are concerned because they found questionable log entries and suspect someone is attempting to gain unauthorized access. Which approach provides a cost-effective, scalable mitigation to this kind of attack?
- A. Recommend that they lease space at a Direct Connect partner location and establish a 1G Direct Connect connection to their VPC. They would then establish Internet connectivity into their space, filter the traffic with a hardware Web Application Firewall (WAF), and then pass the traffic through the Direct Connect connection into their application running in their VPC. (Not cost effective.)
- B. Add previously identified hostile source IPs as an explicit INBOUND DENY NACL to the web tier subnet. (Does not protect against new sources.)
- C. Add a WAF tier by creating a new ELB and an Auto Scaling group of EC2 instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. The web tier security groups would be updated to only allow traffic from the WAF tier security group.
- D. Remove all but TLS 1.2 from the web tier ELB and enable Advanced Protocol Filtering. This will enable the ELB itself to perform WAF functionality. (There is no advanced protocol filtering in ELB.)
- http://jayendrapatil.com/tag/ddos/
216. You currently operate a web application in the AWS US-East region. The application runs on an auto-scaled layer of EC2 instances and an RDS Multi-AZ database. Your IT security compliance officer has tasked you to develop a reliable and durable logging solution to track changes made to your EC2, IAM, and RDS resources. The solution must ensure the integrity and confidentiality of your log data. Which of these solutions would you recommend?
- A. Create a new CloudTrail trail with one new S3 bucket to store the logs, and with the global services option selected. Use IAM roles, S3 bucket policies, and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
- B. Create a new CloudTrail trail with one new S3 bucket to store the logs. Configure SNS to send log file delivery notifications to your management system. Use IAM roles and S3 bucket policies on the S3 bucket that stores your logs.
- C. Create a new CloudTrail trail with an existing S3 bucket to store the logs, and with the global services option selected. Use S3 ACLs and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
- D. Create three new CloudTrail trails with three new S3 buckets to store the logs: one for the AWS Management Console, one for AWS SDKs, and one for command line tools. Use IAM roles and S3 bucket policies on the S3 buckets that store your logs.
- CloudTrail Global Service: For most services, events are sent to the region where the action happened. For global services such as IAM, AWS STS, and Amazon CloudFront, events are delivered to any trail that includes global services. AWS OpsWorks and Amazon Route 53 actions are logged in the US East (N. Virginia) Region. http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-global-service-events
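- A minimal sketch of option A’s trail setup in boto3; the trail and bucket names are placeholders, and the bucket is assumed to already carry the CloudTrail bucket policy.

```python
# Minimal sketch: one trail, one dedicated log bucket, global service events
# (IAM, STS, CloudFront) included, then logging switched on.
import boto3

ct = boto3.client("cloudtrail", region_name="us-east-1")

ct.create_trail(
    Name="compliance-trail",
    S3BucketName="compliance-trail-logs",  # new bucket dedicated to logs
    IncludeGlobalServiceEvents=True,       # capture IAM and other global events
)
ct.start_logging(Name="compliance-trail")
```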
217. An enterprise wants to use a third-party SaaS application. The SaaS application needs to have access to issue several API commands to discover Amazon EC2 resources running within the enterprise’s account. The enterprise has internal security policies requiring that any outside access to their environment conform to the principle of least privilege, and there must be controls in place to ensure that the credentials used by the SaaS vendor cannot be used by any other third party. Which of the following would meet all of these conditions?
- A. From the AWS Management Console …
- http://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
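- The pattern the linked tutorial describes can be sketched as follows: a cross-account role that the SaaS vendor’s account may assume, locked down with an external ID so the credentials cannot be replayed by any other third party, and a least-privilege policy limited to EC2 discovery calls. The account ID, external ID, and role name below are placeholders.

```python
# Minimal sketch: cross-account role for a SaaS vendor with an ExternalId
# condition (the "confused deputy" control) and a least-privilege policy.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # vendor account (placeholder)
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "vendor-supplied-id"}},
    }],
}

iam.create_role(
    RoleName="SaaSDiscoveryRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Least privilege: allow only the EC2 discovery (describe) calls the vendor needs.
iam.put_role_policy(
    RoleName="SaaSDiscoveryRole",
    PolicyName="Ec2DescribeOnly",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*"}],
    }),
)
```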
218. An administrator is using Amazon CloudFormation to deploy a three-tier web application that consists of a web tier and an application tier utilizing Amazon DynamoDB for storage. When creating the CloudFormation template, which of the following would allow the application instance access to the DynamoDB tables without exposing API credentials?
- A. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table, and associate the Role to the application instances by referencing an instance profile. (The association should be done in the instance configuration, not in the IAM Role.)
- B. Use the Parameters section in the CloudFormation template to have the user input access and secret keys from an already created IAM user that has the permissions required to read and write from the required DynamoDB table.
- C. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table, and reference the Role in the instance profile property of the application instance. (See the template sketch below.)
- D. Create an Identity and Access Management user in the CloudFormation template that has permissions to read and write from the required DynamoDB table, use the GetAtt function to retrieve the access and secret keys, and pass them to the application instance through user data.
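- A minimal sketch of option C as a CloudFormation template fragment, written here as a Python dict; the table name, AMI ID, and instance type are placeholders. The role grants DynamoDB access, the instance profile wraps the role, and the instance references the profile, so no API credentials appear in the template.

```python
# Minimal sketch: IAM role -> instance profile -> EC2 instance, with the
# instance profile referenced in the instance's IamInstanceProfile property.
template = {
    "Resources": {
        "AppRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {"Service": "ec2.amazonaws.com"},
                        "Action": "sts:AssumeRole",
                    }],
                },
                "Policies": [{
                    "PolicyName": "DynamoReadWrite",
                    "PolicyDocument": {
                        "Version": "2012-10-17",
                        "Statement": [{
                            "Effect": "Allow",
                            "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                                       "dynamodb:UpdateItem", "dynamodb:Query"],
                            "Resource": "arn:aws:dynamodb:*:*:table/app-table",  # placeholder table
                        }],
                    },
                }],
            },
        },
        "AppInstanceProfile": {
            "Type": "AWS::IAM::InstanceProfile",
            "Properties": {"Roles": [{"Ref": "AppRole"}]},
        },
        "AppInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",        # placeholder AMI
                "InstanceType": "t2.micro",
                "IamInstanceProfile": {"Ref": "AppInstanceProfile"},
            },
        },
    },
}
```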
219. An AWS customer is deploying an application that is composed of an Auto Scaling group of EC2 instances. The customer’s security policy requires that every outbound connection from these instances to any other service within the customer’s Virtual Private Cloud must be authenticated using a unique x.509 certificate that contains the specific instance ID. In addition, an x.509 certificate must be signed by the customer’s key management service in order to be trusted for authentication. Which of the following configurations will support these requirements?
- A. Configure an IAM Role that grants access to an Amazon S3 object containing a signed certificate, and configure the Auto Scaling group to launch instances with this role. Have the instances retrieve the certificate from Amazon S3 upon first boot.
- B. Embed a certificate into the Amazon Machine Image that is used by the Auto Scaling group. Have the launched instances generate a certificate signing request with the instance’s assigned instance ID and send it to the key management service for signing.
- C. Configure the Auto Scaling group to send an SNS notification of the launch of a new instance to the trusted key management service. Have the key management service generate a signed certificate and send it directly to the newly launched instance.
- D. Configure the launched instances to generate a new certificate upon first boot. Have the key management service poll the Auto Scaling group for associated instances and send new instances a certificate signature that contains the specific instance ID. (The instance should not generate the certificate on its own.)
220. Your company has recently extended its datacenter into a VPC on AWS to add burst computing capacity as needed. Members of your Network Operations Center need to be able to go to the AWS Management Console and administer Amazon EC2 instances as necessary. You don’t want to create new IAM users for each NOC member and make those users sign in again to the AWS Management Console. Which option below will meet the needs of your NOC members?
- A. Use OAuth 2.0 to retrieve temporary AWS security credentials to enable your NOC members to sign in to the AWS Management Console.
- B. Use Web Identity Federation to retrieve AWS temporary security credentials to enable your NOC members to sign in to the AWS Management Console.
- C. Use your on-premises SAML 2.0-compliant identity provider (IdP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint.
- D. Use your on-premises SAML 2.0-compliant identity provider (IdP) to retrieve temporary security credentials to enable NOC members to sign in to the AWS Management Console.
- http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-saml.html
221. You are designing an SSL/TLS solution that requires HTTPS clients to be authenticated by the web server using client certificate authentication. The solution must be resilient. Which of the following options would you consider for configuring the web server infrastructure? (Choose 2 answers)
- A. Configure an ELB with TCP listeners on TCP/443, and place the web servers behind it. (See the sketch after the notes below.)
- B. Configure your web servers with EIPs. Place the web servers in a Route 53 record set and configure health checks against all web servers.
- C. Configure an ELB with HTTPS listeners, and place the web servers behind it.
- D. Configure your web servers as the origins for a CloudFront distribution. Use custom SSL certificates on your CloudFront distribution. (CloudFront does not support client-side SSL certificates.)
Client-Side SSL Authentication
CloudFront does not support client authentication with client-side SSL certificates. If an origin requests a client-side certificate, CloudFront drops the request.
- A server-side cert is used to authenticate and identify the server to the client, as well as to encrypt the connection. This allows the client to have certain assurances when connecting to and communicating with the server. Sites that require security such as banks, etc. use them.
- A client-side cert is used to authenticate the client to the server. In this way the server can be certain of who is connecting to the server in much the same way as with a username/password pair, but usually without requiring interaction with the user. They are used with services where the client must be identified but there may not necessarily be someone to enter a username and password, or such is not desired.
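- A minimal sketch of option A in boto3: a plain TCP listener on 443 so the TLS handshake, including the client-certificate exchange, terminates on the web servers rather than on the ELB. The load balancer name and subnet are placeholders.

```python
# Minimal sketch: Classic ELB passing TCP/443 straight through to the
# web servers, which perform SSL termination and client-cert authentication.
import boto3

elb = boto3.client("elb", region_name="us-east-1")

elb.create_load_balancer(
    LoadBalancerName="client-cert-elb",
    Listeners=[{
        "Protocol": "TCP",            # no TLS termination at the ELB
        "LoadBalancerPort": 443,
        "InstanceProtocol": "TCP",
        "InstancePort": 443,          # web servers handle SSL and client certs
    }],
    Subnets=["subnet-12345678"],      # placeholder subnet
)
```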
222. You are designing a connectivity solution between on-premises infrastructure and Amazon VPC. Your on-premises servers will communicate with your VPC instances. You will be establishing IPsec tunnels over the Internet, using VPN gateways and terminating the IPsec tunnels on AWS-supported customer gateways. Which of the following objectives would you achieve by implementing an IPsec tunnel as outlined above? (Choose 4 answers)
- A. End-to-end protection of data in transit
- B. End-to-end Identity authentication
- C. Data encryption across the Internet
- D. Protection of data in transit over the Internet
- E. Peer identity authentication between VPN gateway and customer gateway
- F. Data integrity protection across the Internet
- https://acloud.guru/forums/aws-certified-solutions-architect-associate/discussion/-KLhG3ugHuXinc3rHU3p/ipsec-vpn?answer=-KLjX8HgLEjn9sAYBrms
- http://docs.aws.amazon.com/AmazonVPC/latest/NetworkAdminGuide/Introduction.html
- We recommend you use the techniques listed in the following table to minimize problems related to the amount of data that can be transmitted through the IPsec tunnel. Because the connection encapsulates packets with additional network headers (including IPsec), the amount of data that can be transmitted in a single packet is reduced.
223. You are designing an intrusion detection and prevention (IDS/IPS) solution for a customer web application in a single VPC. You are considering the options for implementing IDS/IPS protection for traffic coming from the Internet. Which of the following options would you consider? (Choose 2 answers)
- A. Implement IDS/IPS agents on each instance running in the VPC.
- B. Configure an instance in each subnet to switch its network interface card to promiscuous mode and analyze network traffic. (AWS does not allow an instance’s NIC to run in promiscuous mode to receive or “sniff” traffic.)
- C. Implement Elastic Load Balancing with SSL listeners in front of the web applications. (ELB with SSL does not serve as an IDS/IPS.)
- D. Implement a reverse proxy layer in front of web servers and configure IDS/IPS agents on each reverse proxy server.
- Intrusion Detection & Prevention Systems: EC2 instance IDS/IPS solutions offer key features to help protect your EC2 instances. This includes alerting administrators of malicious activity and policy violations, as well as identifying and taking action against attacks. You can use AWS services and third-party IDS/IPS solutions offered in AWS Marketplace to stay one step ahead of potential attackers.
224. You are designing a photo-sharing mobile app. The application will store all pictures in a single Amazon S3 bucket. Users will upload pictures from their mobile device directly to Amazon S3 and will be able to view and download their own pictures directly from Amazon S3. You want to configure security to handle potentially millions of users in the most secure manner possible. What should your server-side application do when a new user registers on the photo-sharing mobile application?
- A. Create a set of long-term credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app and use them to access Amazon S3.
- B. Record the user’s information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service ‘AssumeRole’ function. Store these credentials in the mobile app’s memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.
- C. Record the user’s information in Amazon DynamoDB. When the user uses their mobile app, create temporary credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app’s memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.
- D. Create an IAM user. Assign appropriate permissions to the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app, and use these credentials to access Amazon S3.
- E. Create an IAM user. Update the bucket policy with appropriate permissions for the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app, and use these credentials to access Amazon S3.
- https://acloud.guru/forums/aws-certified-solutions-architect-associate/discussion/-KLmsIhGny5Cw_sl75Qg/?answer=-KLqvBAXk7iFnpAZI1p0
- http://www.aiotestking.com/amazon/what-should-your-server-side-application-do-when-a-new-user-registers-on-the-photo-sharing-mobile-application/
225. You have an application running on an EC2 instance which will allow users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL, the application should verify the existence of the file in S3. How should the application use AWS credentials to access the S3 bucket securely?
- A. Use the AWS account access keys; the application retrieves the credentials from the source code of the application.
- B. Create an IAM user for the application with permissions that allow list access to the S3 bucket; launch the instance as the IAM user and retrieve the IAM user’s credentials from the EC2 instance user data.
- C. Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role’s credentials from the EC2 instance metadata.
- D. Create an IAM user for the application with permissions that allow list access to the S3 bucket. The application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user.
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials
- An application on the instance retrieves the security credentials provided by the role from the instance metadata item iam/security-credentials/role-name. The application is granted the permissions for the actions and resources that you’ve defined for the role through the security credentials associated with the role. These security credentials are temporary and we rotate them automatically. We make new credentials available at least five minutes prior to the expiration of the old credentials.
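- A minimal sketch of option C in practice: on an instance launched with the role, boto3 picks up the temporary credentials from instance metadata automatically, so the application never handles long-term keys. The bucket and key below are placeholders.

```python
# Minimal sketch: verify the object exists, then generate a pre-signed URL.
# Credentials come from the instance role via the metadata service.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def presign_if_exists(bucket, key, expires=300):
    try:
        s3.head_object(Bucket=bucket, Key=key)   # verify the file exists first
    except ClientError:
        return None                              # missing object (or no access)
    return s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=expires
    )

url = presign_if_exists("private-downloads", "reports/report.pdf")  # placeholders
```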
226. You are designing a social media site and are considering how to mitigate distributed denial-of-service (DDoS) attacks. Which of the below are viable mitigation techniques? (Choose 3 answers)
- A. Add multiple elastic network interfaces (ENIs) to each EC2 instance to increase the network bandwidth.
- B. Use dedicated instances to ensure that each instance has the maximum performance possible.
- C. Use an Amazon CloudFront distribution for both static and dynamic content.
- D. Use an Elastic Load Balancer with Auto Scaling groups at the web, app, and Amazon Relational Database Service (RDS) tiers.
- E. Add Amazon CloudWatch alerts to look for high network-in and CPU utilization.
- F. Create processes and capabilities to quickly add and remove rules to the instance OS firewall.
- https://aws.amazon.com/answers/networking/aws-ddos-attack-mitigation/
- In general, there are three major application patterns that we see from customers: web, non-web and load balanceable, and non-web and non-load balanceable.
227. A benefits enrollment company is hosting a 3-tier web application running in a VPC on AWS, which includes a NAT (Network Address Translation) instance in the public web tier. There is enough provisioned capacity for the expected workload for the new fiscal year benefit enrollment period, plus some extra overhead. Enrollment proceeds nicely for two days, and then the web tier becomes unresponsive. Upon investigation using CloudWatch and other monitoring tools, it is discovered that there is an extremely large and unanticipated amount of inbound traffic coming from a set of 15 specific IP addresses over port 80, from a country where the benefits company has no customers. The web tier instances are so overloaded that benefit enrollment administrators cannot even SSH into them. Which activity would be useful in defending against this attack?
- A. Create a custom route table associated with the web tier and block the attacking IP addresses from the IGW (internet Gateway)
- B. Change the EIP (Elastic IP Address) of the NAT instance in the web tier subnet and update the Main Route Table with the new EIP
- C. Create 15 security group rules to block the attacking IP addresses over port 80. (Security groups cannot have deny rules to block specific incoming connections.)
- D. Create an inbound NACL (Network Access Control List) associated with the web tier subnet, with deny rules to block the attacking IP addresses. (See the sketch below.)
- https://acloud.guru/forums/aws-certified-solutions-architect-associate/discussion/-KEy517rr7E9EsaVgE64/security-quiz-question
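- A minimal sketch of option D in boto3; the NACL ID and addresses are placeholders. NACL rules are evaluated in ascending rule-number order, so the deny entries must be numbered lower than the allow rules.

```python
# Minimal sketch: add inbound DENY entries to the web tier subnet's NACL,
# one rule per attacking source address, ahead of the existing allow rules.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

attackers = ["198.51.100.10", "198.51.100.11"]   # placeholder source IPs

for i, ip in enumerate(attackers):
    ec2.create_network_acl_entry(
        NetworkAclId="acl-12345678",   # placeholder: web tier subnet's NACL
        RuleNumber=100 + i,            # lower than the allow rules
        Protocol="6",                  # TCP
        RuleAction="deny",
        Egress=False,                  # inbound rule
        CidrBlock=ip + "/32",
        PortRange={"From": 80, "To": 80},
    )
```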
228. Your Fortune 500 company has undertaken a TCO analysis evaluating the use of Amazon S3 versus acquiring more hardware. The outcome was that all employees would be granted access to use Amazon S3 for storage of their personal documents. Which of the following will you need to consider so you can set up a solution that incorporates single sign-on from your corporate AD or LDAP directory and restricts access for each user to a designated user folder in a bucket? (Choose 3 answers)
- A. Setting up a federation proxy or identity provider
- B. Using AWS Security Token Service to generate temporary tokens
- C. Tagging each folder in the bucket
- D. Configuring IAM role
- E. Setting up a matching IAM user for every user in your corporate directory that needs access to a folder in the bucket
229. Your company policies require encryption of sensitive data at rest. You are considering the possible options for protecting data while storing it at rest on an EBS data volume, attached to an EC2 instance. Which of these options would allow you to encrypt your data at rest? (Choose 3 answers)
- A. Implement third party volume encryption tools
- B. Do nothing as EBS volumes are encrypted by default
- C. Encrypt data inside your applications before storing it on EBS
- D. Encrypt data using native data encryption drivers at the file system level
- E. Implement SSL/TLS for all services running on the server
230. You have a periodic image analysis application that gets some files as input, analyzes them, and for each file writes some data in output to a big file. The number of input files per day is high and concentrated in a few hours of the day. Currently you have a server on EC2 with a large EBS volume that hosts the input data and the results. It takes almost 20 hours per day to complete the process. What services could be used to reduce the elaboration time and improve the availability of the solution?
- A. S3 to store I/O files. SQS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue. (See the worker sketch after the notes below.)
- B. EBS with Provisioned IOPS (PIOPS) to store I/O files. SNS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications.
- C. S3 to store I/O files. SNS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications.
- D. EBS with Provisioned IOPS (PIOPS) to store I/O files. SQS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue.
- Anti-patterns: Amazon S3 is optimal for storing numerous classes of information that are relatively static and benefit from its durability, availability, and elasticity features. However, in a number of situations Amazon S3 is not the optimal solution. Amazon S3 has the following anti-patterns:
- File system—Amazon S3 uses a flat namespace and isn’t meant to serve as a standalone, POSIX-compliant file system. However, by using delimiters (commonly either the ‘/’ or ‘\’ character) you are able to construct your keys to emulate the hierarchical folder structure of a file system within a given bucket.
- Structured data with query—Amazon S3 doesn’t offer query capabilities: to retrieve a specific object you need to already know the bucket name and key. Thus, you can’t use Amazon S3 as a database by itself. Instead, pair Amazon S3 with a database to index and query metadata about Amazon S3 buckets and objects.
- Rapidly changing data—Data that must be updated very frequently might be better served by a storage solution with lower read / write latencies, such as Amazon EBS volumes, Amazon RDS or other relational databases, or Amazon DynamoDB.
- Backup and archival storage—Data that requires long-term encrypted archival storage with infrequent read access may be stored more cost-effectively in Amazon Glacier.
- Dynamic website hosting—While Amazon S3 is ideal for websites with only static content, dynamic websites that depend on database interaction or use server-side scripting should be hosted on Amazon EC2.
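- A minimal sketch of the worker side of option A: hosts in the Auto Scaling group long-poll SQS for elaboration commands and delete each message only after the work succeeds. The queue URL is a placeholder, and the processing step is left abstract.

```python
# Minimal sketch: SQS worker loop for the parallel elaboration hosts.
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/elaboration"  # placeholder

def process(body):
    # Download the S3 object named in the message, analyze it, write output to S3.
    pass

while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20  # long polling
    )
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```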
231. You require the ability to analyze a customer’s clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customers clicked on. This data will be used in real time to modify the page layouts as customers click through the site, to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?
- A. Log clicks in weblogs by URL, store to Amazon S3, and then analyze with Elastic MapReduce.
- B. Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers
- C. Write click events directly to Amazon Redshift and then analyze with SQL
- D. Publish web clicks by session to an Amazon SQS queue, then periodically drain these events to Amazon RDS and analyze with SQL.
- Real time = Kinesis
- Amazon Kinesis is a fully managed service for real-time processing of streaming data at massive scale. Amazon Kinesis can collect and process hundreds of terabytes of data per hour from hundreds of thousands of sources, allowing you to easily write applications that process information in real-time, from sources such as web site click-streams, marketing and financial information, manufacturing instrumentation and social media, and operational logs and metering data.
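- A minimal sketch of option B’s producer side: each page or ad click is pushed to a Kinesis stream keyed by session, so one worker sees a given session’s clicks in order. The stream name is a placeholder.

```python
# Minimal sketch: push click events into a Kinesis stream for real-time analysis.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def record_click(session_id, page_url):
    kinesis.put_record(
        StreamName="clickstream",   # placeholder stream name
        Data=json.dumps({"session": session_id, "page": page_url}).encode("utf-8"),
        PartitionKey=session_id,    # keeps one session's events on one shard
    )
```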
232. An AWS customer runs a public blogging website. The site’s users upload two million blog entries a month. The average blog entry size is 200 KB. The access rate to blog entries drops to negligible 6 months after publication, and users rarely access a blog entry 1 year after publication. Additionally, blog entries have a high update rate during the first 3 months following publication; this drops to no updates after 6 months. The customer wants to use CloudFront to improve their users’ load times. Which of the following recommendations would you make to the customer?
- A. Duplicate entries into two different buckets and create two separate CloudFront distributions where S3 access is restricted only to the CloudFront identity.
- B. Create a CloudFront distribution with the “US/Europe” price class for US/Europe users and a different CloudFront distribution with “All Edge Locations” for the remaining users.
- C. Create a CloudFront distribution with S3 access restricted only to the CloudFront identity, and partition the blog entries’ location in S3 according to the month they were uploaded, to be used with CloudFront behaviors.
- D. Create a CloudFront distribution with Restrict Viewer Access and Forward Query String set to true, and a minimum TTL of 0.
233. Your company is getting ready to do a major public announcement of a social media site on AWS. The website is running on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose 2 answers)
- A. Deploy an ElastiCache in-memory cache running in each availability zone.
- B. Implement sharding to distribute load to multiple RDS MySQL instances
- C. Increase the RDS MySQL Instance size and Implement provisioned IOPS
- D. Add an RDS MySQL read replica in each availability zone
234. A company is running a batch analysis every hour on their main transactional DB, running on an RDS MySQL instance, to populate their central data warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team.
How would you optimize this scenario to solve performance issues and automate the process as much as possible?
- A. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard
- B. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.
- C. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard
- D. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.
235. You are implementing a URL whitelisting system for a company that wants to restrict outbound HTTP/S connections to specific domains from their EC2-hosted applications. You deploy a single EC2 instance running proxy software and configure it to accept traffic from all subnets and EC2 instances in the VPC. You configure the proxy to only pass through traffic to domains that you define in its whitelist configuration. You have a nightly maintenance window of 10 minutes where all instances fetch new software updates. Each update is about 200MB in size, and there are 500 instances in the VPC that routinely fetch updates. After a few days you notice that some machines are failing to successfully download some, but not all, of their updates within the maintenance window. The download URLs used for these updates are correctly listed in the proxy’s whitelist configuration, and you are able to access them manually using a web browser on the instances. What might be happening? (Choose 2 answers)
- A. You are running the proxy on an undersized EC2 instance type, so network throughput is not sufficient for all instances to download their updates in time.
- B. You have not allocated enough storage to the EC2 instance running the proxy, so the network buffer is filling up, causing some requests to fail.
- C. You are running the proxy in a public subnet but have not allocated enough EIPs to support the needed network throughput through the Internet Gateway (IGW).
- D. You are running the proxy on a sufficiently-sized EC2 instance in a private subnet, and its network throughput is being throttled by a NAT running on an undersized EC2 instance.
- E. The route table for the subnets containing the affected EC2 instances is not configured to direct network traffic for the software update locations to the proxy.
- https://acloud.guru/forums/aws-certified-solutions-architect-professional/discussion/-KGXk5Feqh4hQm1Bjt9U/tricky_network_question
236. To serve web traffic for a popular product, your chief financial officer and IT director have purchased 10 m1.large heavy utilization Reserved Instances (RIs) evenly spread across two Availability Zones; Route 53 is used to deliver the traffic to an Elastic Load Balancer (ELB). After several months, the product grows even more popular and you need additional capacity. As a result, your company purchases two c3.2xlarge medium utilization RIs. You register the two c3.2xlarge instances with your ELB and quickly find that the m1.large instances are at 100% of capacity and the c3.2xlarge instances have significant capacity that’s unused. Which option is the most cost effective and uses EC2 capacity most effectively?
- A. Use a separate ELB for each instance type and distribute load to ELBs with Route 53 weighted round robin
- B. Configure an Auto Scaling group and launch configuration with the ELB to add up to 10 more on-demand m1.large instances when triggered by CloudWatch; shut off the c3.2xlarge instances.
- C. Route traffic to the EC2 m1.large and c3.2xlarge instances directly using Route 53 latency-based routing and health checks; shut off the ELB.
- D. Configure the ELB with the two c3.2xlarge instances and use an on-demand Auto Scaling group for up to two additional c3.2xlarge instances; shut off the m1.large instances.
237. A read-only news reporting site with a combined web and application tier and a database tier that receives large and unpredictable traffic demands must be able to respond to these traffic fluctuations automatically. What AWS services should be used to meet these requirements?
- A. Stateless instances for the web and application tier synchronized using ElastiCache Memcached, in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas.
- B. Stateful instances for the web and application tier in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas.
- C. Stateful instances for the web and application tier in an Auto Scaling group monitored with CloudWatch, and Multi-AZ RDS.
- D. Stateless instances for the web and application tier synchronized using ElastiCache Memcached, in an Auto Scaling group monitored with CloudWatch, and Multi-AZ RDS.
238. You are running a news website in the eu-west-1 region that updates every 15 minutes. The website has a worldwide audience. It uses an Auto Scaling group behind an Elastic Load Balancer and an Amazon RDS database. Static content resides on Amazon S3 and is distributed through Amazon CloudFront. Your Auto Scaling group is set to trigger a scale-up event at 60% CPU utilization. You use an Amazon RDS extra large DB instance with 10,000 Provisioned IOPS; its CPU utilization is around 80%, while freeable memory is in the 2 GB range. Web analytics reports show that the average load time of your web pages is around 1.5 to 2 seconds, but your SEO consultant wants to bring down the average load time to under 0.5 seconds. How would you improve page load times for your users? (Choose 3 answers)
- A. Lower the scale up trigger of your Auto Scaling group to 30% so it scales more aggressively.
- B. Add an Amazon ElastiCache caching layer to your application for storing sessions and frequent DB queries
- C. Configure Amazon CloudFront dynamic content support to enable caching of re-usable content from your site
- D. Switch the Amazon RDS database to the high-memory extra large instance type.
- E. Set up a second installation in another region, and use the Amazon Route 53 latency-based routing feature to select the right region. (Doesn’t help with the RDS bottleneck, since Multi-AZ does not span regions; only read replicas can be cross-region.)
239. A large real-estate brokerage is exploring the option of adding a cost-effective, location-based alert to their existing mobile application. The application backend infrastructure currently runs on AWS. Users who opt in to this service will receive alerts on their mobile device regarding real-estate offers in proximity to their location. For the alerts to be relevant, delivery time needs to be in the low minute count. The existing mobile app has 5 million users across the USA. Which one of the following architectural suggestions would you make to the customer?
- A. The mobile application will submit its location to a web service endpoint utilizing Elastic Load Balancing and EC2 instances; DynamoDB will be used to store and retrieve relevant offers; EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.
- B. Use AWS Direct Connect or VPN to establish connectivity with mobile carriers; EC2 instances will receive the mobile applications’ location through the carrier connection; RDS will be used to store and retrieve relevant offers; EC2 instances will communicate with mobile carriers to push alerts back to the mobile application.
- C. The mobile application will send device location using SQS; EC2 instances will retrieve the relevant offers from DynamoDB; AWS Mobile Push will be used to send offers to the mobile application.
- D. The mobile application will send device location using AWS Mobile Push; EC2 instances will retrieve the relevant offers from DynamoDB; EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.
- http://docs.aws.amazon.com/sns/latest/dg/SNSMobilePush.html : With Amazon SNS, you have the ability to send push notification messages directly to apps on mobile devices. Push notification messages sent to a mobile endpoint can appear in the mobile app as message alerts, badge updates, or even sound alerts.
- FAQ: How can I interact with Amazon SQS?
You can access Amazon SQS using the AWS Management Console, which helps you create Amazon SQS queues and send messages easily.
Amazon SQS also provides a web services API. It is also integrated with the AWS SDKs, allowing you to work in the programming language of your choice.
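- A minimal sketch of the AWS Mobile Push step described above: register the device token against a platform application, then publish the offer to the device endpoint. The ARNs and token are placeholders.

```python
# Minimal sketch: SNS mobile push to a single registered device.
import boto3

sns = boto3.client("sns", region_name="us-east-1")

endpoint = sns.create_platform_endpoint(
    PlatformApplicationArn="arn:aws:sns:us-east-1:123456789012:app/APNS/realestate",  # placeholder
    Token="device-push-token-from-mobile-app",  # placeholder device token
)

sns.publish(
    TargetArn=endpoint["EndpointArn"],
    Message="New listing within 2 miles of your location",
)
```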
240. A company is building a voting system for a popular TV show; viewers will watch the performances and then visit the show’s website to vote for their favorite performer. It is expected that in a short period of time after the show has finished, the site will receive millions of visitors. The visitors will first log in to the site using their Amazon.com credentials and then submit their vote. After the voting is completed, the page will display the vote totals. The company needs to build the site such that it can handle the rapid influx of traffic while maintaining good performance, but also wants to keep costs to a minimum. Which of the design patterns below should they use?
- A. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user, then process the user’s vote and store the result in a Multi-AZ Relational Database Service instance.
- B. Use CloudFront and the static website hosting feature of S3 with the JavaScript SDK to call the Login with Amazon service to authenticate the user; use IAM roles to gain permissions to a DynamoDB table to store the user’s vote.
- C. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user; the web servers will process the user’s vote and store the result in a DynamoDB table, using IAM Roles for EC2 instances to gain permissions to the DynamoDB table.
- D. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user; the web servers will process the user’s vote and store the result in an SQS queue, using IAM Roles for EC2 instances to gain permissions to the SQS queue. A set of application servers will then retrieve the items from the queue and store the result in a DynamoDB table. (See the sketch after the limits below.)
- Provisioned Throughput Minimums and Maximums
- For any table or global secondary index, the minimum settings for provisioned throughput are 1 read capacity unit and 1 write capacity unit.
- An AWS account places some initial maximum limits on the throughput you can provision:
- US East (N. Virginia) Region:
- Per table – 40,000 read capacity units and 40,000 write capacity units
- Per account – 80,000 read capacity units and 80,000 write capacity units
- All Other Regions:
- Per table – 10,000 read capacity units and 10,000 write capacity units
- Per account – 20,000 read capacity units and 20,000 write capacity units
- The provisioned throughput limit includes the sum of the capacity of the table together with the capacity of all of its global secondary indexes.
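- For illustration, a minimal AWS CLI sketch (table name and key are hypothetical) showing how read and write capacity units are provisioned at table creation; the values chosen count toward the per-table and per-account limits above:
# hypothetical table with a single hash key and modest provisioned throughput
aws dynamodb create-table \
  --table-name Votes \
  --attribute-definitions AttributeName=UserId,AttributeType=S \
  --key-schema AttributeName=UserId,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5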
241. You are developing a new mobile application and are considering storing user preferences in AWS, which would provide a more uniform cross-device experience to users using multiple mobile devices to access the application. The preference data for each user is estimated to be 50KB in size. Additionally, 5 million customers are expected to use the application on a regular basis. The solution needs to be cost-effective, highly available, scalable, and secure. How would you design a solution to meet the above requirements?
- A. Setup an RDS MySQL instance in 2 availability zones to store the user preference data. Deploy a public-facing application on a server in front of the database to manage security and access credentials.
- B. Setup a DynamoDB table with an item for each user having the necessary attributes to hold the user preferences. The mobile application will query the user preferences directly from the DynamoDB table. Utilize STS, Web Identity Federation, and DynamoDB Fine Grained Access Control to authenticate and authorize access.
- C. Setup an RDS MySQL instance with multiple read replicas in 2 availability zones to store the user preference data. The mobile application will query the user preferences from the read replicas. Leverage the MySQL user management and access privilege system to manage security and access credentials.
- D. Store the user preference data in S3. Setup a DynamoDB table with an item for each user and an item attribute pointing to the user's S3 object. The mobile application will retrieve the S3 URL from DynamoDB and then access the S3 object directly. Utilize STS, Web Identity Federation, and S3 ACLs to authenticate and authorize access.
- http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html#limits-items
- The maximum item size in DynamoDB is 400 KB
- FAQ: Q: DynamoDB's storage cost seems high. Is this a cost-effective service for my use case? As with any product, we encourage potential customers of Amazon DynamoDB to consider the total cost of a solution, not just a single pricing dimension. The total cost of servicing a database workload is a function of the request traffic requirements and the amount of data stored. Most database workloads are characterized by a requirement for high I/O (high reads/sec and writes/sec) per GB stored. Amazon DynamoDB is built on SSD drives, which raises the cost per GB stored, relative to spinning media, but it also allows us to offer very low request costs. Based on what we see in typical database workloads, we believe that the total bill for using the SSD-based DynamoDB service will usually be lower than the cost of using a typical spinning media-based relational or non-relational database. If you have a use case that involves storing a large amount of data that you rarely access, then DynamoDB may not be right for you. We recommend that you use S3 for such use cases. It should also be noted that the storage cost reflects the cost of storing multiple copies of each data item across multiple facilities within an AWS Region.
- Q: When should I use Amazon DynamoDB vs Amazon S3? Amazon DynamoDB stores structured data, indexed by primary key, and allows low latency read and write access to items ranging from 1 byte up to 400KB. Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB. In order to optimize your costs across AWS services, large objects or infrequently accessed data sets should be stored in Amazon S3, while smaller data elements or file pointers (possibly to Amazon S3 objects) are best saved in Amazon DynamoDB.
242. Your team has a Tomcat-based Java application you need to deploy into development, test, and production environments. After some research, you opt to use Elastic Beanstalk due to its tight integration with your developer tools, and RDS due to its ease of management. Your QA team lead points out that you need to roll a sanitized set of production data into your environment on a nightly basis. Similarly, other software teams in your org want access to that same restored data via their EC2 instances in your VPC. The optimal setup for persistence and security that meets the above requirements would be the following:
- A. Create your RDS instance as part of your Elastic Beanstalk definition and alter its security group to allow access to it from hosts in your application subnets.
- B. Create your RDS instance separately and add its IP address to your application's DB connection strings in your code. Alter its security group to allow access to it from hosts within your VPC's IP address block.
- C. Create your RDS instance separately and pass its DNS name to your app’s DB connection string as an environment variable. Create a security group for client machines and add it as a valid source for DB traffic to the security group of the RDS instance itself.
- D. Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Alter its security group to allow access to it from hosts in your application subnets. (Not optimal for security: this grants access by subnet rather than via a client security group.)
- FAQ: What are the Cloud resources powering my AWS Elastic Beanstalk application? AWS Elastic Beanstalk uses proven AWS features and services, such as Amazon EC2, Amazon RDS, ELB, Auto Scaling, Amazon S3, and Amazon SNS, to create an environment that runs your application. The current version of AWS Elastic Beanstalk uses the Amazon Linux AMI or the Windows Server 2012 R2 AMI.
- Q: What database solutions can I use with AWS Elastic Beanstalk? AWS Elastic Beanstalk does not restrict you to any specific data persistence technology. You can choose to use Amazon Relational Database Service (Amazon RDS) or Amazon DynamoDB, or use Microsoft SQL Server, Oracle, or other relational databases running on Amazon EC2. Q: How do I set up a database for use with AWS Elastic Beanstalk? Elastic Beanstalk can automatically provision an Amazon RDS DB instance. The information about connectivity to the DB instance is exposed to your application by environment variables. To learn more about how to configure RDS DB instances for your environment, see the Elastic Beanstalk Developer Guide.
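- As a sketch of how those environment variables surface on a Beanstalk instance (the values depend entirely on your environment), an application or shell script can read the RDS connection details directly:
# connection details injected by Elastic Beanstalk when it provisions the RDS instance
echo "$RDS_HOSTNAME:$RDS_PORT $RDS_DB_NAME (user: $RDS_USERNAME)"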
243. You are looking to migrate your Development (Dev) and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each account's bill to a Master AWS account using Consolidated Billing. To make sure you keep within budget, you would like to implement a way for administrators in the Master account to have access to stop, delete, and/or terminate resources in both the Dev and Test accounts. Identify which option will allow you to achieve this goal.
- A. Create IAM users in the Master account with full Admin permissions. Create cross-account roles in the Dev and Test accounts that grant the Master account access to the resources in the account by inheriting permissions from the Master account. (you cannot automatically “inherit” that policy)
- B. Create IAM users and a cross-account role in the Master account that grants full Admin permissions to the Dev and Test accounts.
- C. Create IAM users in the Master account Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access.
- D. Link the accounts using Consolidated Billing. This will give IAM users in the Master account access to resources in the Dev and Test accounts
- http://www.aiotestking.com/amazon/identify-which-option-will-allow-you-to-achieve-this-goal/
244. Your customer wants to consolidate their log streams (access logs, application logs, security logs, etc.) into a single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate the heuristics, which requires going back to data samples extracted from the last 12 hours. What is the best approach to meet your customer's requirements?
- A. Send all the log events to Amazon SQS. Setup an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
- B. Send all the log events to Amazon Kinesis; develop a client process to apply heuristics on the logs
- C. Configure Amazon CloudTrail to receive custom logs; use EMR to apply heuristics to the logs
- D. Setup an Auto Scaling group of EC2 syslogd servers; store the logs on S3; use EMR to apply heuristics on the logs
- For B: the trigger phrase is "real-time", which points to Kinesis. Kinesis also meets the requirement to go back over the last 12 hours of data, since the default retention period for a Kinesis stream is 24 hours. (That is also the minimum; it can be increased up to 168 hours.)
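- If 24 hours is not enough headroom, the retention period can be raised; a minimal sketch with a hypothetical stream name:
# extend retention to the 168-hour maximum
aws kinesis increase-stream-retention-period --stream-name app-logs --retention-period-hours 168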
245. You deployed your company website using Elastic Beanstalk and you enabled log file rotation to S3. An Elastic MapReduce job periodically analyzes the logs on S3 to build a usage dashboard that you share with your CIO. You recently improved overall performance of the website by using CloudFront for dynamic content delivery with your website as the origin. After this architectural change, the usage dashboard shows that the traffic on your website dropped by an order of magnitude. How do you fix your usage dashboard?
- A. Enable CloudFront to deliver access logs to S3 and use them as input of the Elastic MapReduce job.
- B. Turn on CloudTrail and use trail log files on S3 as input of the Elastic MapReduce job.
- C. Change your log collection process to use CloudWatch ELB metrics as input of the Elastic MapReduce job.
- D. Use the Elastic Beanstalk "Rebuild Environment" option to update log delivery to the Elastic MapReduce job.
- E. Use the Elastic Beanstalk "Restart App Server(s)" option to update log delivery to the Elastic MapReduce job.
- https://acloud.guru/forums/aws-certified-solutions-architect-associate/discussion/-K8mnKo1bA9C88cJwZly/please_answer_this_tricky_ques
Choosing an Amazon S3 Bucket for Your Access Logs
When you enable logging for a distribution, you specify the Amazon S3 bucket that you want CloudFront to store log files in. If you're using Amazon S3 as your origin, we recommend that you do not use the same bucket for your log files; using a separate bucket simplifies maintenance. You can store the log files for multiple distributions in the same bucket. When you enable logging, you can specify an optional prefix for the file names, so you can keep track of which log files are associated with which distributions. If no users access your content during a given hour, you don't receive any log files for that hour.
246. You are running a successful multitier web application on AWS, and your marketing department has asked you to add a reporting tier to the application. The reporting tier will aggregate and publish status reports every 30 minutes from user-generated information that is being stored in your web application's database. You are currently running a Multi-AZ RDS MySQL instance for the database tier. You have also implemented ElastiCache as a database caching layer between the application tier and the database tier. Please select the answer that will allow you to successfully implement the reporting tier with as little impact as possible to your database.
- A. Continually send transaction logs from your master database to an S3 bucket and generate the reports off the S3 bucket using S3 byte range requests.
- B. Generate the reports by querying the synchronously replicated standby RDS MySQL instance maintained through Multi-AZ.
- C. Launch a RDS Read Replica connected to your Multi AZ master database and generate reports by querying the Read Replica.
- D. Generate the reports by querying the ElastiCache database caching tier.
247. A web company is looking to implement an intrusion detection and prevention system into their deployed VPC. This platform should have the ability to scale to thousands of instances running inside of the VPC. How should they architect their solution to achieve these goals?
- A. Configure an instance with monitoring software and the elastic network interface (ENI) set to promiscuous mode, packet sniffing to see all traffic across the VPC.
- B. Create a second VPC and route all traffic from the primary application VPC through the second VPC where the scalable virtualized IDS/IPS platform resides.
- C. Configure servers running in the VPC using the host-based ‘route’ commands to send all traffic through the platform to a scalable virtualized IDS/IPS.
- D. Configure each host with an agent that collects all network traffic and sends that traffic to the IDS/IPS platform for inspection.
- https://d0.awsstatic.com/Marketplace/scenarios/security/SEC_01_TSB_Final.pdf
248. A web-startup runs its very successful social news application on Amazon EC2 with an Elastic Load Balancer, an Auto-Scaling group of Java/Tomcat application servers, and DynamoDB as the data store. The main web application best runs on m2.xlarge instances since it is highly memory-bound. Each new deployment requires semi-automated creation and testing of a new AMI for the application servers, which takes quite a while and is therefore only done once per week.
Recently, a new chat feature has been implemented in Node.js and is waiting to be integrated into the architecture. First tests show that the new component is CPU-bound. Because the company has some experience with using Chef, they decided to streamline the deployment process and use AWS OpsWorks as an application lifecycle tool to simplify management of the application and reduce the deployment cycles.
What configuration in AWS OpsWorks is necessary to integrate the new chat module in the most cost-efficient and flexible way?
- A. Create one AWS OpsWorks stack, create one AWS OpsWorks layer, create one custom recipe
- B. Create one AWS OpsWorks stack, create two AWS OpsWorks layers, create one custom recipe (single environment stack; two layers, one for the Java application and one for the Node.js chat component, using built-in recipes; a single custom recipe for the DynamoDB connectivity configuration)
- C. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create one custom recipe
- D. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create two custom recipes
249. Your firm has uploaded a large amount of aerial image data to S3. In the past, in your on-premises environment, you used a dedicated group of servers to batch process this data and used RabbitMQ, an open source messaging system, to get job information to the servers. Once processed, the data would go to tape and be shipped offsite. Your manager told you to stay with the current design, and leverage AWS archival storage and messaging services to minimize cost. Which is correct?
- A. Use SQS for passing job messages; use CloudWatch alarms to terminate EC2 worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage.
- B. Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage.
- C. Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier.
- D. Use SNS to pass job messages; use CloudWatch alarms to terminate spot worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Glacier.
250. What does Amazon S3 stand for?
- A. Simple Storage Solution.
- B. Storage Storage Storage (triple redundancy Storage).
- C. Storage Server Solution.
- D. Simple Storage Service.
251. You must assign each server to at least _____ security group
- A. 3
- B. 2
- C. 4
- D. 1
252. Before I delete an EBS volume, what can I do if I want to recreate the volume later?
- A. Create a copy of the EBS volume (not a snapshot)
- B. Store a snapshot of the volume
- C. Download the content to an EC2 instance
- D. Back up the data in to a physical disk
253. Select the most correct answer: The device name /dev/sda1 (within Amazon EC2) is _____
- A. Possible for EBS volumes
- B. Reserved for the root device
- C. Recommended for EBS volumes
- D. Recommended for instance store volumes
254. If I want an instance to have a public IP address, which IP address should I use?
- A. Elastic IP Address
- B. Class B IP Address
- C. Class A IP Address
- D. Dynamic IP Address
255. What does RRS stand for when talking about S3?
- A. Redundancy Removal System
- B. Relational Rights Storage
- C. Regional Rights Standard
- D. Reduced Redundancy Storage
256. All Amazon EC2 instances are assigned two IP addresses at launch. Which of these can only be reached from within the Amazon EC2 network?
- A. Multiple IP address
- B. Public IP address
- C. Private IP address
- D. Elastic IP Address
257. What does Amazon SWF stand for?
- A. Simple Web Flow
- B. Simple Work Flow
- C. Simple Wireless Forms
- D. Simple Web Form
258. What is the Reduced Redundancy option in Amazon S3?
- A. Less redundancy for a lower cost.
- B. It doesn’t exist in Amazon S3, but in Amazon EBS.
- C. It allows you to destroy any copy of your files outside a specific jurisdiction.
- D. It doesn’t exist at all
259. Fill in the blanks: Resources that are created in AWS are identified by a unique identifier called an __________
- A. Amazon Resource Number
- B. Amazon Resource Nametag
- C. Amazon Resource Name
- D. Amazon Resource Namespace
260. If I write the below command, what does it do?
ec2-run ami-e3a5408a -n 20 -g appserver
- A. Start twenty instances as members of appserver group.
- B. Creates 20 rules in the security group named appserver
- C. Terminate twenty instances as members of appserver group.
- D. Start 20 security groups
261. While creating an Amazon RDS DB, your first task is to set up a DB ______ that controls what IP addresses or EC2 instances have access to your DB Instance.
- A. Security Pool
- B. Secure Zone
- C. Security Token Pool
- D. Security Group
262. When you run a DB Instance as a Multi-AZ deployment, the “_____” serves database writes and reads
- A. secondary
- B. backup
- C. stand by
- D. primary
263. Every user you create in the IAM system starts with _________.
- A. Partial permissions
- B. Full permissions
- C. No permissions
264. Can you create IAM security credentials for existing users?
- A. Yes, existing users can have security credentials associated with their account.
- B. No, IAM requires that all users who have credentials set up are not existing users
- C. No, security credentials are created within GROUPS, and then users are associated to GROUPS at a later time.
- D. Yes, but only IAM credentials, not ordinary security credentials.
- FAQ: What problems does IAM solve?
IAM makes it easy to provide multiple users secure access to your AWS resources. IAM enables you to:
- Manage IAM users and their access: You can create users in AWS's identity management system, assign users individual security credentials (such as access keys, passwords, multi-factor authentication devices), or request temporary security credentials to provide users access to AWS services and resources. You can specify permissions to control which operations a user can perform.
- Manage access for federated users: You can request security credentials with configurable expirations for users who you manage in your corporate directory, allowing you to provide your employees and applications secure access to resources in your AWS account without creating an IAM user account for them. You specify the permissions for these security credentials to control which operations a user can perform.
265. What does Amazon EC2 provide?
- A. Virtual servers in the Cloud.
- B. A platform to run code (Java, PHP, Python), paying on an hourly basis.
- C. Computer Clusters in the Cloud.
- D. Physical servers, remotely managed by the customer.
266. Amazon SWF is designed to help users…
- A. … Design graphical user interface interactions
- B. … Manage user identification and authorization
- C. … Store Web content
- D. … Coordinate synchronous and asynchronous tasks which are distributed and fault tolerant.
267. Can I control if and when a MySQL-based RDS instance is upgraded to new supported versions?
- A. No
- B. Only in VPC
- C. Yes
268. If I modify a DB Instance or the DB parameter group associated with the instance, should I reboot the instance for the changes to take effect?
- A. No
- B. Yes
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html
- Some modifications, such as parameter group changes, require that you manually reboot your DB instance for the change to take effect. Important: Some modifications result in an outage because Amazon RDS must reboot your DB instance for the change to take effect. Review the impact to your database and applications before modifying your DB instance settings.
269. When you view the block device mapping for your instance, you can see only the EBS volumes, not the instance store volumes.
- A. Depends on the instance type
- B. FALSE
- C. Depends on whether you use API call
- D. TRUE
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html#bdm-instance-metadata
- Viewing the Instance Block Device Mapping for Instance Store Volumes
- When you view the block device mapping for your instance, you can see only the EBS volumes, not the instance store volumes. You can use instance metadata to query the complete block device mapping. The base URI for all requests for instance metadata is http://169.254.169.254/latest/.
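- For example, from within the instance, the complete block device mapping (including instance store volumes) can be listed via the metadata service:
# lists all mapped devices, EBS and instance store alike
curl http://169.254.169.254/latest/meta-data/block-device-mapping/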
270. By default, EBS volumes that are created and attached to an instance at launch are deleted when that instance is terminated. You can modify this behavior by changing the value of the flag _____ to false when you launch the instance
- A. DeleteOnTermination
- B. RemoveOnDeletion
- C. RemoveOnTermination
- D. TerminateOnDeletion
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html#preserving-volumes-on-termination
- When an instance terminates, Amazon EC2 uses the value of the DeleteOnTermination attribute for each attached Amazon EBS volume to determine whether to preserve or delete the volume.
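- A minimal launch sketch (the AMI ID is hypothetical) that sets the flag to false so the root volume survives termination:
# preserve the root EBS volume after the instance terminates
aws ec2 run-instances --image-id ami-1a2b3c4d --instance-type t2.micro \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"DeleteOnTermination":false}}]'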
271. What are the initial settings of a user-created security group?
- A. Allow all inbound traffic and Allow no outbound traffic
- B. Allow no inbound traffic and Allow no outbound traffic
- C. Allow no inbound traffic and Allow all outbound traffic
- D. Allow all inbound traffic and Allow all outbound traffic
272. Will my standby RDS instance be in the same Region as my primary?
- A. Only for Oracle RDS types
- B. Yes
- C. Only if configured at launch
- D. No
273. What does Amazon Elastic Beanstalk provide?
- A. A scalable storage appliance on top of Amazon Web Services.
- B. An application container on top of Amazon Web Services.
- C. A service by this name doesn’t exist.
- D. A scalable cluster of EC2 instances.
274. True or False: When using IAM to control access to your RDS resources, the key names that can be used are case sensitive. For example, aws:CurrentTime is NOT equivalent to AWS:currenttime.
- A. TRUE
- B. FALSE
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAM.Conditions.html
275. What will be the status of the snapshot until the snapshot is complete?
- A. running
- B. working
- C. progressing
- D. pending
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html
- Although you can take a snapshot of a volume while a previous snapshot of that volume is in the pending status, having multiple pending snapshots of a volume may result in reduced volume performance until the snapshots complete.
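- The state can be polled from the CLI (snapshot ID hypothetical); it reads "pending" until the snapshot completes:
aws ec2 describe-snapshots --snapshot-ids snap-1234567890abcdef0 --query 'Snapshots[].State'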
276. Can we attach an EBS volume to more than one EC2 instance at the same time?
- A. No
- B. Yes.
- C. Only EC2-optimized EBS volumes.
- D. Only in read mode.
277. True or False: Automated backups are enabled by default for a new DB Instance.
- A. TRUE
- B. FALSE
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html
278. What does the AWS Storage Gateway provide?
- A. It allows to integrate on-premises IT environments with Cloud Storage.
- B. A direct encrypted connection to Amazon S3.
- C. It’s a backup solution that provides an on-premises Cloud storage.
- D. It provides an encrypted SSL endpoint for backups in the Cloud.
- http://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html
279. Amazon RDS automated backups and DB Snapshots are currently supported for only the __________ storage engine
- A. InnoDB
- B. MyISAM
280. How many relational database engines does RDS currently support?
- A. Three: MySQL, Oracle and Microsoft SQL Server.
- B. Just two: MySQL and Oracle.
- C. Five: MySQL, PostgreSQL, MongoDB, Cassandra and SQLite.
- D. Just one: MySQL.
281. Fill in the blanks: The base URI for all requests for instance metadata is ___________
- A. http://254.169.169.254/latest/
- B. http://169.169.254.254/latest/
- C. http://127.0.0.1/latest/
- D. http://169.254.169.254/latest/
282. While creating the snapshots using the command line tools, which command should I be using?
- A. ec2-deploy-snapshot
- B. ec2-fresh-snapshot
- C. ec2-create-snapshot
- D. ec2-new-snapshot
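- For reference, the equivalent in the current AWS CLI (volume ID hypothetical):
aws ec2 create-snapshot --volume-id vol-1234567890abcdef0 --description "nightly backup"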
283. Typically, you want your application to check whether a request generated an error before you spend any time processing results. The easiest way to find out if an error occurred is to look for an __________ node in the response from the Amazon RDS API.
- A. Incorrect
- B. Error
- C. FALSE
284. What are the two permission types used by AWS?
- A. Resource-based and Product-based
- B. Product-based and Service-based
- C. Service-based
- D. User-based and Resource-based
285. In Amazon CloudWatch, which metric should I check to ensure that my DB Instance has enough free storage space?
- A. FreeStorage
- B. FreeStorageSpace
- C. FreeStorageVolume
- D. FreeDBStorageSpace
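- As a sketch (DB instance identifier and dates hypothetical), the metric can be pulled from the AWS/RDS namespace:
# average free storage, sampled hourly over one day
aws cloudwatch get-metric-statistics --namespace AWS/RDS --metric-name FreeStorageSpace \
  --dimensions Name=DBInstanceIdentifier,Value=mydb \
  --start-time 2017-06-01T00:00:00Z --end-time 2017-06-02T00:00:00Z \
  --period 3600 --statistics Average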
286. Amazon RDS DB snapshots and automated backups are stored in
- A. Amazon S3
- B. Amazon ECS Volume
- C. Amazon RDS
- D. Amazon EMR
287. What is the maximum key length of a tag?
- A. 512 Unicode characters
- B. 64 Unicode characters
- C. 256 Unicode characters
- D. 128 Unicode characters
- http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/allocation-tag-restrictions.html
- Maximum key length: 128 Unicode characters
- Maximum value length: 256 Unicode characters
288. Groups can’t _____.
- A. be nested more than 3 levels
- B. be nested at all
- C. be nested more than 4 levels
- D. be nested more than 2 levels
289. You must increase storage size in increments of at least _____ %
- A. 40
- B. 20
- C. 50
- D. 10
- http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html
290. Changes to the backup window take effect ______.
- A. from the next billing cycle
- B. after 30 minutes
- C. immediately
- D. after 24 hours
291. Using Amazon CloudWatch's Free Tier, what is the frequency of metric updates that you receive?
- A. 5 minutes
- B. 500 milliseconds.
- C. 30 seconds
- D. 1 minute
- https://aws.amazon.com/cloudwatch/pricing/
- You can get started with Amazon CloudWatch for free. Many applications should be able to operate within these free tier limits.
- New and existing customers also receive 3 dashboards of up to 50 metrics each per month at no additional charge
- Basic Monitoring metrics (at five-minute frequency) for Amazon EC2 instances are free of charge, as are all metrics for Amazon EBS volumes, Elastic Load Balancers, and Amazon RDS DB instances.
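- If one-minute granularity is needed, detailed monitoring can be enabled per instance at additional cost (instance ID hypothetical):
aws ec2 monitor-instances --instance-ids i-1234567890abcdef0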
292. Which is the default region in AWS?
- A. eu-west-1
- B. us-east-1
- C. us-east-2
- D. ap-southeast-1
293. What are the Amazon EC2 API tools?
- A. They don’t exist. The Amazon EC2 AMI tools, instead, are used to manage permissions.
- B. Command-line tools to the Amazon EC2 web service.
- C. They are a set of graphical tools to manage EC2 instances.
- D. They don’t exist. The Amazon API tools are a client interface to Amazon Web Services.
294. What are the two types of licensing options available for using Amazon RDS for Oracle?
- A. BYOL and Enterprise License
- B. BYOL and License Included
- C. Enterprise License and License Included
- D. Role based License and License Included
- https://aws.amazon.com/rds/oracle/
- You can run Amazon RDS for Oracle under two different licensing models – “License Included” and “Bring-Your-Own-License (BYOL)”.
295. What does a “Domain” refer to in Amazon SWF?
- A. A security group in which only tasks inside can communicate with each other
- B. A special type of worker
- C. A collection of related Workflows
- D. The DNS record for the Amazon SWF service
- http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-domains.html
- Domains provide a way of scoping Amazon SWF resources within your AWS account
296. EBS Snapshots occur _____
- A. Asynchronously
- B. Synchronously
- C. Weekly
297. Disabling automated backups ______ disable the point-in-time recovery.
- A. if configured to can
- B. will never
- C. will
299. Is creating a Read Replica of another Read Replica supported?
- A. Only in certain regions
- B. Only with MSSQL based RDS
- C. Only for Oracle RDS types
- D. No
- FAQ: Can I create a Read Replica of another Read Replica? Amazon Aurora, Amazon RDS for MySQL and MariaDB: You can create a second-tier Read Replica from an existing first-tier Read Replica. By creating a second-tier Read Replica, you may be able to move some of the replication load from the master database instance to a first-tier Read Replica. Please note that a second-tier Read Replica may lag further behind the master because of additional replication latency introduced as transactions are replicated from the master to the first-tier replica and then to the second-tier replica. Amazon RDS for PostgreSQL: Read Replicas of Read Replicas are not currently supported.
300. Can Amazon S3 uploads resume on failure or do they need to restart?
- A. Restart from beginning
- B. You can resume them, if you flag the “resume on failure” option before uploading.
- C. Resume on failure
- D. Depends on the file size
- https://docs.aws.amazon.com/aws-sdk-php/v3/guide/service/s3-multipart-upload.html
- Multipart upload
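- With multipart upload, each part is uploaded (and retried) independently, so a failed transfer resumes from the last completed part instead of restarting. The high-level CLI commands use multipart automatically for large objects; a sketch with a hypothetical bucket:
aws s3 cp ./large-backup.tar s3://my-bucket/large-backup.tar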
301. Which of the following cannot be used in Amazon EC2 to control who has access to specific Amazon EC2 instances?
- A. Security Groups
- B. IAM System (Not used in EC2)
- C. SSH keys
- D. Windows passwords
303. Fill in the blanks: _________ let you categorize your EC2 resources in different ways, for example, by purpose, owner, or environment.
- A. wildcards
- B. pointers
- C. Tags
- D. special filters
304. How can I change the security group membership for interfaces owned by other AWS services, such as Elastic Load Balancing?
- A. By using the service-specific console or API/CLI commands
- B. None of these
- C. Using Amazon EC2 API/CLI
- D. using all these methods
305. Out of the striping options available for EBS volumes, which one has the following disadvantage: 'Doubles the amount of I/O required from the instance to EBS compared to RAID 0, because you're mirroring all writes to a pair of volumes, limiting how much you can stripe.'?
- A. Raid 0
- B. RAID 1+0 (RAID 10)
- C. Raid 1
- D. Raid
306. What is the maximum write throughput I can provision for a single DynamoDB table?
- A. 1,000 write capacity units
- B. 100,000 write capacity units
- C. DynamoDB is designed to scale without limits, but if you go beyond 10,000 you have to contact AWS first.
- D. 10,000 write capacity units
- Q: What is the maximum throughput I can provision for a single DynamoDB table? DynamoDB is designed to scale without limits. However, if you wish to exceed throughput rates of 10,000 write capacity units or 10,000 read capacity units for an individual table, you must first contact Amazon through this online form. If you wish to provision more than 20,000 write capacity units or 20,000 read capacity units from a single subscriber account, you must first contact us using the form described above.
307. What does the following command do with respect to the Amazon EC2 security groups? ec2-revoke RevokeSecurityGroupIngress
- A. Removes one or more security groups from a rule.
- B. Removes one or more security groups from an Amazon EC2 instance.
- C. Removes one or more rules from a security group.
- D. Removes a security group from our account
- http://docs.aws.amazon.com/cli/latest/reference/ec2/revoke-security-group-ingress.html
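- A sketch of the current CLI equivalent, removing a single SSH ingress rule from a hypothetical group:
aws ec2 revoke-security-group-ingress --group-name appserver --protocol tcp --port 22 --cidr 0.0.0.0/0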
308. Can a ‘user’ be associated with multiple AWS accounts?
- A. No
- B. Yes
309. True or False: Manually created DB Snapshots are deleted after the DB Instance is deleted.
- A. TRUE
- B. FALSE
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_DeleteInstance.html
- Important
- If you choose not to create a final DB snapshot, you will not be able to later restore the DB instance to its final state. When you delete a DB instance, all automated backups are deleted and cannot be recovered. Manual DB snapshots of the instance are not deleted.
310. Can I move a Reserved Instance from one Region to another?
- A. No
- B. Only if they are moving into GovCloud
- C. Yes
- D. Only if they are moving to US East from another region
- https://aws.amazon.com/rds/faqs/
- Q: Can I move a reserved instance from one Region or Availability Zone to another? Each reserved instance is associated with a specific Region, which is fixed for the lifetime of the reservation and cannot be changed. Each reservation can, however, be used in any of the available AZs within the associated Region.
311. What is Amazon Glacier?
- A. You mean Amazon “Iceberg”: it’s a low-cost storage service.
- B. A security tool that allows to “freeze” an EBS volume and perform computer forensics on it.
- C. A low-cost storage service that provides secure and durable storage for data archiving and backup.
- D. It’s a security tool that allows to “freeze” an EC2 instance and perform computer forensics on it.
312. What is the durability of S3 RRS?
- A. 99.99%
- B. 99.95%
- C. 99.995%
- D. 99.999999999%
313. What does specifying the mapping /dev/sdc=none when launching an instance do?
- A. Prevents /dev/sdc from creating the instance.
- B. Prevents /dev/sdc from deleting the instance.
- C. Set the value of /dev/sdc to ‘zero’.
- D. Prevents /dev/sdc from attaching to the instance.
314. Is Federated Storage Engine currently supported by Amazon RDS for MySQL?
- A. Only for Oracle RDS instances
- B. No
- C. Yes
- D. Only in VPC
- The MySQL Federated storage engine for the MySQL relational database management system is a storage engine which allows a user to create a table that is a local representation of a foreign (remote) table.
- FAQ: What storage engines does Amazon RDS for MySQL support? The Point-In-Time-Restore and Snapshot Restore features of Amazon RDS for MySQL require a crash-recoverable storage engine and are supported for the InnoDB storage engine only. While MySQL supports multiple storage engines with varying capabilities, not all of them are optimized for crash recovery and data durability. For example, the MyISAM storage engine does not support reliable crash recovery and may result in lost or corrupt data when MySQL is restarted after a crash, preventing Point-In-Time-Restore or Snapshot Restore from working as intended. However, if you still choose to use MyISAM with Amazon RDS, following these steps may be helpful in certain scenarios for DB snapshot restore functionality. Federated Storage Engine is currently not supported by Amazon RDS for MySQL.
315. Is there a limit to how many groups a user can be in?
- A. Yes for all users
- B. Yes for all users except root
- C. No
- D. Yes unless special permission granted
- http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-limits.html
Groups in an AWS account: 100
Groups a user can be a member of: 10
316. True or False: When you perform a restore operation to a point in time or from a DB Snapshot, a new DB Instance is created with a new endpoint.
- A. FALSE
- B. TRUE
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html
- When you restore the DB instance, you provide the name of the DB snapshot to restore from, and then provide a name for the new DB instance that is created from the restore. You cannot restore from a DB snapshot to an existing DB instance; a new DB instance is created when you restore.
317. A/An _____ acts as a firewall that controls the traffic allowed to reach one or more instances.
- A. security group (Instance Level)
- B. ACL (VPC level)
- C. IAM
- D. Private IP Addresses
318. Will my standby RDS instance be in the same Availability Zone as my primary?
- A. Only for Oracle RDS types
- B. Yes
- C. Only if configured at launch
- D. No
- https://aws.amazon.com/rds/details/multi-az/?nc1=h_ls
- Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ)
- https://aws.amazon.com/rds/faqs/
- Q: Will my standby be in the same Region as my primary? Yes. Your standby is automatically provisioned in a different Availability Zone of the same Region as your DB instance primary.
319. While launching an RDS DB instance, on which page can I select the Availability Zone?
- A. REVIEW
- B. DB INSTANCE DETAILS
- C. MANAGEMENT OPTIONS
- D. ADDITIONAL CONFIGURATION (Configure Advanced Settings)
320. What does the following command do with respect to the Amazon EC2 security groups?
ec2-create-group CreateSecurityGroup
- A. Groups the user created security groups in to a new group for easy access.
- B. Creates a new security group for use with your account.
- C. Creates a new group inside the security group.
- D. Creates a new rule inside the security group.
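- The current CLI equivalent (group name and description hypothetical):
aws ec2 create-security-group --group-name appserver --description "App server security group"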
321. In the Launch Db Instance Wizard, where can I select the backup and maintenance options?
- A. Under DB INSTANCE DETAILS
- B. Under REVIEW
- C. Under MANAGEMENT OPTIONS
- D. Under ENGINE SELECTION
322. What happens to the data on an instance if the instance reboots (intentionally or unintentionally)?
- A. Data will be lost
- B. Data persists
- C. Data may persist however cannot be sure
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
- Instance Store Lifetime
- You can specify instance store volumes for an instance only when you launch it. You can’t detach an instance store volume from one instance and attach it to a different instance.
- The data in an instance store persists only during the lifetime of its associated instance. If an instance reboots (intentionally or unintentionally), data in the instance store persists. However, data in the instance store is lost under the following circumstances:
- The underlying disk drive fails
- The instance stops
- The instance terminates
323. How many types of block devices does Amazon EC2 support ?
- A. 2
- B. 3
- C. 4
- D. 1
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html
- Amazon EC2 supports two types of block devices:
- Instance store volumes (virtual devices whose underlying hardware is physically attached to the host computer for the instance) and EBS volumes (remote storage devices)
324. Provisioned IOPS Costs: you are charged for the IOPS and storage whether or not you use them in a given month.
- A. FALSE
- B. TRUE
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#Overview.ProvisionedIOPS-cost
Provisioned IOPS Storage Costs
Because Provisioned IOPS storage reserves resources for your use, you are charged for the resources whether or not you use them in a given month. When you use Provisioned IOPS storage, you are not charged the monthly Amazon RDS I/O charge. If you prefer to pay only for I/O that you consume, a DB instance that uses magnetic storage may be a better choice. For Amazon RDS pricing information, see the Amazon RDS product page.
325. IAM provides several policy templates you can use to automatically assign permissions to the groups you create. The _____ policy template gives the Admins group permission to access all account resources, except your AWS account information
- A. Read Only Access
- B. Power User Access
- C. AWS Cloud Formation Read Only Access
- D. Administrator Access
- https://forums.aws.amazon.com/servlet/JiveServlet/download/24-112161-408433-8153/AWS%20User%20Permissions.pdf
- Administrator Access Provides full access to AWS services and resources.
- Power User Access Provides full access to AWS services and resources, but does not allow management of Users and groups.
326. While performing the volume status checks, if the status is insufficient-data, what does it mean?
- A. the checks may still be in progress on the volume
- B. the check has passed
- C. the check has failed
327. IAM's Policy Evaluation Logic always starts with a default ____________ for every request, except for those that use the AWS account's root security credentials.
- A. Permit
- B. Deny
- C. Cancel
328. By default, when an EBS volume is attached to a Windows instance, it may show up as any drive letter on the instance. You can change the settings of the _____ Service to set the drive letters of the EBS volumes per your specifications.
- A. EBSConfig Service
- B. AMIConfig Service
- C. Ec2Config Service
- D. Ec2-AMIConfig Service
- http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/device_naming.html
- By default, when an EBS volume is attached to a Windows instance, it can show up as any drive letter on the instance. You can change the settings of the Ec2Config service to set the drive letters of the EBS volumes per your specifications
329. For each DB Instance class, what is the maximum size of associated storage capacity?
- A. 5GB
- B. 1TB
- C. 2TB
- D. 500GB
- E. 6TB
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
Amazon RDS Storage Types
You can create MySQL, MariaDB, PostgreSQL, and Oracle RDS DB instances with up to 6TB of storage and SQL Server RDS DB instances with up to 4TB of storage when using the Provisioned IOPS and General Purpose (SSD) storage types
330. SQL Server __________ store logins and passwords in the master database.
- A. can be configured to but by default does not
- B. doesn’t
- C. does
331. What is Oracle SQL Developer?
- A. An AWS developer who is an expert in Amazon RDS using both the Oracle and SQL Server DB engines
- B. A graphical Java tool distributed without cost by Oracle.
- C. It is a variant of the SQL Server Management Studio designed by Microsoft to support Oracle DBMS functionalities
- D. A different DBMS released by Microsoft free of cost
- http://www.oracle.com/technetwork/developer-tools/sql-developer/what-is-sqldev-093866.html
- Oracle SQL Developer is the Oracle Database IDE. A free graphical user interface, Oracle SQL Developer allows database users and administrators to do their database tasks in fewer clicks and keystrokes. A productivity tool, SQL Developer’s main objective is to help the end user save time and maximize the return on investment in the Oracle Database technology stack
332. Does Amazon RDS allow direct host access via Telnet, Secure Shell (SSH), or Windows Remote Desktop Connection?
- A. Yes
- B. No
- C. Depends on if it is in VPC or not
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.html
- Note: Amazon RDS supports access to databases using any standard SQL client application. Amazon RDS does not allow direct host access.
333. To view information about an Amazon EBS volume, open the Amazon EC2 console at https://console.aws.amazon.com/ec2/, click __________ in the Navigation pane.
- A. EBS
- B. Describe
- C. Details
- D. Volumes
334. You must increase storage size in increments of at least __________ %
- A. 40
- B. 30
- C. 10
- D. 20
335. Using Amazon IAM, can I give permission based on organizational groups?
- A. Yes but only in certain cases
- B. No
- C. Yes always
336. While creating the snapshots using the API, which Action should I be using?
- A. MakeSnapShot
- B. FreshSnapshot
- C. DeploySnapshot
- D. CreateSnapshot
337. What is an isolated database environment running in the cloud (Amazon RDS) called?
- A. DB Instance
- B. DB Server
- C. DB Unit
- D. DB Volume
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.html
- A DB instance is an isolated database environment running in the cloud
338. While signing REST/Query requests, for additional security, you should transmit your requests using Secure Sockets Layer (SSL) by using _________
- A. HTTP
- B. Internet Protocol Security(IPsec)
- C. TLS (Transport Layer Security)
- D. HTTPS
339. What happens to the I/O operations while you take a database snapshot?
- A. I/O operations to the database are suspended for a few minutes while the backup is in progress.
- B. I/O operations to the database are sent to a Replica (if available) for a few minutes while the backup is in progress.
- C. I/O operations will be functioning normally
- D. I/O operations to the database are suspended for an hour while the backup is in progress
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html
- Amazon RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases. Creating this DB snapshot on a Single-AZ DB instance results in a brief I/O suspension that can last from a few seconds to a few minutes, depending on the size and class of your DB instance. Multi-AZ DB instances are not affected by this I/O suspension since the backup is taken on the standby.
340. Read Replicas require a transactional storage engine and are only supported for the _________ storage engine
- A. OracleISAM
- B. MSSQLDB
- C. InnoDB
- D. MyISAM
341. When running my DB Instance as a Multi-AZ deployment, can I use the standby for read or write operations?
- A. Yes
- B. Only with MSSQL based RDS
- C. Only for Oracle RDS instances
- D. No
342. When should I choose Provisioned IOPS over Standard RDS storage?
- A. If you have batch-oriented workloads
- B. If you use production online transaction processing (OLTP) workloads.
- C. If you have workloads that are not sensitive to consistent performance
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
- Provisioned IOPS storage is optimized for I/O intensive, online transaction processing (OLTP) workloads that have consistent performance requirements. Provisioned IOPS helps performance tuning.
343. In the ‘Detailed’ monitoring data available for your Amazon EBS volumes, Provisioned IOPS volumes automatically send _____ minute metrics to Amazon CloudWatch.
- A. 3
- B. 1
- C. 5
- D. 2
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-volume-status.html#using_cloudwatch_ebs
Basic: Data is available automatically in 5-minute periods at no charge. This includes data for the root device volumes for EBS-backed instances.
Detailed: Provisioned IOPS SSD (io1) volumes automatically send one-minute metrics to CloudWatch.
344. What is the minimum charge for the data transferred between Amazon RDS and Amazon EC2 Instances in the same Availability Zone?
- A. USD 0.10 per GB
- B. No charge. It is free.
- C. USD 0.02 per GB
- D. USD 0.01 per GB
345. Are Reserved Instances available for Multi-AZ Deployments?
- A. Only for Cluster Compute instances
- B. Yes for all instance types
- C. Only for M3 instance types
- D. No
- Q: Are reserved instances available for Multi-AZ deployments? Yes. When you call the DescribeReservedDBInstancesOfferings API or describe-reserved-db-instances-offerings command, simply look for the Multi-AZ options listed among the DB Instance configurations available for purchase. If you want to purchase a reservation for a DB instance with synchronous replication across multiple Availability Zones, specify one of these offerings in your PurchaseReservedDBInstancesOffering call.
346. Which service enables AWS customers to manage users and permissions in AWS?
- A. AWS Access Control Service (ACS)
- B. AWS Identity and Access Management (IAM)
- C. AWS Identity Manager (AIM)
347. Which Amazon Storage behaves like raw, unformatted, external block devices that you can attach to your instances?
- A. None of these.
- B. Amazon Instance Storage
- C. Amazon EBS
- D. All of these
348. Which Amazon service can I use to define a virtual network that closely resembles a traditional data center?
- A. Amazon VPC
- B. Amazon ServiceBus
- C. Amazon EMR
- D. Amazon RDS
349. Fill in the blanks : _____ let you categorize your EC2 resources in different ways, for example, by purpose, owner, or environment.
- A. Tags
- B. special filters
- C. pointers
- D. functions
350. What is the command line instruction for running the remote desktop client in Windows?
- A. desk.cpl
- B. mstsc
351. Amazon RDS automated backups and DB Snapshots are currently supported for only the ______ storage engine
- A. MyISAM
- B. InnoDB
352. MySQL installations default to port _____.
- A. 3306
- B. 443
- C. 80
- D. 1158
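- For example, connecting to an RDS MySQL instance on the default port (endpoint and user hypothetical):
mysql -h mydb.abc123xyz.us-east-1.rds.amazonaws.com -P 3306 -u admin -p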
353. If you have chosen Multi-AZ deployment, in the event of a planned or unplanned outage of your primary DB Instance, Amazon RDS automatically switches to the standby replica. The automatic failover mechanism simply changes the ______ record of the main DB Instance to point to the standby DB Instance.
- A. DNAME
- B. CNAME
- C. TXT
- D. MX
354. If I want to run a database in an Amazon instance, which is the most recommended Amazon storage option?
- A. Amazon Instance Storage
- B. Amazon EBS
- C. You can’t run a database inside an Amazon instance.
- D. Amazon S3
355. In regards to IAM you can edit user properties later, but you cannot use the console to change the ___________.
- A. user name
- B. password
- C. default group
356. Can I test my DB Instance against a new version before upgrading?
- A. No
- B. Yes
- C. Only in VPC
366. True or False: If you add a tag that has the same key as an existing tag on a DB Instance, the new value overwrites the old value.
- A. FALSE
- B. TRUE
367. Can I use Provisioned IOPS with VPC?
- A. Only Oracle based RDS
- B. No
- C. Only with MSSQL based RDS
- D. Yes for all RDS instances
368. Making your snapshot public shares all snapshot data with everyone. Can the snapshots with AWS Marketplace product codes be made public?
- A. No
- B. Yes
370. Fill in the blanks: “To ensure failover capabilities, consider using a _____ for incoming traffic on a network interface”.
- A. primary public IP
- B. secondary private IP
- C. secondary public IP
- D. add on secondary IP
371. If I have multiple Read Replicas for my master DB Instance and I promote one of them, what happens to the rest of the Read Replicas?
- A. The remaining Read Replicas will still replicate from the older master DB Instance
- B. The remaining Read Replicas will be deleted
- C. The remaining Read Replicas will be combined to one read replica
372. What does Amazon CloudFormation provide?
- A. The ability to setup Autoscaling for Amazon EC2 instances.
- B. None of these.
- C. A templated resource creation for Amazon Web Services.
- D. A template to map network resources for Amazon Web Services.
373. Can I encrypt connections between my application and my DB Instance using SSL?
- A. No
- B. Yes
- C. Only in VPC
- D. Only in certain regions
374. What are the four levels of AWS Premium Support?
- A. Basic, Developer, Business, Enterprise
- B. Basic, Startup, Business, Enterprise
- C. Free, Bronze, Silver, Gold
- D. All support is free
375. What can I access by visiting the URL: http://status.aws.amazon.com/?
- A. Amazon Cloud Watch
- B. Status of the Amazon RDS DB
- C. AWS Service Health Dashboard
- D. AWS Cloud Monitor
376. An existing application stores sensitive information on a non-boot Amazon EBS data volume attached to an Amazon Elastic Compute Cloud instance. Which of the following approaches would protect the sensitive data on an Amazon EBS volume?
- A. Upload your customer keys to AWS CloudHSM. Associate the Amazon EBS volume with AWS CloudHSM. Re-mount the Amazon EBS volume.
- B. Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume.
- C. Unmount the EBS volume. Toggle the encryption attribute to True. Re-mount the Amazon EBS volume.
- D. Snapshot the current Amazon EBS volume. Restore the snapshot to a new, encrypted Amazon EBS volume. Mount the Amazon EBS volume.
1. In RDS, you are responsible for maintaining the OS, application security patching, antivirus, etc.
- A. True
- B. False
- https://d0.awsstatic.com/whitepapers/aws-security-whitepaper.pdf
- For EC2 instances, you're responsible for management of the guest OS (including updates and security patches), any application software or utilities you install on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance. These are basically the same security tasks that you're used to performing no matter where your servers are located. Amazon RDS or Amazon Redshift provide all of the resources you need in order to perform a specific task, but without the configuration work that can come with them. With managed services, you DON'T have to worry about launching and maintaining instances, patching the guest OS or database, or replicating databases; AWS handles that for you. But as with all services, you should protect your AWS Account credentials and set up individual user accounts with AWS Identity and Access Management (IAM) so that each of your users has their own credentials and you can implement segregation of duties. We also recommend using multi-factor authentication (MFA) with each account, requiring the use of SSL/TLS to communicate with your AWS resources, and setting up API/user activity logging with AWS CloudTrail. For more information about additional measures you can take, refer to the AWS Security Best Practices whitepaper and recommended reading on the AWS Security Resources webpage.
2. In RDS, what is the maximum value I can set for my backup retention period?
- A. 15 Days
- B. 30 Days
- C. 35 Days
- D. 45 Days
3. In RDS, what is the maximum size for a Microsoft SQL Server DB with SQL Server Express edition?
- 1TB per Database
- 4TB per Database
- 300GB per Database
- 10GB per Database
- There are two different limits: that of the DB (10GB), and that of the DB instance server storage (300GB). A DB server instance could quite easily host several DBs, or a DB and support files such as logs, dumps, and flat file backups. Please see the AWS documentation for full details. Further information:
https://d0.awsstatic.com/whitepapers/rdbms-in-the-cloud-sql-server-on-aws.pdf
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SQLServer.html
SQL Server Express Edition limits storage to 10GB/database, limiting overall useable database storage to 300GB per DB instance (max. 30 databases/instance allowed). Any additional storage may be unusable.
4. What is the underlying Hypervisor for EC2?
- Hyper-V
- ESX
- Xen
- OVM
5. The AWS platform is PCI DSS Level 1 certified.
- True
- False
6. The AWS platform consists of how many regions?
- The AWS Cloud operates 42 Availability Zones within 16 geographic Regions around the world.
7. How many copies of my data does RDS – Aurora store by default?
- 6
8. Due to international monetary regulations issued by the IMF, a large multi-national banking organization requires that all their Australian customers’ data must not leave the Australian jurisdiction. Similarly, all Japanese customers’ data may not leave the Japanese jurisdiction without explicit permission from the IMF. While registering, a user must include their residential address as part of their user profile. What steps should be taken to enforce these regulations on a web-based application running on EC2?
- Deploy your application on multiple EC2 instances in multiple regions. Then, use Route 53's latency-based routing capabilities to route traffic to the appropriate region based on a user's latency.
- Deploy your application on EC2 instances in multiple regions, and then use an elastic load balancer with session stickiness to route traffic to the appropriate region based on the user's profile country.
- Due to the strict regulations, you should use a third-party data provider to verify the user's location based on their profile. It would not be appropriate to rely on latency-based routing, as this would not always be 100% accurate.
- Run Amazon EC2 instances in multiple AWS Availability Zones in a single region, and leverage an elastic load balancer with session stickiness to route traffic to the appropriate zone based on the user's profile.
9. You are hosting a MySQL database on the root volume of an EC2 instance. The database is using a large number of IOPS, and you need to increase the number of IOPS available to it. What should you do?
- Migrate the database to an S3 bucket.
- Migrate the database to Glacier.
- Add 4 additional EBS SSD volumes and create a RAID 10 using these volumes.
- Use CloudFront to cache the database.
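The volume provisioning behind the RAID 10 option can be scripted; the array itself is then assembled inside the guest OS (for example with mdadm), which the API cannot do for you. A sketch with placeholder instance ID, AZ, sizes, and device names:

```python
# Provision and attach four SSD (gp2) volumes destined for a RAID 10 set.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id, az = "i-0123456789abcdef0", "us-east-1a"

for device in ("/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"):
    vol = ec2.create_volume(Size=100, VolumeType="gp2", AvailabilityZone=az)
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    ec2.attach_volume(VolumeId=vol["VolumeId"],
                      InstanceId=instance_id, Device=device)
# The RAID 10 array is then built from the four devices in the guest OS.
```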
10. Amazon S3 buckets in all regions other than “US Standard” provide read-after-write consistency for PUTS of new objects.
- True
- False
- From the Amazon S3 FAQ ("Wasn't there a US Standard region?"): We renamed the US Standard Region to US East (Northern Virginia) Region to be consistent with AWS regional naming conventions. There is no change to the endpoint and you do not need to make any changes to your application.
11. Amazon Redshift uses which block size for its columnar storage?
- 2KB
- 8KB
- 16KB
- 1024KB / 1MB
12. Placement Groups can be created across 2 or more Availability Zones.
- True
- False
13. You are a systems administrator and you need to monitor the health of your production environment. You decide to do this using CloudWatch. However, you notice that you cannot see the health of every important metric in the default dashboard. When monitoring the health of your EC2 instances, for which of the following metrics do you need to design a custom CloudWatch metric?
- CPU Usage
- Memory usage
- Disk read operations
- Network in
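Memory usage is the one metric in this list that CloudWatch does not collect by default, so it must be pushed from the instance as a custom metric. A hedged sketch; the `Custom/EC2` namespace is arbitrary, and sampling memory with the third-party psutil library is just one way to read it locally:

```python
# Publish instance memory usage as a custom CloudWatch metric.
import boto3
import psutil  # third-party; any local memory probe would do

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",            # arbitrary namespace for this example
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Value": psutil.virtual_memory().percent,
        "Unit": "Percent",
    }],
)
```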
14. You work for a toy company that has a busy online store. As you are approaching Christmas, you find that your store is getting more and more traffic. You ensure that the web tier of your store is behind an Auto Scaling group. However, you notice that the web tier is frequently scaling, sometimes multiple times in an hour, only to scale back after peak usage. You need to keep Auto Scaling from scaling up and down so rapidly. Which of the following options would help you to achieve this?
- Configure Auto Scaling to terminate your oldest instances first, then adjust your CloudWatch alarm.
- Configure Auto Scaling to terminate your newest instances first, then adjust your CloudWatch alarm.
- Change your Auto Scaling policy so that it only scales at scheduled times.
- Modify the Auto Scaling group cool-down timers & modify the Amazon CloudWatch alarm period that triggers your Auto Scaling scale down policy.
- http://docs.aws.amazon.com/autoscaling/latest/userguide/Cooldown.html
- The Auto Scaling cooldown period is a configurable setting for your Auto Scaling group that helps to ensure that Auto Scaling doesn’t launch or terminate additional instances before the previous scaling activity takes effect.
- Important: Cooldown periods are not supported for step scaling policies or scheduled scaling.
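A hedged sketch of the correct option: lengthening the group's cool-down and the alarm period that drives the scale-down policy. The group name, thresholds, periods, and the policy ARN are all placeholders:

```python
# Lengthen the Auto Scaling cool-down and the CloudWatch alarm period
# behind the scale-down policy.
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.update_auto_scaling_group(AutoScalingGroupName="web-asg",
                                      DefaultCooldown=600)

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-scale-down",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,               # a longer period smooths short-lived spikes
    EvaluationPeriods=4,
    Threshold=30.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:..."],  # placeholder ARN
)
```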
15. You’ve been tasked with building a new application with a stateless web tier for a company that produces reusable rocket parts. Which three services could you use to achieve this?
- AWS Storage Gateway, ElastiCache, and ELB
- ELB, ElastiCache, and RDS
- CloudWatch, RDS, and DynamoDB
- RDS, DynamoDB, and ElastiCache
16. Your company has decided to set up a new AWS account for test and dev purposes. They already use AWS for production, but would like a new account dedicated to test and dev so as to not accidentally break the production environment. You launch an exact replica of your production environment using a CloudFormation template that your company uses in production. However, CloudFormation fails. You use the exact same CloudFormation template in production, so the failure must have something to do with your new AWS account. The CloudFormation template is trying to launch 60 new EC2 instances in a single availability zone. After some research you discover that the problem is ________.
- For all new AWS accounts, there is a soft limit of 20 EC2 instances per region. You should submit the limit increase form and retry the template after your limit has been increased.
- For all new AWS accounts, there is a soft limit of 20 EC2 instances per availability zone. You should submit the limit increase form and retry the template after your limit has been increased.
- You cannot launch more than 20 instances in your default VPC. Instead, reconfigure the CloudFormation template to provision the instances in a custom VPC.
- Your CloudFormation template is configured to use the parent account and not the new account. Change the account number in the CloudFormation template and relaunch the template.
17. You work for a cosmetic company which has their production website on AWS. The site itself is in a two-tier configuration with web servers in the front end and database servers at the back end. The site uses Elastic Load Balancing and Auto Scaling. The databases maintain consistency by replicating changes to each other as and when they occur. This requires the databases to have extremely low latency. Your website needs to be highly redundant and must be designed so that if one availability zone goes offline and Auto Scaling cannot launch new instances in the remaining Availability Zones, the site will not go offline. How can the current architecture be enhanced to ensure this?
- Correct Answer: Deploy your site in three different AZs within the same region. Configure the Auto Scaling minimum to handle 50 percent of the peak load per zone.
- Deploy your website in 2 different regions. Configure Route53 with a failover routing policy, and set up health checks on the primary site.
- Deploy your site in three different AZs within the same region. Configure the Auto Scaling minimum to handle 33 percent of the peak load per zone.
- Deploy your website in 2 different regions. Configure Route53 with Weighted Routing. Assign a weight of 25% to region 1 and a weight of 75% to region 2.
EC2
Your client has been experiencing problems with his aging in-house infrastructure, and is extremely concerned about managing the cost of maintaining his online presence. After deciding that the cost of developing a sound DR plan more than makes up for the negative impact of being off-line, the board has directed you to prepare a proposal that achieves an RTO of 20 hours, an RPO of 1 hour, and keeps the costs of meeting those target time-windows to a minimum. They have also mandated the use of the AWS Storage Gateway to mitigate the risk associated with a catastrophic NAS failure. Which of the following solutions best meets the requirements?
- Provide their engineering staff with an AWS account, and ask them to rebuild all the servers in AWS to form a fully functional Hot Standby environment. Use Storage Gateway to copy the data on the NAS to S3 so that it can be accessed by the database servers.
- Provide their engineering staff with an AWS account. Create a small, under-sized DR DB instance and use application synchronization to keep the DR instance synchronized within 30 seconds of the production instance. Build one of each web/app server and keep these patched and on-line. Provide the Ops team with written instructions explaining how to upgrade the DB host to a full sized instance within 20 minutes.
- Correct Answer: Work with the customer’s engineers to identify the key servers and data. Help them set up an AWS account with IAM users, groups, and roles. Build templates of the critical web/app servers and save these as AMIs. Agree upon RDS specifications that meet the stated requirements. Set up the Storage Gateway and the Snapshot schedule to meet the RPO. Document, script, or automate the steps to initiate the RDS instance, the EC2 instances, the steps to restore the latest data from the Storage Gateway snapshots into RDS, plus any DNS changes. Test the process with each of the Operations team shifts.
- Provide their engineering staff with an AWS account and create IAM Users, Groups, & Roles. Use CloudFormation/CloudFormer to clone the existing on-premises environment and store the build scripts in GitHub. Migrate the in-house DNS to Route53 to simplify cut-over. Set up the Storage Gateway and the Snapshot schedule to meet the RPO. Provide the engineering team with a CLI script to kick-start the CloudFormation build and restoration of the Storage Gateway snapshots.
- There are three key aspects: RTO, RPO, and cost. All three must be balanced and meet objectives for the design to be considered acceptable.
- Further information:
- https://aws.amazon.com/storagegateway/
- https://aws.amazon.com/blogs/aws/new-whitepaper-use-aws-for-disaster-recovery/
- https://aws.amazon.com/developertools/6460180344805680
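One of the steps the correct answer asks you to script, restoring the latest RDS snapshot, might look like this sketch (the identifiers and instance class are placeholders; EC2 provisioning and DNS cut-over would follow):

```python
# Scripted DR step: restore the most recent available RDS snapshot
# to a new instance.
import boto3

rds = boto3.client("rds")
snaps = [s for s in
         rds.describe_db_snapshots(DBInstanceIdentifier="prod-db")["DBSnapshots"]
         if s["Status"] == "available"]
latest = max(snaps, key=lambda s: s["SnapshotCreateTime"])

rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="dr-db",
    DBSnapshotIdentifier=latest["DBSnapshotIdentifier"],
    DBInstanceClass="db.m4.large",
)
```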
At the monthly product meeting, one of the Product Owners proposes an idea to address an immediate shortcoming of the product system: storing a copy of the customer price schedule in the customer record in the database. You know that you can store large text or binary objects in DynamoDB. You give a tentative OK to do a Minimum Viable Product test, but stipulate that it must comply with the size limitation on the Attribute Name & Value. Which is the correct limitation?
- The Name must not exceed 64 KB and the Value must not exceed 500 KB.
- The combined Name and Value must not exceed 500 KB.
- The Name must not exceed 64 KB and the Value must not exceed 400 KB.
- Correct Answer: The combined Name and Value must not exceed 400 KB.
- The Name must not exceed 64 KB and the Value must not exceed 255 KB.
- The combined Name and Value must not exceed 255 KB.
- DynamoDB allows for the storage of large text and binary objects, but there is a limit of 400 KB.
- Further information:
- http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html
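A rough client-side guard against that limit might look like the sketch below; the size estimate, table name, and attributes are illustrative only, since the exact byte accounting is done by DynamoDB itself:

```python
# Approximate pre-flight check against DynamoDB's 400 KB item limit.
# Attribute names and values both count toward the limit.
import boto3

LIMIT = 400 * 1024

def put_if_small_enough(table, item):
    approx = sum(len(k.encode()) + len(str(v).encode()) for k, v in item.items())
    if approx > LIMIT:
        raise ValueError("item would exceed DynamoDB's 400 KB limit")
    table.put_item(Item=item)

table = boto3.resource("dynamodb").Table("CustomerRecords")
put_if_small_enough(table, {"CustomerId": "42", "PriceSchedule": "..."})
```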
Your company provides an online image recognition service and uses SQS to decouple system components. Your EC2 instances poll the image queue as often as possible to keep end-to-end throughput as high as possible, but you realize that all this polling is resulting in both a large number of CPU cycles and skyrocketing costs. How can you reduce cost without compromising service?
- Correct Answer: Enable long polling by setting the ReceiveMessageWaitTimeSeconds to a number > 0.
- Enable short polling by setting the ReceiveMessageWaitTimeMinutes to a number > 0.
- Enable short polling by setting the ReceiveMessageWaitTimeSeconds to a number > 0.
- Enable long polling by setting the ReceiveMessageWaitTimeMinutes to a number > 0.
- SQS long polling doesn’t return a response until a message arrives in the queue, reducing your overall cost over time. Short polling WILL return empty responses.
- Further information:
- https://aws.amazon.com/sqs/faqs/
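A sketch of both ways to turn on long polling with boto3 (the queue URL is a placeholder):

```python
# Enable SQS long polling, either per-queue or per receive call.
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/image-queue"

# Queue-level default: wait up to 20 seconds for a message.
sqs.set_queue_attributes(QueueUrl=queue_url,
                         Attributes={"ReceiveMessageWaitTimeSeconds": "20"})

# Or per call, overriding the queue default:
msgs = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)
```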
The risk with spot instances is that you are not guaranteed use of the resource for as long as you might want. Which of the following are scenarios under which AWS might execute a forced shutdown? (Choose 4)
- Correct: AWS sends a notification of termination and you receive it 120 seconds before the intended forced shutdown.
- AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, and you delay it by sending a ‘Delay300’ instruction before the forced shutdown takes effect.
- Correct: AWS sends a notification of termination but you do not receive it within the 120 seconds and the instance is shut down.
- Correct: AWS sends a notification of termination and you receive it 120 seconds before the intended forced shutdown, but AWS does not action the shutdown.
- Correct: AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, but the normal lease expired before the forced shutdown.
- AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, but you block the shutdown because you used ‘Termination Protection’ when you initialized the instance.
- AWS publishes the spot termination notification two minutes before the forced shutdown.
- Further information:
- https://aws.amazon.com/blogs/aws/new-ec2-spot-instance-termination-notices/
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html#Using_ChangingDisableAPITermination
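As the blog post above describes, the notice surfaces through instance metadata. A hedged polling sketch, run on the spot instance itself:

```python
# Poll the instance metadata service for the spot termination notice.
# A 404 simply means no notice has been issued yet.
import time
import urllib.request
from urllib.error import HTTPError

URL = "http://169.254.169.254/latest/meta-data/spot/termination-time"

while True:
    try:
        with urllib.request.urlopen(URL, timeout=2) as resp:
            print("Termination scheduled for:", resp.read().decode())
            break  # checkpoint work and drain the instance here
    except HTTPError:
        pass
    time.sleep(5)
```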
To establish a successful site-to-site VPN connection from your on-premise network to an AWS Virtual Private Cloud, which of the following must be configured? (Choose 3)
- Correct: An on-premise Customer Gateway
- Correct: A Virtual Private Gateway
- Correct: A VPC with Hardware VPN Access
- A private subnet in your VPC
- A NAT instance
- You must have a VPC with Hardware VPN Access, an on-premise Customer Gateway, and a Virtual Private Gateway to make the VPN connection work.
- Further information:
- http://docs.aws.amazon.com/AmazonVPC/latest/NetworkAdminGuide/Introduction.html#CustomerGatewayConfiguration
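A sketch of creating those pieces with boto3; the public IP, ASN, and VPC ID are placeholders, and the on-premise router still needs configuring separately:

```python
# Create the Customer Gateway, Virtual Private Gateway, and VPN connection.
import boto3

ec2 = boto3.client("ec2")

cgw = ec2.create_customer_gateway(Type="ipsec.1",
                                  PublicIp="203.0.113.10",  # on-premise device
                                  BgpAsn=65000)
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
                       VpcId="vpc-0123456789abcdef0")

vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
)
```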
On which of the following does the AWS Trusted Advisor service offer advice? (Choose 2)
- Correct: Whether MFA is configured on the Root Account
- Vulnerability scans on existing VPCs
- Antivirus protection on EC2 instances
- Correct: Advice on security groups and what ports have unrestricted access
- The correct answers are whether MFA is configured on the Root Account, and advice on security groups and what ports have unrestricted access.
- Further information:
- https://aws.amazon.com/premiumsupport/trustedadvisor/best-practices/#security
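Trusted Advisor results are also reachable programmatically through the Support API, which requires a Business or Enterprise support plan. A hedged sketch listing the security checks:

```python
# List Trusted Advisor security checks via the Support API
# (only available in us-east-1, and only with Business/Enterprise support).
import boto3

support = boto3.client("support", region_name="us-east-1")
checks = support.describe_trusted_advisor_checks(language="en")["checks"]

for check in checks:
    if check["category"] == "security":
        print(check["name"])
```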
You have three AWS accounts (A, B & C) which share data. In an attempt to maximize performance between the accounts, you deploy the instances owned by these three accounts in ‘eu-west-1b’. During testing, you find inconsistent results in transfer latency between the instances. Transfer between accounts A and B is excellent, but transfers between accounts B and C, and C and A, are slower. What could be the problem?
- Correct Answer: The names of the AZs are randomly applied, so “eu-west-1b” is not necessarily the same physical location for all three accounts.
- You have incorrectly configured the cross-account authentication policies in Account C, adding latency to those instances.
- Account C has been allocated to an older section of the Data Hall with slower networking.
- The instances for Account C are on an overloaded host. Stop all the Account C instances and then start them together so that they run on a new host.
- You have accidentally set up account C in “us-west-1b”.
- Availability Zone names are unique per account and do not represent a specific set of physical resources.
- Further information:
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
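A sketch of inspecting your own account's zone mapping; note that the stable ZoneId field was added to the API after this quiz was written, so treat it as an assumption here:

```python
# List this account's view of the eu-west-1 zones. ZoneName is an
# account-specific alias; ZoneId (a later API addition, hence .get)
# is the stable identifier to compare across accounts.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], az.get("ZoneId", "n/a"))
```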
Which of the following strategies does AWS use to deliver the promised levels of DynamoDB performance? (Choose 2)
- Correct: Data is stored on Solid State Disks.
- AWS deploys Read Replicas of the database to balance the load.
- Correct: The Database is partitioned across a number of instances.
- DynamoDB instances can be configured with EBS-Optimised connections.
- AWS deploys caching instances in front of the DynamoDB cluster.
- There is no magic, just fast hardware and dynamic DB sharding.
- Further information:
- https://aws.amazon.com/dynamodb/faqs/
You are running a media rich website with a global audience in US-EAST-1 for a customer in the publishing industry. The website updates every 20 minutes. The web tier of the site sits on three EC2 instances inside an Auto Scaling Group. The Auto Scaling group is configured to scale when CPU utilization of the instances is greater than 70%. The Auto Scaling group sits behind an Elastic Load Balancer, and your static content lives in S3 and is distributed globally by CloudFront. Your RDS database is a db.r3.8xlarge instance. CloudWatch metrics show that your RDS instance usually has around 2GB of memory free, and an average CPU utilization of 75%. Currently, it is taking your users in Japan and Australia approximately 3 to 5 seconds to load your website, and you have been asked to help reduce these load-times. How might you improve your page load times? (Choose 3)
- Change your Auto Scaling Group so that it will scale when CPU Utilization is only 50%, rather than 70%.
- Correct: Set up a clone of your production environment in the Asia Pacific region and configure latency based routing on Route53.
- Correct: Use ElastiCache to cache the most commonly accessed DB queries.
- Correct: Set up CloudFront with dynamic content support to enable the caching of re-usable content from the media rich website.
- Upgrade the RDS instance to a higher memory instance.
- Your RDS instance is already the largest currently offered by AWS, so you cannot upgrade it further. Changing your Auto Scaling policies will not help improve performance times, as it is much more likely that the performance issue is with the database back end rather than the front end.
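For the Route53 part of the answer, a latency-based record for the Asia Pacific clone might be sketched like this (the hosted zone ID, domain, and ELB target are placeholders):

```python
# Upsert a latency-based Route53 record; one such record would exist
# per region, distinguished by SetIdentifier.
import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com.",
            "Type": "CNAME",
            "SetIdentifier": "ap-northeast-1",  # one identifier per region
            "Region": "ap-northeast-1",         # enables latency routing
            "TTL": 60,
            "ResourceRecords": [{"Value": "ap-elb.example.com"}],
        },
    }]},
)
```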
From https://zeroleeblog.wordpress.com/2017/04/27/aws-certification-qa/