A company is planning to close for several days. A Database Specialist needs to stop all applications along with the DB instances to ensure employees do not have access to the systems during this time. All databases are running on Amazon RDS for MySQL. The Database Specialist wrote and executed a script to stop all the DB instances. When reviewing the logs, the Database Specialist found that Amazon RDS DB instances with read replicas did not stop. How should the Database Specialist edit the script to fix this issue?


A) Stop the source instances before stopping their read replicas
B) Delete each read replica before stopping its corresponding source instance
C) Stop the read replicas before stopping their source instances
D) Use the AWS CLI to stop each read replica and source instance at the same time
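
For context, a stop-all script has to account for replication relationships, which RDS reports on every instance. Below is a minimal boto3 sketch showing how a script could discover which instances are replicas and which are replication sources before issuing stop calls; the ordering shown is illustrative only, not the verified answer:

    import boto3

    rds = boto3.client("rds")

    # Group instances by replication role so stop calls can be ordered.
    instances = rds.describe_db_instances()["DBInstances"]
    replica_ids = {i["DBInstanceIdentifier"] for i in instances
                   if i.get("ReadReplicaSourceDBInstanceIdentifier")}
    source_ids = {i["DBInstanceIdentifier"] for i in instances
                  if i.get("ReadReplicaDBInstanceIdentifiers")}

    # Illustrative ordering only: replicas first, then their sources.
    for db_id in list(replica_ids) + [s for s in source_ids if s not in replica_ids]:
        rds.stop_db_instance(DBInstanceIdentifier=db_id)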


A company is releasing a new mobile game featuring a team play mode. As a group of mobile device users play together, an item containing their statuses is updated in an Amazon DynamoDB table. Periodically, the other users' devices read the latest statuses of their teammates from the table using the BatchGetItem operation. Prior to launch, some testers submitted bug reports claiming that the status data they were seeing in the game was not up-to-date. The developers are unable to replicate this issue and have asked a database specialist for a recommendation. Which recommendation would resolve this issue?


A) Ensure the DynamoDB table is configured to be always consistent.
B) Ensure the BatchGetItem operation is called with the ConsistentRead parameter set to false.
C) Enable a stream on the DynamoDB table and subscribe each device to the stream to ensure all devices receive up-to-date status information.
D) Ensure the BatchGetItem operation is called with the ConsistentRead parameter set to true.
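
For context, DynamoDB reads, including BatchGetItem, are eventually consistent unless strong consistency is requested per table inside the RequestItems map. A minimal boto3 sketch, with a hypothetical table name and key schema:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Request strongly consistent reads for every key fetched from this table.
    resp = dynamodb.batch_get_item(
        RequestItems={
            "TeamStatus": {  # hypothetical table name
                "Keys": [{"TeamId": {"S": "team-42"}, "PlayerId": {"S": "p-7"}}],
                "ConsistentRead": True,
            }
        }
    )
    statuses = resp["Responses"]["TeamStatus"]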


A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. Tests were run on the database after work hours, which generated additional database logs. The free storage of the RDS DB instance is low due to these additional logs. What should the company do to address this space constraint issue?


A) Log in to the host and run the rm $PGDATA/pg_logs/* command
B) Modify the rds.log_retention_period parameter to 1440 and wait up to 24 hours for database logs to be deleted
C) Create a ticket with AWS Support to have the logs deleted
D) Run the SELECT rds_rotate_error_log() stored procedure to rotate the logs
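
For reference, rds.log_retention_period is a dynamic parameter expressed in minutes, so a value of 1440 retains logs for 24 hours before RDS deletes them. A minimal boto3 sketch, assuming the instance uses a custom parameter group with a hypothetical name:

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_parameter_group(
        DBParameterGroupName="custom-postgres-params",  # hypothetical group name
        Parameters=[{
            "ParameterName": "rds.log_retention_period",
            "ParameterValue": "1440",    # minutes: keep logs for 24 hours
            "ApplyMethod": "immediate",  # dynamic parameter, no reboot needed
        }],
    )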


A Database Specialist needs to speed up any failover that might occur on an Amazon Aurora PostgreSQL DB cluster. The Aurora DB cluster currently includes the primary instance and three Aurora Replicas. How can the Database Specialist ensure that failovers occur with the least amount of downtime for the application?


A) Set the TCP keepalive parameters low
B) Call the AWS CLI failover-db-cluster command
C) Enable Enhanced Monitoring on the DB cluster
D) Start a database activity stream on the DB cluster
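
For context, TCP keepalive tuning happens on the application side: aggressive keepalive values let clients detect that the old writer is gone and reconnect to the new one sooner. A sketch using psycopg2, which passes keepalive settings through to libpq (endpoint and values are illustrative):

    import psycopg2

    conn = psycopg2.connect(
        host="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # hypothetical
        dbname="app",
        user="app_user",
        password="...",
        keepalives=1,           # enable TCP keepalives
        keepalives_idle=1,      # seconds idle before the first probe
        keepalives_interval=1,  # seconds between probes
        keepalives_count=5,     # failed probes before dropping the connection
    )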


A Database Specialist is designing a disaster recovery strategy for a production Amazon DynamoDB table. The table uses provisioned read/write capacity mode, global secondary indexes, and time to live (TTL). The Database Specialist has restored the latest backup to a new table. To prepare the new table with identical settings, which steps should be performed? (Choose two.)


A) Re-create global secondary indexes in the new table
B) Define IAM policies for access to the new table
C) Define the TTL settings
D) Encrypt the table from the AWS Management Console or use the update-table command
E) Set the provisioned read and write capacity
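
For context, restoring a backup brings back the data and key schema, but settings such as TTL and capacity may need to be reapplied on the new table. A boto3 sketch showing how the two settings the options mention would be reapplied (table name, TTL attribute, and capacity numbers are hypothetical):

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.update_time_to_live(
        TableName="restored-table",  # hypothetical
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
    )
    dynamodb.update_table(
        TableName="restored-table",
        ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 50},
    )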


A company has applications running on Amazon EC2 instances in a private subnet with no internet connectivity. The company deployed a new application that uses Amazon DynamoDB, but the application cannot connect to the DynamoDB tables. A developer already checked that all permissions are set correctly. What should a database specialist do to resolve this issue while minimizing access to external resources?


A) Add a route to an internet gateway in the subnet's route table.
B) Add a route to a NAT gateway in the subnet's route table.
C) Assign a new security group to the EC2 instances with an outbound rule to ports 80 and 443.
D) Create a VPC endpoint for DynamoDB and add a route to the endpoint in the subnet's route table.
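
For reference, a gateway VPC endpoint keeps DynamoDB traffic on the AWS network with no internet or NAT gateway involved. A minimal boto3 sketch (VPC, route table, and Region values are hypothetical); associating the route table adds the endpoint route automatically:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0abc1234",  # hypothetical
        ServiceName="com.amazonaws.us-east-1.dynamodb",
        RouteTableIds=["rtb-0def5678"],  # private subnet's route table, hypothetical
    )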


A large gaming company is creating a centralized solution to store player session state for multiple online games. The workload requires key-value storage with low latency and will be an equal mix of reads and writes. Data should be written into the AWS Region closest to the user across the games' geographically distributed user base. The architecture should minimize the amount of overhead required to manage the replication of data between Regions. Which solution meets these requirements?


A) Amazon RDS for MySQL with multi-Region read replicas
B) Amazon Aurora global database
C) Amazon RDS for Oracle with GoldenGate
D) Amazon DynamoDB global tables
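
For context, DynamoDB global tables replicate writes across Regions automatically, with no replication infrastructure to manage. A sketch using the original (2017.11.29) global tables API; it assumes identically named tables with streams enabled already exist in each Region, and the table name and Regions are hypothetical:

    import boto3

    dynamodb = boto3.client("dynamodb", region_name="us-east-1")

    # Newer (2019.11.21) tables would instead call update_table with ReplicaUpdates.
    dynamodb.create_global_table(
        GlobalTableName="player-sessions",  # hypothetical; must exist in each Region
        ReplicationGroup=[
            {"RegionName": "us-east-1"},
            {"RegionName": "eu-west-1"},
        ],
    )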


A company developed an AWS CloudFormation template used to create all new Amazon DynamoDB tables in its AWS account. The template configures provisioned throughput capacity using hard-coded values. The company wants to change the template so that the tables it creates in the future have independently configurable read and write capacity units assigned. Which solution will enable this change?


A) Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Configure DynamoDB to provision throughput capacity using the stack's mappings.
B) Add values for two Number parameters, rcuCount and wcuCount, to the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
C) Add values for the rcuCount and wcuCount parameters as outputs of the template. Configure DynamoDB to provision throughput capacity using the stack outputs.
D) Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
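
For reference, the Ref intrinsic function resolves a template parameter at stack creation, so each new stack can supply its own capacity values. A sketch of the relevant template fragment, expressed as a Python dict for brevity (logical ID and key schema are hypothetical):

    import json

    template = {
        "Parameters": {
            "rcuCount": {"Type": "Number", "Default": 5},
            "wcuCount": {"Type": "Number", "Default": 5},
        },
        "Resources": {
            "AppTable": {  # hypothetical logical ID
                "Type": "AWS::DynamoDB::Table",
                "Properties": {
                    "AttributeDefinitions": [
                        {"AttributeName": "pk", "AttributeType": "S"}
                    ],
                    "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
                    "ProvisionedThroughput": {
                        # Ref replaces the hard-coded values with parameter values.
                        "ReadCapacityUnits": {"Ref": "rcuCount"},
                        "WriteCapacityUnits": {"Ref": "wcuCount"},
                    },
                },
            }
        },
    }
    print(json.dumps(template, indent=2))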


A company uses Amazon Aurora for secure financial transactions. The data must always be encrypted at rest and in transit to meet compliance requirements. Which combination of actions should a database specialist take to meet these requirements? (Choose two.)


A) Create an Aurora Replica with encryption enabled using AWS Key Management Service (AWS KMS). Then promote the replica to master.
B) Use SSL/TLS to secure the in-transit connection between the financial application and the Aurora DB cluster.
C) Modify the existing Aurora DB cluster and enable encryption using an AWS Key Management Service (AWS KMS) encryption key. Apply the changes immediately.
D) Take a snapshot of the Aurora DB cluster and encrypt the snapshot using an AWS Key Management Service (AWS KMS) encryption key. Restore the snapshot to a new DB cluster and update the financial application database endpoints.
E) Use AWS Key Management Service (AWS KMS) to secure the in-transit connection between the financial application and the Aurora DB cluster.
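
For context, storage encryption cannot be switched on for an existing unencrypted Aurora cluster; the usual path is snapshot, encrypted copy, restore. A boto3 sketch of that sequence (identifiers and the KMS key alias are hypothetical; waits for each snapshot to become available are omitted):

    import boto3

    rds = boto3.client("rds")

    rds.create_db_cluster_snapshot(
        DBClusterSnapshotIdentifier="finance-snap",  # hypothetical
        DBClusterIdentifier="finance-cluster",
    )
    rds.copy_db_cluster_snapshot(
        SourceDBClusterSnapshotIdentifier="finance-snap",
        TargetDBClusterSnapshotIdentifier="finance-snap-encrypted",
        KmsKeyId="alias/finance-key",  # copying with a KMS key encrypts the copy
    )
    rds.restore_db_cluster_from_snapshot(
        DBClusterIdentifier="finance-cluster-encrypted",
        SnapshotIdentifier="finance-snap-encrypted",
        Engine="aurora-postgresql",  # hypothetical engine choice
    )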


A database specialist is building a system that uses a static vendor dataset of postal codes and related territory information that is less than 1 GB in size. The dataset is loaded into the application's cache at startup. The company needs to store this data in a way that provides the lowest cost with a low application startup time. Which approach will meet these requirements?


A) Use an Amazon RDS DB instance. Shut down the instance once the data has been read.
B) Use Amazon Aurora Serverless. Allow the service to spin resources up and down, as needed.
C) Use Amazon DynamoDB in on-demand capacity mode.
D) Use Amazon S3 and load the data from flat files.
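
For reference, Amazon S3 has no idle compute to pay for, and a sub-1 GB object can be read straight into the application cache at startup. A minimal boto3 sketch (bucket, key, and column names are hypothetical):

    import boto3
    import csv
    import io

    s3 = boto3.client("s3")

    # Download the vendor file once at startup and build an in-memory lookup.
    obj = s3.get_object(Bucket="vendor-reference-data", Key="postal_codes.csv")
    body = obj["Body"].read().decode("utf-8")
    rows = csv.DictReader(io.StringIO(body))
    territory_by_postal_code = {r["postal_code"]: r["territory"] for r in rows}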


A company just migrated to Amazon Aurora PostgreSQL from an on-premises Oracle database. After the migration, the company discovered there is a period of time every day around 3:00 PM when the response time of the application is noticeably slower. The company has narrowed down the cause of this issue to the database and not the application. Which set of steps should the Database Specialist take to most efficiently find the problematic PostgreSQL query?


A) Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch these dashboards during the next slow period.
B) Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.
C) Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.
D) Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.
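
For context, Performance Insights can be enabled on a running instance and breaks database load down by top SQL statements and wait events, which is exactly what is needed to pinpoint one slow query. A boto3 sketch (instance identifier is hypothetical):

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_instance(
        DBInstanceIdentifier="app-aurora-instance-1",  # hypothetical
        EnablePerformanceInsights=True,
        PerformanceInsightsRetentionPeriod=7,  # days of history to keep
        ApplyImmediately=True,
    )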


After restoring an Amazon RDS snapshot from 3 days ago, a company's Development team cannot connect to the restored RDS DB instance. What is the likely cause of this problem?


A) The restored DB instance does not have Enhanced Monitoring enabled
B) The production DB instance is using a custom parameter group
C) The restored DB instance is using the default security group
D) The production DB instance is using a custom option group
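
For context, a restored snapshot produces a new instance whose network settings are not carried over, so connectivity often breaks until the correct VPC security group is reattached. A boto3 sketch (identifiers are hypothetical):

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_instance(
        DBInstanceIdentifier="restored-dev-db",        # hypothetical
        VpcSecurityGroupIds=["sg-0123456789abcdef0"],  # group the source DB used
        ApplyImmediately=True,
    )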


A Database Specialist needs to define a database migration strategy to migrate an on-premises Oracle database to an Amazon Aurora MySQL DB cluster. The company requires near-zero downtime for the data migration. The solution must also be cost-effective. Which approach should the Database Specialist take?


A) Dump all the tables from the Oracle database into an Amazon S3 bucket using Oracle Data Pump (expdp). Run data transformations in AWS Glue. Load the data from the S3 bucket to the Aurora DB cluster.
B) Order an AWS Snowball appliance and copy the Oracle backup to the Snowball appliance. Once the Snowball data is delivered to Amazon S3, create a new Aurora DB cluster. Enable the S3 integration to migrate the data directly from Amazon S3 to Amazon RDS.
C) Use the AWS Schema Conversion Tool (AWS SCT) to help rewrite database objects to MySQL during the schema migration. Use AWS DMS to perform the full load and change data capture (CDC) tasks.
D) Use AWS Server Migration Service (AWS SMS) to import the Oracle virtual machine image as an Amazon EC2 instance. Use the Oracle Logical Dump utility to migrate the Oracle data from Amazon EC2 to an Aurora DB cluster.
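
For context, in the SCT-plus-DMS pattern, SCT converts the Oracle schema to MySQL-compatible objects, while a DMS task with the full-load-and-cdc migration type keeps replicating changes until cutover, which is what makes near-zero downtime possible. A boto3 sketch of the task creation (ARNs are hypothetical; the endpoints and replication instance must already exist):

    import boto3

    dms = boto3.client("dms")

    dms.create_replication_task(
        ReplicationTaskIdentifier="oracle-to-aurora-mysql",
        SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",  # hypothetical
        TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",  # hypothetical
        ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:RI",   # hypothetical
        MigrationType="full-load-and-cdc",  # initial copy plus ongoing change data capture
        TableMappings='{"rules": [{"rule-type": "selection", "rule-id": "1", '
                      '"rule-name": "1", "object-locator": {"schema-name": "%", '
                      '"table-name": "%"}, "rule-action": "include"}]}',
    )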


A company is going to use an Amazon Aurora PostgreSQL DB cluster for an application backend. The DB cluster contains some tables with sensitive data. A Database Specialist needs to control the access privileges at the table level. How can the Database Specialist meet these requirements?


A) Use AWS IAM database authentication and restrict access to the tables using an IAM policy.
B) Configure the rules in a NACL to restrict outbound traffic from the Aurora DB cluster.
C) Execute GRANT and REVOKE commands that restrict access to the tables containing sensitive data.
D) Define access privileges to the tables containing sensitive data in the pg_hba.conf file.
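
For reference, table-level privileges in PostgreSQL are managed inside the database with GRANT and REVOKE, independent of IAM and network controls. A psycopg2 sketch (connection details, table, and role names are hypothetical):

    import psycopg2

    conn = psycopg2.connect(host="...", dbname="app", user="admin", password="...")
    with conn, conn.cursor() as cur:
        # Lock the sensitive table down, then grant back only what is needed.
        cur.execute("REVOKE ALL ON accounts FROM PUBLIC")
        cur.execute("GRANT SELECT ON accounts TO reporting_role")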


A company is closing one of its remote data centers. This site runs a 100 TB on-premises data warehouse solution. The company plans to use the AWS Schema Conversion Tool (AWS SCT) and AWS DMS for the migration to AWS. The site network bandwidth is 500 Mbps. A Database Specialist wants to migrate the on-premises data using Amazon S3 as the data lake and Amazon Redshift as the data warehouse. This move must take place during a 2-week period when source systems are shut down for maintenance. The data should stay encrypted at rest and in transit. Which approach has the least risk and the highest likelihood of a successful data transfer?


A) Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, start an AWS DMS task to move the data from the source to Amazon S3. Use AWS Glue to load the data from Amazon S3 to Amazon Redshift.
B) Leverage AWS SCT and apply the converted schema to Amazon Redshift. Start an AWS DMS task with two AWS Snowball Edge devices to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS DMS to finish copying data to Amazon Redshift.
C) Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, use a fleet of 10 TB dedicated encrypted drives using the AWS Import/Export feature to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS Glue to load the data to Amazon Redshift.
D) Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage a native database export feature to export the data and compress the files. Use the aws s3 cp command with multipart upload to upload these files to Amazon S3 with AWS KMS encryption. Once complete, load the data to Amazon Redshift using AWS Glue.


A financial company has allocated an Amazon RDS MariaDB DB instance with large storage capacity to accommodate migration efforts. Post-migration, the company purged unwanted data from the instance. The company now wants to downsize storage to save money. The solution must have the least impact on production and near-zero downtime. Which solution would meet these requirements?


A) Create a snapshot of the old databases and restore the snapshot with the required storage
B) Create a new RDS DB instance with the required storage and move the databases from the old instances to the new instance using AWS DMS
C) Create a new database using native backup and restore
D) Create a new read replica and make it the primary by terminating the existing primary


A gaming company has recently acquired a successful iOS game, which is particularly popular during the holiday season. The company has decided to add a leaderboard to the game that uses Amazon DynamoDB. The application load is expected to ramp up over the holiday season. Which solution will meet these requirements at the lowest cost?


A) DynamoDB Streams
B) DynamoDB with DynamoDB Accelerator
C) DynamoDB with on-demand capacity mode
D) DynamoDB with provisioned capacity mode with Auto Scaling
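
For reference, switching a table between provisioned and on-demand capacity mode is a single update_table call; on-demand bills per request with no pre-provisioning. A boto3 sketch (table name is hypothetical):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Switch the table to on-demand (pay-per-request) billing.
    dynamodb.update_table(
        TableName="leaderboard",  # hypothetical
        BillingMode="PAY_PER_REQUEST",
    )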


Developers have requested a new Amazon Redshift cluster so they can load new third-party marketing data. The new cluster is ready and the user credentials are given to the developers. The developers indicate that their copy jobs fail with the following error message: "Amazon Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied." The developers need to load this data soon, so a database specialist must act quickly to solve this issue. What is the MOST secure solution?


A) Create a new IAM role with the same user name as the Amazon Redshift developer user ID. Provide the IAM role with read-only access to Amazon S3 with the assume role action.
B) Create a new IAM role with read-only access to the Amazon S3 bucket and include the assume role action. Modify the Amazon Redshift cluster to add the IAM role.
C) Create a new IAM role with read-only access to the Amazon S3 bucket with the assume role action. Add this role to the developer IAM user ID used for the copy job that ended with an error message.
D) Create a new IAM user with access keys and a new role with read-only access to the Amazon S3 bucket. Add this role to the Amazon Redshift cluster. Change the copy job to use the access keys created.
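
For context, the 403 means the cluster has no credentials for Amazon S3; the pattern that avoids storing access keys is attaching an IAM role to the cluster and referencing it in COPY. A boto3 sketch attaching the role (cluster identifier and role ARN are hypothetical):

    import boto3

    redshift = boto3.client("redshift")

    redshift.modify_cluster_iam_roles(
        ClusterIdentifier="marketing-cluster",  # hypothetical
        AddIamRoles=["arn:aws:iam::123456789012:role/redshift-s3-readonly"],
    )
    # The COPY job then references the role instead of access keys, e.g.:
    #   COPY schema.table FROM 's3://bucket/prefix/'
    #   IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-s3-readonly';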


An electric utility company wants to store power plant sensor data in an Amazon DynamoDB table. The utility company has over 100 power plants and each power plant has over 200 sensors that send data every 2 seconds. The sensor data includes a timestamp with millisecond precision, a value, and a fault attribute if the sensor is malfunctioning. Power plants are identified by a globally unique identifier. Sensors are identified by a unique identifier within each power plant. A database specialist needs to design the table to support an efficient method of finding all faulty sensors within a given power plant. Which schema should the database specialist use when creating the DynamoDB table to achieve the fastest query time when looking for faulty sensors?


A) Use the plant identifier as the partition key and the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.
B) Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a local secondary index (LSI) on the fault attribute.
C) Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.
D) Use the plant identifier as the partition key and the sensor identifier as the sort key. Create a local secondary index (LSI) on the fault attribute.
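
For reference, a global secondary index is sparse: items without the index key attributes are simply absent from it, so indexing on the fault attribute yields a small index containing only faulty sensors. A boto3 sketch of the layout option C describes (table, attribute, and capacity values are hypothetical):

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.create_table(
        TableName="sensor-readings",  # hypothetical
        AttributeDefinitions=[
            {"AttributeName": "plant_sensor", "AttributeType": "S"},  # plant#sensor
            {"AttributeName": "measured_at", "AttributeType": "S"},
            {"AttributeName": "plant_id", "AttributeType": "S"},
            {"AttributeName": "fault", "AttributeType": "S"},  # only set when faulty
        ],
        KeySchema=[
            {"AttributeName": "plant_sensor", "KeyType": "HASH"},
            {"AttributeName": "measured_at", "KeyType": "RANGE"},
        ],
        GlobalSecondaryIndexes=[{
            "IndexName": "plant-fault-index",
            "KeySchema": [
                {"AttributeName": "plant_id", "KeyType": "HASH"},
                {"AttributeName": "fault", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "KEYS_ONLY"},
            "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        }],
        ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 10},
    )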


A company uses Amazon DynamoDB as the data store for its ecommerce website. The website receives little to no traffic at night, and the majority of the traffic occurs during the day. The traffic growth during peak hours is gradual and predictable on a daily basis, but it can be orders of magnitude higher than during off-peak hours. The company initially provisioned capacity based on its average volume during the day without accounting for the variability in traffic patterns. However, the website is experiencing a significant amount of throttling during peak hours. The company wants to reduce the amount of throttling while minimizing costs. What should a database specialist do to meet these requirements?


A) Use reserved capacity. Set it to the capacity levels required for peak daytime throughput.
B) Use provisioned capacity. Set it to the capacity levels required for peak daytime throughput.
C) Use provisioned capacity. Create an AWS Application Auto Scaling policy to update capacity based on consumption.
D) Use on-demand capacity.
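
For context, AWS Application Auto Scaling can track a target utilization on a provisioned table, scaling up through the predictable daytime ramp and back down at night. A boto3 sketch for read capacity (table name and limits are hypothetical; write capacity is configured the same way):

    import boto3

    aas = boto3.client("application-autoscaling")

    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/orders",  # hypothetical table
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=5,
        MaxCapacity=4000,
    )
    aas.put_scaling_policy(
        PolicyName="orders-read-tracking",
        ServiceNamespace="dynamodb",
        ResourceId="table/orders",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,  # keep consumed capacity near 70% of provisioned
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
            },
        },
    )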

