
Google Cloud Certified - Professional Cloud Database Engineer Questions and Answers

Question 4

You are developing a new application on a VM that is on your corporate network. The application will use Java Database Connectivity (JDBC) to connect to Cloud SQL for PostgreSQL. Your Cloud SQL instance is configured with IP address 192.168.3.48, and SSL is disabled. You want to ensure that your application can access your database instance without requiring configuration changes to your database. What should you do?

Options:

A.

Define a connection string using your Google username and password to point to the external (public) IP address of your Cloud SQL instance.

B.

Define a connection string using a database username and password to point to the internal (private) IP address of your Cloud SQL instance.

C.

Define a connection string using Cloud SQL Auth proxy configured with a service account to point to the internal (private) IP address of your Cloud SQL instance.

D.

Define a connection string using Cloud SQL Auth proxy configured with a service account to point to the external (public) IP address of your Cloud SQL instance.
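For reference, option B amounts to a standard PostgreSQL JDBC URL that points straight at the instance's private IP. A minimal sketch is shown below; the database name, username, password, and application jar are hypothetical placeholders, not values from the question.

    # Hedged sketch: standard PostgreSQL JDBC URL targeting the private IP from the question.
    # "inventory", "app_user", the password, and app.jar are hypothetical placeholders.
    export JDBC_URL="jdbc:postgresql://192.168.3.48:5432/inventory"
    export DB_USER="app_user"
    export DB_PASSWORD="change-me"
    java -jar app.jar   # the application is assumed to read the variables above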

Question 5

Your company wants to move to Google Cloud. Your current data center is closing in six months. You are running a large, highly transactional Oracle application footprint on VMWare. You need to design a solution with minimal disruption to the current architecture and provide ease of migration to Google Cloud. What should you do?

Options:

A.

Migrate applications and Oracle databases to Google Cloud VMware Engine (VMware Engine).

B.

Migrate applications and Oracle databases to Compute Engine.

C.

Migrate applications to Cloud SQL.

D.

Migrate applications and Oracle databases to Google Kubernetes Engine (GKE).

Question 6

Your company is using Cloud SQL for MySQL with an internal (private) IP address and wants to replicate some tables into BigQuery in near-real time for analytics and machine learning. You need to ensure that replication is fast and reliable and uses Google-managed services. What should you do?

Options:

A.

Develop a custom data replication service to send data into BigQuery.

B.

Use Cloud SQL federated queries.

C.

Use Database Migration Service to replicate tables into BigQuery.

D.

Use Datastream to capture changes, and use Dataflow to write those changes to BigQuery.

Question 7

Your team is building a new inventory management application that will require read and write database instances in multiple Google Cloud regions around the globe. Your database solution requires 99.99% availability and global transactional consistency. You need a fully managed backend relational database to store inventory changes. What should you do?

Options:

A.

Use Bigtable.

B.

Use Firestore.

C.

Use Cloud SQL for MySQL.

D.

Use Cloud Spanner.

Question 8

During an internal audit, you realized that one of your Cloud SQL for MySQL instances does not have high availability (HA) enabled. You want to follow Google-recommended practices to enable HA on your existing instance. What should you do?

Options:

A.

Create a new Cloud SQL for MySQL instance, enable HA, and use the export and import option to migrate your data.

B.

Create a new Cloud SQL for MySQL instance, enable HA, and use Cloud Data Fusion to migrate your data.

C.

Use the gcloud instances patch command to update your existing Cloud SQL for MySQL instance.

D.

Shut down your existing Cloud SQL for MySQL instance, and enable HA.
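For reference, option C can usually be done in place with a single patch command. A minimal sketch, assuming a hypothetical instance name; note that HA also requires automated backups and binary logging, and the patch restarts the instance.

    # Hedged sketch: switch an existing Cloud SQL for MySQL instance to regional (HA) availability.
    # "prod-mysql" and the backup window are hypothetical placeholders.
    gcloud sql instances patch prod-mysql \
        --availability-type=REGIONAL \
        --backup-start-time=23:00 \
        --enable-bin-log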

Question 9

Your organization stores marketing data such as customer preferences and purchase history on Bigtable. The consumers of this database are predominantly data analysts and operations users. You receive a service ticket from the database operations department citing poor database performance between 9 AM and 10 AM every day. The application team has confirmed no latency from their logs. A new cohort of pilot users that is testing a dataset loaded from a third-party data provider is experiencing poor database performance. Other users are not affected. You need to troubleshoot the issue. What should you do?

Options:

A.

Isolate the data analysts and operations user groups to use different Bigtable instances.

B.

Check the Cloud Monitoring table/bytes_used metric from Bigtable.

C.

Use Key Visualizer for Bigtable.

D.

Add more nodes to the Bigtable cluster.

Question 10

Your organization is running a MySQL workload in Cloud SQL. Suddenly you see a degradation in database performance. You need to identify the root cause of the performance degradation. What should you do?

Options:

A.

Use Logs Explorer to analyze log data.

B.

Use Cloud Monitoring to monitor CPU, memory, and storage utilization metrics.

C.

Use Error Reporting to count, analyze, and aggregate the data.

D.

Use Cloud Debugger to inspect the state of an application.

Question 11

Your organization is running a critical production database on a virtual machine (VM) on Compute Engine. The VM has an ext4-formatted persistent disk for data files. The database will soon run out of storage space. You need to implement a solution that avoids downtime. What should you do?

Options:

A.

In the Google Cloud Console, increase the size of the persistent disk, and use the resize2fs command to extend the disk.

B.

In the Google Cloud Console, increase the size of the persistent disk, and use the fdisk command to verify that the new space is ready to use.

C.

In the Google Cloud Console, create a snapshot of the persistent disk, restore the snapshot to a new larger disk, unmount the old disk, mount the new disk, and restart the database service.

D.

In the Google Cloud Console, create a new persistent disk attached to the VM, and configure the database service to move the files to the new disk.
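For reference, option A can be performed online. A minimal sketch, assuming a hypothetical disk name, zone, and device path; resize2fs can grow a mounted ext4 filesystem without unmounting it.

    # Hedged sketch: grow the persistent disk, then extend the ext4 filesystem while it stays mounted.
    # "db-data-disk", the zone, and /dev/sdb are hypothetical placeholders.
    gcloud compute disks resize db-data-disk --zone=us-central1-a --size=1000GB
    sudo resize2fs /dev/sdb   # run on the VM; no downtime for the database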

Question 12

You are configuring a brand new Cloud SQL for PostgreSQL database instance in Google Cloud. Your application team wants you to deploy one primary instance, one standby instance, and one read replica instance. You need to ensure that you are following Google-recommended practices for high availability. What should you do?

Options:

A.

Configure the primary instance in zone A, the standby instance in zone C, and the read replica in zone B, all in the same region.

B.

Configure the primary and standby instances in zone A and the read replica in zone B, all in the same region.

C.

Configure the primary instance in one region, the standby instance in a second region, and the read replica in a third region.

D.

Configure the primary, standby, and read replica instances in zone A, all in the same region.
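For reference, the layout in option A could be created along these lines. A minimal sketch; instance names, region, zones, and machine tier are hypothetical placeholders.

    # Primary with an automatic standby in a different zone of the same region (HA),
    # plus a read replica in a third zone.
    gcloud sql instances create pg-primary \
        --database-version=POSTGRES_15 --region=us-central1 \
        --availability-type=REGIONAL --zone=us-central1-a --secondary-zone=us-central1-c \
        --tier=db-custom-4-16384
    gcloud sql instances create pg-replica \
        --master-instance-name=pg-primary --region=us-central1 --zone=us-central1-b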

Question 13

Your organization has an existing app that just went viral. The app uses a Cloud SQL for MySQL backend database that is experiencing slow disk performance while using hard disk drives (HDDs). You need to improve performance and reduce disk I/O wait times. What should you do?

Options:

A.

Export the data from the existing instance, and import the data into a new instance with solid-state drives (SSDs).

B.

Edit the instance to change the storage type from HDD to SSD.

C.

Create a high availability (HA) failover instance with SSDs, and perform a failover to the new instance.

D.

Create a read replica of the instance with SSDs, and perform a failover to the new instance.
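For reference, the storage type of an existing instance cannot be edited in place, which is what option A works around. A minimal sketch of an export/import into a new SSD-backed instance; instance names, bucket, database, and tier are hypothetical placeholders.

    gcloud sql export sql shop-hdd gs://my-migration-bucket/shop.sql --database=shop
    gcloud sql instances create shop-ssd \
        --database-version=MYSQL_8_0 --region=us-central1 \
        --storage-type=SSD --tier=db-custom-4-16384
    gcloud sql import sql shop-ssd gs://my-migration-bucket/shop.sql --database=shop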

Question 14

You are writing an application that will run on Cloud Run and require a database running in the Cloud SQL managed service. You want to secure this instance so that it only receives connections from applications running in your VPC environment in Google Cloud. What should you do?

Options:

A.

1. Create your instance with a specified external (public) IP address.

2. Choose the VPC and create firewall rules to allow only connections from Cloud Run into your instance.

3. Use Cloud SQL Auth proxy to connect to the instance.

B.

1. Create your instance with a specified external (public) IP address.

2. Choose the VPC and create firewall rules to allow only connections from Cloud Run into your instance.

3. Connect to the instance using a connection pool to best manage connections to the instance.

C.

1. Create your instance with a specified internal (private) IP address.

2. Choose the VPC with private service connection configured.

3. Configure the Serverless VPC Access connector in the same VPC network as your Cloud SQL instance.

4. Use Cloud SQL Auth proxy to connect to the instance.

D.

1. Create your instance with a specified internal (private) IP address.

2. Choose the VPC with private service connection configured.

3. Configure the Serverless VPC Access connector in the same VPC network as your Cloud SQL instance.

4. Connect to the instance using a connection pool to best manage connections to the instance.
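For reference, option C combines a private-IP instance with a Serverless VPC Access connector. A minimal sketch; the connector, network, region, IP range, service, and image names are hypothetical placeholders.

    # Connector in the same VPC as the Cloud SQL private IP, then Cloud Run routed through it.
    gcloud compute networks vpc-access connectors create sql-connector \
        --network=prod-vpc --region=us-central1 --range=10.8.0.0/28
    gcloud run deploy orders-api \
        --image=us-docker.pkg.dev/my-project/app/orders-api \
        --region=us-central1 --vpc-connector=sql-connector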

Question 15

Your organization has strict policies on tracking rollouts to production and periodically shares this information with external auditors to meet compliance requirements. You need to enable auditing on several Cloud Spanner databases. What should you do?

Options:

A.

Use replication to roll out changes to higher environments.

B.

Use backup and restore to roll out changes to higher environments.

C.

Use Liquibase to roll out changes to higher environments.

D.

Manually capture detailed DBA audit logs when changes are rolled out to higher environments.

Question 16

Your organization is running a low-latency reporting application on Microsoft SQL Server. In addition to the database engine, you are using SQL Server Analysis Services (SSAS), SQL Server Reporting Services (SSRS), and SQL Server Integration Services (SSIS) in your on-premises environment. You want to migrate your Microsoft SQL Server database instances to Google Cloud. You need to ensure minimal disruption to the existing architecture during migration. What should you do?

Options:

A.

Migrate to Cloud SQL for SQL Server.

B.

Migrate to Cloud SQL for PostgreSQL.

C.

Migrate to Compute Engine.

D.

Migrate to Google Kubernetes Engine (GKE).

Question 17

Your company wants to migrate an Oracle-based application to Google Cloud. The application team currently uses Oracle Recovery Manager (RMAN) to back up the database to tape for long-term retention (LTR). You need a cost-effective backup and restore solution that meets a 2-hour recovery time objective (RTO) and a 15-minute recovery point objective (RPO). What should you do?

Options:

A.

Migrate the Oracle databases to Bare Metal Solution for Oracle, and store backups on tapes on-premises.

B.

Migrate the Oracle databases to Bare Metal Solution for Oracle, and use Actifio to store backup files on Cloud Storage using the Nearline Storage class.

C.

Migrate the Oracle databases to Bare Metal Solution for Oracle, and back up the Oracle databases to Cloud Storage using the Standard Storage class.

D.

Migrate the Oracle databases to Compute Engine, and store backups on tapes on-premises.

Question 18

You are migrating an on-premises application to Compute Engine and Cloud SQL. The application VMs will live in their own project, separate from the Cloud SQL instances, which are in their own project. What should you do to configure the networks?

Options:

A.

Create a new VPC network in each project, and use VPC Network Peering to connect the two together.

B.

Create a Shared VPC that both the application VMs and Cloud SQL instances will use.

C.

Use the default networks, and leverage Cloud VPN to connect the two together.

D.

Place both the application VMs and the Cloud SQL instances in the default network of each project.
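For reference, option B is set up by designating a host project and attaching the application project as a service project. A minimal sketch; both project IDs are hypothetical placeholders.

    gcloud compute shared-vpc enable host-project-id
    gcloud compute shared-vpc associated-projects add app-project-id \
        --host-project=host-project-id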

Question 19

You host an application in Google Cloud. The application is located in a single region and uses Cloud SQL for transactional data. Most of your users are located in the same time zone and expect the application to be available 7 days a week, from 6 AM to 10 PM. You want to ensure regular maintenance updates to your Cloud SQL instance without creating downtime for your users. What should you do?

Options:

A.

Configure a maintenance window during a period when no users will be on the system. Control the order of updates by configuring non-production instances to receive maintenance earlier than production instances.

B.

Create your database with one primary node and one read replica in the region.

C.

Enable maintenance notifications for users, and reschedule maintenance activities to a specific time after notifications have been sent.

D.

Configure your Cloud SQL instance with high availability enabled.
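For reference, option A comes down to pinning each instance's maintenance window to the quiet overnight hours, with non-production instances scheduled ahead of production. A minimal sketch; instance names, days, and hours (UTC) are hypothetical placeholders.

    gcloud sql instances patch staging-db --maintenance-window-day=SAT --maintenance-window-hour=3
    gcloud sql instances patch prod-db --maintenance-window-day=SUN --maintenance-window-hour=3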

Question 20

You are designing an augmented reality game for iOS and Android devices. You plan to use Cloud Spanner as the primary backend database for game state storage and player authentication. You want to track in-game rewards that players unlock at every stage of the game. During the testing phase, you discovered that costs are much higher than anticipated, but the query response times are within the SLA. You want to follow Google-recommended practices. You need the database to be performant and highly available while you keep costs low. What should you do?

Options:

A.

Manually scale down the number of nodes after the peak period has passed.

B.

Use interleaving to co-locate parent and child rows.

C.

Use the Cloud Spanner query optimizer to determine the most efficient way to execute the SQL query.

D.

Use granular instance sizing in Cloud Spanner and Autoscaler.
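For reference, option D relies on sizing the instance in processing units rather than whole nodes, with the open-source Autoscaler tool adjusting the value as load changes. A minimal sketch; the instance name and value are hypothetical placeholders.

    # 1,000 processing units = 1 node; values below 1,000 allow sub-node (granular) sizing.
    gcloud spanner instances update game-state --processing-units=500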

Question 21

You have deployed a Cloud SQL for SQL Server instance. In addition, you created a cross-region read replica for disaster recovery (DR) purposes. Your company requires you to maintain and monitor a recovery point objective (RPO) of less than 5 minutes. You need to verify that your cross-region read replica meets the allowed RPO. What should you do?

Options:

A.

Use Cloud SQL instance monitoring.

B.

Use the Cloud Monitoring dashboard with available metrics from Cloud SQL.

C.

Use Cloud SQL logs.

D.

Use the SQL Server Always On Availability Group dashboard.

Question 22

You want to migrate your on-premises PostgreSQL database to Compute Engine. You need to migrate this database with the minimum downtime possible. What should you do?

Options:

A.

Perform a full backup of your on-premises PostgreSQL, and then, in the migration window, perform an incremental backup.

B.

Create a read replica on Cloud SQL, and then promote it to a read/write standalone instance.

C.

Use Database Migration Service to migrate your database.

D.

Create a hot standby on Compute Engine, and use PgBouncer to switch over the connections.

Question 23

You are evaluating Cloud SQL for PostgreSQL as a possible destination for your on-premises PostgreSQL instances. Geography is becoming increasingly relevant to customer privacy worldwide. Your solution must support data residency requirements and include a strategy to:

configure where data is stored

control where the encryption keys are stored

govern access to data

What should you do?

Options:

A.

Replicate Cloud SQL databases across different zones.

B.

Create a Cloud SQL for PostgreSQL instance on Google Cloud for the data that does not need to adhere to data residency requirements. Keep the data that must adhere to data residency requirements on-premises. Make application changes to support both databases.

C.

Allow application access to data only if the users are in the same region as the Google Cloud region for the Cloud SQL for PostgreSQL database.

D.

Use features like customer-managed encryption keys (CMEK), VPC Service Controls, and Identity and Access Management (IAM) policies.

Question 24

Your company has PostgreSQL databases on-premises and on Amazon Web Services (AWS). You are planning multiple database migrations to Cloud SQL in an effort to reduce costs and downtime. You want to follow Google-recommended practices and use Google native data migration tools. You also want to closely monitor the migrations as part of the cutover strategy. What should you do?

Options:

A.

Use Database Migration Service to migrate all databases to Cloud SQL.

B.

Use Database Migration Service for one-time migrations, and use third-party or partner tools for change data capture (CDC) style migrations.

C.

Use data replication tools and CDC tools to enable migration.

D.

Use a combination of Database Migration Service and partner tools to support the data migration strategy.

Question 25

You are setting up a new AlloyDB instance and want users to be able to use their existing Identity and Access Management (IAM) identities to connect to AlloyDB. You have performed the following steps:

Manually enabled IAM authentication on the AlloyDB instance

Granted the alloydb.databaseUser and serviceusage.serviceUsageConsumer IAM roles to the users

Created new AlloyDB database users based on corresponding IAM identities

Users are able to connect but are reporting that they are not able to SELECT from application tables. What should you do?

Options:

A.

Grant the alloydb.client IAM role to each user.

B.

Grant the alloydb.viewer IAM role to each user.

C.

Grant the new database users access privileges to the appropriate tables.

D.

Grant the alloydb.alloydbreplica IAM role to each user.
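For reference, option C reflects that IAM authentication only controls who can connect; SELECT and other table privileges are still granted inside PostgreSQL. A minimal sketch; the host, database, schema, and user email are hypothetical placeholders, and the exact database username format for IAM users is an assumption here.

    psql "host=10.0.0.5 dbname=appdb user=postgres" \
         -c 'GRANT USAGE ON SCHEMA public TO "alice@example.com";
             GRANT SELECT ON ALL TABLES IN SCHEMA public TO "alice@example.com";'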

Question 26

Your organization is migrating 50 TB of Oracle databases to Bare Metal Solution for Oracle. Database backups must be available for quick restore. You also need to have backups available for 5 years. You need to design a cost-effective architecture that meets a recovery time objective (RTO) of 2 hours and a recovery point objective (RPO) of 15 minutes. What should you do?

Options:

A.

Create the database on a Bare Metal Solution server with the database running on flash storage.

Keep a local backup copy on all flash storage.

Keep backups older than one day stored in Actifio OnVault storage.

B.

Create the database on a Bare Metal Solution server with the database running on flash storage.

Keep a local backup copy on standard storage.

Keep backups older than one day stored in Actifio OnVault storage.

C.

Create the database on a Bare Metal Solution server with the database running on flash storage.

Keep a local backup copy on standard storage.

Use the Oracle Recovery Manager (RMAN) backup utility to move backups older than one day to a Coldline Storage bucket.

D.

Create the database on a Bare Metal Solution server with the database running on flash storage.

Keep a local backup copy on all flash storage.

Use the Oracle Recovery Manager (RMAN) backup utility to move backups older than one day to an Archive Storage bucket.

Question 27

Your application follows a microservices architecture and uses a single large Cloud SQL instance, which is starting to have performance issues as your application grows. In the Cloud Monitoring dashboard, the CPU utilization looks normal. You want to follow Google-recommended practices to resolve and prevent these performance issues while avoiding any major refactoring. What should you do?

Options:

A.

Use Cloud Spanner instead of Cloud SQL.

B.

Increase the number of CPUs for your instance.

C.

Increase the storage size for the instance.

D.

Use many smaller Cloud SQL instances.

Question 28

Your ecommerce application connecting to your Cloud SQL for SQL Server is expected to have additional traffic due to the holiday weekend. You want to follow Google-recommended practices to set up alerts for CPU and memory metrics so you can be notified by text message at the first sign of potential issues. What should you do?

Options:

A.

Use a Cloud Function to pull CPU and memory metrics from your Cloud SQL instance and to call a custom service to send alerts.

B.

Use Error Reporting to monitor CPU and memory metrics and to configure SMS notification channels.

C.

Use Cloud Logging to set up a log sink for CPU and memory metrics and to configure a sink destination to send a message to Pub/Sub.

D.

Use Cloud Monitoring to set up an alerting policy for CPU and memory metrics and to configure SMS notification channels.

Question 29

You are managing a Cloud SQL for MySQL environment in Google Cloud. You have deployed a primary instance in Zone A and a read replica instance in Zone B, both in the same region. You are notified that the replica instance in Zone B was unavailable for 10 minutes. You need to ensure that the read replica instance is still working. What should you do?

Options:

A.

Use the Google Cloud Console or gcloud CLI to manually create a new clone database.

B.

Use the Google Cloud Console or gcloud CLI to manually create a new failover replica from backup.

C.

Verify that the new replica is created automatically.

D.

Start the original primary instance and resume replication.

Question 30

You support a consumer inventory application that runs on a multi-region instance of Cloud Spanner. A customer opened a support ticket to complain about slow response times. You notice a Cloud Monitoring alert about high CPU utilization. You want to follow Google-recommended practices to address the CPU performance issue. What should you do first?

Options:

A.

Increase the number of processing units.

B.

Modify the database schema, and add additional indexes.

C.

Shard data required by the application into multiple instances.

D.

Decrease the number of processing units.

Question 31

Your organization needs to migrate a critical, on-premises MySQL database to Cloud SQL for MySQL. The on-premises database is on a version of MySQL that is supported by Cloud SQL and uses the InnoDB storage engine. You need to migrate the database while preserving transactions and minimizing downtime. What should you do?

Options:

A.

Use Database Migration Service to connect to your on-premises database, and choose continuous replication.

After the on-premises database is migrated, promote the Cloud SQL for MySQL instance, and connect applications to your Cloud SQL instance.

B.

Build a Cloud Data Fusion pipeline for each table to migrate data from the on-premises MySQL database to Cloud SQL for MySQL.

Schedule downtime to run each Cloud Data Fusion pipeline.

Verify that the migration was successful.

Re-point the applications to the Cloud SQL for MySQL instance.

C.

Pause the on-premises applications.

Use the mysqldump utility to dump the database content in compressed format.

Run gsutil -m to move the dump file to Cloud Storage.

Use the Cloud SQL for MySQL import option.

After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.

D.

Pause the on-premises applications.

Use the mysqldump utility to dump the database content in CSV format.

Run gsutil -m to move the dump file to Cloud Storage.

Use the Cloud SQL for MySQL import option.

After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.

Question 32

Your hotel booking company is expanding into Country A, where personally identifiable information (PII) must comply with regional data residency requirements and audits. You need to isolate customer data in Country A from the rest of the customer data. You want to design a multi-tenancy strategy to efficiently manage costs and operations. What should you do?

Options:

A.

Apply a schema data management pattern.

B.

Apply an instance data management pattern.

C.

Apply a table data management pattern.

D.

Apply a database data management pattern.

Question 33

You are managing a Cloud SQL for PostgreSQL instance in Google Cloud. You need to test the high availability of your Cloud SQL instance by performing a failover. You want to use the gcloud command.

What should you do?

Options:

A.

Use gcloud sql instances failover <primary-instance-name>.

B.

Use gcloud sql instances failover <replica-instance-name>.

C.

Use gcloud sql instances promote-replica <primary-instance-name>.

D.

Use gcloud sql instances promote-replica <replica-instance-name>.
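For reference, a manual failover test with the gcloud CLI might look like the sketch below; the instance name is a hypothetical placeholder, and the describe call simply confirms the instance is regional (HA) before triggering the failover.

    gcloud sql instances describe prod-pg --format="value(settings.availabilityType)"
    gcloud sql instances failover prod-pg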

Question 34

You need to redesign the architecture of an application that currently uses Cloud SQL for PostgreSQL. The users of the application complain about slow query response times. You want to enhance your application architecture to offer sub-millisecond query latency. What should you do?

Options:

A.

Configure Firestore, and modify your application to offload queries.

B.

Configure Bigtable, and modify your application to offload queries.

C.

Configure Cloud SQL for PostgreSQL read replicas to offload queries.

D.

Configure Memorystore, and modify your application to offload queries.

Question 35

You are building an application that allows users to customize their website and mobile experiences. The application will capture user information and preferences. User profiles have a dynamic schema, and users can add or delete information from their profile. You need to ensure that user changes automatically trigger updates to your downstream BigQuery data warehouse. What should you do?

Options:

A.

Store your data in Bigtable, and use the user identifier as the key. Use one column family to store user profile data, and use another column family to store user preferences.

B.

Use Cloud SQL, and create different tables for user profile data and user preferences from your recommendations model. Use SQL to join the user profile data and preferences

C.

Use Firestore in Native mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.

D.

Use Firestore in Datastore mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.

Question 36

You are a DBA on a Cloud Spanner instance with multiple databases. You need to assign these privileges to all members of the application development team on a specific database:

Can read tables, views, and DDL

Can write rows to the tables

Can add columns and indexes

Cannot drop the database

What should you do?

Options:

A.

Assign the Cloud Spanner Database Reader and Cloud Spanner Backup Writer roles.

B.

Assign the Cloud Spanner Database Admin role.

C.

Assign the Cloud Spanner Database User role.

D.

Assign the Cloud Spanner Admin role.
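For reference, granting a role on a single database (rather than the instance or project) might look like the sketch below; the database, instance, and group email are hypothetical placeholders.

    gcloud spanner databases add-iam-policy-binding orders-db \
        --instance=prod-spanner \
        --member="group:app-dev-team@example.com" \
        --role="roles/spanner.databaseUser"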

Question 37

You want to migrate an on-premises mission-critical PostgreSQL database to Cloud SQL. The database must be able to withstand a zonal failure with less than five minutes of downtime and still not lose any transactions. You want to follow Google-recommended practices for the migration. What should you do?

Options:

A.

Take nightly snapshots of the primary database instance, and restore them in a secondary zone.

B.

Build a change data capture (CDC) pipeline to read transactions from the primary instance, and replicate them to a secondary instance.

C.

Create a read replica in another region, and promote the read replica if a failure occurs.

D.

Enable high availability (HA) for the database to make it regional.

Question 38

You are migrating a critical production database from Amazon RDS for MySQL to Cloud SQL for MySQL by using Google Cloud's Database Migration Service.

You want to keep disruption to your production database to a minimum and, at the same time, optimize migration performance. What should you do?

Options:

A.

Create and start multiple Database Migration Service jobs to migrate your database to the target Cloud SQL for MySQL instance.

B.

Upgrade the Amazon RDS for MySQL primary instance to an instance with more vCPUs and memory, and then run Google Cloud's Database Migration Service.

C.

Create a single Database Migration Service migration job with initial load parallelism configured to maximum on the source Amazon RDS for MySQL read replica.

D.

Create a single Database Migration Service migration job with initial load parallelism configured to maximum on the Amazon RDS for MySQL primary instance.

Question 39

You want to migrate an on-premises 100 TB Microsoft SQL Server database to Google Cloud over a 1 Gbps network link. You have 48 hours allowed downtime to migrate this database. What should you do? (Choose two.)

Options:

A.

Use a change data capture (CDC) migration strategy.

B.

Move the physical database servers from on-premises to Google Cloud.

C.

Keep the network bandwidth at 1 Gbps, and then perform an offline data migration.

D.

Increase the network bandwidth to 2 Gbps, and then perform an offline data migration.

E.

Increase the network bandwidth to 10 Gbps, and then perform an offline data migration.
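For reference, the 48-hour window is largely a bandwidth question. A back-of-the-envelope check, ignoring protocol overhead and any change data capture catch-up:

    # 100 TB ~= 800,000 gigabits, so hours ~= 800,000 / (link speed in Gbps) / 3600
    echo "1 Gbps : $((800000 / 1 / 3600)) hours"    # ~222 hours, far beyond the 48-hour window
    echo "10 Gbps: $((800000 / 10 / 3600)) hours"   # ~22 hours, fits comfortably within 48 hours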

Question 40

You are setting up a Bare Metal Solution environment. You need to update the operating system to the latest version. You need to connect the Bare Metal Solution environment to the internet so you can receive software updates. What should you do?

Options:

A.

Set up a static external IP address in your VPC network.

B.

Set up bring your own IP (BYOIP) in your VPC.

C.

Set up a Cloud NAT gateway on the Compute Engine VM.

D.

Set up Cloud NAT service.
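For reference, option D typically involves a Cloud Router plus a Cloud NAT gateway so the environment can reach the internet for OS updates without external IP addresses. A minimal sketch; router, gateway, network, and region names are hypothetical placeholders.

    gcloud compute routers create bms-router --network=bms-vpc --region=us-central1
    gcloud compute routers nats create bms-nat \
        --router=bms-router --region=us-central1 \
        --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges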

Question 41

Your company is migrating all legacy applications to Google Cloud. All on-premises applications are using legacy Oracle 12c databases with Oracle Real Application Cluster (RAC) for high availability (HA) and Oracle Data Guard for disaster recovery. You need a solution that requires minimal code changes, provides the same high availability you have today on-premises, and supports a low latency network for migrated legacy applications. What should you do?

Options:

A.

Migrate the databases to Cloud Spanner.

B.

Migrate the databases to Cloud SQL, and enable a standby database.

C.

Migrate the databases to Compute Engine using regional persistent disks.

D.

Migrate the databases to Bare Metal Solution for Oracle.

Question 42

You are starting a large CSV import into a Cloud SQL for MySQL instance that has many open connections. You checked memory and CPU usage, and sufficient resources are available. You want to follow Google-recommended practices to ensure that the import will not time out. What should you do?

Options:

A.

Close idle connections or restart the instance before beginning the import operation.

B.

Increase the amount of memory allocated to your instance.

C.

Ensure that the service account has the Storage Admin role.

D.

Increase the number of CPUs for the instance to ensure that it can handle the additional import operation.
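For reference, option A pairs clearing idle connections (or restarting the instance) with the managed CSV import. A minimal sketch; the instance, bucket, database, and table names are hypothetical placeholders.

    gcloud sql instances restart sales-mysql
    gcloud sql import csv sales-mysql gs://my-import-bucket/orders.csv \
        --database=sales --table=orders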

Exam Name: Google Cloud Certified - Professional Cloud Database Engineer
Last Update: Jul 13, 2025
Questions: 141
