GCP Certified Professional Database Engineer Sample Questions

GCP Google Cloud Database Engineer Badge & Logo Credly

All GCP questions come from my Google DB Engineer Udemy course and certificationexams.pro

GCP Professional Database Engineer Exam Topics and Tests

The Real GCP Certified Database Engineer Exam Questions validate your ability to architect, manage, and optimize data solutions in Google Cloud. The exam focuses on database design, migration strategies, security management, and operational excellence across relational and non-relational systems. To prepare effectively, begin with the GCP Professional Database Engineer Practice Questions. These questions reflect the tone and logic of the actual certification exam, helping you become familiar with Google Cloud’s approach to data services and best practices.

For deeper practice, explore the Professional Database Engineer Braindump set, which provides realistic, scenario-based challenges covering topics like instance configuration, indexing, IAM permissions, and storage class optimization. Each question is designed to enhance your understanding of data performance and resiliency.

Google Certification Exam Simulators

Every section of the GCP Certified Professional Database Engineer Questions and Answers collection teaches while testing your knowledge. These materials offer detailed explanations that help you learn the reasoning behind correct answers. Use the Google Certified Database Engineer Exam Simulator and full-length practice tests to build timing awareness and confidence under exam-like conditions.

Real GCP Exam Questions

If you prefer topic-focused study, use the Google Certified Database Engineer Exam Dump and related question sets to concentrate on key areas such as backup policies, database performance monitoring, and schema design. Working through these GCP Professional Database Engineer Sample Questions will help you master both the theory and practice of Google Cloud database operations.

By consistently engaging with these study materials, you will gain the technical depth and analytical mindset needed to manage data systems effectively in any enterprise-grade cloud environment. Start your preparation today and take one step closer to becoming a Google Certified Database Engineer.

Git, GitHub & GitHub Copilot Certification Made Easy

Want to get certified on the most popular AI, ML & DevOps technologies of the day? These five resources will help you get GitHub certified in a hurry.

Get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Sample Exam Questions

Question 1

You are the Cloud Database Engineer for a retail analytics startup that runs a production Cloud SQL for MySQL instance supporting a product search API. Over the last 72 hours the average query latency during busy periods rose from about 90 ms to roughly 260 ms while CPU and memory metrics remain stable. You need a single step that most directly improves query execution without changing application code. What should you do?

  • ❏ A. Add more read replicas to the Cloud SQL instance

  • ❏ B. Review the slow query log and create or refine indexes for the worst performing queries

  • ❏ C. Convert tables from InnoDB to MyISAM

  • ❏ D. Scale the Cloud SQL instance to a larger machine type with more vCPU and memory

Question 2

Which Google Cloud pairing offers a fully managed MongoDB service with live migration that supports continuous schema changes, data validation, rapid rollback, and cutover in under 20 minutes?

  • ❏ A. Database Migration Service and AlloyDB for PostgreSQL

  • ❏ B. Cloud Dataflow with MongoDB connector and Firestore in Native mode

  • ❏ C. MongoDB Atlas Live Migration and MongoDB Atlas on Google Cloud

Question 3

BrightTrail Outfitters is moving an order processing platform from its data center to Google Cloud. The application depends on a MySQL backend that must remain highly available because it supports mission critical transactions. The business requires a recovery time objective and a recovery point objective of no more than 12 minutes during a regional incident. You will use a Google managed database service. What should you implement to achieve the highest possible uptime for this workload?

  • ❏ A. Migrate the database to multi region Cloud Spanner and refactor the schema and application so that a regional outage does not interrupt service

  • ❏ B. Set up Cloud SQL for MySQL with regional high availability and rely on scheduled backups to restore a new instance in a different region when a disaster occurs

  • ❏ C. Provision Cloud SQL for MySQL with regional high availability and create a cross region read replica that you can promote to primary if the region fails

  • ❏ D. Deploy Cloud SQL for MySQL as a single zone instance and add a read replica in another zone and a second replica in a distant region for disaster recovery

Question 4

Which approach delivers real time alerts for Cloud Spanner transaction failures and also reflects overall application health?

  • ❏ A. Error Reporting notifications

  • ❏ B. Cloud Monitoring alerts on Spanner transaction failure metrics

  • ❏ C. Cloud Logging logs based metric with alerting

Question 5

Riverton Analytics runs Microsoft SQL Server on premises with two read-only replicas, and the nightly differential backups for 28 databases have become expensive and difficult to manage. They plan to move to Google Cloud following recommended practices and they must keep the cutover window under 20 minutes with minimal disruption. What should you do?

  • ❏ A. Create a Compute Engine VM, install SQL Server, and import the backup file

  • ❏ B. Create a Google Kubernetes Engine cluster, deploy SQL Server, and restore the backup file

  • ❏ C. Configure Cloud SQL for SQL Server as a subscriber to the on premises publisher and replicate continuously until you cut over

  • ❏ D. Take a native SQL Server backup to Cloud Storage and restore it into a new Cloud SQL for SQL Server instance

Question 6

Which Google Cloud database offers strongly consistent transactions, supports both relational and JSON data, and delivers globally distributed low latency across three continents?

  • ❏ A. AlloyDB for PostgreSQL

  • ❏ B. Cloud Spanner multi region

  • ❏ C. Bigtable

  • ❏ D. Cloud SQL for PostgreSQL

Question 7

You are planning connectivity for a global retail analytics service that runs in multiple Google Cloud regions and must reach a Cloud Spanner database with very low latency, strong security, and high reliability. Which connection approach should the application adopt to satisfy these requirements?

  • ❏ A. Use the Cloud Spanner client libraries with a regional instance and enable connection pooling

  • ❏ B. Use the Cloud Spanner client libraries against a multi-region instance and configure connection pooling

  • ❏ C. Access Cloud Spanner through the REST API from the app and add a cache layer to improve response time

  • ❏ D. Connect to a fixed public IP address on the Cloud Spanner instance and enable SSL

Question 8

How should you configure a Cloud SQL for PostgreSQL instance in us-central1 to use a customer-managed encryption key from a Cloud KMS key ring in us-central1 with key rotation every 30 days?

  • ❏ A. Enable CMEK on the Cloud SQL instance without attaching a key

  • ❏ B. Create a Cloud KMS key ring and key in us-central1 then enable CMEK on the Cloud SQL instance and select that key

  • ❏ C. Use Google managed encryption

  • ❏ D. Create the KMS key ring and key in us-east1 and select it for the instance

Question 9

You are building the database layer for a multinational ticketing marketplace on Google Cloud. Traffic is unpredictable and can spike by 8x during global presales and seasonal campaigns. The system must deliver strongly consistent transactions, sub second reads for users on three continents, and a 99.999% availability target while still keeping spend under control. Which Google Cloud database service and configuration should you choose to meet these goals?

  • ❏ A. Cloud Bigtable with multi-cluster replication across two regions

  • ❏ B. Cloud SQL for PostgreSQL with regional HA and cross-region read replicas using SSD

  • ❏ C. Cloud Spanner with multi-regional configuration and SSD storage

  • ❏ D. Cloud Firestore in Native mode with regional configuration

Question 10

In Cloud SQL for MySQL, which deployment achieves high availability and disaster recovery while maintaining low read and write latency in the primary region?

  • ❏ A. Single zonal Cloud SQL instance

  • ❏ B. Cloud SQL primary in one region with cross region read replica

  • ❏ C. AlloyDB for PostgreSQL

  • ❏ D. Cloud SQL primary in one zone with same region read replica in another zone

Question 11

HelioBank is deploying Cloud SQL for MySQL in europe-west1 to support payment processing for example.com. The compliance team requires that data at rest be encrypted with a key that they own and manage in a specific European location, and they must control the key lifecycle including rotation and disablement. They also need all client connections to be encrypted from end to end. What should they configure?

  • ❏ A. Create the Cloud SQL for MySQL instance with Google managed encryption keys and secure access using VPC Service Controls

  • ❏ B. Create the Cloud SQL for MySQL instance using customer managed encryption keys and place it behind Cloud VPN for all client connectivity

  • ❏ C. Create the Cloud SQL for MySQL instance using customer managed encryption keys from Cloud KMS in the required region and require SSL or TLS for client connections

  • ❏ D. Create the Cloud SQL for MySQL instance with Google managed encryption keys and require SSL or TLS

Question 12

During a migration from a monolith to microservices on Google Cloud, which database strategy minimizes downtime while preserving the option to adopt service specific data stores later?

  • ❏ A. Migrate to Cloud Spanner now and split schemas by service

  • ❏ B. Refactor into microservices while all services keep using the single database for now

  • ❏ C. Create one database per service and cut over all services in one release

  • ❏ D. Add a data access layer service that proxies all database calls

Question 13

At Nimbus Retail you manage a Cloud SQL for MySQL instance called reporting-sql-01. The team needs to export a full database to Cloud Storage and later import it back into the same instance using gcloud without doing any format conversion. Which command should you run to create the export so that it can be imported back into Cloud SQL?

  • ❏ A. gcloud sql export csv reporting-sql-01 gs://example-bucket-42/retail_backup.sql.gz --database=analyticsdb --gzip

  • ❏ B. gsutil cp gs://example-bucket-42/retail_backup.sql.gz | gcloud sql import sql reporting-sql-01 --database=analyticsdb --gzip

  • ❏ C. gcloud sql export sql reporting-sql-01 gs://example-bucket-42/retail_backup.sql.gz --database=analyticsdb --gzip

  • ❏ D. gcloud sql import csv reporting-sql-01 gs://example-bucket-42/export.csv --database=analyticsdb --gzip

Question 14

Which connection method should applications use to connect to a regional Cloud Memorystore for Redis instance to keep 99th percentile latency under 4 milliseconds while minimizing operational overhead?

  • ❏ A. Use a regional load balancer in front of Redis

  • ❏ B. Use the Redis private IP in the same VPC and region

  • ❏ C. Private Service Connect

Question 15

mcnz.com migrated its busy ticketing platform database to Cloud SQL for MySQL six months ago and monitoring now shows persistently low CPU and memory usage which indicates the instance is oversized. What is the most efficient action to reduce costs while maintaining the current performance levels?

  • ❏ A. Migrate the database to a self managed MySQL instance on Compute Engine to gain cost control

  • ❏ B. Turn off automated backups to reduce storage charges

  • ❏ C. Review and apply Cloud SQL rightsizing recommendations for the instance

  • ❏ D. Resize to a smaller machine type that matches metrics from the last 90 days

Question 16

When comparing Google Cloud managed NoSQL services, which cost component should you consistently emphasize for a precise TCO analysis?

  • ❏ A. Storage capacity costs

  • ❏ B. Query execution and index upkeep costs

  • ❏ C. CMEK for data at rest

Question 17

You are the database specialist at BlueOrbit Logistics and you must grant access for a group of analysts who need to run queries against Cloud Spanner production databases while they must not write data, change schemas, or administer instances. Which IAM role should you assign to meet this requirement?

  • ❏ A. roles/spanner.databaseAdmin

  • ❏ B. roles/spanner.databaseReader

  • ❏ C. roles/spanner.viewer

  • ❏ D. roles/spanner.databaseUser

Question 18

Your application stores session documents in Firestore and queries them by steps, sessionDuration, and energyBurned. After a 30 day test period, latency is acceptable but costs are high. Which Firestore change would reduce cost while preserving performance and availability?

  • ❏ A. Add Cloud Memorystore cache

  • ❏ B. Denormalize documents

  • ❏ C. Trim Firestore indexes to only required ones

  • ❏ D. Use batched writes for lower cost

Question 19

Larkspur Systems is building a browser based workspace where teams coauthor very large documents in real time. Each document can reach 25 MB and the system must handle about 60,000 edits per minute from users spread across three regions and it must keep a complete version timeline for every file with strong consistency. Which managed database on Google Cloud best satisfies these requirements?

  • ❏ A. Cloud Bigtable

  • ❏ B. Cloud Spanner

  • ❏ C. Cloud SQL with read replicas

  • ❏ D. Firestore in Datastore mode

Question 20

How should you export approximately 40 million Firestore documents to BigQuery while ensuring fault tolerance, automatic scaling, and the option to apply transformations?

  • ❏ A. BigQuery Data Transfer Service

  • ❏ B. Cloud Dataflow with Apache Beam

  • ❏ C. Firestore export to Cloud Storage and BigQuery load

  • ❏ D. Cloud Functions streaming inserts

Exam Questions Answered

Question 1

You are the Cloud Database Engineer for a retail analytics startup that runs a production Cloud SQL for MySQL instance supporting a product search API. Over the last 72 hours the average query latency during busy periods rose from about 90 ms to roughly 260 ms while CPU and memory metrics remain stable. You need a single step that most directly improves query execution without changing application code. What should you do?

  • ✓ B. Review the slow query log and create or refine indexes for the worst performing queries

The correct answer is Review the slow query log and create or refine indexes for the worst performing queries.

This approach targets the root cause of rising latency when CPU and memory are stable. The slow query log reveals statements that take the longest time so you can focus on the highest impact fixes. Adding or improving indexes gives the optimizer better access paths and can eliminate full scans and sorts which reduces latency without any application changes. You can validate improvements with EXPLAIN and by measuring new runtimes after index changes.
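
As a concrete sketch of that workflow, you might enable the slow query log through database flags and then add a selective index for a hot query. The instance, database, table, and column names below are illustrative rather than taken from the question.

# Enable the slow query log on the instance through MySQL database flags.
gcloud sql instances patch products-mysql-01 \
    --database-flags=slow_query_log=on,long_query_time=1

# After reviewing the slow query log, confirm the access path for the worst
# performing query and add a covering index. Table and column names are
# hypothetical.
mysql --host=10.0.0.5 --user=appuser -p productsdb <<'SQL'
EXPLAIN SELECT sku, name FROM catalog_items WHERE category = 'outdoor' ORDER BY name;
CREATE INDEX idx_catalog_category_name ON catalog_items (category, name);
SQL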

Add more read replicas to the Cloud SQL instance is not the best single step to speed up individual query execution. Replicas distribute read traffic and increase aggregate throughput but they do not change the query plan or access paths of a given query. A slow query remains slow on the replica and replicas can introduce lag.

Convert tables from InnoDB to MyISAM is inappropriate for a production workload. MyISAM lacks transactions, row level locking, and crash recovery which harms reliability and concurrency. It does not inherently improve query execution for typical OLTP patterns and it is not recommended for Cloud SQL environments that rely on high availability and backup features.

Scale the Cloud SQL instance to a larger machine type with more vCPU and memory is unlikely to help since CPU and memory are already stable. Scaling up can add capacity but it does not fix inefficient query plans or missing indexes, so it often fails to reduce latency for poorly indexed queries and increases cost without addressing the cause.

When latency rises but CPU and memory are steady, think query plan and indexes rather than more hardware. Start with the slow query log, use EXPLAIN to confirm access paths, and design selective composite indexes for the filters and joins that matter most.

Question 2

Which Google Cloud pairing offers a fully managed MongoDB service with live migration that supports continuous schema changes, data validation, rapid rollback, and cutover in under 20 minutes?

  • ✓ C. MongoDB Atlas Live Migration and MongoDB Atlas on Google Cloud

The correct option is MongoDB Atlas Live Migration and MongoDB Atlas on Google Cloud.

MongoDB Atlas provides a fully managed MongoDB service on Google Cloud with automated operations, scaling, and high availability. Atlas Live Migration performs an initial sync then continuously replicates changes from the source, which allows you to keep applications online while you validate data and adjust schemas. It supports data validation and enables a quick cutover with the ability to roll back by directing traffic back to the source if needed.

Database Migration Service and AlloyDB for PostgreSQL is incorrect because Database Migration Service does not support MongoDB as a source and AlloyDB is a PostgreSQL compatible database rather than MongoDB, so this pairing cannot provide a managed MongoDB target or the Atlas live migration features described.

Cloud Dataflow with MongoDB connector and Firestore in Native mode is incorrect because Firestore is not MongoDB and a Dataflow pipeline is not a managed live migration service that offers continuous replication, built in validation, quick rollback, and predictable cutover windows.

When a question asks for a fully managed MongoDB service on Google Cloud with live migration and fast rollback, look for MongoDB Atlas and its Live Migration feature. Eliminate Google managed databases that are not MongoDB and tools that require building custom pipelines.

Question 3

BrightTrail Outfitters is moving an order processing platform from its data center to Google Cloud. The application depends on a MySQL backend that must remain highly available because it supports mission critical transactions. The business requires a recovery time objective and a recovery point objective of no more than 12 minutes during a regional incident. You will use a Google managed database service. What should you implement to achieve the highest possible uptime for this workload?

  • ✓ C. Provision Cloud SQL for MySQL with regional high availability and create a cross region read replica that you can promote to primary if the region fails

The correct option is Provision Cloud SQL for MySQL with regional high availability and create a cross region read replica that you can promote to primary if the region fails.

This approach gives you automatic failover within the region to cover zonal outages and it provides a ready to use replica in another region for disaster recovery. You can promote the replica if the entire region becomes unavailable. Replication lag is usually small which helps keep the recovery point objective within the required window and promotion can be performed quickly which supports the recovery time objective. This design therefore maximizes uptime by addressing both local and regional failure scenarios while staying on a managed MySQL service.
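
A minimal gcloud sketch of that topology, with illustrative instance names, regions, and machine tier, might look like the following.

# Primary with regional high availability and automatic failover across zones.
gcloud sql instances create orders-mysql \
    --database-version=MYSQL_8_0 --region=us-central1 \
    --availability-type=REGIONAL --tier=db-custom-4-16384

# Cross region read replica that stays in sync for disaster recovery.
gcloud sql instances create orders-mysql-dr \
    --master-instance-name=orders-mysql --region=us-east1

# If the primary region fails, promote the replica to a standalone primary.
gcloud sql instances promote-replica orders-mysql-dr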

Migrate the database to multi region Cloud Spanner and refactor the schema and application so that a regional outage does not interrupt service is incorrect because it replaces MySQL with a different database engine and demands significant refactoring of schema and code. While it offers strong availability, it does not align with a lift and shift of a MySQL workload and would add time and risk rather than meeting the stated goal.

Set up Cloud SQL for MySQL with regional high availability and rely on scheduled backups to restore a new instance in a different region when a disaster occurs is incorrect because restores from scheduled backups typically exceed a twelve minute recovery time objective and the recovery point objective would be as old as the last backup. This would not meet the business requirement for minimal downtime and data loss during a regional incident.

Deploy Cloud SQL for MySQL as a single zone instance and add a read replica in another zone and a second replica in a distant region for disaster recovery is incorrect because a single zone primary lacks automatic failover and does not deliver the highest availability. Read replicas do not fail over automatically and promoting them during an incident is manual and slower, which makes it difficult to meet tight recovery objectives.

When a question specifies MySQL and tight RTO and RPO, prefer Cloud SQL high availability within a region plus a cross region read replica for disaster recovery, and be cautious of answers that rely only on backups.

Question 4

Which approach delivers real time alerts for Cloud Spanner transaction failures and also reflects overall application health?

  • ✓ B. Cloud Monitoring alerts on Spanner transaction failure metrics

The correct option is Cloud Monitoring alerts on Spanner transaction failure metrics.

This approach uses the native metrics that Cloud Spanner exports to Monitoring which include error and abort signals as well as latency and throughput. You can create alerting policies that evaluate these metrics at short intervals so you get near real time notifications when transaction failures exceed a threshold. You can also build dashboards and service level objectives in the same place which provides a consolidated view of overall application health.

Error Reporting notifications focuses on aggregating application exceptions and stack traces from services and code. It does not directly track database transaction failure metrics and it does not provide a metrics driven health view of Spanner so it is not the best way to alert on Spanner transaction failures.

Cloud Logging logs based metric with alerting can be made to work but it relies on log entries rather than the authoritative Spanner metrics. This adds noise and potential delay and it does not naturally provide a complete health dashboard for the application. The native Spanner metrics in Monitoring are the recommended and more reliable source for timely alerts and health insights.

When a question asks for real time alerting and overall health think of Monitoring metrics and alerting policies first and prefer built in service metrics over logs or error aggregations.

Question 5

Riverton Analytics runs Microsoft SQL Server on premises with two read-only replicas, and the nightly differential backups for 28 databases have become expensive and difficult to manage. They plan to move to Google Cloud following recommended practices and they must keep the cutover window under 20 minutes with minimal disruption. What should you do?

  • ✓ C. Configure Cloud SQL for SQL Server as a subscriber to the on premises publisher and replicate continuously until you cut over

The correct option is Configure Cloud SQL for SQL Server as a subscriber to the on premises publisher and replicate continuously until you cut over. This uses transactional replication to keep the managed Cloud SQL instance nearly in sync before the switch so you can meet the under 20 minute cutover and minimize disruption.

With this approach you seed the data and then stream changes continuously so the target stays current. When you are ready to migrate you stop writes on the source if needed, allow the subscriber to catch up, switch the application connection string, and complete the cutover with only a brief outage. This aligns with recommended practices because it leverages a managed service, reduces operational overhead, and avoids the complexity and cost of frequent differential backups across many databases.

Create a Compute Engine VM, install SQL Server, and import the backup file is a one time restore that does not keep the target synchronized with ongoing changes. You would either need extended downtime or complex change capture to reconcile deltas, and you would be fully responsible for managing SQL Server on the VM which does not meet the goal of minimal disruption.

Create a Google Kubernetes Engine cluster, deploy SQL Server, and restore the backup file places a stateful database in containers and still performs only a static restore. This is not the recommended path for migrating SQL Server to Google Cloud for low downtime and it would add significant operational complexity without providing continuous replication to meet the tight cutover window.

Take a native SQL Server backup to Cloud Storage and restore it into a new Cloud SQL for SQL Server instance results in a point in time copy with no ongoing change replication. It cannot maintain synchronization with production writes which makes it difficult to achieve a sub 20 minute cutover for an active workload with many databases.

When the requirement highlights minimal downtime and a short cutover window, look for managed migration patterns that keep data in sync ahead of time. In Google Cloud this often points to replication into Cloud SQL rather than one time backup and restore.

Question 6

Which Google Cloud database offers strongly consistent transactions, supports both relational and JSON data, and delivers globally distributed low latency across three continents?

  • ✓ B. Cloud Spanner multi region

The correct option is Cloud Spanner multi region because it delivers strongly consistent transactions, supports both relational schema and a JSON data type, and provides globally distributed low latency across continents.

This database offers true strong consistency with ACID transactions and synchronous replication, so reads and writes remain consistent even when distributed. It supports standard SQL with a relational model and includes native JSON support, which satisfies the requirement for both relational and JSON. Its multi region configuration replicates data across geographically separate regions to provide low latency access around the world while maintaining consistency.

AlloyDB for PostgreSQL is optimized for PostgreSQL compatibility and high performance and it supports transactions and JSONB, yet it is a regional service and does not provide a built in globally distributed configuration across three continents with strong consistency for writes.

Bigtable is a wide column NoSQL store that is excellent for very large scale workloads, but it is not a relational database and it does not offer multi row transactional semantics in the way the question requires.

Cloud SQL for PostgreSQL is a managed PostgreSQL service that supports transactions and JSONB, yet it is deployed in a single region and does not provide a global multi region deployment with strong consistency across continents.

When you see the combination of strongly consistent transactions, global distribution, and relational plus JSON support, map it to Spanner. If the option is clearly single region, it is unlikely to meet a three continent requirement.

Question 7

You are planning connectivity for a global retail analytics service that runs in multiple Google Cloud regions and must reach a Cloud Spanner database with very low latency, strong security, and high reliability. Which connection approach should the application adopt to satisfy these requirements?

  • ✓ B. Use the Cloud Spanner client libraries against a multi-region instance and configure connection pooling

The correct answer is Use the Cloud Spanner client libraries against a multi-region instance and configure connection pooling.

A multi-region Cloud Spanner instance provides globally distributed replicas with strong consistency and very high availability, which aligns with the needs of a global application. The client libraries use efficient gRPC connections and integrate with IAM for strong authentication and authorization. Connection pooling minimizes session and channel creation overhead, which improves throughput and tail latency while increasing reliability through automatic retries and health checks.

Use the Cloud Spanner client libraries with a regional instance and enable connection pooling is not sufficient for a global service because a regional instance concentrates data in one location, which increases latency for distant users and offers lower availability compared to a multi-region configuration.

Access Cloud Spanner through the REST API from the app and add a cache layer to improve response time introduces additional overhead and typically results in higher latency than the gRPC client libraries. A cache cannot guarantee the strong consistency properties that Cloud Spanner provides and it adds complexity that does not meet the requirement for low latency and strong consistency.

Connect to a fixed public IP address on the Cloud Spanner instance and enable SSL is not viable because Cloud Spanner does not expose a fixed public IP per instance. Access is through managed service endpoints and the recommended approach is to use the client libraries, which already use TLS and IAM to secure connections.

When you see global reach with very low latency and high reliability for Cloud Spanner, prefer multi-region instances and the official client libraries with connection pooling. Be wary of answers that rely on REST or fixed IPs because Spanner is accessed through managed endpoints with gRPC and IAM.

Question 8

How should you configure a Cloud SQL for PostgreSQL instance in us-central1 to use a customer-managed encryption key from a Cloud KMS key ring in us-central1 with key rotation every 30 days?

  • ✓ B. Create a Cloud KMS key ring and key in us-central1 then enable CMEK on the Cloud SQL instance and select that key

The correct option is Create a Cloud KMS key ring and key in us-central1 then enable CMEK on the Cloud SQL instance and select that key.

Cloud SQL for PostgreSQL supports customer managed encryption keys and the key must be created in the same region as the instance. Creating the key ring and key in us-central1 and enabling CMEK on the instance ensures the service can access the key and satisfies the location requirement. You configure a 30 day rotation schedule on the Cloud KMS key and Cloud SQL will automatically use the current primary key version so ongoing rotations do not require application changes.
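
A hedged sketch of that setup follows, with an illustrative project, key ring, key, and instance name. The rotation schedule is set on the key in Cloud KMS, and the next rotation timestamp shown is only an example.

# Key ring and key in the same region as the instance, rotating every 30 days.
gcloud kms keyrings create sql-keyring --location=us-central1
gcloud kms keys create sql-cmek-key \
    --keyring=sql-keyring --location=us-central1 --purpose=encryption \
    --rotation-period=30d \
    --next-rotation-time=2025-07-01T00:00:00Z

# Create the instance with that key as its customer managed encryption key.
gcloud sql instances create pg-cmek-01 \
    --database-version=POSTGRES_15 --region=us-central1 --tier=db-custom-2-8192 \
    --disk-encryption-key=projects/example-project/locations/us-central1/keyRings/sql-keyring/cryptoKeys/sql-cmek-key

Note that the project's Cloud SQL service agent must also be granted the cryptoKeyEncrypterDecrypter role on the key before an instance can be created with it.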

Enable CMEK on the Cloud SQL instance without attaching a key is incorrect because Cloud SQL requires you to specify a Cloud KMS key when enabling CMEK and it does not create or choose a key for you.

Use Google managed encryption is incorrect because the requirement is to use a customer managed key and Google managed encryption uses provider managed keys and does not let you choose the key or set a rotation schedule.

Create the KMS key ring and key in us-east1 and select it for the instance is incorrect because the key location must match the Cloud SQL instance region and cross region CMEK is not supported for Cloud SQL.

When a question mentions CMEK and a specific region always align the key location with the resource location and remember that key rotation is configured in Cloud KMS rather than in the service.

Question 9

You are building the database layer for a multinational ticketing marketplace on Google Cloud. Traffic is unpredictable and can spike by 8x during global presales and seasonal campaigns. The system must deliver strongly consistent transactions, sub second reads for users on three continents, and a 99.999% availability target while still keeping spend under control. Which Google Cloud database service and configuration should you choose to meet these goals?

  • ✓ C. Cloud Spanner with multi-regional configuration and SSD storage

The correct choice is Cloud Spanner with multi-regional configuration and SSD storage.

Cloud Spanner meets the requirement for strongly consistent transactions through externally consistent reads and writes, which allows you to process OLTP workloads without violating correctness during spikes. The multi-regional configuration provides a five nines availability target through synchronous replication and automatic failover, which aligns with the 99.999 percent requirement. SSD storage underpins low latency performance.

For users on three continents, a global multi-region configuration that spans North America, Europe, and Asia enables sub second reads from nearby replicas using bounded staleness, while transactions remain strongly consistent by committing through the leader. Cloud Spanner also scales horizontally and can use processing units and compute autoscaling so you can expand capacity during 8x surges and reduce it when demand subsides to help keep spend under control.
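
As a rough sketch, provisioning such an instance could look like the following, where the instance name and capacity are assumptions and nam-eur-asia1 is one of the three continent multi-region configurations.

# Multi-region instance spanning North America, Europe, and Asia.
gcloud spanner instances create tickets-global \
    --config=nam-eur-asia1 \
    --description="Global ticketing marketplace" \
    --processing-units=3000

# Capacity can be adjusted up or down as presale traffic changes.
gcloud spanner instances update tickets-global --processing-units=9000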

Cloud Bigtable with multi-cluster replication across two regions cannot satisfy strongly consistent cross-region transactions and multi-cluster replication is eventually consistent across clusters. It is optimized for wide column analytics and key value workloads rather than globally consistent transactional workloads, and two regions will not deliver sub second reads for users on three continents or a five nines relational availability target.

Cloud SQL for PostgreSQL with regional HA and cross-region read replicas using SSD uses asynchronous cross-region replicas, so reads outside the primary region are not strongly consistent. It is a single primary architecture with vertical scaling limits, which makes handling 8x traffic spikes difficult, and it does not offer a five nines multi-regional SLA.

Cloud Firestore in Native mode with regional configuration is confined to one region in this option, so it cannot provide low latency reads for users on three continents or a five nines multi-regional availability target. While Firestore supports ACID transactions at the document level and can scale, the regional setup in this choice does not meet the global availability and latency goals.

Map hard requirements to product guarantees. If you see global availability, five nines, and strongly consistent transactions together, think of globally distributed relational options first. Then confirm latency and cost controls such as read locality features and autoscaling.

Question 10

In Cloud SQL for MySQL, which deployment achieves high availability and disaster recovery while maintaining low read and write latency in the primary region?

  • ✓ D. Cloud SQL primary in one zone with same region read replica in another zone

The correct option is Cloud SQL primary in one zone with same region read replica in another zone.

This design keeps all traffic within the primary region so application read and write latency remains low. Placing the replica in a different zone provides resilience to zonal failures which delivers high availability. You can promote the replica quickly if the primary becomes unavailable which supports disaster recovery objectives without routing data across regions. Replication is asynchronous which limits write impact on the primary and it also adds local read capacity.

Single zonal Cloud SQL instance offers no redundancy and has no failover target, so it cannot meet high availability or disaster recovery needs.

Cloud SQL primary in one region with cross region read replica is focused on regional disaster recovery rather than availability in the primary region. It does not provide an in region failover target and it can increase replication lag and operational complexity for promotion across regions, so it is not the best fit when you want to keep latency low in the primary region.

AlloyDB for PostgreSQL is a different managed database for PostgreSQL and does not address a requirement for Cloud SQL for MySQL.

When a question asks for both high availability and low latency in the primary region, prefer a same region multi zone design. Use cross region replicas when the scenario emphasizes regional disaster recovery. Remember that Cloud SQL read replicas are asynchronous and require promotion during failover.

Question 11

HelioBank is deploying Cloud SQL for MySQL in europe-west1 to support payment processing for example.com. The compliance team requires that data at rest be encrypted with a key that they own and manage in a specific European location, and they must control the key lifecycle including rotation and disablement. They also need all client connections to be encrypted from end to end. What should they configure?

  • ✓ C. Create the Cloud SQL for MySQL instance using customer managed encryption keys from Cloud KMS in the required region and require SSL or TLS for client connections

The correct option is Create the Cloud SQL for MySQL instance using customer managed encryption keys from Cloud KMS in the required region and require SSL or TLS for client connections.

This choice lets HelioBank own and manage the data encryption key through Cloud KMS in a European location that matches their compliance needs. With CMEK they control key rotation, disabling, and destruction which fulfills the lifecycle requirement. Requiring SSL or TLS on the instance ensures all client connections are encrypted in transit from the client to Cloud SQL and provides end to end protection.
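
A brief sketch of the two controls, with an illustrative project number plus key, key ring, and instance names:

# Let the project's Cloud SQL service agent use the customer managed key.
gcloud kms keys add-iam-policy-binding payments-key \
    --keyring=payments-keyring --location=europe-west1 \
    --member=serviceAccount:service-123456789012@gcp-sa-cloud-sql.iam.gserviceaccount.com \
    --role=roles/cloudkms.cryptoKeyEncrypterDecrypter

# Require SSL or TLS for every client connection to the instance.
gcloud sql instances patch payments-mysql --require-ssl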

Create the Cloud SQL for MySQL instance with Google managed encryption keys and secure access using VPC Service Controls is not correct because Google managed keys do not give the customer ownership or lifecycle control of the key. VPC Service Controls helps reduce data exfiltration risk across service perimeters but it does not provide customer managed encryption at rest or enforce client to database encryption.

Create the Cloud SQL for MySQL instance using customer managed encryption keys and place it behind Cloud VPN for all client connectivity is not correct because a VPN encrypts links between networks rather than the end to end database session. You would still need to enforce SSL or TLS to meet the requirement for encrypted client connections and requiring VPN for all clients is unnecessary and may not even apply for public or cross region access.

Create the Cloud SQL for MySQL instance with Google managed encryption keys and require SSL or TLS is not correct because requiring SSL or TLS would protect traffic in transit but Google managed keys do not satisfy the requirement to own and manage the data encryption key.

Map requirements to controls. Data at rest with keys you own means CMEK in Cloud KMS with a matching key location. End to end client encryption means SSL or TLS or the Cloud SQL Auth Proxy. Perimeter tools like VPC Service Controls or tunnels like Cloud VPN do not replace these.

Question 12

During a migration from a monolith to microservices on Google Cloud, which database strategy minimizes downtime while preserving the option to adopt service specific data stores later?

  • ✓ B. Refactor into microservices while all services keep using the single database for now

The correct answer is Refactor into microservices while all services keep using the single database for now because it minimizes downtime and risk while you decompose the monolith. Keeping a single source of truth lets you split application logic first and defer data separation until each service boundary is stable.

This approach allows teams to migrate incrementally. You can preserve existing transactions and consistency guarantees while extracting services. As each service stabilizes you can introduce service owned tables or new data stores and migrate data with techniques such as change data capture or controlled backfills. This sequence reduces operational risk since you avoid changing application architecture and database architecture at the same time.

The option Migrate to Cloud Spanner now and split schemas by service is incorrect because it couples a major database technology migration with a service decomposition effort. Doing both at once increases risk and potential downtime, and you can postpone a move to a globally distributed database until you have clear service boundaries and validated data access patterns.

The option Create one database per service and cut over all services in one release is incorrect because it is a big bang cutover that is hard to coordinate and test. It magnifies failure blast radius and makes rollback difficult, which contradicts the goal of minimizing downtime.

The option Add a data access layer service that proxies all database calls is incorrect because it creates a central bottleneck and a single point of failure. It also preserves tight coupling at the data layer and works against the goal of true service ownership of data while adding latency and operational complexity.

When a question asks how to minimize downtime during migration, favor incremental steps that change one major dimension at a time. Keep a single source of truth first, validate service boundaries, then move to service owned data stores with controlled data migration techniques.

Question 13

At Nimbus Retail you manage a Cloud SQL for MySQL instance called reporting-sql-01. The team needs to export a full database to Cloud Storage and later import it back into the same instance using gcloud without doing any format conversion. Which command should you run to create the export so that it can be imported back into Cloud SQL?

  • ✓ C. gcloud sql export sql reporting-sql-01 gs://example-bucket-42/retail_backup.sql.gz --database=analyticsdb --gzip

The correct option is gcloud sql export sql reporting-sql-01 gs://example-bucket-42/retail_backup.sql.gz --database=analyticsdb --gzip.

This command creates a SQL dump that includes schema and data which is the format that Cloud SQL for MySQL expects for a straightforward import using the SQL import command. It writes directly to Cloud Storage, targets the specified database, and compresses the dump which can later be imported back without any conversion.
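
For completeness, the matching import back into the same instance would look roughly like this, reusing the bucket and database names from the question.

gcloud sql import sql reporting-sql-01 \
    gs://example-bucket-42/retail_backup.sql.gz --database=analyticsdb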

gcloud sql export csv reporting-sql-01 gs://example-bucket-42/retail_backup.sql.gz --database=analyticsdb --gzip is incorrect because a CSV export contains table data without schema and does not produce a SQL dump. It also mismatches the file extension which suggests SQL rather than CSV, and CSV would not allow a full database restore without additional steps.

gsutil cp gs://example-bucket-42/retail_backup.sql.gz | gcloud sql import sql reporting-sql-01 --database=analyticsdb --gzip is incorrect because it does not perform any export and the import command expects a Cloud Storage URI argument rather than piped input. The gsutil command is also incomplete since it lacks a destination.

gcloud sql import csv reporting-sql-01 gs://example-bucket-42/export.csv --database=analyticsdb --gzip is incorrect because it is an import command and it uses CSV which cannot reconstruct a full database with schema. It does not address the requirement to create an export that can be imported back without conversion.

When a question asks for an export that can be imported back without format conversion for MySQL, look for the SQL dump export. CSV is for table data only and lacks schema, while gcloud sql export sql paired with Cloud Storage is designed for round‑trip export and import.

Question 14

Which connection method should applications use to connect to a regional Cloud Memorystore for Redis instance to keep 99th percentile latency under 4 milliseconds while minimizing operational overhead?

  • ✓ B. Use the Redis private IP in the same VPC and region

The correct option is Use the Redis private IP in the same VPC and region.

Connecting directly over the instance private address within the same network and region keeps the path short and predictable which helps keep the 99th percentile latency under four milliseconds. This direct approach has no extra proxies and no cross region hops and it also minimizes operational overhead because there is nothing to deploy or manage between your clients and the managed Redis endpoint.
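
A small sketch of that direct connection, assuming an instance named sessions-cache in us-central1 and a client running in the same VPC. The IP address shown is a placeholder for the value returned by the describe command.

# Look up the instance's private endpoint.
gcloud redis instances describe sessions-cache --region=us-central1 \
    --format="value(host,port)"

# Connect directly to the returned private IP from within the same VPC.
redis-cli -h 10.20.0.4 -p 6379 PING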

Use a regional load balancer in front of Redis is incorrect because Memorystore for Redis is not a load balancer backend and inserting a load balancer would add an extra hop that increases tail latency and requires additional configuration and maintenance.

Private Service Connect is incorrect for this requirement because PSC is designed for publishing and consuming services across networks which adds an indirection layer and more configuration. For a single VPC in the same region the direct private IP path is simpler and faster.

Look for keywords like same VPC and same region when low latency and minimal operations are required. A direct private IP connection usually beats designs that insert proxies or service connectors.

Question 15

mcnz.com migrated its busy ticketing platform database to Cloud SQL for MySQL six months ago and monitoring now shows persistently low CPU and memory usage which indicates the instance is oversized. What is the most efficient action to reduce costs while maintaining the current performance levels?

  • ✓ C. Review and apply Cloud SQL rightsizing recommendations for the instance

The correct option is Review and apply Cloud SQL rightsizing recommendations for the instance.

The Cloud SQL rightsizing recommendations feature evaluates actual CPU and memory utilization over time and suggests an appropriately sized machine type that maintains headroom and performance. Because it is purpose built for Cloud SQL and driven by observed metrics, applying the recommendation is the most efficient way to cut instance costs while preserving the current performance profile, and it can be carried out with a brief restart during a maintenance window.
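
One hedged way to surface those recommendations from the CLI is through the Recommender API. The project, location, and the Cloud SQL overprovisioned instance recommender ID shown below are assumptions to adapt to your environment.

gcloud recommender recommendations list \
    --project=example-project \
    --location=us-central1 \
    --recommender=google.cloudsql.instance.OverprovisionedRecommender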

Migrate the database to a self managed MySQL instance on Compute Engine to gain cost control is not efficient because it replaces a managed service with operational overhead and maintenance responsibilities, and it does not guarantee lower total cost. It also introduces migration risk without addressing the simple sizing issue.

Turn off automated backups to reduce storage charges is inappropriate because backups are essential for recovery and reliability, and the savings are minor compared to instance sizing. Disabling backups increases risk and does not directly solve the oversizing problem.

Resize to a smaller machine type that matches metrics from the last 90 days is less effective than using the built in recommendations because it relies on manual interpretation and an arbitrary lookback window. The managed recommendations already analyze utilization and provide a safe and supported target size, which is the better way to maintain performance while reducing cost.

When a managed database shows sustained underutilization, look for built in rightsizing recommendations first and prefer actions that lower cost while preserving reliability and recovery options.

Question 16

When comparing Google Cloud managed NoSQL services, which cost component should you consistently emphasize for a precise TCO analysis?

  • ✓ B. Query execution and index upkeep costs

The correct option is Query execution and index upkeep costs.

Workload driven charges such as query execution and index upkeep are the most variable across services and are tightly coupled to how your application actually uses the database. In Firestore every document write can update multiple indexes which increases index upkeep costs and every read or query incurs operation charges that fall under query execution. In Bigtable you provision for throughput which still maps to demand from query execution. Centering comparisons on query execution and index upkeep costs produces a more accurate and comparable total cost of ownership across architectures.

Storage capacity costs are necessary to include yet they are often similar on a per gigabyte basis and they usually do not capture the effect of access patterns or secondary indexes. Emphasizing query execution and index upkeep costs better reflects real workload behavior.

CMEK for data at rest is primarily a security and compliance choice and any incremental key operation charges are typically minor relative to query execution and index upkeep costs. It should not be the primary basis for TCO comparison.

When modeling costs, start with workload metrics such as reads, writes, and index updates and tie them to pricing. Keep your assumptions for query patterns and indexing consistent across services to compare fairly.

Question 17

You are the database specialist at BlueOrbit Logistics and you must grant access for a group of analysts who need to run queries against Cloud Spanner production databases while they must not write data, change schemas, or administer instances. Which IAM role should you assign to meet this requirement?

  • ✓ B. roles/spanner.databaseReader

The correct option is roles/spanner.databaseReader.

This role grants read-only access to Cloud Spanner databases so analysts can execute queries and read data. It does not permit data modification, schema changes, or instance and database administration, which matches the requirement to run queries without write or administrative capabilities.
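
A minimal sketch of the grant, with illustrative instance, database, and group names:

gcloud spanner databases add-iam-policy-binding orders-db \
    --instance=prod-spanner \
    --member="group:analysts@example.com" \
    --role="roles/spanner.databaseReader"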

The roles/spanner.databaseAdmin role is too permissive because it allows changing schemas and managing databases and backups, and it enables write operations that are not allowed for the analysts.

The roles/spanner.viewer role only provides visibility into resource metadata and configuration. It does not allow running queries against table data, so it fails to meet the need to read production data.

The roles/spanner.databaseUser role allows both reading and writing data. Since the analysts must not write data, this role grants more permissions than required.

Map the verbs in the requirement to the permissions and choose the least privilege role. If you see read only and query needs, prefer a reader role. If you see schema or administration tasks, only then consider an admin role.

Question 18

Your application stores session documents in Firestore and queries them by steps, sessionDuration, and energyBurned. After a 30 day test period, latency is acceptable but costs are high. Which Firestore change would reduce cost while preserving performance and availability?

  • ✓ C. Trim Firestore indexes to only required ones

The correct answer is Trim Firestore indexes to only required ones.

Firestore automatically creates single field indexes for most fields which means each document write updates multiple index entries and drives up write and storage costs. By trimming indexes to only the ones required by your queries on steps, sessionDuration and energyBurned you reduce index write amplification and index storage while preserving the performance of the queries you care about. This is a configuration change that does not reduce availability because Firestore remains fully managed and you keep the essential indexes for your filters and sort orders.

You can delete unused composite indexes and add single field index exemptions for attributes that are not part of your supported query patterns. Monitor index usage and keep only the minimal set that powers production queries so you sustain acceptable latency while lowering cost.
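
As a sketch, trimming might combine deleting an unused composite index with a single field exemption for a field that no query filters or sorts on. The index ID, collection group, and field name here are hypothetical.

# Delete a composite index that no production query uses.
gcloud firestore indexes composite delete abc123def456

# Exempt an unqueried field from automatic single field indexing.
gcloud firestore indexes fields update averageHeartRate \
    --collection-group=sessions --disable-indexes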

Add Cloud Memorystore cache is not a Firestore change and introduces another paid service. Caching can reduce read latency and offload reads, yet your latency is already acceptable and it does not address Firestore index driven write and storage costs.

Denormalize documents changes the data model and often increases document size and duplicate data which raises storage and write costs. It does not address the underlying cost from unnecessary index updates.

Use batched writes for lower cost does not reduce price because Firestore charges per document write and per index update even inside a batch. Batching primarily improves throughput and client latency rather than billing.

When cost is high but latency is acceptable, review index usage first. Keep only the composite indexes and single field indexes that match your real queries and add exemptions for fields you never filter or sort on.

Question 19

Larkspur Systems is building a browser based workspace where teams coauthor very large documents in real time. Each document can reach 25 MB and the system must handle about 60,000 edits per minute from users spread across three regions and it must keep a complete version timeline for every file with strong consistency. Which managed database on Google Cloud best satisfies these requirements?

  • ✓ B. Cloud Spanner

The correct option is Cloud Spanner because it delivers globally distributed and strongly consistent transactions with high write throughput and can maintain an authoritative version history across regions.

This database scales horizontally and provides synchronous replication across multiple regions which matches the need to sustain about 60,000 edits per minute from users in three regions while preserving strong consistency. Its relational model and transactional SQL make it straightforward to design a versioned schema where each edit or snapshot is stored as a new row with committed ordering so you can reconstruct an exact timeline for every document.
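
One possible shape for the version timeline, shown as a hedged DDL sketch applied through gcloud. The database, instance, table, and column names are all illustrative, and very large document payloads might need to be chunked across rows or referenced rather than stored in a single column.

gcloud spanner databases ddl update docs-db --instance=workspace-spanner \
    --ddl='CREATE TABLE DocumentVersions (
             DocumentId STRING(36) NOT NULL,
             VersionId  INT64 NOT NULL,
             EditedBy   STRING(128),
             Content    BYTES(MAX),
             CommitTs   TIMESTAMP NOT NULL OPTIONS (allow_commit_timestamp = true)
           ) PRIMARY KEY (DocumentId, VersionId)'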

Cloud Bigtable is optimized for massive throughput and low latency for wide column workloads, yet multi cluster replication is eventually consistent across regions which does not satisfy the strong consistency requirement for a global collaboration system. It also lacks multi row transactional semantics which complicates maintaining an authoritative version timeline.

Cloud SQL with read replicas relies on a single primary for writes and uses asynchronous replication to replicas, so cross region reads can be stale and replicas do not increase write capacity. This does not meet strong consistency across regions and it is not ideal for sustained global write rates at this scale.

Firestore in Datastore mode offers strong consistency for queries but enforces an entity size limit of roughly 1 MiB, which cannot accommodate 25 MB documents. While you could store references, the database by itself cannot store documents of that size and is therefore not a fit for the stated requirement.

When a question emphasizes global strong consistency, multi region writes, and high sustained write rates, map it to Cloud Spanner. Quickly eliminate options by checking hard size limits and whether replication is synchronous or asynchronous.

Question 20

How should you export approximately 40 million Firestore documents to BigQuery while ensuring fault tolerance, automatic scaling, and the option to apply transformations?

  • ✓ B. Cloud Dataflow with Apache Beam

The correct option is Cloud Dataflow with Apache Beam.

Dataflow provides a managed runner for Beam pipelines that automatically scales workers to handle very large backfills and it retries failed work units for strong fault tolerance. With Beam you can implement optional transforms to clean or reshape the Firestore documents and then write efficiently to BigQuery using native sinks. This approach is designed for high throughput batch processing and offers robust monitoring and checkpointing so it aligns directly with the scale and reliability needs of exporting tens of millions of records.

BigQuery Data Transfer Service is not suitable because it does not natively pull from Firestore and it focuses on scheduled transfers from supported sources or files in Cloud Storage without offering arbitrary per record transformations or pipeline level fault tolerance and autoscaling for custom ETL.

Firestore export to Cloud Storage and BigQuery load produces backup files that are not directly in a BigQuery friendly tabular format and would require additional processing to transform and flatten nested structures. This path also lacks an orchestrated and fault tolerant compute layer for custom transforms at scale which means it does not meet the autoscaling and reliability requirements by itself.

Cloud Functions streaming inserts is not a good fit for a forty million document batch because function invocations are constrained by quotas and time limits and streaming inserts can hit rate limits and increase costs. It also provides no end to end batch coordination or checkpointing which makes large backfills brittle compared to a managed Beam pipeline.

When you see scale with fault tolerance and autoscaling along with optional transforms think of Dataflow running Beam pipelines that write to BigQuery. Services focused on simple transfers or event triggers usually cannot meet all these needs at once.

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.