Google Certified Professional Database Engineer Practice Exams

All GCP questions come from my Google DB Engineer Udemy course and certificationexams.pro

Free GCP Certification Exam Topics Tests

Over the past few months, I’ve been helping software developers, solutions architects, DevOps engineers, and even Scrum Masters who have been displaced by AI and ML technologies gain new skills and accreditations by getting certified on technologies that are in critically high demand.

In my opinion, one of the most reputable organizations providing credentials is Google, and one of their most respected designations is that of the Certified Google Cloud Professional Database Engineer.

So how do you get Google certified, and how do you do it quickly? I have a simple strategy that has now helped thousands.

Google Cloud Certification Practice Exams

First, pick your designation of choice. In this case, it’s Google’s Professional Database Engineer certification.

Then look up the exam objectives and make sure they match your career goals and competencies.

The next step?

It’s not buying an online course or study guide. Instead, find a Google Professional Database Engineer exam simulator or a set of GCP Database Engineer practice questions, and use those sample questions to drive your study.

Start by going through your practice tests and simply reading the GCP exam questions and answers. That will help you get familiar with what you know and what you don’t know.

When you find topics you don’t know, use AI and Machine Learning powered tools like ChatGPT, Cursor, or Claude to write tutorials for you on the topic.

Really take control of your learning and have the new AI and ML tools help you customize your learning experience by writing tutorials that teach you exactly what you need to know to pass the exam. It’s an entirely new way of learning.

About GCP Exam Dumps

And one thing I will say is to avoid the Google Cloud Professional Database Engineer exam dumps. You want to get certified honestly; you don’t want to pass simply by memorizing somebody’s GCP Database Engineer braindump. There’s no integrity in that.

If you do want some real Google Cloud Database Engineer exam questions, I have over a hundred free questions and answers on my website, with almost 300 free exam questions and answers if you register. But there are plenty of other great resources available on LinkedIn Learning, Udemy, and even YouTube, so check those resources out as well to help fine-tune your learning path.

The bottom line? Generative AI is changing the IT landscape in disruptive ways, and IT professionals need to keep up. One way to do that is to constantly update your skills.

Get learning, get certified, and stay on top of all the latest trends. You owe it to your future self to stay trained, stay employable, and stay knowledgeable about how to use and apply all of the latest technologies.

Now for the practice exam questions.

Practice Exam Questions

Question 1

A cloud database engineer at Northstar Interactive needs to right size a Google Cloud Memorystore for Redis instance that powers a live trivia feature. Peak load reaches 36,000 requests per second and the cache uses 420 GB of memory. The service must meet 99.95% availability and deliver sub millisecond latency for both reads and writes. Product planning anticipates a 60% increase in requests per second and a 50% growth in cache size over the next 12 months. Which instance configuration will best accommodate the forecast while keeping costs sensible?

  • ❏ A. Standard tier with 1 Gbps network throughput and 700 GB capacity

  • ❏ B. Standard tier with 5 Gbps network throughput and 650 GB capacity

  • ❏ C. Standard tier with 10 Gbps network throughput and 750 GB capacity

  • ❏ D. Basic tier with 2 Gbps network throughput and 700 GB capacity

Question 2

In Cloud SQL, how can you restore to an exact point in time within the last 14 days after an accidental deletion?

  • ❏ A. Configure automated backups with a 14 day retention period

  • ❏ B. Enable automated backups and binary logging for point in time recovery

  • ❏ C. Schedule daily exports to Cloud Storage and import when needed

Question 3

At Trailhead Retail at example.com your team must deploy a managed relational database that uses the PostgreSQL engine on Google Cloud. The application must withstand the loss of an entire zone and continue serving traffic with automatic failover and zero data loss. Which deployment should you implement?

  • ❏ A. Self-managed PostgreSQL on Compute Engine VMs across two zones with a manual failover runbook

  • ❏ B. Cloud Spanner with a multi region instance

  • ❏ C. Cloud SQL for PostgreSQL regional high availability instance with synchronous replication and automatic failover

  • ❏ D. Cloud SQL for PostgreSQL primary in one region with an asynchronous read replica in another region and manual promotion on failure

Question 4

What is the best way to securely schedule daily transfers of approximately 25 TB from on premises file systems to Cloud Storage with minimal operational overhead so that BigQuery can query the data?

  • ❏ A. Transfer Appliance for daily shipments

  • ❏ B. BigQuery Data Transfer Service from on premises to Cloud Storage

  • ❏ C. Storage Transfer Service using on premises agents

  • ❏ D. Cloud Data Fusion scheduled pipeline to Cloud Storage

Question 5

MetroCycle Gear uses Cloud SQL for PostgreSQL to store customer, order, and shipment tables in a normalized schema. Leadership asks you to create a nightly job that exports about 30 million rows across these tables, denormalizes the data by joining and aggregating, and loads the result into a BigQuery dataset. The solution must be efficient, recover automatically from worker failures, and scale without you managing servers. What should you implement?

  • ❏ A. Use Datastream for Cloud SQL to capture changes and drive a Cloud Dataflow template that joins tables and writes the results into BigQuery

  • ❏ B. Schedule a Dataproc Spark job that reads from Cloud SQL, performs the denormalization, and saves the output to BigQuery

  • ❏ C. Build an Apache Beam pipeline on Cloud Dataflow that reads from Cloud SQL with JDBCIO, performs the joins to denormalize, and writes to BigQuery

  • ❏ D. Run gcloud sql export to Cloud Storage and then import into BigQuery with bq load

Question 6

Which Google Cloud database configuration best supports high write throughput, very low read latency, and multirow ACID transactions while controlling cost?

  • ❏ A. AlloyDB for PostgreSQL with read pool replicas

  • ❏ B. Cloud SQL for PostgreSQL with HA and read replicas

  • ❏ C. Cloud Spanner single region

Question 7

At Polar Parcel, a logistics startup, your team uses the Cloud SQL out of disk recommender to review storage trends for mission critical databases over the last 90 days. The platform group relies on these insights to track capacity and act before incidents occur. A new recommendation says the Cloud SQL instance is likely to exhaust its disk space within the next three weeks. How should you respond to this storage warning?

  • ❏ A. Archive older partitions to a separate reporting instance

  • ❏ B. Increase the disk size now or enable storage auto increase

  • ❏ C. Migrate the workload to Bigtable for better compression

  • ❏ D. Implement sharding across multiple Cloud SQL instances

Question 8

For globally distributed microservices that access Cloud Bigtable in four regions, which connectivity approach provides the lowest latency and the most consistent performance?

  • ❏ A. Direct Peering

  • ❏ B. Private IP to the nearest regional Cloud Bigtable cluster

  • ❏ C. Cloud DNS geo routing to a public endpoint

  • ❏ D. Single global VPC

Question 9

BrightWave Audio publishes podcasts for a global audience. Each podcast has a detail page that shows a short summary and a list of episodes that is refreshed every three weeks. All site content resides in a Cloud Spanner database. You must retrieve a podcast and its episode list efficiently and you want to align with Google recommended data modeling practices. How should you design the schema?

  • ❏ A. Increase the number of nodes and raise processing capacity

  • ❏ B. Model Podcast as the parent table and interleave Episode rows beneath it

  • ❏ C. Create a composite secondary index on Episode using podcast_id and publish_date and join at query time

  • ❏ D. Denormalize and store all podcast and episode data in a single wide row

Question 10

During maintenance of a Cloud SQL for PostgreSQL instance, how can you reduce downtime and minimize user disruption while continuing to use Cloud SQL? (Choose 2)

  • ❏ A. Implement client side retries with exponential backoff

  • ❏ B. Turn on high availability in Cloud SQL

  • ❏ C. Create a read replica and route reads during maintenance

  • ❏ D. Set a maintenance window during low traffic hours

  • ❏ E. Migrate to Cloud Spanner

Question 11

Norvera Health enforces strict GDPR controls to protect customer data. The data platform team includes three people. Alex manages database provisioning and lifecycle. Priya monitors Cloud SQL instance health and configuration. Mateo is a data analyst who connects to the databases to run queries and pull results. You must assign least privilege Cloud SQL IAM roles to each person to align with compliance. Which role mapping should you choose?

  • ❏ A. Grant roles/cloudsql.editor to Alex, roles/cloudsql.viewer to Priya, and roles/cloudsql.client to Mateo

  • ❏ B. Grant roles/cloudsql.admin to Alex, roles/cloudsql.editor to Priya, and roles/cloudsql.client to Mateo

  • ❏ C. Grant roles/cloudsql.admin to Alex, roles/cloudsql.viewer to Priya, and roles/cloudsql.client to Mateo

  • ❏ D. Grant roles/cloudsql.admin to Alex, roles/cloudsql.viewer to Priya, and roles/cloudsql.editor to Mateo

Question 12

Which Cloud Bigtable deployment best meets a requirement for very low latency and high availability with disaster recovery while supporting approximately 200,000 concurrent sessions and retaining 30 days of time series data?

  • ❏ A. Bigtable multi-cluster within one region across zones

  • ❏ B. Bigtable multi-cluster across continents

  • ❏ C. Bigtable multi-cluster across regions in one continent

  • ❏ D. Bigtable single cluster scaled in one zone

Question 13

BlueCart Analytics has deployed a managed Microsoft SQL Server database on Cloud SQL. The corporate network at headquarters connects to Google Cloud using Cloud VPN. They need employees in the office to connect to the database while ensuring the instance is not reachable from the public internet. What should they do?

  • ❏ A. Create VPC Network Peering between the on premises network and the project VPC

  • ❏ B. Enable Public IP on the Cloud SQL instance and only allow the office IP range with firewall rules

  • ❏ C. Assign a Private IP to the Cloud SQL instance and confirm that routes from the on premises network to the VPC are propagated over Cloud VPN

  • ❏ D. Provision Dedicated Interconnect and keep the Cloud SQL instance using Public IP

Question 14

Which configuration provides strongly consistent reads, maintains Datastore mode, and automatically scales while optimizing cost?

  • ❏ A. Cloud Bigtable

  • ❏ B. Firestore in Native mode with automatic scaling

  • ❏ C. Firestore Datastore mode with autoscaling

Question 15

You are supporting a retail analytics firm that plans to move its transactional database to Google Cloud and you need to build a monthly cost estimate. Which cost drivers should you prioritize when evaluating the total expense of operating the database service?

  • ❏ A. API request rate and median response time

  • ❏ B. Count of Compute Engine instances in the project

  • ❏ C. Provisioned storage size and network egress volume

  • ❏ D. CPU and memory utilization percentages over time

Question 16

Which Google Cloud database provides real time listeners, automatic scaling, and subsecond reads and writes for about 250,000 concurrent clients in a single region?

  • ❏ A. Cloud Spanner

  • ❏ B. Cloud Firestore

  • ❏ C. Memorystore for Redis

  • ❏ D. Cloud SQL for PostgreSQL

Question 17

A language learning startup stores enrollment data in Cloud Spanner in a table named enrollments with columns learner_id, course_id, and locale. The team needs to run frequent lookups for the courses that a specific learner is taking in a specific locale and they want the query to be parameterized for best performance. Which SQL should you use?

  • ❏ A. SELECT course_id FROM enrollments WHERE learner_id LIKE @learnerId AND locale=@locale

  • ❏ B. SELECT course_id FROM enrollments WHERE learner_id=@learnerId AND locale=@locale

  • ❏ C. SELECT course_id FROM enrollments WHERE learner_id=@learnerId AND locale='en_US'

  • ❏ D. SELECT course_id FROM enrollments WHERE learner_id=@learnerId AND locale LIKE @locale

Question 18

In Cloud SQL for PostgreSQL, what should you do to restore the database to a specific timestamp within the last 30 hours and minimize data loss?

  • ❏ A. Schedule automated backups

  • ❏ B. Turn on point in time recovery on the instance

  • ❏ C. Create a read replica

  • ❏ D. Export to Cloud Storage and import

Question 19

A travel booking startup runs a microservices API on Google Kubernetes Engine that connects to Cloud SQL for PostgreSQL. During marketing campaigns, traffic spikes to 18,000 short-lived connections per minute and the database CPU rises due to frequent connection setup and teardown. The team is evaluating a session pooling layer in front of Cloud SQL. Which statement best explains the primary benefit of adding a session pooler?

  • ❏ A. A session pooler enables streaming analytics over operational data so product teams can run real-time dashboards without additional services

  • ❏ B. A session pooler removes the need to horizontally scale the database by absorbing all bursts so you no longer need additional replicas

  • ❏ C. A session pooler reuses a small number of persistent backend sessions so clients avoid repeated connection setup cost and the database handles more requests with lower latency

  • ❏ D. A session pooler provides encryption of all data at rest in the database to strengthen security compliance

Question 20

In Cloud Spanner, which configuration will maintain availability during the daily peak from 8 PM to 12 AM UTC each day and provide advance maintenance notifications?

  • ❏ A. Automatic maintenance windows with email notifications

  • ❏ B. Automatic maintenance windows with no notifications

  • ❏ C. Custom maintenance window with email notifications

  • ❏ D. Multi-region instance with automatic maintenance

Exam Questions Answered

Question 1

A cloud database engineer at Northstar Interactive needs to right size a Google Cloud Memorystore for Redis instance that powers a live trivia feature. Peak load reaches 36,000 requests per second and the cache uses 420 GB of memory. The service must meet 99.95% availability and deliver sub millisecond latency for both reads and writes. Product planning anticipates a 60% increase in requests per second and a 50% growth in cache size over the next 12 months. Which instance configuration will best accommodate the forecast while keeping costs sensible?

  • ✓ B. Standard tier with 5 Gbps network throughput and 650 GB capacity

The correct option is Standard tier with 5 Gbps network throughput and 650 GB capacity.

This choice fits the one year forecast while staying cost conscious. Requests per second are expected to grow from 36,000 to 57,600 which is a 60 percent increase. A 5 Gbps network profile provides ample headroom for that level of traffic while helping preserve sub millisecond latency for both reads and writes. Cache memory is forecast to grow from 420 GB to 630 GB which this 650 GB capacity comfortably covers while leaving room for Redis overhead and operational buffers.

The Standard tier is required to meet the high availability target. It provides a highly available deployment with automatic failover across zones which aligns with a 99.95 percent availability requirement. With sufficient throughput and capacity headroom, the service can maintain low latency during peaks and during replication and failover events.

Standard tier with 1 Gbps network throughput and 700 GB capacity does not provide enough network throughput for the projected 57,600 requests per second while sustaining sub millisecond latency, especially when accounting for replication traffic in the Standard tier.

Standard tier with 10 Gbps network throughput and 750 GB capacity would meet performance and capacity needs but it is overprovisioned relative to the forecast and would add unnecessary cost compared to the 5 Gbps and 650 GB option.

Basic tier with 2 Gbps network throughput and 700 GB capacity cannot meet the availability requirement because Basic tier is zonal and lacks automatic failover, so it does not provide the required reliability even if its capacity appears sufficient.

Translate the stated growth into future load and size for both memory and throughput, then pick the smallest option that exceeds those needs with some headroom. Map the required availability to the correct tier and avoid Basic when an SLA is specified.

Question 2

In Cloud SQL, how can you restore to an exact point in time within the last 14 days after an accidental deletion?

  • ✓ B. Enable automated backups and binary logging for point in time recovery

The correct option is Enable automated backups and binary logging for point in time recovery.

Cloud SQL point in time recovery uses an automated backup as a base and replays binary or transaction logs to reach an exact timestamp. When both features are enabled you can restore to any moment that falls within your configured log retention window which can cover the stated recovery period if retention is set appropriately.

Configure automated backups with a 14 day retention period is not sufficient by itself because backups alone only allow restoring to the time of each backup. You cannot replay changes to an exact moment without the binary or transaction logs.

Schedule daily exports to Cloud Storage and import when needed does not provide point in time recovery. Exports are logical backups that restore only to the moment of the export and they can lose up to a day of changes and require more manual effort.

When you see the phrase exact point in time for Cloud SQL think of automated backups plus logs. Backups alone restore only to backup times while daily exports are not point in time.
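
As a concrete sketch, the gcloud workflow looks something like the following for a hypothetical MySQL instance named orders-db. The instance name and timestamp are placeholders, so adapt them to your environment.

  # Enable automated backups and binary logging so point in time recovery is possible
  gcloud sql instances patch orders-db \
      --backup-start-time=02:00 \
      --enable-bin-log

  # After an accidental deletion, clone the instance to an exact timestamp
  gcloud sql instances clone orders-db orders-db-restored \
      --point-in-time='2025-03-14T09:30:00Z'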

Question 3

At Trailhead Retail at example.com your team must deploy a managed relational database that uses the PostgreSQL engine on Google Cloud. The application must withstand the loss of an entire zone and continue serving traffic with automatic failover and zero data loss. Which deployment should you implement?

  • ✓ C. Cloud SQL for PostgreSQL regional high availability instance with synchronous replication and automatic failover

The correct deployment is Cloud SQL for PostgreSQL regional high availability instance with synchronous replication and automatic failover.

This choice runs the PostgreSQL engine as a fully managed service and places a primary and standby in different zones within the same region. Synchronous replication ensures zero data loss when a zone fails and automatic failover keeps the application serving traffic without manual intervention. It directly satisfies the requirement to survive the loss of an entire zone with no data loss.

Self-managed PostgreSQL on Compute Engine VMs across two zones with a manual failover runbook is not a managed service and requires human action to fail over. It does not guarantee zero data loss and it adds operational risk and complexity.

Cloud Spanner with a multi region instance does not run the PostgreSQL engine even though it offers a PostgreSQL interface. The requirement explicitly calls for PostgreSQL engine compatibility which this option does not meet.

Cloud SQL for PostgreSQL primary in one region with an asynchronous read replica in another region and manual promotion on failure uses asynchronous replication which can lose data during failover and it relies on manual promotion. It therefore fails both the zero data loss and automatic failover requirements.

Map the requirement to the data protection objective. A need for zero data loss implies synchronous replication and a need for continuity implies automatic failover. Then confirm the exact engine requirement before choosing a service.
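
For illustration, a minimal gcloud command to provision this topology might look like the following. The instance name, machine tier, and region are placeholders.

  # Regional availability places the primary and standby in different zones of one region
  gcloud sql instances create trailhead-orders \
      --database-version=POSTGRES_15 \
      --tier=db-custom-4-16384 \
      --region=us-central1 \
      --availability-type=REGIONAL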

Question 4

What is the best way to securely schedule daily transfers of approximately 25 TB from on premises file systems to Cloud Storage with minimal operational overhead so that BigQuery can query the data?

  • ✓ C. Storage Transfer Service using on premises agents

The correct option is Storage Transfer Service using on premises agents.

This service provides secure scheduled transfers from on premises file systems directly into Cloud Storage with very little operational overhead. You can deploy lightweight agents behind your firewall, define schedules and concurrency, and the service will handle parallelization, retries, and incremental sync. Moving the data into Cloud Storage enables BigQuery to query it through external tables or you can later load the data into BigQuery if needed.

Transfer Appliance for daily shipments is designed for one time or infrequent bulk migrations using a physical device that you ship back to Google. It is not suited for daily recurring transfers and would add significant operational effort and latency.

BigQuery Data Transfer Service from on premises to Cloud Storage is not intended to pull data from on premises file systems into Cloud Storage. It schedules transfers into BigQuery from supported sources and from Cloud Storage into BigQuery, so it does not meet the requirement to move files into Cloud Storage from on premises.

Cloud Data Fusion scheduled pipeline to Cloud Storage is an ETL service that can write to Cloud Storage, yet it is not optimized for high volume file system mirroring at this scale. It would require more pipeline engineering and administration and it lacks the purpose built file transfer capabilities such as agent based discovery, high throughput sync, and operational simplicity that the transfer service provides.

When you see recurring on premises to Cloud Storage movement with minimal operations look for Storage Transfer Service with agents. Choose Transfer Appliance for one time bulk migrations and use BigQuery Data Transfer Service only when loading data directly to BigQuery.
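
A rough sketch of the setup is shown below. It assumes an agent pool named on-prem-pool already exists with agents running on hosts that can read the file share, and the bucket name and scheduling flag are illustrative, so verify them against the current gcloud transfer reference for your SDK version.

  # Recurring transfer from an on premises POSIX file system to Cloud Storage
  gcloud transfer jobs create \
      posix:///exports/analytics gs://example-landing-zone \
      --source-agent-pool=on-prem-pool \
      --schedule-repeats-every=1d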

Question 5

MetroCycle Gear uses Cloud SQL for PostgreSQL to store customer, order, and shipment tables in a normalized schema. Leadership asks you to create a nightly job that exports about 30 million rows across these tables, denormalizes the data by joining and aggregating, and loads the result into a BigQuery dataset. The solution must be efficient, recover automatically from worker failures, and scale without you managing servers. What should you implement?

  • ✓ C. Build an Apache Beam pipeline on Cloud Dataflow that reads from Cloud SQL with JDBCIO, performs the joins to denormalize, and writes to BigQuery

The correct option is Build an Apache Beam pipeline on Cloud Dataflow that reads from Cloud SQL with JDBCIO, performs the joins to denormalize, and writes to BigQuery. This gives you a fully managed and scalable batch pipeline that can process tens of millions of rows, recover automatically from worker failures, and requires no server management.

Dataflow provides built in autoscaling and resilient execution for batch pipelines so failed workers are retried and the job continues without manual intervention. Using Beam JdbcIO you can read from Cloud SQL with partitioned queries to parallelize extraction, then perform multi table joins and aggregations in the pipeline, and finally write the results efficiently to BigQuery with the native sink. This aligns directly with a nightly denormalization workflow and keeps the entire process in one robust pipeline.

Use Datastream for Cloud SQL to capture changes and drive a Cloud Dataflow template that joins tables and writes the results into BigQuery targets change data capture for incremental replication rather than creating a consistent nightly snapshot across multiple tables with joins and aggregations. Datastream adds streaming CDC complexity that is unnecessary for this batch snapshot requirement.

Schedule a Dataproc Spark job that reads from Cloud SQL, performs the denormalization, and saves the output to BigQuery typically involves provisioning and managing clusters, which does not meet the requirement to scale without managing servers. Handling failures and capacity on Dataproc is more operationally involved than using Dataflow for this batch job.

Run gcloud sql export to Cloud Storage and then import into BigQuery with bq load only moves raw tables and does not perform the necessary joins and aggregations. You would still need an additional transformation step after loading, and the export and import path is less efficient and resilient than a single managed pipeline for 30 million rows.

When you see requirements for automatic recovery and no server management together with large batch joins, prefer Cloud Dataflow with JDBC reads and BigQuery writes over CDC or cluster based approaches.

Question 6

Which Google Cloud database configuration best supports high write throughput, very low read latency, and multirow ACID transactions while controlling cost?

  • ✓ C. Cloud Spanner single region

The correct option is Cloud Spanner single region.

This service is designed for very high write throughput because it horizontally scales writes by automatically sharding data and distributing load across nodes. It delivers very low read latency in a single region because requests stay within one region while maintaining strongly consistent reads by default. It also supports multi row and multi table ACID transactions with strict consistency, and the single region configuration helps control cost compared to multi region while allowing granular scaling of capacity.

AlloyDB for PostgreSQL with read pool replicas relies on a single primary for writes, so write throughput cannot scale horizontally in the same way. The read pool replicas can provide low read latency for read heavy workloads, yet they do not change the single writer architecture and replicas are not meant for strongly consistent reads of the latest data, which makes it less suitable when both high write throughput and very low latency strongly consistent reads are required.

Cloud SQL for PostgreSQL with HA and read replicas provides ACID transactions and improved availability, but it is limited to a single primary for writes, which constrains write throughput at scale. Its read replicas are asynchronous, so they cannot guarantee up to date reads, and this option does not meet the combined need for very high write throughput with very low latency strongly consistent reads.

Map requirements to service characteristics. If you need horizontal write scaling, very low latency strongly consistent reads, and multi row ACID with cost control, think single region Spanner first.
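
A minimal provisioning sketch follows, with a placeholder instance name, region, and node count that you would size from your own benchmarks.

  gcloud spanner instances create orders-spanner \
      --config=regional-us-central1 \
      --description="Single region Spanner for transactional workload" \
      --nodes=3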

Question 7

At Polar Parcel, a logistics startup, your team uses the Cloud SQL out of disk recommender to review storage trends for mission critical databases over the last 90 days. The platform group relies on these insights to track capacity and act before incidents occur. A new recommendation says the Cloud SQL instance is likely to exhaust its disk space within the next three weeks. How should you respond to this storage warning?

  • ✓ B. Increase the disk size now or enable storage auto increase

The correct option is Increase the disk size now or enable storage auto increase.

The out of disk recommender is forecasting imminent capacity exhaustion, so the safest response is to add headroom immediately. You can increase the disk size now or enable storage auto increase so that the instance can scale storage before it runs out and avoid an outage. Cloud SQL supports online storage increases and the storage auto increase setting grows the disk automatically when usage approaches the threshold, which directly addresses the warning within the stated three week window.

Archive older partitions to a separate reporting instance is not a reliable immediate fix for an impending out of disk event. Even if you remove data, Cloud SQL storage cannot be decreased and reclaimed at the disk level, and cleanup operations may not free space quickly enough to prevent the incident.

Migrate the workload to Bigtable for better compression is inappropriate for a relational Cloud SQL workload because Bigtable is a NoSQL wide column database. A migration would be complex and time consuming and it does not solve the near term capacity risk.

Implement sharding across multiple Cloud SQL instances adds significant operational complexity and requires application changes. It is not a timely mitigation for a forecasted storage shortfall in the next few weeks.

When a recommender predicts an imminent resource shortfall, choose the action that directly increases capacity in the same service. Features like automatic scaling settings or a quick resize are usually the most reliable and timely fixes.
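
As an illustration, either of the following gcloud commands addresses the warning. The instance name and size are placeholders, and remember that Cloud SQL storage can be increased but never decreased.

  # Add capacity immediately
  gcloud sql instances patch parcel-db --storage-size=500GB

  # Or let Cloud SQL grow the disk automatically as usage nears the limit
  gcloud sql instances patch parcel-db --storage-auto-increase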

Question 8

For globally distributed microservices that access Cloud Bigtable in four regions, which connectivity approach provides the lowest latency and the most consistent performance?

  • ✓ B. Private IP to the nearest regional Cloud Bigtable cluster

The correct option is Private IP to the nearest regional Cloud Bigtable cluster.

This approach keeps traffic on the Google backbone and avoids the public internet which delivers lower latency and more consistent performance. You deploy a replicated Cloud Bigtable instance with a cluster in each of the four regions and connect from microservices in each region to the local cluster over a private endpoint. You can use a Bigtable app profile with multi cluster routing so clients are automatically routed to the nearest available cluster which further stabilizes latency and availability.

Direct Peering is intended for connecting on premises networks to Google and it does not optimize service to service traffic within Google Cloud or select the closest Bigtable cluster. It can still involve public endpoints and does not guarantee the lowest latency for regional access patterns.

Cloud DNS geo routing to a public endpoint is not suitable because Cloud Bigtable clients do not use customer managed DNS names for per cluster selection and public endpoints traverse the public internet which can add jitter and higher latency. Geo routing also does not integrate with Bigtable app profiles that natively route clients to the closest cluster.

Single global VPC does not solve the latency problem by itself because Cloud Bigtable is a regional service. Cross region requests would still traverse inter regional links and you would not achieve the lowest latency without placing clusters in each region and accessing them locally over private connectivity.

When you see a global low latency requirement for a regional database, look for answers that keep traffic on the provider backbone and place clients near data. For Cloud Bigtable that usually means replication with private IP access and an app profile that routes to the nearest cluster.
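
For example, the app profile that enables nearest cluster routing can be created roughly like this, assuming a replicated Bigtable instance named global-sessions. The names are placeholders.

  gcloud bigtable app-profiles create nearest-cluster \
      --instance=global-sessions \
      --route-any \
      --description="Multi cluster routing for lowest latency"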

Question 9

BrightWave Audio publishes podcasts for a global audience. Each podcast has a detail page that shows a short summary and a list of episodes that is refreshed every three weeks. All site content resides in a Cloud Spanner database. You must retrieve a podcast and its episode list efficiently and you want to align with Google recommended data modeling practices. How should you design the schema?

  • ✓ B. Model Podcast as the parent table and interleave Episode rows beneath it

The correct answer is Model Podcast as the parent table and interleave Episode rows beneath it.

This design follows Cloud Spanner guidance for one to many hierarchical relationships where child rows are almost always read with their parent. Interleaving physically colocates Episode rows under their Podcast parent using the parent key prefix. A read of a podcast and its episodes becomes a single efficient range scan with strong locality which minimizes cross partition work and network hops. The content changes infrequently and the set of children per podcast is naturally bounded which makes interleaving a good fit for predictable performance.

Using interleaving also simplifies queries because you can fetch the parent and children together without additional joins or index lookups. It aligns with recommended schema modeling for read heavy patterns that require parent with children access and it preserves transactional consistency within the same locality.

Increase the number of nodes and raise processing capacity is not a data modeling solution and it does not address per request latency or reduce the need for cross partition joins. More nodes can increase throughput but it does not make fetching a podcast and its episodes more efficient when the data is not colocated.

Create a composite secondary index on Episode using podcast_id and publish_date and join at query time can help list episodes by podcast and date. However you still need to join to the Podcast table to fetch the podcast details and you may perform additional lookups unless the index is fully covering. This adds complexity and may still span partitions. Interleaving is the recommended approach when you frequently read the parent with its children together.

Denormalize and store all podcast and episode data in a single wide row does not fit Spanner well because episodes can grow without bound and this risks row size limits and large write amplification. It also complicates partial updates and increases contention. The relational model with an interleaved child table is the recommended and scalable pattern.

When a parent and its children are read together most of the time, think about interleaving. If you need alternate sort orders or to read children without the parent, think about secondary indexes. Match the schema to the dominant access pattern.
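
A simplified DDL sketch of the interleaved design is shown below, applied here with gcloud. The instance, database, table, and column names are illustrative.

  gcloud spanner databases ddl update content-db \
      --instance=brightwave-spanner \
      --ddl='CREATE TABLE Podcasts (
               PodcastId STRING(36) NOT NULL,
               Summary   STRING(MAX),
             ) PRIMARY KEY (PodcastId);
             CREATE TABLE Episodes (
               PodcastId   STRING(36) NOT NULL,
               EpisodeId   STRING(36) NOT NULL,
               Title       STRING(MAX),
               PublishDate DATE,
             ) PRIMARY KEY (PodcastId, EpisodeId),
               INTERLEAVE IN PARENT Podcasts ON DELETE CASCADE'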

Question 10

During maintenance of a Cloud SQL for PostgreSQL instance, how can you reduce downtime and minimize user disruption while continuing to use Cloud SQL? (Choose 2)

  • ✓ B. Turn on high availability in Cloud SQL

  • ✓ D. Set a maintenance window during low traffic hours

The correct options are Turn on high availability in Cloud SQL and Set a maintenance window during low traffic hours.

Using high availability in Cloud SQL creates a standby in another zone and allows planned maintenance to complete with a controlled failover. This limits downtime to the failover interval so your application experiences only a brief interruption while continuing to use Cloud SQL.

Configuring a maintenance window tells Google to apply disruptive updates during the period you choose when traffic is lowest. This does not remove the brief restart that maintenance may require but it aligns the impact with a time that least affects users.

Implement client side retries with exponential backoff helps with transient errors in applications but it does not reduce or avoid the database instance downtime that occurs during maintenance so users can still see failures.

Create a read replica and route reads during maintenance is not sufficient because Cloud SQL read replicas are read only and cannot serve writes and there is no automatic failover to a read only node during planned maintenance. Replicas may also be restarted during maintenance which means this does not reliably minimize disruption.

Migrate to Cloud Spanner changes the product and architecture and the question requires continuing to use Cloud SQL so this does not address the stated goal.

When asked to minimize disruption during Cloud SQL maintenance, look first for native features like high availability and a maintenance window since these directly control failover behavior and scheduling rather than relying on application workarounds.
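
For example, both settings can be applied to an existing instance with a single patch command. The instance name and window are placeholders and the maintenance hour is in UTC.

  gcloud sql instances patch metro-pg \
      --availability-type=REGIONAL \
      --maintenance-window-day=SUN \
      --maintenance-window-hour=3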

Question 11

Norvera Health enforces strict GDPR controls to protect customer data. The data platform team includes three people. Alex manages database provisioning and lifecycle. Priya monitors Cloud SQL instance health and configuration. Mateo is a data analyst who connects to the databases to run queries and pull results. You must assign least privilege Cloud SQL IAM roles to each person to align with compliance. Which role mapping should you choose?

  • ✓ C. Grant roles/cloudsql.admin to Alex, roles/cloudsql.viewer to Priya, and roles/cloudsql.client to Mateo

The correct choice is Grant roles/cloudsql.admin to Alex, roles/cloudsql.viewer to Priya, and roles/cloudsql.client to Mateo.

Alex owns provisioning and lifecycle which requires full administrative control of Cloud SQL instances. The Cloud SQL Admin role lets a user create and delete instances, modify configuration, manage replicas, backups, and maintenance settings, and perform other administrative actions that are essential for end to end lifecycle management.

Priya only needs to observe health and configuration without making changes. The Cloud SQL Viewer role provides read only access to instance metadata, settings, and monitoring information so it fits least privilege for monitoring duties.

Mateo must connect to databases to run queries but he should not manage instances. The Cloud SQL Client role allows establishing connections through the Cloud SQL Auth Proxy or using private IP without granting management permissions. Actual query permissions are controlled inside the database engine with database accounts and are not provided by Cloud IAM.

Grant roles/cloudsql.editor to Alex, roles/cloudsql.viewer to Priya, and roles/cloudsql.client to Mateo is not appropriate because the editor role does not provide full administrative capability for complete lifecycle tasks such as instance creation and deletion, which Alex requires.

Grant roles/cloudsql.admin to Alex, roles/cloudsql.editor to Priya, and roles/cloudsql.client to Mateo gives Priya permissions to modify instances rather than only view them, which violates least privilege for a monitoring responsibility.

Grant roles/cloudsql.admin to Alex, roles/cloudsql.viewer to Priya, and roles/cloudsql.editor to Mateo grants Mateo modification rights to instances even though he only needs to connect, which exceeds least privilege and is unnecessary for analytics work.

Map responsibilities to predefined roles and think in terms of least privilege. Instance administrators need admin, observers need viewer, and users who only connect need client. Remember that client enables connection while data access is granted inside the database.
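
The grants might look like the following, with a placeholder project ID and example email addresses.

  gcloud projects add-iam-policy-binding norvera-prod \
      --member='user:alex@example.com' --role='roles/cloudsql.admin'

  gcloud projects add-iam-policy-binding norvera-prod \
      --member='user:priya@example.com' --role='roles/cloudsql.viewer'

  gcloud projects add-iam-policy-binding norvera-prod \
      --member='user:mateo@example.com' --role='roles/cloudsql.client'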

Question 12

Which Cloud Bigtable deployment best meets a requirement for very low latency and high availability with disaster recovery while supporting approximately 200,000 concurrent sessions and retaining 30 days of time series data?

  • ✓ C. Bigtable multi-cluster across regions in one continent

The correct option is Bigtable multi-cluster across regions in one continent.

This deployment uses replication across multiple regions in the same continent with multi-cluster routing so traffic is served by the nearest cluster. That keeps user-facing latency very low while replication across regions provides failover and disaster recovery if a region becomes unavailable. Cloud Bigtable scales horizontally to handle hundreds of thousands of concurrent sessions and it is a strong fit for time series workloads with 30 days of retention.

Bigtable multi-cluster within one region across zones improves availability for zonal failures but it does not protect against a regional outage. It does not meet the disaster recovery requirement.

Bigtable multi-cluster across continents provides broad geographic redundancy but the intercontinental distance increases network latency and replication delay. This works against the need for very low latency when the users are served within one continent.

Bigtable single cluster scaled in one zone can scale up but it is a single point of failure. It does not provide high availability or disaster recovery and a zonal or regional event would cause downtime.

Map the geographical scope of users to the replication scope. If the question asks for very low latency and disaster recovery for one continent then choose multi-cluster replication across regions within that continent rather than cross continent or single region designs.
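
As a sketch, a replicated instance with two clusters in different regions of the same continent could be created like this. The instance, cluster, zone, and node values are placeholders you would size against your own throughput tests.

  gcloud bigtable instances create session-store \
      --display-name="Session store" \
      --cluster-config=id=sessions-c1,zone=us-central1-b,nodes=6 \
      --cluster-config=id=sessions-c2,zone=us-east1-b,nodes=6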

Question 13

BlueCart Analytics has deployed a managed Microsoft SQL Server database on Cloud SQL. The corporate network at headquarters connects to Google Cloud using Cloud VPN. They need employees in the office to connect to the database while ensuring the instance is not reachable from the public internet. What should they do?

  • ✓ C. Assign a Private IP to the Cloud SQL instance and confirm that routes from the on premises network to the VPC are propagated over Cloud VPN

The correct choice is Assign a Private IP to the Cloud SQL instance and confirm that routes from the on premises network to the VPC are propagated over Cloud VPN. This keeps the database off the public internet while allowing office users to connect through the existing VPN.

Using a private IP ensures the Cloud SQL instance has no public endpoint and is only reachable within the VPC and any connected hybrid networks. With Cloud VPN and Cloud Router using appropriate routing, the on premises network learns routes to the VPC subnets and to the allocated private services range that backs Cloud SQL private IP. This provides secure access from the office while meeting the requirement that the instance is not publicly reachable.

Create VPC Network Peering between the on premises network and the project VPC is not possible because VPC Network Peering only connects Google Cloud VPC networks. On premises connectivity must use Cloud VPN or Cloud Interconnect.

Enable Public IP on the Cloud SQL instance and only allow the office IP range with firewall rules does not meet the requirement because the instance would still be exposed on the public internet. In addition, Cloud SQL public IP access is controlled by authorized networks on the instance rather than VPC firewall rules.

Provision Dedicated Interconnect and keep the Cloud SQL instance using Public IP also fails the requirement since a public endpoint would remain accessible from the internet. Interconnect provides private transport but it does not convert a public service endpoint into a private one.

When you see a requirement that a managed database must not be reachable from the internet, think Private IP first and then verify hybrid connectivity with Cloud VPN or Interconnect and proper routing.
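
The flow looks roughly like the following, assuming a VPC named blue-vpc and placeholder project and instance names. Private services access must be in place before the instance can receive a private IP.

  # Reserve a range for private services access and peer it with the VPC
  gcloud compute addresses create google-managed-services-blue-vpc \
      --global --purpose=VPC_PEERING --prefix-length=20 --network=blue-vpc

  gcloud services vpc-peerings connect \
      --service=servicenetworking.googleapis.com \
      --ranges=google-managed-services-blue-vpc --network=blue-vpc

  # Attach the instance to the VPC with a private IP only
  gcloud sql instances patch bluecart-sqlserver \
      --network=projects/bluecart-prod/global/networks/blue-vpc --no-assign-ip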

Question 14

Which configuration provides strongly consistent reads, maintains Datastore mode, and automatically scales while optimizing cost?

  • ✓ C. Firestore Datastore mode with autoscaling

The correct option is Firestore Datastore mode with autoscaling.

Firestore Datastore mode with autoscaling preserves the Datastore API and data model while running on the Firestore backend. It provides strongly consistent reads for entity lookups and ancestor queries which satisfies the requirement for strong consistency. Firestore Datastore mode with autoscaling also scales automatically without capacity planning and its pay as you go pricing helps optimize cost for typical transactional workloads.

Cloud Bigtable does not preserve Datastore mode or its APIs and data model. It is a low latency wide column database suited for large analytical or time series workloads and while it can autoscale it does not meet the requirement to keep Datastore mode which makes it a poor fit here.

Firestore in Native mode with automatic scaling offers strongly consistent queries and automatic scaling but choosing Native mode does not preserve Datastore mode. It changes the API surface and indexing behavior which violates the requirement to keep Datastore mode.

When a question stresses preserve Datastore mode prioritize options that keep the Datastore API and semantics. Then verify that the choice offers automatic scaling and the needed consistency so you do not trade compatibility for performance.
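
For reference, a Datastore mode database can be created with a command along these lines. The location is a placeholder and the flag names reflect recent gcloud releases, so check the firestore databases create reference for your SDK version.

  gcloud firestore databases create \
      --location=nam5 \
      --type=datastore-mode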

Question 15

You are supporting a retail analytics firm that plans to move its transactional database to Google Cloud and you need to build a monthly cost estimate. Which cost drivers should you prioritize when evaluating the total expense of operating the database service?

  • ✓ C. Provisioned storage size and network egress volume

The correct option is Provisioned storage size and network egress volume.

These two are billed directly by how much capacity you allocate and by how many gigabytes leave Google Cloud. They often dominate monthly spend for a transactional database because disk allocation runs continuously and outbound traffic is charged as data moves to the internet or to other regions. Right sizing capacity and minimizing external data transfer help control these costs, and backup retention choices can add to the storage portion as well.

API request rate and median response time are performance indicators that help with tuning and capacity planning, yet they are not billable line items for managed database services. Improving or worsening these metrics does not by itself change the invoice.

Count of Compute Engine instances in the project does not determine the database service cost. Managed databases such as Cloud SQL or AlloyDB are billed for their own resources regardless of how many virtual machines exist elsewhere in the project.

CPU and memory utilization percentages over time are useful for right sizing but they do not directly drive charges in provisioned services. Costs are based on what you allocate rather than how busy the resources are.

When a cost question appears, map each option to specific billing line items on the pricing page. Prioritize resources metered by provisioned capacity and by egress volume, and treat utilization and performance metrics as sizing signals rather than direct costs.

Question 16

Which Google Cloud database provides real time listeners, automatic scaling, and subsecond reads and writes for about 250,000 concurrent clients in a single region?

  • ✓ B. Cloud Firestore

The correct option is Cloud Firestore.

This service provides real time listeners that push document changes to connected clients without polling. It automatically scales throughput and storage to handle sudden spikes in traffic and it delivers very low latency reads and writes. These capabilities align with supporting hundreds of thousands of concurrent clients in a single region.

Cloud Spanner is a horizontally scalable relational database that offers strong consistency and global replication, yet it does not provide client real time listeners and is not designed for that style of event driven connectivity at massive client counts.

Memorystore for Redis is an in memory data store and cache that can offer very low latency and supports pub or sub patterns, but it is not a document database with built in real time listeners and automatic elastic scaling for very large numbers of concurrent client connections.

Cloud SQL for PostgreSQL is a managed relational database that focuses on transactional workloads with SQL and has connection limits and vertical scaling characteristics, so it is not intended for real time listener patterns or hundreds of thousands of concurrent clients.

When you see real time listeners together with automatic scaling and very large numbers of concurrent clients, map the scenario to serverless document stores rather than relational or cache services.

Question 17

A language learning startup stores enrollment data in Cloud Spanner in a table named enrollments with columns learner_id, course_id, and locale. The team needs to run frequent lookups for the courses that a specific learner is taking in a specific locale and they want the query to be parameterized for best performance. Which SQL should you use?

  • ✓ B. SELECT course_id FROM enrollments WHERE learner_id=@learnerId AND locale=@locale

The correct option is SELECT course_id FROM enrollments WHERE learner_id=@learnerId AND locale=@locale.

This query meets the requirement to parameterize both the learner and the locale so it can reuse query plans and avoid recompilation. It also uses equality filters which align with exact lookups and allow the optimizer to use appropriate indexing strategies on learner_id and locale for fast retrieval.

SELECT course_id FROM enrollments WHERE learner_id LIKE @learnerId AND locale=@locale is incorrect because LIKE is for pattern matching rather than exact equality. It can lead to less efficient execution and unexpected matches and it is unnecessary when you need an exact learner identifier.

SELECT course_id FROM enrollments WHERE learner_id=@learnerId AND locale='en_US' is incorrect because it hardcodes the locale and does not parameterize it. This does not satisfy the requirement for a parameterized query and it reduces flexibility and plan reuse across different locales.

SELECT course_id FROM enrollments WHERE learner_id=@learnerId AND locale LIKE @locale is incorrect for the same reason as the earlier LIKE case. You need an exact match on locale and LIKE introduces pattern matching behavior and can hinder optimal use of indexes.

When the requirement is an exact lookup use equality predicates on the filter columns and pass values as parameters with @name. Avoid using LIKE unless pattern matching is truly needed.

Question 18

In Cloud SQL for PostgreSQL, what should you do to restore the database to a specific timestamp within the last 30 hours and minimize data loss?

  • ✓ B. Turn on point in time recovery on the instance

To restore to a specific timestamp within the past 30 hours with minimal data loss, the correct choice is Turn on point in time recovery on the instance.

This feature continuously retains the write ahead logs so you can restore the database to an exact moment within the configured retention window, which covers a 30 hour target. You trigger a restore that creates a new instance at the selected time while leaving the original instance intact. Automated backups must be enabled because they are a prerequisite for this capability, yet they are not sufficient on their own.

Schedule automated backups only restores to the time of a backup, so you cannot choose an arbitrary timestamp and you risk more data loss compared to using this feature.

Create a read replica provides read scaling and high availability for reads, but it does not let you roll back to a prior point in time and it will replicate most errors from the primary.

Export to Cloud Storage and import is a manual snapshot of data at export time, so it cannot restore to a precise timestamp unless the export happened exactly then and it typically results in more downtime and data loss.

When you see the phrases specific timestamp and minimize data loss, map them to point in time recovery rather than backups, replicas, or manual exports.
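
A minimal sketch for PostgreSQL follows, with placeholder instance names and timestamp. Point in time recovery on PostgreSQL relies on write ahead log retention rather than binary logs.

  # Enable automated backups and point in time recovery
  gcloud sql instances patch prod-pg \
      --backup-start-time=01:00 \
      --enable-point-in-time-recovery

  # Restore to an exact timestamp by cloning into a new instance
  gcloud sql instances clone prod-pg prod-pg-recovered \
      --point-in-time='2025-03-14T21:15:00Z'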

Question 19

A travel booking startup runs a microservices API on Google Kubernetes Engine that connects to Cloud SQL for PostgreSQL. During marketing campaigns, traffic spikes to 18,000 short-lived connections per minute and the database CPU rises due to frequent connection setup and teardown. The team is evaluating a session pooling layer in front of Cloud SQL. Which statement best explains the primary benefit of adding a session pooler?

  • ✓ C. A session pooler reuses a small number of persistent backend sessions so clients avoid repeated connection setup cost and the database handles more requests with lower latency

The correct option is A session pooler reuses a small number of persistent backend sessions so clients avoid repeated connection setup cost and the database handles more requests with lower latency.

Frequent creation and teardown of database connections is expensive because authentication, TLS negotiation, and backend session initialization consume CPU and time. Reusing a small set of long lived sessions avoids this overhead for each request and lets Cloud SQL spend more cycles executing queries. This improves throughput during bursts and reduces tail latency for microservices that open many short lived connections.

A session pooler enables streaming analytics over operational data so product teams can run real-time dashboards without additional services is incorrect because pooling does not provide analytics capabilities. Streaming analytics requires dedicated services and patterns that are separate from connection management.

A session pooler removes the need to horizontally scale the database by absorbing all bursts so you no longer need additional replicas is incorrect because pooling smooths connection churn but it does not add CPU, memory, or IOPS capacity. You may still need vertical or horizontal scaling to meet sustained load.

A session pooler provides encryption of all data at rest in the database to strengthen security compliance is incorrect because encryption at rest is provided by Cloud SQL itself. A pooler manages connections and can help with in transit behavior but it does not change storage encryption.

When traffic spikes create many short lived connections, look for answers that emphasize reusing persistent connections and reducing connection overhead rather than claims about analytics, security features, or eliminating scaling needs.
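
PgBouncer is one common pooler for this pattern. The fragment below is an illustrative pgbouncer.ini sketch, assuming a placeholder Cloud SQL private IP of 10.20.0.5 and a database named bookings, with pool sizes you would tune against your instance max_connections setting.

  [databases]
  bookings = host=10.20.0.5 port=5432 dbname=bookings

  [pgbouncer]
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt
  pool_mode = transaction
  max_client_conn = 20000
  default_pool_size = 50

Clients connect to port 6432 instead of 5432 and the pooler multiplexes them onto a small number of persistent backend sessions.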

Question 20

In Cloud Spanner, which configuration will maintain availability during the daily peak from 8 PM to 12 AM UTC each day and provide advance maintenance notifications?

  • ✓ C. Custom maintenance window with email notifications

The correct option is Custom maintenance window with email notifications. This configuration lets you schedule planned maintenance outside the 8 PM to 12 AM UTC peak period and ensures you receive advance notices by email so you can prepare and avoid disruption.

Cloud Spanner allows you to define a custom maintenance window so planned maintenance is performed during your preferred off-peak hours. By aligning the window to times when traffic is low you reduce the risk of performance impact during your daily peak.

Email notifications are delivered through configured project contacts so your team gets advance maintenance alerts. This helps with planning changes or temporarily adjusting capacity to handle any residual effects.

Automatic maintenance windows with email notifications does not meet the requirement to ensure availability during the daily peak because the timing is controlled by Google and could occur during 8 PM to 12 AM UTC even though you would be notified.

Automatic maintenance windows with no notifications neither gives you control over when maintenance occurs nor provides advance notice which makes it unsuitable for protecting a known busy period.

Multi-region instance with automatic maintenance improves resilience and availability for failures but it does not control when maintenance happens and it does not by itself provide advance email notices. Maintenance could still overlap with the peak period and you would not meet the notification requirement unless you also configure contacts.

When a scenario mentions a busy time window, prioritize features that control timing and pair them with notifications. For Spanner this usually means a custom maintenance window and email notices via Essential Contacts.

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and the author of many popular books in the software development and cloud computing space. His growing YouTube channel, which trains developers in Java, Spring, AI and ML, has well over 30,000 subscribers.