Google Cloud Developer Certification Practice Exams

All GCP questions come from my Google Developer Udemy course and certificationexams.pro
Free GCP Certification Exam Topics Tests
Over the past few months, I’ve been helping software developers, solutions architects, DevOps engineers, and even Scrum Masters who have been displaced by AI and ML technologies learn new skills and earn accreditations by getting certified on technologies that are in critically high demand.
In my opinion, one of the most reputable organizations providing credentials is Google, and one of its most respected designations is the Certified Google Cloud Professional Developer, also known as the GCP Professional Developer Certification or the Google Professional Developer Certification.
So how do you get Google certified, and how do you do it quickly? I have a simple, straightforward plan that has now helped thousands.
Google Cloud Certification Practice Exams
First, pick your designation of choice. In this case, it’s Google’s Professional Developer certification.
Then look up the exam objectives and make sure they match your career goals and competencies.
The next step?
It’s not buying an online course or study guide. Instead, find a Google Professional Developer exam simulator or a set of practice questions for the GCP Developer exam. Yes, find a set of Developer sample questions first and use them to drive your study.
Start by going through the practice tests and simply reading the GCP exam questions and answers. That will help you see what you already know and what you still need to learn.
When you find topics you don’t know, use AI and Machine Learning powered tools like ChatGPT, Cursor, or Claude to write tutorials for you on the topic.
Really take control of your learning and have the new AI and ML tools help you customize your learning experience by writing tutorials that teach you exactly what you need to know to pass the exam. It’s an entirely new way of learning.
About GCP Exam Dumps
And one thing I will say is to try to avoid Google Cloud Professional Developer exam dumps. You want to get certified honestly; you don’t want to pass simply by memorizing somebody’s GCP Developer braindump. There’s no integrity in that.
If you do want some real Google Cloud Developer exam questions, I have over a hundred free exam questions and answers on my website, with almost 300 free exam questions and answers if you register. But there are plenty of other great resources available on LinkedIn Learning, Udemy, and even YouTube, so check those resources out as well to help fine-tune your learning path.
The bottom line? Generative AI is changing the IT landscape in disruptive ways, and IT professionals need to keep up. One way to do that is to constantly update your skills.
Get learning, get certified, and stay on top of all the latest trends. You owe it to your future self to stay trained, stay employable, and stay knowledgeable about how to use and apply all of the latest technologies.
Now for the GCP Certified Developer exam questions.
Google Cloud Developer Professional Practice Exam
Question 1
A fintech startup named AlderPay runs a microservice on Google Kubernetes Engine that must read a database credential that rotates every 45 days. The team wants a straightforward approach that keeps the value encrypted both at rest and when accessed by the workload while minimizing operational effort. What should you do?
-
❏ A. Encrypt the password using Cloud KMS and store the ciphertext in a ConfigMap then have the container decrypt it at startup
-
❏ B. Use Google Cloud Secret Manager and grant the Kubernetes service account access through Workload Identity
-
❏ C. Create a Kubernetes Secret and expose it to the pod through environment variables
-
❏ D. Place the credential in a ConfigMap and reference it from the pod specification
Question 2
Which approach minimizes risk and enables steady releases when migrating a monolith to microservices on Google Cloud?
-
❏ A. Build a new microservices platform and cut over when complete
-
❏ B. Lift and shift the monolith to GKE without decomposing services
-
❏ C. Incrementally replace features with microservices while the monolith coexists during staged releases
-
❏ D. Big bang refactor into microservices in a single release
Question 3
You oversee a payments API at Nuvexa Labs that runs in a production Google Kubernetes Engine cluster and you release updates with Cloud Deploy. Engineers push frequent small changes each day, and you want a local workflow that automatically rebuilds and redeploys on edits and that uses a lightweight Kubernetes environment on a laptop so it closely mirrors production. Which tools should you choose to build images and run containers locally while keeping resource usage low?
-
❏ A. kaniko and Tekton
-
❏ B. Minikube and Skaffold
-
❏ C. Docker Compose and dockerd
-
❏ D. Terraform and kubeadm
Question 4
How should you configure a Cloud Bigtable app profile to support automatic failover and efficient routing across two replicated clusters without manual intervention?
-
❏ A. Memorystore for Memcached
-
❏ B. Enable client side retries with backoff
-
❏ C. Use multi cluster routing in the app profile
-
❏ D. Keep single cluster routing with a second cluster
Question 5
You are developing a module at Meadowbrook Apparel that records hourly shift submissions in an internal workforce system and needs to notify several independent payroll, compliance, and analytics services. The design must keep producers and consumers loosely coupled and should allow new processing steps to be added next quarter while handling about 300 submissions per minute. What approach will provide reliable communication with downstream services and keep future additions simple?
-
❏ A. Use Cloud Tasks with a queue for each step to push HTTP requests to every downstream system
-
❏ B. Create a separate Pub/Sub topic for every step and have each team subscribe only to its own step
-
❏ C. Build a microservice on Google Kubernetes Engine that invokes downstream steps in sequence and waits for each response
-
❏ D. Publish a shift.submitted event to a single Pub/Sub topic and let each downstream team create and manage its own subscription
Question 6
Which GKE configuration enables stateful workloads with durable volumes and ordered rolling updates without downtime?
-
❏ A. Deployment with PVC and RollingUpdate
-
❏ B. StatefulSet with PVC and RollingUpdate
-
❏ C. DaemonSet with local SSD and manual upgrades
-
❏ D. Deployment with Recreate and persistent disk
Question 7
SkyTrail Analytics runs a BigQuery warehouse with about 3 PB of customer events. The company is entering the European Union market and must meet two obligations. Users must be able to have their account data erased upon request, and any data belonging to EU users must be stored only in an EU region. What actions should you take to satisfy these obligations? (Choose 2)
-
❏ A. Store EU user records in an EU Cloud Storage bucket and query them using external tables
-
❏ B. Use BigQuery federated queries to read from a US Cloud Storage bucket so that data is not duplicated
-
❏ C. Run BigQuery DML such as DELETE and UPDATE to remove or modify user records on demand
-
❏ D. Build a Dataflow pipeline that republishes sanitized snapshots to a US dataset every week
-
❏ E. Create a separate BigQuery dataset in the EU and migrate or write EU user data there
Question 8
Which connection method ensures the most secure private connectivity from GKE to Cloud SQL for PostgreSQL without using the public internet?
-
❏ A. Public IP with authorized networks
-
❏ B. Private IP with Cloud SQL Auth Proxy sidecar
-
❏ C. Public IP with SSL/TLS
-
❏ D. Private IP with direct app connection
Question 9
A development team at Northstar Retail is standardizing container builds for services that will run on Cloud Run and Google Kubernetes Engine. To reduce the attack surface and lower the chance of bundling vulnerable components, what type of operating system base image should they select when creating their container images?
-
❏ A. Pick a community image that is not validated by a trusted publisher
-
❏ B. Use a fully loaded image that includes many common libraries and utilities
-
❏ C. Choose a minimal base image that contains only the libraries that the service requires
-
❏ D. Ubuntu LTS base image
Question 10
How can you build a container from source without a Dockerfile and publish the image to a central Artifact Registry repository in Google Cloud? (Choose 2)
-
❏ A. Skaffold build
-
❏ B. gcloud run deploy with --source
-
❏ C. Push source code to Artifact Registry
-
❏ D. pack build with buildpacks

Question 11
BlueRidge Artworks plans to move its media upload API from a data center to App Engine standard and the service must allow single file uploads up to 750 MB while keeping App Engine instances out of the data path and still applying authorization and a size limit. What should you implement?
-
❏ A. Send file data through Cloud Pub/Sub and have a subscriber write to storage
-
❏ B. Modify the API to perform multipart uploads and reassemble the pieces on App Engine
-
❏ C. Generate short-lived V4 signed URLs for Cloud Storage and let clients upload directly
-
❏ D. Stand up an SFTP service on Compute Engine and have App Engine poll the VM for new files
Question 12
Which configuration provides HTTPS access to a single Compute Engine VM while Google manages certificate provisioning and renewals?
-
❏ A. TCP Proxy Load Balancer with VM self-managed certificate
-
❏ B. External HTTPS Load Balancer with Google-managed certificate
-
❏ C. Identity-Aware Proxy
-
❏ D. Network Load Balancer
Question 13
Crestline Retail is rolling out continuous integration with Cloud Build for a service hosted in a Git repository on example.com. You need unit tests to run automatically after every push to the main branch and the build must stop if any test fails. How should you configure Cloud Build to accomplish this?
-
❏ A. Schedule unit tests with Cloud Scheduler to run every 30 minutes
-
❏ B. Add a test step to the cloudbuild.yaml that runs the unit test command and causes the build to fail on test errors
-
❏ C. Set up a dedicated private worker pool in Cloud Build to execute tests separately from other builds
-
❏ D. Create a Cloud Function that listens for repository push events and triggers the test workflow
Question 14
Which Google Cloud managed API layer enforces authentication and quotas, provides request analytics in Cloud Monitoring, issues API keys to partners, and works with services on GKE and Compute Engine without changing runtimes?
-
❏ A. API Gateway
-
❏ B. Cloud Endpoints
-
❏ C. Cloud Run
-
❏ D. Apigee X
Question 15
You support a ride sharing platform on Google Cloud where an HTTP-triggered Cloud Function calls an external SMS gateway to send trip completion texts. Following a large referral campaign, some riders report missing texts and your logs show intermittent HTTP 500 errors from the function during spikes in traffic. How should you redesign the system so that outbound messages are durably queued and processed even when there are transient failures in the function or the SMS provider?
-
❏ A. Increase the Cloud Function memory to 3 GB and implement local retries for failed SMS HTTP requests
-
❏ B. Publish every SMS request to a Pub/Sub topic and have a subscriber Cloud Function send messages to the SMS API with retries and a dead-letter topic
-
❏ C. Use Memorystore for Redis as a queue for pending SMS and have the function pop entries and call the provider
-
❏ D. Add exponential backoff in the mobile app and retry the function invocation up to 12 times on failure
Question 16
Which Cloud Run release strategy lets you validate a new revision with a small set of users and then safely roll it out to all users?
-
❏ A. Blue green cutover on Cloud Run
-
❏ B. Canary service with synthetic traffic
-
❏ C. Cloud Run traffic splitting with gradual canary
Question 17
A compliance team at RiverMint Finance needs to archive transaction records for long retention. The system must store about 45 TB of structured records that are exported as CSV and JSON, and auditors will access the files about once or twice per month. Regulations require the data to remain available for nine years at the lowest possible cost. Which Google Cloud service should you choose?
-
❏ A. Cloud Spanner
-
❏ B. BigQuery
-
❏ C. Cloud Storage Nearline class
-
❏ D. Cloud Bigtable
Question 18
How should a Cloud Run service securely and with low latency retrieve its project ID and region at request time?
-
❏ A. Set environment variables manually
-
❏ B. Use the internal metadata server with the required header to fetch project ID and region
-
❏ C. Call Cloud Resource Manager and Cloud Run Admin APIs at startup
-
❏ D. Use Secret Manager to store the values and read at request time
Question 19
At OrchardWorks, a SaaS team is building a backend service that uses Firestore for persistence, and the continuous integration pipeline runs roughly 40 times each day to verify end-to-end reads and writes while avoiding any impact on live environments and costs. What is the most suitable approach to run these integration tests?
-
❏ A. Execute tests against the production Firestore instance and roll back writes afterward
-
❏ B. Provision a temporary Firestore instance in a separate Google Cloud project for each pipeline run
-
❏ C. Run tests against the Firestore emulator in the Local Emulator Suite
-
❏ D. Mock Firestore calls with test doubles in your unit tests
Question 20
On GKE, which approach enables automatic scaling for a stateless Deployment within 14 days using built in metrics and minimal manifest changes?
-
❏ A. Vertical Pod Autoscaler
-
❏ B. Managed Instance Group autoscaler
-
❏ C. Horizontal Pod Autoscaler with average CPU target
-
❏ D. Horizontal Pod Autoscaler with custom metric
Google Cloud Developer Certification Exam Answers

Question 1
A fintech startup named AlderPay runs a microservice on Google Kubernetes Engine that must read a database credential that rotates every 45 days. The team wants a straightforward approach that keeps the value encrypted both at rest and when accessed by the workload while minimizing operational effort. What should you do?
-
✓ B. Use Google Cloud Secret Manager and grant the Kubernetes service account access through Workload Identity
The correct option is Use Google Cloud Secret Manager and grant the Kubernetes service account access through Workload Identity.
Secret Manager stores secrets encrypted at rest and serves them over TLS which keeps access secure. With Workload Identity the Kubernetes service account is mapped to a Google service account so the pod can call the Secret Manager API and retrieve the latest version at runtime without mounting files or baking credentials into images. Versioning and IAM let the application automatically read the current secret after a 45 day rotation with no pod restarts, and audit logs help meet compliance while keeping operational effort low.
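For illustration, here is a minimal Python sketch of how a pod running under Workload Identity could read the latest secret version with the google-cloud-secret-manager client. The project and secret names are hypothetical.

```python
from google.cloud import secretmanager

# Under Workload Identity the client library picks up the mapped Google
# service account automatically, so no key files are mounted in the pod.
client = secretmanager.SecretManagerServiceClient()

# Hypothetical project and secret names, for illustration only.
name = "projects/alderpay-prod/secrets/db-password/versions/latest"
response = client.access_secret_version(request={"name": name})

db_password = response.payload.data.decode("utf-8")
```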
Encrypt the password using Cloud KMS and store the ciphertext in a ConfigMap then have the container decrypt it at startup is cumbersome for operations because it requires building and maintaining decryption logic in the application and managing key usage. It also stores sensitive data material in a resource not intended for secrets and makes rotation harder since you must update the manifest and ensure every pod reloads the new value.
Create a Kubernetes Secret and expose it to the pod through environment variables is weaker because Kubernetes Secrets are only base64 encoded unless you also configure etcd encryption, environment variables are easy to leak in logs and diagnostics, and rotations typically require restarts to propagate changes. This does not minimize operational work and does not provide versioned retrieval at runtime.
Place the credential in a ConfigMap and reference it from the pod specification is not appropriate because ConfigMaps are for non sensitive configuration, provide no secret specific controls, and do not support secure handling or rotation of confidential data.
When you see GKE plus secrets plus rotation and a desire to minimize operations, think of Secret Manager with Workload Identity. Avoid using ConfigMaps for sensitive data and remember that Kubernetes Secrets need extra steps for strong encryption and rotation.
Question 2
Which approach minimizes risk and enables steady releases when migrating a monolith to microservices on Google Cloud?
-
✓ C. Incrementally replace features with microservices while the monolith coexists during staged releases
The correct option is Incrementally replace features with microservices while the monolith coexists during staged releases. This approach reduces risk because you migrate functionality in small slices while continuing to serve users and you can validate each change with staged rollouts.
This incremental approach aligns with the strangler pattern. You route specific features to new services while the remaining traffic continues to the monolith. You can use staged releases with canary or blue green techniques to verify performance and correctness before expanding traffic. If issues appear you can quickly roll back only the affected slice rather than the whole system.
Because the monolith and the new services run together you preserve stability and maintain delivery cadence. You can iterate on service boundaries and operational practices such as observability and autoscaling as you go. This leads to steady releases and controlled risk during the migration.
Build a new microservices platform and cut over when complete creates a long period without user feedback and concentrates risk at a single cutover event. It delays value and makes rollback complex since there is no gradual transition.
Lift and shift the monolith to GKE without decomposing services can modernize infrastructure but it does not advance the migration to microservices or enable steady feature by feature releases. It leaves coupling and deployment risk largely unchanged.
Big bang refactor into microservices in a single release maximizes risk because many changes land at once. It complicates testing and rollback and it often leads to prolonged instability after release.
When a question asks about minimizing risk and enabling steady releases look for options that describe incremental change where the monolith and new services coexist and traffic is shifted gradually.
Question 3
You oversee a payments API at Nuvexa Labs that runs in a production Google Kubernetes Engine cluster and you release updates with Cloud Deploy. Engineers push frequent small changes each day, and you want a local workflow that automatically rebuilds and redeploys on edits and that uses a lightweight Kubernetes environment on a laptop so it closely mirrors production. Which tools should you choose to build images and run containers locally while keeping resource usage low?
-
✓ B. Minikube and Skaffold
The correct answer is Minikube and Skaffold.
Skaffold provides a developer friendly loop that watches your source files, rebuilds images, and redeploys to Kubernetes automatically when you edit code. It supports rapid local iteration and integrates with Kubernetes manifests or Helm charts so your workflow matches how you deploy to production.
Minikube runs a lightweight single node Kubernetes cluster on a laptop which keeps resource usage low while still giving you real Kubernetes APIs and behavior. Using Skaffold with Minikube closely mirrors a GKE based production environment since you are developing and testing on Kubernetes rather than on a different container orchestrator.
kaniko and Tekton target container builds and CI pipelines in a cluster environment and they are great for build systems but they are not optimized for a fast local inner development loop on a laptop. This pair would add unnecessary complexity for automatic rebuild and redeploy on local edits.
Docker Compose and dockerd run containers with Docker rather than Kubernetes which means the local environment does not match Kubernetes primitives or behavior. This makes it a poor fit when you want a development setup that mirrors GKE.
Terraform and kubeadm focus on provisioning and bootstrapping infrastructure rather than providing a lightweight local Kubernetes cluster with an automatic build and deploy loop. They do not address rapid local iteration on code changes.
When you see a need for a local Kubernetes dev loop with automatic rebuilds and redeploys that should also mirror production, look for Skaffold paired with a local Kubernetes like Minikube or kind and eliminate CI pipeline or provisioning tools.
Question 4
How should you configure a Cloud Bigtable app profile to support automatic failover and efficient routing across two replicated clusters without manual intervention?
-
✓ C. Use multi cluster routing in the app profile
The correct option is Use multi cluster routing in the app profile.
Use multi cluster routing in the app profile enables a Cloud Bigtable client to automatically send requests to any healthy replicated cluster and to fail over without manual changes when a cluster becomes unavailable. With multi cluster routing the client also benefits from locality aware routing which improves latency by using the nearest cluster and it can distribute traffic across clusters to make better use of replication.
Memorystore for Memcached is a separate caching service and it does not control how a Bigtable app profile routes traffic across clusters, so it cannot provide automatic failover for Bigtable.
Enable client side retries with backoff only retries against the same endpoint, so it helps with transient errors but it does not switch clusters or provide cross cluster routing and therefore it does not meet the requirement.
Keep single cluster routing with a second cluster pins requests to one specific cluster, so failover requires manual intervention or profile changes and it does not deliver automatic routing across both clusters.
When a question asks for automatic failover across replicated Bigtable clusters, look for multi cluster routing in the app profile. If the option mentions single cluster routing or client retries, it usually does not satisfy automatic cross cluster failover.
Question 5
You are developing a module at Meadowbrook Apparel that records hourly shift submissions in an internal workforce system and needs to notify several independent payroll, compliance, and analytics services. The design must keep producers and consumers loosely coupled and should allow new processing steps to be added next quarter while handling about 300 submissions per minute. What approach will provide reliable communication with downstream services and keep future additions simple?
-
✓ D. Publish a shift.submitted event to a single Pub/Sub topic and let each downstream team create and manage its own subscription
The correct option is Publish a shift.submitted event to a single Pub/Sub topic and let each downstream team create and manage its own subscription.
This approach implements an event driven fan out pattern that keeps producers and consumers loosely coupled. The producer emits a single event and each downstream service owns its subscription which lets teams scale independently, set their own acknowledgement and retry behavior, and use dead letter handling without coordinating changes with the producer. Adding a new processing step next quarter stays simple because a new subscription can be created without modifying the publisher.
This design easily handles 300 submissions per minute because the service is built for high throughput and provides durable at least once delivery. Independent subscriptions prevent one slow or failing consumer from blocking others and backlogs can accumulate per consumer until it catches up.
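As a rough sketch, the publisher side could look like the Python snippet below. The project and topic names and the message fields are hypothetical, and each downstream team would simply attach its own subscription to this one topic.

```python
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# Hypothetical project and topic names for illustration.
topic_path = publisher.topic_path("meadowbrook-hr", "shift-submitted")

# Hypothetical shift submission payload.
payload = json.dumps({"employee_id": "e-1842", "hours": 8}).encode("utf-8")

# Attributes let subscribers filter or route without parsing the payload.
future = publisher.publish(topic_path, payload, event_type="shift.submitted")
print(f"Published message {future.result()}")
```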
Use Cloud Tasks with a queue for each step to push HTTP requests to every downstream system is not a good fit because that service is optimized for targeted task delivery to a specific handler and rate control rather than broadcast to many independent consumers. Managing many queues also couples the producer to every step and increases operational overhead.
Create a separate Pub/Sub topic for every step and have each team subscribe only to its own step increases coupling because the producer must know every step and publish to multiple topics. It complicates future additions since new steps require publisher changes, whereas a single topic with multiple subscriptions keeps the producer unaware of consumers.
Build a microservice on Google Kubernetes Engine that invokes downstream steps in sequence and waits for each response introduces tight coupling and synchronous dependencies. It creates a single point of failure, increases latency, and makes adding new steps require code changes and redeployments instead of simply attaching another subscription.
Watch for language about loosely coupled producers and multiple independent consumers. That usually points to using an event bus pattern with a single topic and many subscriptions so new consumers can be added without changing the publisher.
Question 6
Which GKE configuration enables stateful workloads with durable volumes and ordered rolling updates without downtime?
-
✓ B. StatefulSet with PVC and RollingUpdate
The correct option is StatefulSet with PVC and RollingUpdate.
StatefulSet with PVC and RollingUpdate is designed for stateful workloads that need stable identities and storage that persists across rescheduling. The volumeClaimTemplates in a StatefulSet create a dedicated PersistentVolume for each replica, which provides durable storage in GKE using persistent disks. The RollingUpdate strategy updates pods in a defined ordinal order and waits for readiness between steps, which maintains availability and avoids downtime for properly configured applications.
Deployment with PVC and RollingUpdate is not correct because Deployments do not give each replica a stable identity or an automatic one volume per replica pattern. Multiple replicas cannot reliably share a single ReadWriteOnce volume and you cannot guarantee the same pod to volume binding across updates.
DaemonSet with local SSD and manual upgrades is not correct because DaemonSets target one pod per node for infrastructure style workloads rather than application replicas, and local SSD is ephemeral rather than durable. Manual upgrades do not provide ordered rolling updates that maintain availability.
Deployment with Recreate and persistent disk is not correct because the Recreate strategy stops existing pods before starting new ones, which causes downtime even if the storage is durable.
Watch for cues like stable identity, durable per replica storage, and ordered updates since these point to a StatefulSet with PVC and RollingUpdate. Be cautious when you see Recreate or local SSD because these often imply downtime or non durable storage.
Question 7
SkyTrail Analytics runs a BigQuery warehouse with about 3 PB of customer events. The company is entering the European Union market and must meet two obligations. Users must be able to have their account data erased upon request, and any data belonging to EU users must be stored only in an EU region. What actions should you take to satisfy these obligations? (Choose 2)
-
✓ C. Run BigQuery DML such as DELETE and UPDATE to remove or modify user records on demand
-
✓ E. Create a separate BigQuery dataset in the EU and migrate or write EU user data there
The correct options are Run BigQuery DML such as DELETE and UPDATE to remove or modify user records on demand and Create a separate BigQuery dataset in the EU and migrate or write EU user data there.
Run BigQuery DML such as DELETE and UPDATE to remove or modify user records on demand satisfies the user erasure obligation because you can issue targeted DELETE statements with a predicate that identifies the user and apply changes immediately. Using BigQuery DML lets you handle right to be forgotten requests without exporting or rebuilding tables and it works directly on the warehouse tables.
Create a separate BigQuery dataset in the EU and migrate or write EU user data there satisfies the data residency requirement because BigQuery enforces storage location at the dataset level. Keeping EU user records in an EU dataset ensures that their data is stored only in the EU and that queries and jobs operate within that location.
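As an illustrative sketch, the Python snippet below captures both ideas with the BigQuery client, an EU located dataset for residency and a parameterized DELETE for an erasure request. The project, dataset, and table names are made up.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Residency: create the dataset in the EU multi-region (names are hypothetical).
dataset = bigquery.Dataset("skytrail-prod.eu_customer_events")
dataset.location = "EU"
client.create_dataset(dataset, exists_ok=True)

# Erasure: remove one user's rows on demand with a parameterized DML statement.
job = client.query(
    "DELETE FROM `skytrail-prod.eu_customer_events.events` WHERE user_id = @user_id",
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("user_id", "STRING", "user-4711")
        ]
    ),
)
job.result()  # wait for the DML job to complete
```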
Store EU user records in an EU Cloud Storage bucket and query them using external tables is not sufficient because external tables keep data in Cloud Storage and BigQuery cannot use DML to delete specific rows in that storage. Meeting erasure requests would require rewriting objects outside of BigQuery table management and this adds complexity and does not provide on demand deletion in the warehouse.
Use BigQuery federated queries to read from a US Cloud Storage bucket so that data is not duplicated violates the EU only storage requirement because EU user data would reside in the US bucket. Avoiding duplication does not outweigh the need to keep EU data stored only in an EU location.
Build a Dataflow pipeline that republishes sanitized snapshots to a US dataset every week fails both needs because weekly snapshots do not provide immediate deletion and storing data in a US dataset violates the EU residency requirement.
Map requirements to native capabilities. For data erasure think on demand row changes with DML. For residency think dataset location and keep regulated data in a separate regional dataset that matches the requirement.
Question 8
Which connection method ensures the most secure private connectivity from GKE to Cloud SQL for PostgreSQL without using the public internet?
-
✓ B. Private IP with Cloud SQL Auth Proxy sidecar
The correct option is Private IP with Cloud SQL Auth Proxy sidecar.
Using a private network address keeps GKE to Cloud SQL traffic entirely on your VPC so it never traverses the public internet. The sidecar enforces IAM based identity, manages ephemeral certificates, and brokers the connection on behalf of the application which reduces credential sprawl and simplifies secure access. This combination provides private routing with strong authentication and automatic certificate rotation which aligns with Google recommendations for production workloads on GKE.
Public IP with authorized networks still exposes the instance on a public address and only limits which client addresses can reach it. Traffic goes over the internet so it does not satisfy the requirement for private connectivity.
Public IP with SSL/TLS encrypts data in transit but the connection still uses a public address and internet routing. The question asks for private connectivity which this option does not provide.
Private IP with direct app connection uses the private path, however the application must manage database credentials and client security on its own. It lacks the identity aware enforcement and automatic certificate management provided by the sidecar which makes it less secure than the recommended approach.
When a question emphasizes most secure and no public internet choose private networking combined with identity aware access such as the Cloud SQL Auth Proxy or connectors. Public IP options do not meet private connectivity needs even if they use authorized networks or TLS.
Question 9
A development team at Northstar Retail is standardizing container builds for services that will run on Cloud Run and Google Kubernetes Engine. To reduce the attack surface and lower the chance of bundling vulnerable components, what type of operating system base image should they select when creating their container images?
-
✓ C. Choose a minimal base image that contains only the libraries that the service requires
The correct option is Choose a minimal base image that contains only the libraries that the service requires.
This choice reduces the attack surface because it removes packages that your service does not need. Fewer components mean fewer potential vulnerabilities and a smaller image that pulls faster and starts more quickly on Cloud Run and Google Kubernetes Engine. This approach aligns with Google guidance to keep images small and to prefer hardened runtimes such as distroless that omit shells and package managers.
Google maintained distroless images are a strong example of this strategy since they include only the language runtime and essential system libraries. They help limit exposure, improve startup performance, and simplify vulnerability management.
Pick a community image that is not validated by a trusted publisher is incorrect because unverified images carry supply chain risk and may include hidden malware or outdated components. Using trusted and maintained images helps ensure provenance and better security posture.
Use a fully loaded image that includes many common libraries and utilities is incorrect because bundling extra tools increases the number of packages and potential vulnerabilities and it also increases image size which can slow deployments and cold starts.
Ubuntu LTS base image is not the best choice for this goal because it is a general purpose distribution that includes many packages your service does not need. While stable and well supported it creates a larger attack surface compared with a minimal or distroless base.
When a question asks how to lower the attack surface of containers, prefer small or minimal base images and think of distroless. Favor trusted publishers over community images and remove tools that are not needed at runtime.
Question 10
How can you build a container from source without a Dockerfile and publish the image to a central Artifact Registry repository in Google Cloud? (Choose 2)
-
✓ B. gcloud run deploy with --source
-
✓ D. pack build with buildpacks
The correct options are gcloud run deploy with --source and pack build with buildpacks.
Deploying from source with Cloud Run uses Cloud Buildpacks to containerize your code without a Dockerfile. The deploy process automatically builds the image and stores it in Artifact Registry in your project. Cloud Run creates or reuses a repository for source based deployments so the built image is published to Artifact Registry which can serve as a central repository for your organization.
Using the pack CLI with Cloud Native Buildpacks also builds a container from source without a Dockerfile. You tag the image with your Artifact Registry repository path and either publish directly with the pack publish option or push with a standard container push command. This results in the image being stored in Artifact Registry as required.
Skaffold build is primarily a local developer workflow tool and typically relies on a Dockerfile unless you create and maintain a specific buildpacks configuration. The question asks for direct ways to build from source without a Dockerfile and publish to a central Artifact Registry which makes this option less appropriate.
Push source code to Artifact Registry is incorrect because Artifact Registry stores built images and packages. It does not accept raw source code as an artifact.
When a question mentions building from source without a Dockerfile think of buildpacks. In Google Cloud that usually means Cloud Run deploying from source or using the pack CLI. Also remember that Artifact Registry stores images not source code.
Question 11
BlueRidge Artworks plans to move its media upload API from a data center to App Engine standard and the service must allow single file uploads up to 750 MB while keeping App Engine instances out of the data path and still applying authorization and a size limit. What should you implement?
-
✓ C. Generate short-lived V4 signed URLs for Cloud Storage and let clients upload directly
The correct option is Generate short-lived V4 signed URLs for Cloud Storage and let clients upload directly.
This approach lets clients stream data straight to Cloud Storage so App Engine instances are not in the data path. Cloud Storage supports very large objects so a 750 MB file is well within limits. You can keep access controlled by issuing short lived credentials so only authorized clients can upload. You can also enforce a maximum size when you issue the upload authorization by using the V4 policy conditions that constrain the allowed content length which satisfies the size limit requirement while the application server only brokers the authorization.
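A minimal Python sketch of issuing the short lived V4 signed URL from the API is shown below. The bucket and object names are hypothetical, and the size cap would be enforced through a signed POST policy condition or a signed content length header rather than in this bare PUT example.

```python
import datetime

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("blueridge-media-uploads")  # hypothetical bucket
blob = bucket.blob("uploads/artwork-2041.png")     # hypothetical object name

# The App Engine handler authorizes the caller first and then returns this
# URL so the client uploads directly to Cloud Storage.
upload_url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(minutes=15),
    method="PUT",
    content_type="image/png",
)
```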
Send file data through Cloud Pub/Sub and have a subscriber write to storage is not suitable because Pub/Sub messages have a small size limit and the service is not designed for bulk binary file transfer, so 750 MB objects would not work.
Modify the API to perform multipart uploads and reassemble the pieces on App Engine would push large data through App Engine which violates the requirement to keep instances out of the data path. App Engine also has request size and execution constraints that make 750 MB uploads impractical.
Stand up an SFTP service on Compute Engine and have App Engine poll the VM for new files adds unnecessary operational complexity and does not provide the simple signed authorization flow or an enforced size limit. It also bypasses Cloud Storage upload best practices.
When you see large file uploads with App Engine or Cloud Run, think about keeping the service out of the data path. Use Cloud Storage signed URLs or signed POST policies to authorize direct uploads and apply constraints like size and content type.
Question 12
Which configuration provides HTTPS access to a single Compute Engine VM while Google manages certificate provisioning and renewals?
-
✓ B. External HTTPS Load Balancer with Google-managed certificate
The correct option is External HTTPS Load Balancer with Google-managed certificate.
This configuration terminates HTTPS at the edge and lets Google provision and renew the TLS certificate automatically. It can direct traffic to a single Compute Engine virtual machine by using a backend service that references an instance group with one instance or a network endpoint group that targets that instance. This gives you a single external IP with HTTPS while offloading certificate lifecycle tasks to Google.
TCP Proxy Load Balancer with VM self-managed certificate is not correct because the certificate is managed on the virtual machine in this option, so Google does not provision or renew it for you. It also operates at the transport layer and does not provide HTTP specific features that are typical for HTTPS websites.
Identity-Aware Proxy is not correct because it is an access control service and not a public facing HTTPS termination point on its own. It does not manage TLS certificates for your custom domain and is typically used together with an HTTPS load balancer when exposing web applications.
Network Load Balancer is not correct because it is a pass through option at the transport layer and it does not terminate TLS or manage certificates.
When you see wording like Google manages certificate provisioning and renewals think of the external HTTPS load balancer with a managed certificate. If an option mentions self managed certificates or a transport layer load balancer then it will not meet that requirement.
Question 13
Crestline Retail is rolling out continuous integration with Cloud Build for a service hosted in a Git repository on example.com. You need unit tests to run automatically after every push to the main branch and the build must stop if any test fails. How should you configure Cloud Build to accomplish this?
-
✓ B. Add a test step to the cloudbuild.yaml that runs the unit test command and causes the build to fail on test errors
The correct option is Add a test step to the cloudbuild.yaml that runs the unit test command and causes the build to fail on test errors.
This approach keeps testing inside the CI pipeline and ensures that a nonzero exit code from the test command fails the step which immediately stops the build. Configure a Cloud Build trigger on the repository to run on pushes to the main branch and point it to cloudbuild.yaml, and the test step in cloudbuild.yaml will enforce that failures halt the pipeline.
Schedule unit tests with Cloud Scheduler to run every 30 minutes is incorrect because it is time based rather than event driven and it would not guarantee tests after every push or stop the associated build on failure.
Set up a dedicated private worker pool in Cloud Build to execute tests separately from other builds is incorrect because worker pools provide isolation and networking control but they do not address running tests on each push or failing a build when tests fail.
Create a Cloud Function that listens for repository push events and triggers the test workflow is incorrect because Cloud Build already provides native repository triggers and adding a function adds unnecessary complexity without improving how test failures stop the build.
When a requirement says run on every push think Cloud Build triggers. When it says stop the build on failures put tests as a build step that returns a nonzero exit code.
Question 14
Which Google Cloud managed API layer enforces authentication and quotas, provides request analytics in Cloud Monitoring, issues API keys to partners, and works with services on GKE and Compute Engine without changing runtimes?
-
✓ B. Cloud Endpoints
The correct option is Cloud Endpoints because it enforces authentication and quotas, provides request analytics in Cloud Monitoring, issues API keys for consumer access, and works with services running on GKE and Compute Engine without requiring runtime changes.
Cloud Endpoints places the ESPv2 proxy in front of your service and integrates with Google Cloud Service Infrastructure to handle authentication, API keys, and quota enforcement. It exports metrics and logs to Cloud Monitoring and Cloud Logging so you can view request analytics without adding custom code. You can deploy the proxy next to workloads on GKE or on Compute Engine which lets you keep your chosen runtime and framework unchanged while still getting centralized API management capabilities.
API Gateway is designed primarily for serverless backends such as Cloud Functions, Cloud Run, and App Engine. It does not directly support Compute Engine or GKE services as backends which means it does not meet the requirement to work with those runtimes without changes.
Cloud Run is a compute platform for running containers. It is not an API management layer that issues API keys, enforces quotas, or provides centralized API analytics for arbitrary backends.
Apigee X is a full lifecycle API management platform and while it can enforce auth and quotas and issue keys, its analytics are provided in the Apigee analytics suite rather than natively in Cloud Monitoring by default. The question points to a lighter managed layer that integrates with Cloud Monitoring and fits workloads on GKE and Compute Engine which aligns with Endpoints.
Look for signals like Cloud Monitoring based analytics and support for GKE and Compute Engine backends. Those usually indicate Cloud Endpoints, while serverless only clues point to API Gateway.
Question 15
You support a ride sharing platform on Google Cloud where an HTTP-triggered Cloud Function calls an external SMS gateway to send trip completion texts. Following a large referral campaign, some riders report missing texts and your logs show intermittent HTTP 500 errors from the function during spikes in traffic. How should you redesign the system so that outbound messages are durably queued and processed even when there are transient failures in the function or the SMS provider?
-
✓ B. Publish every SMS request to a Pub/Sub topic and have a subscriber Cloud Function send messages to the SMS API with retries and a dead-letter topic
The correct answer is Publish every SMS request to a Pub/Sub topic and have a subscriber Cloud Function send messages to the SMS API with retries and a dead-letter topic.
This design decouples message production from delivery and provides a durable queue. Messages are persisted in the topic and are redelivered until acknowledged. A subscriber function processes messages and handles transient errors by letting the message be retried automatically. A dead letter topic safely captures messages that repeatedly fail so operators can inspect and replay them.
This approach absorbs traffic spikes because producers only publish to the topic quickly while workers scale out to process at a sustainable rate. It also contains failures. If the SMS API or the function has a transient error then the message remains in the subscription and is retried with backoff. The dead letter topic prevents infinite retries and provides a clear path for alerting and manual remediation. Ensure the message processing is idempotent so that retried deliveries do not send duplicate SMS.
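A minimal sketch of the subscriber function is shown below, assuming a hypothetical deliver_to_sms_gateway helper that wraps the provider's HTTP API. Letting an exception propagate causes Pub/Sub to redeliver the message, and repeated failures eventually land in the dead-letter topic.

```python
import base64
import json

import functions_framework


@functions_framework.cloud_event
def send_sms(cloud_event):
    # Pub/Sub delivers the published payload base64 encoded inside the event.
    message = cloud_event.data["message"]
    body = json.loads(base64.b64decode(message["data"]).decode("utf-8"))

    # Hypothetical helper that calls the SMS provider. Raising an exception
    # here makes Pub/Sub retry the message and, after repeated failures,
    # route it to the configured dead-letter topic.
    deliver_to_sms_gateway(phone=body["phone"], text=body["text"])
```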
Increase the Cloud Function memory to 3 GB and implement local retries for failed SMS HTTP requests is not sufficient because more memory does not add durability. Local retries vanish when an instance restarts and they can still lose requests during crashes or timeouts.
Use Memorystore for Redis as a queue for pending SMS and have the function pop entries and call the provider is not appropriate because Memorystore is an in memory cache and is not a managed durable message queue. You would need to build your own acknowledgement, retry, and dead letter handling and you still risk data loss and operational complexity.
Add exponential backoff in the mobile app and retry the function invocation up to 12 times on failure pushes reliability to clients and does not provide durable queuing. If the backend or the SMS provider is unavailable then requests are not persisted and traffic bursts can be amplified by client retries.
When a question asks for a durable queue with retries and dead letters think asynchronous messaging with Pub/Sub and a worker that can retry safely and process idempotently. Local or client retries do not provide durability.

Question 16
Which Cloud Run release strategy lets you validate a new revision with a small set of users and then safely roll it out to all users?
-
✓ C. Cloud Run traffic splitting with gradual canary
The correct option is Cloud Run traffic splitting with gradual canary because it lets you first send a small percentage of real production traffic to a new revision to validate it and then incrementally shift all traffic once it proves stable.
With this method you can begin with a small fraction of requests going to the new revision to observe errors, latency and business metrics while most users remain on the stable version. You can raise the percentage in steps until you reach full rollout. If issues appear you can pause or roll back immediately by returning traffic to the stable revision, which makes this approach both safe and controlled.
The Blue green cutover on Cloud Run pattern switches all traffic in a single step which does not allow validating with a small set of users before the full rollout. It is fast for cutovers but it is not a gradual canary.
The Canary service with synthetic traffic uses generated requests rather than real user traffic, so it does not meet the requirement to validate with a small set of users. The question specifically calls for production traffic based validation.
Look for phrases like small set of users and gradual when choosing between deployment strategies. On Cloud Run that usually points to traffic splitting with percentage based canaries rather than a one step cutover or synthetic tests.
Question 17
A compliance team at RiverMint Finance needs to archive transaction records for long retention. The system must store about 45 TB of structured records that are exported as CSV and JSON, and auditors will access the files about once or twice per month. Regulations require the data to remain available for nine years at the lowest possible cost. Which Google Cloud service should you choose?
-
✓ C. Cloud Storage Nearline class
The correct choice is Cloud Storage Nearline class for this requirement.
Cloud Storage Nearline class is optimized for infrequently accessed data and stores objects such as CSV and JSON natively, which matches the format of the exported records. Access roughly once per month fits its intended use pattern and it provides millisecond access when auditors need the files. It offers significantly lower storage cost than Standard, and when access is only occasional the total cost remains low even with retrieval charges. You can also set a bucket retention policy to enforce nine years so the data remains preserved for compliance while the service scales easily to 45 TB and beyond.
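As a rough sketch with the Python storage client, a Nearline bucket with a nine year retention policy could be created as shown below. The bucket name and location are hypothetical.

```python
from google.cloud import storage

client = storage.Client()

# Hypothetical archive bucket for the exported CSV and JSON records.
bucket = storage.Bucket(client, name="rivermint-transaction-archive")
bucket.storage_class = "NEARLINE"
client.create_bucket(bucket, location="europe-west1")

# Enforce the nine year retention requirement with a bucket retention policy.
bucket.retention_period = 9 * 365 * 24 * 60 * 60  # nine years in seconds
bucket.patch()
```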
Cloud Spanner is a globally distributed relational database for transactional workloads and it is not designed to store files as objects. Using it to hold 45 TB for rare access would be costly and unnecessary since it targets low latency OLTP, not archival retention.
BigQuery is a serverless data warehouse for analytical queries. Although you could load CSV or JSON into tables, that would transform the data and introduce storage and query costs that do not align with simple file archiving. Occasional auditor access to original files is better served by object storage.
Cloud Bigtable is a NoSQL wide column database for very low latency and high throughput access patterns. It does not store raw files like CSV and JSON as objects and would be more expensive and operationally unsuitable for a long term archive accessed only a few times per month.
Anchor your choice on access frequency and data format. Map monthly access to Nearline and quarterly to Coldline and yearly to Archive, then confirm that files belong in object storage rather than databases. For mandated retention, think of bucket retention policies and bucket lock.
Question 18
How should a Cloud Run service securely and with low latency retrieve its project ID and region at request time?
-
✓ B. Use the internal metadata server with the required header to fetch project ID and region
The correct option is Use the internal metadata server with the required header to fetch project ID and region.
This approach uses the metadata server that is available only inside the Cloud Run execution environment and it returns platform metadata with very low latency. You send requests to metadata.google.internal under the computeMetadata/v1 path and you must include the required header Metadata-Flavor set to Google. You can read the project ID from project/project-id and the region from instance/region. Because calls remain within Google infrastructure and use the built in identity of the service they are both secure and fast at request time.
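A minimal Python sketch of those two lookups at request time follows. Using the requests library and a short timeout are implementation choices rather than requirements.

```python
import requests

METADATA = "http://metadata.google.internal/computeMetadata/v1"
HEADERS = {"Metadata-Flavor": "Google"}  # required on every metadata request


def project_and_region():
    project_id = requests.get(
        f"{METADATA}/project/project-id", headers=HEADERS, timeout=2
    ).text
    # The region endpoint returns projects/PROJECT_NUMBER/regions/REGION,
    # so keep only the final path segment.
    region_path = requests.get(
        f"{METADATA}/instance/region", headers=HEADERS, timeout=2
    ).text
    return project_id, region_path.rsplit("/", 1)[-1]
```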
Set environment variables manually is fragile and can drift since Cloud Run does not automatically populate project ID or region and you would need to remember to update values on every deployment. It does not guarantee accurate values across projects or regions and it does not leverage a trusted runtime source at request time.
Call Cloud Resource Manager and Cloud Run Admin APIs at startup adds network overhead and requires additional permissions and token handling. It can increase cold start time and any cached values can become stale which does not meet the need for secure low latency retrieval at request time.
Use Secret Manager to store the values and read at request time is not appropriate because project ID and region are not secrets and reading a secret on each request adds latency and cost. It also creates a bootstrapping issue since determining the secret resource often depends on knowing the project ID first.
When a question asks for runtime identity or platform details inside a service think about the metadata server and remember the required header. Prefer it for low latency and built in security over external APIs or secrets for non secret values.
Question 19
At OrchardWorks, a SaaS team is building a backend service that uses Firestore for persistence, and the continuous integration pipeline runs roughly 40 times each day to verify end-to-end reads and writes while avoiding any impact on live environments and costs. What is the most suitable approach to run these integration tests?
-
✓ C. Run tests against the Firestore emulator in the Local Emulator Suite
The correct option is Run tests against the Firestore emulator in the Local Emulator Suite. This best satisfies the need to perform end to end reads and writes frequently while eliminating any risk to production data and avoiding usage costs.
This approach provides a local and fully isolated Firestore environment that closely simulates production behavior. You can validate reads, writes, security rules, indexes, transactions, and listeners without touching live data. Pipelines can start quickly, seed and reset state deterministically, and run many times per day with predictable performance and no Firestore billing. It is also straightforward to integrate into continuous integration systems and to parallelize runs when needed.
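A minimal sketch of a CI test step against the emulator is shown below. It assumes the emulator is already running on localhost port 8080 and uses a throwaway project ID.

```python
import os

from google.cloud import firestore

# Point the client library at the local emulator instead of live Firestore.
# The emulator can be started locally with `gcloud emulators firestore start`.
os.environ["FIRESTORE_EMULATOR_HOST"] = "localhost:8080"

db = firestore.Client(project="orchardworks-ci-test")  # throwaway project ID

db.collection("orders").document("order-1").set({"status": "created"})
doc = db.collection("orders").document("order-1").get()
assert doc.to_dict()["status"] == "created"
```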
Execute tests against the production Firestore instance and roll back writes afterward is risky and costly. Even with rollbacks there can be side effects such as triggers, caches, quotas, or contention with live traffic. It creates the possibility of data exposure and flakiness and it incurs charges for operations.
Provision a temporary Firestore instance in a separate Google Cloud project for each pipeline run is slow, complex, and expensive for frequent runs. Firestore is a project level resource and creating and tearing down projects and databases many times per day introduces long setup times, quota management, and unnecessary cost. This is heavy operational overhead compared to a local emulator.
Mock Firestore calls with test doubles in your unit tests does not meet the requirement for integration testing of real reads and writes. Mocks bypass the real client libraries, network behavior, indexes, and security rules, so they cannot validate the full end to end path.
Look for keywords like avoid production, frequent CI runs, and cost. When you see these together for Firestore integration tests, prefer the Local Emulator Suite rather than real environments or mocks.
Question 20
On GKE, which approach enables automatic scaling for a stateless Deployment within 14 days using built in metrics and minimal manifest changes?
-
✓ C. Horizontal Pod Autoscaler with average CPU target
The correct option is Horizontal Pod Autoscaler with average CPU target.
On GKE, HPA scales a stateless Deployment by adjusting the number of replicas based on built in CPU utilization metrics from the metrics server. This uses resource metrics that are available by default in GKE, so you only need to add an HPA resource that targets average CPU to enable automatic scaling. The change is minimal and can be rolled out quickly, which fits the requirement to have scaling working within the stated timeframe.
Vertical Pod Autoscaler modifies container resource requests rather than increasing or decreasing the number of pod replicas. That does not meet the goal of horizontally scaling a stateless Deployment and it may also evict pods to apply new recommendations, which is not a minimal change when you need quick scaling.
Managed Instance Group autoscaler operates at the virtual machine instance level in Compute Engine, not at the Kubernetes Deployment level. It does not scale Kubernetes pods and therefore does not fulfill the requirement to automatically scale a Deployment on GKE.
Horizontal Pod Autoscaler with custom metric requires setting up a custom metrics pipeline such as Cloud Monitoring or a Prometheus adapter, publishing metrics, and configuring adapters. That is not built in and involves more configuration than a minimal manifest change, so it is not the fastest path for enabling autoscaling within the given constraints.
Match the scope of scaling to the tool. For a Deployment and built in metrics with minimal changes, choose HPA on CPU. Options that scale nodes or only adjust requests do not satisfy horizontal replica scaling.
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, and Solutions Architect, and the author of many popular books in the software development and cloud computing space. His growing YouTube channel, which trains developers in Java, Spring, AI, and ML, has well over 30,000 subscribers.