GCP Google Cloud Developer Certification Sample Questions

All GCP questions come from my Google Developer Udemy course and certificationexams.pro
Free Google Cloud Developer Certification Exam Topics Test
The Google Certified Professional Developer Practice Test validates your ability to design, build, test, and deploy scalable applications on Google Cloud. It focuses on core domains such as application development, CI/CD automation, service integration, API management, and performance optimization. To prepare effectively, begin with the GCP Professional Developer Practice Questions. These questions mirror the tone, logic, and structure of the real certification exam and help you become familiar with Google’s question style and reasoning approach.
You can also explore Real Google Cloud Developer Certification Exam Questions for authentic, scenario-based challenges that simulate real development tasks within Google Cloud environments. For focused study, review GCP Professional Developer Sample Questions covering topics such as API integration, IAM roles, service deployment, and troubleshooting performance bottlenecks.
Google Certified Developer Exam Simulator
Each section of the GCP Certified Professional Developer Questions and Answers collection is designed to teach as well as test. These materials reinforce essential cloud development concepts and provide clear explanations that help you understand why specific responses are correct, preparing you to think like a professional Google Cloud Developer.
For complete readiness, use the Google Certified Developer Exam Simulator and take full-length practice tests. These simulations reproduce the pacing and structure of the actual Google Cloud certification exam so you can manage your time effectively and gain confidence under real test conditions.
If you prefer focused study sessions, try the Google Developer Certification Exam Dump and the Professional GCP Developer Certification Braindump. These organize questions by topic such as service design, deployment strategies, resource management, and monitoring, allowing you to strengthen your knowledge in key areas.
Google Cloud Certification Practice Exams
Working through these Google Developer Certification Exam Questions builds the analytical and practical skills needed to develop, deploy, and maintain cloud applications efficiently. By mastering these exercises, you will be ready to build scalable systems, maintain governance, and deliver solutions that perform reliably on Google Cloud.
Start your preparation today with the GCP Professional Developer Practice Questions. Train using the Google Certified Developer Exam Simulator and measure your progress with full-length practice exams. Prepare to earn your certification and advance your career as a trusted Google Cloud Developer.
Google Cloud Developer Professional Sample Questions
Question 1
At BrightWave Insurance your team is building the backend that powers an IVR for a claims portal. Each caller maps to a distinct IVR session and every session maintains a persistent gRPC connection to the backend. If a connection drops then the IVR establishes a new connection which adds a small delay for the caller. Analysis shows that calls typically last between 2 and 45 minutes and most traffic occurs during business hours with sharp spikes during open enrollment and when major claims policy updates go live. You want to minimize cost, effort, and operational overhead. Where should you deploy the backend service?
❏ A. Cloud Functions
❏ B. Google Kubernetes Engine Standard mode
❏ C. Cloud Run
❏ D. App Engine flexible environment
Question 2
In Cloud Spanner, write throughput does not scale after increasing nodes from 4 to 12 and recent inserts show high tail latency. Which schema or key choice most likely causes a write hotspot?
❏ A. Multiple secondary indexes on frequently updated tables
❏ B. Using LIKE predicates instead of STARTS_WITH in filters
❏ C. Monotonic UUIDv1 primary keys that cluster new inserts into one range
❏ D. Enabled change streams on high write tables
Question 3
At Larkspur Finance the sign-in service emits audit events and the audit processor stores them. Both components currently run on the same Compute Engine VM. You will move each component into its own managed instance group and use Pub/Sub for event delivery, and peak load is projected to be four times normal during quarter end. How should you configure Pub/Sub topics and subscriptions so the services scale independently and messages are not duplicated?
❏ A. Create one Pub/Sub topic and use a push subscription
❏ B. Set up a single Pub/Sub topic and create a pull subscription for each audit service instance
❏ C. Set up a single Pub/Sub topic and create one pull subscription shared by the audit service deployment
❏ D. Use Pub/Sub Lite with one topic per region and a pull subscription for each audit service instance
❏ E. Create a Pub/Sub topic for every authentication service instance and configure a push subscription for each topic
Question 4
Which Google Cloud option lets you run a containerized web service that scales to zero for low cost and capture application errors in Error Reporting with minimal setup, and what should the app do to surface errors?
❏ A. App Engine flexible and use Cloud Logging
❏ B. Cloud Run and write errors to standard error
❏ C. Cloud Functions and rely on default logging
Question 5
An analytics platform at example.com is building a microservices workload on Google Cloud that needs to call multiple Google Cloud APIs from many containerized workers. The team expects about 90,000 requests per minute with multi megabyte payloads and wants to keep latency and CPU overhead low. Which approach should they adopt to handle high throughput and large data efficiently?
❏ A. Use Apigee X as a proxy between the application and Google Cloud APIs
❏ B. Use the gcloud command-line tool to issue the API calls
❏ C. Use Google Cloud client libraries that use gRPC
❏ D. Use HTTP JSON REST calls for all Google Cloud services
Question 6
Which Google Cloud service runs containerized batch jobs on demand and scales to zero when idle?
❏ A. Cloud Batch
❏ B. Cloud Run Jobs
❏ C. Google Kubernetes Engine
❏ D. Cloud Functions
Question 7
Maple Retail has a new build of a containerized service that passed functional tests and is ready to run on Google Kubernetes Engine. Because staging could not mirror full production load, you want to minimize the risk of performance regressions as the release goes live, and the rollout must be automated end to end. What should you do?
❏ A. Use a continuous delivery pipeline to deploy a canary on GKE and increase traffic only when Cloud Monitoring metrics remain healthy
❏ B. Deploy a blue green release with your pipeline and cut over all traffic after Cloud Monitoring indicates stable performance
❏ C. Configure Cloud Load Balancing with weighted backends to gradually shift traffic between two GKE versions and monitor latency and errors in Cloud Monitoring
❏ D. Roll out with kubectl using a RollingUpdate strategy and revert with kubectl rollout undo if Cloud Monitoring shows problems
Question 8
How should you make a Compute Engine VM’s web page publicly accessible without login and with trusted HTTPS?
❏ A. Configure Cloud Armor to allow direct access
❏ B. External HTTP(S) Load Balancer with Google managed certificate
❏ C. Static external IP with a self signed certificate
❏ D. Identity Aware Proxy with a managed certificate
Question 9
At UrbanCab you are building a dispatch platform on Cloud Run that stores short lived session data in Memorystore for Redis. You created a Serverless VPC Access connector so the service can reach the Redis instance in your VPC. During integration testing requests that read sessions from Redis fail and the logs repeatedly show connection timeout errors. What is the most likely cause of the timeouts?
❏ A. The Memorystore for Redis instance is not reachable from the public internet
❏ B. The VPC that hosts the Redis instance has an incorrect network route
❏ C. The Cloud Run service is using a Serverless VPC Access connector in a different region
❏ D. The Serverless VPC Access connector is attached to a different VPC network than the Redis instance
Question 10
Which IAM role lets a developer create, update, and delete service accounts in a Google Cloud project while following least privilege?
❏ A. roles/iam.serviceAccountUser
❏ B. roles/resourcemanager.projectIamAdmin
❏ C. roles/iam.serviceAccountAdmin
❏ D. roles/iam.serviceAccountCreator
Question 11
Your team runs a GKE Autopilot cluster with Istio service mesh enabled. A service named payments-svc cannot call another in-mesh service named ledger-svc over HTTPS on port 8443. Both workloads have sidecars injected and you have verified that requests from payments-svc leave its sidecar. What is the most probable cause of the failure?
❏ A. A Kubernetes NetworkPolicy is blocking ingress to ledger-svc on port 8443
❏ B. No VirtualService is defined to route HTTPS requests from payments-svc to ledger-svc
❏ C. The DestinationRule for ledger-svc is missing or lacks the TLS policy required for HTTPS negotiation
❏ D. A ServiceEntry for external HTTPS access to ledger-svc is missing
Question 12
How can you force a single HTTP request to be captured by Cloud Trace even when default sampling would skip it?
❏ A. Increase the Cloud Trace sampling rate in the project
❏ B. Use the Cloud Trace v2 API to write traces from a helper tool
❏ C. Send the X-Cloud-Trace-Context header with sampling enabled
❏ D. Enable the OpenTelemetry AlwaysOn sampler
Question 13
LumaStream is launching a podcast discovery app where users sign in and receive episode recommendations tailored to their interests. A listener’s preferences must be durable and available on all devices. The app must also sign out users after 20 minutes of inactivity. In line with Google Cloud best practices, how should the app store the session information and the preference data?
❏ A. Keep session data in Memorystore for Memcached and store user preferences in Cloud SQL
❏ B. Record session data in BigQuery and save user preferences in Cloud Storage
❏ C. Maintain session data in Memorystore for Redis and keep user preferences in Firestore
❏ D. Write session data to Firestore and store user preferences on the VM’s local filesystem
Question 14
In Cloud Build, what is the primary benefit of running each build step in its own container?
❏ A. Automatic parallel execution
❏ B. Consistent and reproducible runtime per step
❏ C. Shared cache across steps
❏ D. Automatic secrets rotation
Question 15
Mariner Analytics is building a suite of microservices on Google Cloud that will publish REST APIs for about 30 internal teams and external partners. You must protect the APIs with authentication and quotas and you also need metrics and logs that show who is calling them and how they perform. Which Google Cloud product should you use to manage and secure these APIs and to gain insight into their usage?
❏ A. Cloud Pub/Sub
❏ B. Cloud Endpoints
❏ C. Cloud Storage
❏ D. Cloud Run
Question 16
Which actions should you take to privately expose a five microservice GKE application behind one internal IP with low latency? (Choose 2)
❏ A. Expose each Service with an internal LoadBalancer
❏ B. Configure HTTPRoute rules to map hosts and paths to Services
❏ C. Use an external HTTP(S) Load Balancer with IP allowlists
❏ D. Provision a Gateway using the gke-l7-ilb class
Question 17
BlueOrbit Games is launching a mobile action title where each player has a profile and a session state that changes many times per minute. Players can switch phones during play and expect their progress to be saved and loaded almost instantly. Which Google Cloud storage approach should the team choose to persist profiles and the rapidly evolving game state?
❏ A. Place both profiles and game state in Cloud Bigtable to achieve low latency at large scale
❏ B. Write game state as binary objects in Cloud Storage and use the player ID as the object name for quick lookups
❏ C. Use Firestore to store player profiles and live game state because it supports real time updates and efficient queries
❏ D. Keep player profiles in Cloud SQL and maintain game state in Memorystore for Redis so reads are fast
Question 18
An application manages its own users and roles and stores files in a private Cloud Storage bucket. Many users do not have Google identities. How should the app let an authorized user download a specific object while keeping the bucket private?
❏ A. Identity Aware Proxy
❏ B. Issue short lived signed URLs from the backend after authorization
❏ C. VPC Service Controls
Question 19
An accounting startup named Nimbus Ledger is deploying about 12 internal microservices on a single Google Kubernetes Engine cluster. Each microservice needs its own replica count and every service should be reachable from any other using a stable name even as pods are rescheduled or scaled. What is the best way to configure these workloads and service discovery?
❏ A. Deploy each microservice as a Deployment and expose it with an Ingress and have other services call the Ingress IP address
❏ B. Run each microservice as a Pod and expose it with a Service and have other services call the Service DNS name
❏ C. Deploy each microservice as a Deployment and expose it with a ClusterIP Service and have other services call the Service DNS name
❏ D. Front each microservice with an internal HTTP load balancer by using network endpoint groups and have other services call the load balancer address
Question 20
Which fully managed Google Cloud service runs Knative-based containers and preserves the same deployment workflow for migration from on-premises?
❏ A. Compute Engine
❏ B. Cloud Run
❏ C. Cloud Run for Anthos
❏ D. Google Kubernetes Engine
Google Cloud Developer Certification Sample Questions Answered
Question 1
At BrightWave Insurance your team is building the backend that powers an IVR for a claims portal. Each caller maps to a distinct IVR session and every session maintains a persistent gRPC connection to the backend. If a connection drops then the IVR establishes a new connection which adds a small delay for the caller. Analysis shows that calls typically last between 2 and 45 minutes and most traffic occurs during business hours with sharp spikes during open enrollment and when major claims policy updates go live. You want to minimize cost, effort, and operational overhead. Where should you deploy the backend service?
✓ C. Cloud Run
The correct option is Cloud Run.
Cloud Run is a fully managed, container-based service that supports both HTTP and gRPC over HTTP/2, so it can maintain a persistent gRPC connection for the duration of each IVR session. Typical call lengths of 2 to 45 minutes fit within the configurable request timeout limits, and it scales automatically to handle sharp traffic spikes during open enrollment or policy updates while scaling to zero outside business hours to reduce cost. You can also configure minimum instances to keep a few containers warm during peak times to reduce reconnect delays, which helps minimize effort and operational overhead.
Cloud Functions is not a good fit because it is optimized for short lived stateless invocations and HTTP style requests rather than hosting client facing persistent gRPC services. Maintaining long lived bidirectional streams to individual callers is not what functions are designed for, and the platform does not provide the same gRPC server capabilities that Cloud Run offers.
Google Kubernetes Engine Standard mode could host gRPC services and handle long connections, yet it requires managing clusters, node pools, upgrades, and scaling policies. That operational burden and baseline cost run counter to the goal of minimizing cost, effort, and operational overhead, whereas Cloud Run removes most of that management.
App Engine flexible environment can handle longer requests but it keeps instances running which raises baseline cost and it scales more slowly for spiky workloads. It is also less straightforward for modern gRPC serving compared with Cloud Run, so it is not the most cost effective or low effort choice here.
When you see persistent gRPC connections and spiky traffic with a need to scale to zero and low ops, prefer Cloud Run. If the option requires managing clusters or always on instances then it likely increases cost and operational overhead.
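To ground this, here is a minimal sketch of a gRPC backend that Cloud Run could host, assuming a servicer generated from your own proto (the ivr_pb2_grpc module and servicer names are hypothetical) and a service deployed with HTTP/2 enabled end to end, for example with the gcloud run deploy --use-http2 flag:
```python
# Minimal gRPC server shape for Cloud Run, which routes traffic to $PORT.
# The ivr_pb2_grpc module and servicer below are hypothetical stand-ins
# for code generated from your own .proto file.
import os
from concurrent import futures

import grpc
# import ivr_pb2_grpc  # hypothetical generated module

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=16))
    # ivr_pb2_grpc.add_IvrSessionServicer_to_server(IvrSessionServicer(), server)
    port = os.environ.get("PORT", "8080")  # Cloud Run injects PORT
    server.add_insecure_port(f"[::]:{port}")  # TLS terminates at the Cloud Run edge
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    serve()
```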
Question 2
In Cloud Spanner, write throughput does not scale after increasing nodes from 4 to 12 and recent inserts show high tail latency. Which schema or key choice most likely causes a write hotspot?
✓ C. Monotonic UUIDv1 primary keys that cluster new inserts into one range
The correct option is Monotonic UUIDv1 primary keys that cluster new inserts into one range.
This choice creates a classic hotspot in Cloud Spanner because new rows arrive in increasing key order and therefore land at the end of the keyspace on a single or few adjacent key ranges. Spanner partitions data by key ranges and assigns a leader for each range. When inserts all target the same hot end range, one leader and its serving replicas get overloaded. Adding more nodes does not help because the bottleneck stays concentrated on that hot range, which explains flat write throughput and rising tail latency during bursts.
Multiple secondary indexes on frequently updated tables increases write amplification since each write must also update the affected indexes. However the load still distributes across index key ranges, so write capacity scales with additional nodes unless the index key itself is monotonic. This option raises per write cost rather than creating a single range hotspot.
Using LIKE predicates instead of STARTS_WITH in filters affects read query planning and execution. It does not change how data is keyed or partitioned for writes, so it cannot create a write hotspot or prevent write throughput from scaling with more nodes.
Enabled change streams on high write tables adds extra write and storage work to capture changes, yet the feature is designed to scale with the underlying partitions. It can increase write latency modestly but it does not concentrate new inserts into one range, so it does not explain the observed hotspot symptoms.
When write throughput stalls while nodes increase and p95 latency spikes on inserts, suspect a hotspot caused by monotonic keys. Prefer randomized or hash sharded primary keys such as UUIDv4 and confirm distribution with Key Visualizer.
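As a minimal sketch of the fix, the insert below uses a random UUIDv4 key so writes spread across the keyspace; the instance, database, table, and column names are hypothetical:
```python
# Random UUIDv4 keys distribute new rows across Spanner key ranges
# instead of clustering them at the end of the keyspace.
import uuid

from google.cloud import spanner

client = spanner.Client()
database = client.instance("claims-instance").database("claims-db")

with database.batch() as batch:
    batch.insert(
        table="AuditEvents",
        columns=("EventId", "Payload"),
        values=[(str(uuid.uuid4()), "policy-updated")],  # random, not monotonic
    )
```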
Question 3
At Larkspur Finance the sign-in service emits audit events and the audit processor stores them. Both components currently run on the same Compute Engine VM. You will move each component into its own managed instance group and use Pub/Sub for event delivery, and peak load is projected to be four times normal during quarter end. How should you configure Pub/Sub topics and subscriptions so the services scale independently and messages are not duplicated?
✓ C. Set up a single Pub/Sub topic and create one pull subscription shared by the audit service deployment
The correct option is Set up a single Pub/Sub topic and create one pull subscription shared by the audit service deployment.
Using one topic lets the sign-in service publish all audit events without knowing anything about the consumers. A single pull subscription that is shared by all audit processor instances implements the competing consumers pattern, so Pub/Sub distributes messages across the instances and each message is delivered to only one instance of the subscriber. This allows the audit processor managed instance group to scale out during quarter end and scale back later while the publisher scales independently. Pull delivery also gives you flow control and acknowledgement management that help prevent overload and smooth out spikes.
Because there is only one subscription, you avoid fan out duplication across multiple subscriptions while still gaining horizontal scalability. Backlog in the subscription can be used to drive autoscaling for the processor group using Cloud Monitoring metrics, which fits the projected four times peak load.
Create one Pub/Sub topic and use a push subscription is not the best fit. Push requires an externally reachable HTTPS endpoint and adds operational complexity with authentication and retry behavior. It does not integrate as cleanly with instance group autoscaling and flow control as pull, and the option does not describe a load balanced endpoint to safely distribute requests.
Set up a single Pub/Sub topic and create a pull subscription for each audit service instance is wrong because each subscription receives a copy of every message. That would cause every instance to process the same event, which duplicates work rather than sharing it.
Use Pub/Sub Lite with one topic per region and a pull subscription for each audit service instance is not appropriate. Pub/Sub Lite requires capacity provisioning and is designed for cost sensitive and very high throughput scenarios. It also does not remove the duplication problem created by a subscription per instance.
Create a Pub/Sub topic for every authentication service instance and configure a push subscription for each topic is incorrect because it tightly couples publishers to individual consumers and adds significant operational complexity. It undermines decoupling and makes scaling and routing brittle while increasing the risk of duplicate processing.
Remember that one topic with one subscription shared by many consumers spreads work without duplicates. If you see multiple subscriptions or one subscription per instance, think fan out and duplication. Prefer pull for managed instance groups and use backlog metrics to drive autoscaling.
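As a minimal sketch, every audit processor instance would run the same subscriber loop against the one shared subscription; the project and subscription names are hypothetical:
```python
# Each instance pulls from the SAME subscription, so Pub/Sub spreads
# messages across instances and each message goes to only one consumer.
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "audit-events-sub")

def callback(message):
    print("storing audit event", message.data)  # persist the event here
    message.ack()  # ack only after the event is stored

# Flow control caps in-flight messages per instance during quarter-end spikes.
flow_control = pubsub_v1.types.FlowControl(max_messages=100)
future = subscriber.subscribe(subscription_path, callback=callback, flow_control=flow_control)
future.result()  # blocks; call future.cancel() to shut down gracefully
```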
Question 4
Which Google Cloud option lets you run a containerized web service that scales to zero for low cost and capture application errors in Error Reporting with minimal setup, and what should the app do to surface errors?
✓ B. Cloud Run and write errors to standard error
The correct option is Cloud Run and write errors to standard error. It meets the need to run a containerized web service with minimal setup, scales to zero when idle to keep cost low, and surfaces errors in Error Reporting when the app writes to the error stream.
This service runs any container image that listens for HTTP traffic and requires very little operational work. It scales up on demand and down to zero when there is no traffic, which is ideal for sporadic workloads and cost control. Anything the container writes to standard output or the error stream is ingested by Cloud Logging, and Error Reporting automatically groups error entries with stack traces. Writing exceptions to the error stream is enough for them to appear in Error Reporting without extra configuration.
App Engine flexible and use Cloud Logging is not the best fit because the flexible environment does not scale to zero and typically keeps at least one instance running, which raises cost during idle periods. While errors can be captured through Cloud Logging and sent to Error Reporting, the scaling behavior and higher operational footprint do not align with the requirements.
Cloud Functions and rely on default logging does scale to zero and forwards logs by default, yet it does not let you deploy an arbitrary containerized web service. It is designed for source based functions rather than running your own container image, so it does not meet the container requirement in the question.
Map the keywords in the scenario to the service that best fits them. If you see containerized and scale to zero then think Cloud Run, and for Error Reporting remember that writing errors to stderr or setting error severity in logs usually requires no extra setup.
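As a minimal sketch, assuming a Flask app inside the container, an exception handler that prints the stack trace to standard error is enough for Error Reporting to pick the error up:
```python
# Unhandled exceptions are written to stderr, which Cloud Run forwards to
# Cloud Logging, and Error Reporting groups them automatically.
import os
import sys
import traceback

from flask import Flask

app = Flask(__name__)

@app.errorhandler(Exception)
def report_error(exc):
    traceback.print_exc(file=sys.stderr)  # stack trace on stderr
    return "internal error", 500

@app.route("/")
def index():
    return "ok"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))
```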
Question 5
An analytics platform at example.com is building a microservices workload on Google Cloud that needs to call multiple Google Cloud APIs from many containerized workers. The team expects about 90,000 requests per minute with multi megabyte payloads and wants to keep latency and CPU overhead low. Which approach should they adopt to handle high throughput and large data efficiently?
✓ C. Use Google Cloud client libraries that use gRPC
The correct option is Use Google Cloud client libraries that use gRPC.
Use Google Cloud client libraries that use gRPC is the best fit because gRPC uses HTTP/2 with multiplexed connections, header compression, and binary Protocol Buffers which together reduce latency and CPU usage for large request and response bodies. Use Google Cloud client libraries that use gRPC also provides efficient streaming for uploading and downloading multi megabyte data and manages connection pooling, retries, and authentication for you, which helps sustain about 90,000 requests per minute with minimal overhead. Many Google Cloud APIs natively expose gRPC endpoints so Use Google Cloud client libraries that use gRPC takes advantage of the fastest available transport.
Use Apigee X as a proxy between the application and Google Cloud APIs is not appropriate here because it introduces an additional proxy and policy processing layer that adds latency and cost. Use Apigee X as a proxy between the application and Google Cloud APIs is designed for API management and external exposure rather than as a low latency path for internal high throughput calls to Google Cloud services.
Use the gcloud command-line tool to issue the API calls is not suitable because it is a command line administration tool rather than a high performance client. Spawning processes and parsing command output for each request would add significant CPU and latency overhead, so Use the gcloud command-line tool to issue the API calls would not scale to the required request rate or payload sizes.
Use HTTP JSON REST calls for all Google Cloud services would work functionally but it is less efficient for this workload. JSON encoding and HTTP/1.1 request patterns increase CPU and bandwidth compared to binary protobuf over HTTP/2, therefore Use HTTP JSON REST calls for all Google Cloud services would typically have higher latency and overhead for large payloads and very high throughput.
When you see keywords like high throughput, large payloads, and low latency, prefer the official client libraries that use gRPC where available. Be cautious of answers that add management layers or rely on command line tools inside services because those usually increase overhead.
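As a minimal sketch, a publish call through the Cloud Client Library rides gRPC by default with no extra configuration; the project and topic names are hypothetical:
```python
# The Cloud Client Libraries use a gRPC transport under the hood, so this
# publish is protobuf over HTTP/2 rather than JSON over HTTP/1.1.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "worker-events")

future = publisher.publish(topic_path, b"multi-megabyte payload bytes")
print(future.result())  # server-assigned message ID once acknowledged
```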
Question 6
Which Google Cloud service runs containerized batch jobs on demand and scales to zero when idle?
✓ B. Cloud Run Jobs
The correct option is Cloud Run Jobs because it runs containerized batch jobs on demand and automatically scales to zero when idle.
This service is a fully managed serverless execution environment for one off container workloads. You provide a container image and define how many tasks to run and the platform executes them to completion and then frees the resources. There are no idle costs because it scales to zero between runs and you only pay while the job is executing.
Cloud Batch is not the best fit because it provisions and manages virtual machine based compute for batch workloads. It can run containers yet it is not a serverless platform that simply scales to zero when idle and it is aimed at large scale or specialized compute scenarios.
Google Kubernetes Engine requires a Kubernetes cluster and you continue to pay for the control plane and nodes even when no jobs are running. It can run Kubernetes Jobs but it is not an on demand serverless job runner that scales to zero.
Cloud Functions is an event driven functions service and not a job oriented container runner. Although it can scale to zero it is designed for functions that respond to events or HTTP requests rather than for managing containerized batch executions.
Look for phrases like scale to zero and containerized batch jobs and prefer the serverless job runner. If the prompt stresses VM control or HPC needs then think of a batch service that manages compute directly.
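As a minimal sketch of a job task, Cloud Run Jobs injects CLOUD_RUN_TASK_INDEX and CLOUD_RUN_TASK_COUNT so each task can claim its shard of the work; the work items here are hypothetical:
```python
# Each task processes a stride of the work based on its index, so adding
# tasks fans the batch out with no coordination code.
import os

task_index = int(os.environ.get("CLOUD_RUN_TASK_INDEX", "0"))
task_count = int(os.environ.get("CLOUD_RUN_TASK_COUNT", "1"))

items = [f"record-{i}" for i in range(1000)]  # stand-in for real work items
for item in items[task_index::task_count]:
    print("processing", item)  # replace with the real per-item work
```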
Question 7
Maple Retail has a new build of a containerized service that passed functional tests and is ready to run on Google Kubernetes Engine. Because staging could not mirror full production load, you want to minimize the risk of performance regressions as the release goes live, and the rollout must be automated end to end. What should you do?
✓ C. Configure Cloud Load Balancing with weighted backends to gradually shift traffic between two GKE versions and monitor latency and errors in Cloud Monitoring
The correct option is Configure Cloud Load Balancing with weighted backends to gradually shift traffic between two GKE versions and monitor latency and errors in Cloud Monitoring.
This approach lets you start with a small percentage of production traffic to the new version and then increase the share as metrics remain healthy. You can automate the weight changes in your delivery pipeline and gate progression on Cloud Monitoring signals such as latency and error rate. This reduces the risk of performance regressions under real load while keeping the rollout fully automated.
Use a continuous delivery pipeline to deploy a canary on GKE and increase traffic only when Cloud Monitoring metrics remain healthy is too vague to be reliable because a standard GKE Service or basic Ingress does not provide percentage based traffic splitting on its own. Without a defined mechanism such as weighted backends or a mesh that supports traffic splitting, this choice does not guarantee the required automated and controlled ramp up.
Deploy a blue green release with your pipeline and cut over all traffic after Cloud Monitoring indicates stable performance does not minimize performance risk as effectively because it shifts all traffic at once. If a regression appears only under high load, the full cutover increases blast radius rather than limiting exposure with a gradual shift.
Roll out with kubectl using a RollingUpdate strategy and revert with kubectl rollout undo if Cloud Monitoring shows problems is not automated end to end and relies on manual intervention to revert. A basic rolling update also does not provide metric driven gating or precise control over traffic percentages, so it does not meet the requirement to minimize performance risk while automating the rollout.
When a question emphasizes a gradual traffic shift, automated rollout, and metric based gates, prefer solutions that can control traffic percentages at the load balancer or mesh layer rather than generic deployment strategies.
Question 8
How should you make a Compute Engine VM’s web page publicly accessible without login and with trusted HTTPS?
✓ B. External HTTP(S) Load Balancer with Google managed certificate
The correct option is External HTTP(S) Load Balancer with Google managed certificate. This fulfills public access without requiring user authentication and provides a browser trusted TLS certificate.
The load balancer fronts your VM at the edge using a global anycast IP and terminates TLS with a certificate that Google provisions and renews automatically. You attach the VM through an instance group or a network endpoint group and allow the required health check and proxy traffic with firewall rules. This gives you scalable and secure HTTPS without adding a login flow.
Configure Cloud Armor to allow direct access is incorrect because Cloud Armor is a policy and web application firewall service that works with an HTTP(S) load balancer. It does not serve traffic or terminate TLS and it cannot expose a VM by itself.
Static external IP with a self signed certificate is incorrect because a self signed certificate is not trusted by browsers and will show warnings. It also leaves certificate issuance and renewal to you and does not provide edge proxy features such as global routing and protection.
Identity Aware Proxy with a managed certificate is incorrect because this adds authentication for users. The question requires public access without login and this service is designed for per user access control rather than open internet access.
When you see without login and trusted HTTPS think of an external HTTPS load balancer with a managed certificate. If the scenario adds user authentication needs then consider Identity Aware Proxy.
Question 9
At UrbanCab you are building a dispatch platform on Cloud Run that stores short lived session data in Memorystore for Redis. You created a Serverless VPC Access connector so the service can reach the Redis instance in your VPC. During integration testing requests that read sessions from Redis fail and the logs repeatedly show connection timeout errors. What is the most likely cause of the timeouts?
✓ C. The Cloud Run service is using a Serverless VPC Access connector in a different region
The correct option is The Cloud Run service is using a Serverless VPC Access connector in a different region.
Serverless VPC Access connectors are regional resources and Cloud Run services can only use a connector that is in the same region as the service. When the connector is in a different region the service cannot establish private connectivity to the VPC that hosts Memorystore. The traffic never reaches the Redis private IP and client connections sit idle until they time out. Memorystore for Redis is reachable only by private IP within the VPC and region which further means a cross region connector mismatch results in timeouts that look exactly like what you observed.
The Memorystore for Redis instance is not reachable from the public internet is incorrect because Memorystore is designed to be private. Cloud Run reaches it over private IP through the connector so public reachability is not required or used.
The VPC that hosts the Redis instance has an incorrect network route is unlikely. VPCs have implicit routes for subnets and the connector attaches to a subnet to provide internal egress. Basic internal routing does not usually cause timeouts in this scenario and the more common cause is the regional mismatch.
The Serverless VPC Access connector is attached to a different VPC network than the Redis instance is not the best answer here. While a different VPC would indeed block access, the question states you created the connector so the service can reach the instance in your VPC which implies the intended network. The most common and exam tested pitfall is using a connector in a different region which directly explains the repeated timeouts.
When a serverless service times out reaching a private backend first verify the connector and service are in the same region and the same VPC network. Remember that Serverless VPC Access is a regional resource and cross region use does not work.
Question 10
Which IAM role lets a developer create, update, and delete service accounts in a Google Cloud project while following least privilege?
✓ C. roles/iam.serviceAccountAdmin
The correct option is roles/iam.serviceAccountAdmin because it grants the ability to create, update, and delete service accounts while avoiding broader project wide IAM administration, which aligns with least privilege.
The Service Account Admin role includes permissions to manage the lifecycle of service accounts, such as create, update, list, get, and delete. It focuses on administering service accounts themselves and their policies without granting the ability to broadly manage project IAM bindings or to act as the service account.
roles/iam.serviceAccountUser is incorrect because it allows a principal to act as a service account, which is useful for impersonation and runtime usage, but it does not allow creating, updating, or deleting service accounts.
roles/resourcemanager.projectIamAdmin is incorrect because it grants wide authority to manage IAM policies across the entire project, which exceeds the least privilege needed for service account management.
roles/iam.serviceAccountCreator is incorrect because it only permits creating service accounts and does not include update or delete permissions.
Map the verbs in the scenario to IAM permissions and then choose the narrowest predefined role. If you see create, update, and delete for service accounts, pick the admin role for service accounts. Remember that Service Account User is for impersonation and that Service Account Creator only covers creation.
Question 11
Your team runs a GKE Autopilot cluster with Istio service mesh enabled. A service named payments-svc cannot call another in-mesh service named ledger-svc over HTTPS on port 8443. Both workloads have sidecars injected and you have verified that requests from payments-svc leave its sidecar. What is the most probable cause of the failure?
✓ C. The DestinationRule for ledger-svc is missing or lacks the TLS policy required for HTTPS negotiation
The only correct option is The DestinationRule for ledger-svc is missing or lacks the TLS policy required for HTTPS negotiation.
In Istio, when a client calls a service over HTTPS on a nonstandard port, the client sidecar must know to originate TLS to the destination. You configure this behavior with a DestinationRule that sets a TLS mode in the traffic policy. Without that rule, the client sidecar typically sends plaintext application traffic over the mTLS tunnel between sidecars and the server application that expects HTTPS on port 8443 will not complete the handshake. This aligns with the observation that traffic is leaving the payments sidecar, which points to a TLS negotiation issue rather than a routing or reachability problem.
A Kubernetes NetworkPolicy is blocking ingress to ledger-svc on port 8443 is unlikely because the symptom is consistent with failed HTTPS negotiation rather than basic connectivity. NetworkPolicy problems usually present as immediate connection timeouts or refusals and the question emphasizes mesh behavior and TLS.
No VirtualService is defined to route HTTPS requests from payments-svc to ledger-svc is not the cause because VirtualService controls routing rules such as host and path based routing and traffic splitting. A simple in-mesh call to a ClusterIP service does not require a VirtualService for basic connectivity or TLS negotiation.
A ServiceEntry for external HTTPS access to ledger-svc is missing is incorrect because ServiceEntry is used to make services outside the mesh reachable. The ledger service is in the mesh, so a ServiceEntry is not needed.
Map each Istio resource to its role. Use a DestinationRule for client-side TLS settings, a VirtualService for routing logic, and a ServiceEntry for external services. If an in-mesh call over HTTPS fails, check the DestinationRule TLS policy first.
Question 12
How can you force a single HTTP request to be captured by Cloud Trace even when default sampling would skip it?
✓ C. Send the X-Cloud-Trace-Context header with sampling enabled
The correct option is Send the X-Cloud-Trace-Context header with sampling enabled.
This works because Cloud Trace honors a client supplied trace context header. When the sampling flag in this header is set to 1 the platform records that specific request even if the default probabilistic sampler would have dropped it. This lets you capture a single request on demand without changing global configuration or code.
Increase the Cloud Trace sampling rate in the project is not correct because it changes the default probability for all requests and does not guarantee that one particular request will be captured. It also affects system wide overhead rather than providing a targeted override.
Use the Cloud Trace v2 API to write traces from a helper tool is not correct because the API lets you programmatically create and write spans but it does not force the platform to sample an incoming HTTP request automatically. You would need instrumentation to emit spans and that is different from capturing one live request through the normal sampling path.
Enable the OpenTelemetry AlwaysOn sampler is not correct because it traces every request in an instrumented service rather than a single one. It also only applies where your application is using OpenTelemetry and does not override Cloud Trace default sampling for a lone request.
When a question asks how to capture a single request think of overriding sampling with the trace context header rather than raising projectwide rates or changing client libraries.
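As a minimal sketch, a client can force tracing of one request by setting the sampling flag in the header; the trace ID and URL here are illustrative values:
```python
# The o=1 option in X-Cloud-Trace-Context tells Cloud Trace to record this
# request even if default sampling would drop it.
import requests

trace_id = "105445aa7843bc8bf206b12000100000"  # 32 lowercase hex characters
headers = {"X-Cloud-Trace-Context": f"{trace_id}/1;o=1"}

resp = requests.get("https://my-service.example.com/checkout", headers=headers)
print(resp.status_code)
```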
Question 13
LumaStream is launching a podcast discovery app where users sign in and receive episode recommendations tailored to their interests. A listener’s preferences must be durable and available on all devices. The app must also sign out users after 20 minutes of inactivity. In line with Google Cloud best practices, how should the app store the session information and the preference data?
✓ C. Maintain session data in Memorystore for Redis and keep user preferences in Firestore
The correct option is Maintain session data in Memorystore for Redis and keep user preferences in Firestore.
Sessions are short lived and need very fast reads and writes with automatic expiration. Memorystore for Redis provides in memory storage with key expiration, so you can set a 20 minute time to live to sign users out after inactivity. It delivers low latency and high throughput which is ideal for session management.
User preferences must be durable and consistently available across devices. Firestore is a managed document database with strong consistency, flexible schemas, and optional multi region replication. It is well suited for storing user profiles and preferences and supports efficient queries and fine grained access controls.
Keep session data in Memorystore for Memcached and store user preferences in Cloud SQL is not the best fit. Memcached does not offer the same durability and managed high availability features that Redis provides for session state, and Redis is the commonly recommended choice for sessions on Google Cloud. While Cloud SQL can store structured data, user preference documents typically fit better in a document database like Firestore which offers flexible schemas and client friendly access patterns.
Record session data in BigQuery and save user preferences in Cloud Storage is incorrect because BigQuery is an analytics warehouse and is not intended for low latency transactional session data. Cloud Storage is object storage and is not a database, so it is not appropriate for frequently updated per user preference records that need querying.
Write session data to Firestore and store user preferences on the VM’s local filesystem is unsuitable because data on a VM local filesystem is not durable or shared across instances and can be lost when the VM is restarted or rescheduled. While Firestore can store many types of data, using it for sessions is less optimal than Redis for the required rapid expiration and throughput.
When you see session with an idle timeout, think in memory stores with TTL such as Redis. When you see durable user profile or preferences, map that to a managed database like Firestore that replicates and scales for global access. Avoid analytics warehouses and object storage for transactional state.
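As a minimal sketch, the session write below sets a 20 minute TTL so idle sessions expire on their own; the Redis host and key layout are hypothetical:
```python
# SETEX stores the session and starts the idle-timeout clock in one call.
import json

import redis

r = redis.Redis(host="10.0.0.3", port=6379)  # Memorystore private IP

SESSION_TTL_SECONDS = 20 * 60  # sign out after 20 minutes of inactivity

def touch_session(user_id, state):
    # Rewriting the key on every request resets the 20 minute countdown.
    r.setex(f"session:{user_id}", SESSION_TTL_SECONDS, json.dumps(state))

touch_session("listener-42", {"last_episode": "ep-107"})
```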
Question 14
In Cloud Build, what is the primary benefit of running each build step in its own container?
✓ B. Consistent and reproducible runtime per step
The correct option is Consistent and reproducible runtime per step.
This is correct because Cloud Build executes each step in the specific container image you define, which isolates the tools and dependencies for that step. This isolation gives you the same environment every time and helps ensure deterministic and repeatable builds. Pinning image versions further prevents drift so the behavior of the step stays consistent across runs and across different machines.
Automatic parallel execution is incorrect because steps run sequentially by default. You only get parallel execution when you explicitly configure dependencies with the waitFor setting, so it is not automatic.
Shared cache across steps is incorrect because each step has an isolated container filesystem and there is no automatic sharing of caches. You can configure named volumes or enable build caching to speed up builds, yet that is not the default behavior and it is not the primary benefit of per step containers.
Automatic secrets rotation is incorrect because secrets rotation is handled by systems such as Secret Manager when you configure it. Running steps in containers does not provide rotation by itself.
Match what containers inherently provide to the option wording. Think isolation and reproducibility. If an option suggests features are automatic such as parallelism, caching, or secrets rotation, then verify whether they require explicit configuration.
Question 15
Mariner Analytics is building a suite of microservices on Google Cloud that will publish REST APIs for about 30 internal teams and external partners. You must protect the APIs with authentication and quotas and you also need metrics and logs that show who is calling them and how they perform. Which Google Cloud product should you use to manage and secure these APIs and to gain insight into their usage?
✓ B. Cloud Endpoints
The correct option is Cloud Endpoints because it provides API management for REST services with authentication, quota enforcement, and detailed metrics and logs that identify callers and measure performance.
This service secures APIs with API keys and JWT based authentication using Google service accounts or Firebase Auth, and it works with OpenAPI or gRPC. It enforces consumer level quotas through Service Control and surfaces per method and latency metrics in Cloud Monitoring along with full request and error logs in Cloud Logging. These capabilities let you see who is calling the APIs and how they perform so you can manage internal teams and partners reliably.
Cloud Pub/Sub is a messaging service for event ingestion and asynchronous processing, and it does not provide API gateway features such as authentication policies, client quotas, or per consumer analytics.
Cloud Storage is object storage for files and data and it is not an API management product, so it does not offer authentication enforcement, quotas, or API usage insights for your microservices.
Cloud Run is a compute platform for running containers and exposing endpoints, yet it does not include full API management features like centralized authentication for many consumers, per client quotas, or rich API analytics.
When a question asks for authentication, quotas, and metrics and logs for APIs, look for an API management product rather than compute or storage services. Map the requirement to the service that centralizes policy enforcement and observability for many consumers.
Question 16
Which actions should you take to privately expose a five microservice GKE application behind one internal IP with low latency? (Choose 2)
✓ B. Configure HTTPRoute rules to map hosts and paths to Services
✓ D. Provision a Gateway using the gke-l7-ilb class
The correct options are Configure HTTPRoute rules to map hosts and paths to Services and Provision a Gateway using the gke-l7-ilb class.
Using HTTPRoute lets you map hostnames and URL paths to the appropriate Kubernetes Services which gives you layer seven routing and a single entry point. This allows multiple microservices to be exposed through one listener and keeps the configuration centralized and simple.
Provisioning a Gateway with the gke-l7-ilb class creates a regional internal HTTP(S) load balancer with a single internal virtual IP. This keeps traffic private within your VPC and provides low latency and high throughput while enabling host and path based routing to your Services. Together these two features meet the requirement of one internal IP and provide efficient routing across the five microservices.
Expose each Service with an internal LoadBalancer is incorrect because it would provision a separate internal IP for each Service. That would not meet the requirement to expose all five microservices behind a single internal IP and it would not provide centralized layer seven routing.
Use an external HTTP(S) Load Balancer with IP allowlists is incorrect because the requirement is private exposure behind an internal IP. An external load balancer uses a public external address and allowlists do not make it internal.
When a question asks for one internal IP in front of multiple GKE Services, think Gateway API with gke-l7-ilb and HTTPRoute for host and path routing. If the requirement says private or internal you can usually eliminate external load balancers even with allowlists.
Question 17
BlueOrbit Games is launching a mobile action title where each player has a profile and a session state that changes many times per minute. Players can switch phones during play and expect their progress to be saved and loaded almost instantly. Which Google Cloud storage approach should the team choose to persist profiles and the rapidly evolving game state?
✓ C. Use Firestore to store player profiles and live game state because it supports real time updates and efficient queries
The correct option is Use Firestore to store player profiles and live game state because it supports real time updates and efficient queries.
This choice fits a mobile game that needs near instant synchronization across devices. It provides client SDKs that deliver real time listeners so a player can switch phones and immediately see the latest state without building a custom sync layer. It offers low latency reads and writes for frequently changing documents and it automatically indexes fields for efficient lookups of profiles and session data.
It also handles scalability and durability needs for a global game. Data is strongly consistent and replicated, and the service manages indexing and scaling so the team can focus on gameplay rather than database operations.
Place both profiles and game state in Cloud Bigtable to achieve low latency at large scale is not a great fit for this workload. While it provides low latency at scale, it is optimized for wide column time series and analytical access patterns and it does not provide built in real time client listeners or the rich querying and mobile developer ergonomics needed for rapid per player state changes and instant cross device synchronization.
Write game state as binary objects in Cloud Storage and use the player ID as the object name for quick lookups is unsuitable because object storage is designed for large blobs and not for high frequency small updates. Each change would require rewriting whole objects, latency would be higher, and there is no query capability or real time notification to clients.
Keep player profiles in Cloud SQL and maintain game state in Memorystore for Redis so reads are fast introduces durability and scalability risks. Redis is an in memory cache and is not a primary system of record for critical state that must survive restarts and failovers. Coordinating cache and database consistency is complex, and Cloud SQL may become a bottleneck with many mobile clients and does not provide real time client sync without additional infrastructure.
Map the workload to the data model and access pattern. Choose Firestore for mobile apps that need real time sync and offline capable clients. Use Cloud Storage for large blobs, Memorystore for a cache not a source of truth, and Bigtable for very large scale key based and time series access.
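As a minimal sketch, a snapshot listener pushes the latest game state to whichever device the player picks up; the collection and field names are hypothetical:
```python
# on_snapshot fires on every committed change, which is how a second phone
# sees progress almost instantly without polling.
from google.cloud import firestore

db = firestore.Client()
state_ref = db.collection("game_state").document("player-42")

def on_state_change(doc_snapshots, changes, read_time):
    for doc in doc_snapshots:
        print("latest state:", doc.to_dict())

watch = state_ref.on_snapshot(on_state_change)  # call watch.unsubscribe() to stop
state_ref.set({"level": 7, "hp": 42}, merge=True)  # frequent small updates are fine
```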
Question 18
An application manages its own users and roles and stores files in a private Cloud Storage bucket. Many users do not have Google identities. How should the app let an authorized user download a specific object while keeping the bucket private?
✓ B. Issue short lived signed URLs from the backend after authorization
The correct option is Issue short lived signed URLs from the backend after authorization.
This approach lets the application keep the bucket private while allowing a specific user to download only the authorized object. The app verifies the user in its own identity and authorization system and then the backend generates a time bound URL that is scoped to that single object. The user does not need a Google account and the link expires quickly which limits exposure.
Identity Aware Proxy protects access to web applications and service endpoints that are fronted by Google identity. It does not provide direct time limited access to Cloud Storage objects for users without Google accounts which makes it a poor fit for this requirement.
VPC Service Controls creates a data perimeter to reduce data exfiltration within Google Cloud. It does not enable ad hoc downloads for external users and would restrict external access rather than provide controlled one time access to a specific object.
When users have no Google identities think of time bound access using short lived links that are object scoped. Authenticate and authorize in your app then generate the temporary access only after approval.
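As a minimal sketch, once the app's own authorization check passes, the backend mints a V4 signed URL for the single object; the bucket and object names are hypothetical, and the credentials running this code must be able to sign, such as a service account key:
```python
# The URL embeds a signature scoped to one object and a short lifetime, so
# the bucket itself stays private.
from datetime import timedelta

from google.cloud import storage

client = storage.Client()
blob = client.bucket("private-files-bucket").blob("reports/2024-q3.pdf")

url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(minutes=15),  # link expires quickly
    method="GET",
)
print(url)  # return this to the already-authorized user
```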
Question 19
An accounting startup named Nimbus Ledger is deploying about 12 internal microservices on a single Google Kubernetes Engine cluster. Each microservice needs its own replica count and every service should be reachable from any other using a stable name even as pods are rescheduled or scaled. What is the best way to configure these workloads and service discovery?
✓ C. Deploy each microservice as a Deployment and expose it with a ClusterIP Service and have other services call the Service DNS name
The correct option is Deploy each microservice as a Deployment and expose it with a ClusterIP Service and have other services call the Service DNS name.
This approach uses Deployments to manage the desired replica count, rolling updates, and automatic rescheduling which meets the requirement for independent scaling and reliability. A ClusterIP Service gives each microservice a stable virtual IP and a DNS name that is resolvable inside the cluster. Other services can reliably reach peers by calling the Service DNS name even as pods are rescheduled or scaled because the Service abstracts over the changing pod endpoints.
Deploy each microservice as a Deployment and expose it with an Ingress and have other services call the Ingress IP address is not appropriate for internal service to service communication because Ingress is designed for HTTP routing into the cluster. Forcing internal calls through an Ingress and its external virtual IP adds unnecessary latency and complexity and does not provide simple in cluster discovery.
Run each microservice as a Pod and expose it with a Service and have other services call the Service DNS name does not meet the manageability and scaling needs because a single Pod is not a controller managed workload. It does not handle replicas, rollouts, or automatic rescheduling, whereas Deployments do.
Front each microservice with an internal HTTP load balancer by using network endpoint groups and have other services call the load balancer address adds external load balancing components that are unnecessary for communication inside one cluster. This increases cost and operational overhead and does not improve service discovery beyond what a ClusterIP Service and Kubernetes DNS already provide.
When you see internal service to service communication inside a single cluster, think Deployment for pod management and ClusterIP Service with DNS for stable names.
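As a minimal sketch, a caller inside the cluster reaches a peer through its stable Service DNS name; the service, namespace, port, and path are hypothetical:
```python
# <service>.<namespace>.svc.cluster.local resolves to the ClusterIP, which
# stays stable while the pods behind it are rescheduled or scaled.
import requests

LEDGER_URL = "http://ledger-svc.default.svc.cluster.local:8080/balances"

resp = requests.get(LEDGER_URL, timeout=5)
resp.raise_for_status()
print(resp.json())
```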
Question 20
Which fully managed Google Cloud service runs Knative-based containers and preserves the same deployment workflow for migration from on-premises?
✓ B. Cloud Run
The correct option is Cloud Run because it is a fully managed service that runs Knative-based containers and preserves the same deployment workflow when migrating from on-premises Knative environments.
This service is built on Knative so the container and request model are the same across environments. It supports a consistent developer experience for building a container and deploying it with the same commands and configuration patterns, which makes moves from on-premises Knative or other Knative platforms straightforward.
Compute Engine is infrastructure as a service that provides virtual machines and it does not run Knative by default and does not offer a fully managed serverless experience or preserve the Knative deployment workflow.
Cloud Run for Anthos runs Knative on your own GKE clusters so it is not fully managed and requires you to operate the underlying cluster. It has also been deprecated and retired by Google which makes it less likely to appear as the right choice on newer exams.
Google Kubernetes Engine is a managed Kubernetes service rather than a fully managed serverless platform and it does not provide Knative by default so it does not preserve the same simple Knative deployment workflow without additional setup.
Look for the combination of fully managed and Knative. That pairing points to Cloud Run. If the option mentions Anthos or managing clusters then it is not the fully managed serverless choice.
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.