Associate Google Cloud Engineer Sample Questions

Free GCP Certification Exam Topics Tests

The Google Associate Cloud Engineer certification exam validates your ability to configure, deploy, and operate Google Cloud solutions that support scalable project work and team collaboration.

Early in your prep, review GCP Associate Cloud Engineer Practice Questions and Real Associate Cloud Engineer Certification Exam Questions to align your study with the exam’s tone, logic, and structure.

The exam focuses on core domains such as resource provisioning, IAM configuration, VPC networking, Kubernetes and workload management, and monitoring with Cloud Logging and Cloud Monitoring.

Google Cloud Certification Practice Exams

For targeted practice, work through GCP Cloud Engineer Associate Sample Questions and verify your progress with a realistic Google Associate Cloud Engineer Exam Simulator.

Each section of the GCP Associate Cloud Engineer Questions and Answers collection is designed to teach as well as test. Clear explanations reinforce key concepts like service account design, permission boundaries, safe network changes, and efficient troubleshooting so you understand why a choice is correct.

Build full-exam endurance with the Google Cloud Engineer Associate Practice Test series to mirror the pacing of the official certification and to improve time management.

If you prefer topic bursts, you can review a Google Associate Cloud Engineer Exam Dump style set for repetition, while remembering that the goal is learning rather than shortcutting.

About GCP Exam Dumps

Avoid any unethical sources and instead rely on the instructor-created Google Cloud Engineer Associate Certification Braindump style study sets that emphasize understanding over memorization.

Working through these Associate Cloud Engineer practice materials builds the analytical and practical skills needed to manage projects on Google Cloud with confidence.

Start with practice questions, measure progress with the exam simulator, and finish strong with complete practice tests to prepare for certification success.

Google Cloud Engineer Associate Sample Questions

Question 1

On Google Cloud, you must grant least privilege access to BigQuery analysts, Compute Engine admins, and Cloud Storage readers while keeping management overhead low. Which IAM approach should you choose?

  • ❏ A. Project level Editor role for all users in one group

  • ❏ B. Share one service account across teams with broad permissions

  • ❏ C. Use Google Groups with predefined roles scoped to datasets, instances, and buckets

  • ❏ D. A single custom role with BigQuery, Compute Engine, and Cloud Storage permissions

Question 2

Which Google Cloud service should be used to orchestrate approximately 25 containerized microservices that require reliable service to service communication, regional high availability across three zones, and the ability to scale to 500 instances within five minutes?

  • ❏ A. Cloud Run

  • ❏ B. Compute Engine

  • ❏ C. Google Kubernetes Engine

  • ❏ D. Cloud Functions

Question 3

You own the example.com domain and need public regional hostnames such as us.api.example.com and eu.api.example.com to resolve to their respective regional service IP addresses. What should you configure in Google Cloud?

  • ❏ A. Private Cloud DNS zone with VPC DNS

  • ❏ B. Public Cloud DNS zone with A records for each regional static IP

  • ❏ C. External HTTPS Load Balancer with host rules

  • ❏ D. Enable Cloud CDN to serve the DNS names

Question 4

What is the simplest secure way to grant temporary read access for two hours to a private Cloud Storage object for users who do not have Google accounts?

  • ❏ A. Create and share an HMAC key for the bucket then revoke it after two hours

  • ❏ B. Generate a Cloud Storage signed URL that expires in two hours

  • ❏ C. Add an IAM binding with a time-based condition for the reviewers

  • ❏ D. Grant allUsers read access to the object and add a lifecycle rule to remove it after two hours

Question 5

Which Google Cloud load balancer supports global HTTP and HTTPS traffic management with URL path based routing to GKE services across three regions?

  • ❏ A. Internal HTTP(S) Load Balancer

  • ❏ B. External HTTP(S) Load Balancer

  • ❏ C. External TCP/UDP Network Load Balancer

  • ❏ D. External SSL Proxy Load Balancer

Question 6

How can you centralize billing governance for all existing Google Cloud projects under one company-owned Cloud Billing account?

  • ❏ A. Create a new Cloud Billing account and relink each project

  • ❏ B. Use Resource Manager to move projects into your Organization

  • ❏ C. Apply labels and use cost allocation reports

Question 7

Which Google Cloud compute option provides the lowest cost for fault tolerant batch jobs scheduled to run during a five hour nightly window?

  • ❏ A. Compute Engine scheduled start and stop

  • ❏ B. Managed instance group on Spot VMs

  • ❏ C. Cloud Batch on standard VMs

Question 8

What is the correct way to migrate an existing App Engine application to a region closer to your users to reduce latency?

  • ❏ A. Submit a request to Google Cloud Support

  • ❏ B. Create a new project, initialize App Engine in us-central1, then redeploy

  • ❏ C. Create a new App Engine app in the same project in us-central1

Question 9

In GKE, which approach ensures that pod data persists across restarts and node rescheduling while scaling a stateful workload to six replicas?

  • ❏ A. Filestore CSI with a shared NFS volume

  • ❏ B. Local SSDs mounted into pods

  • ❏ C. StatefulSet with PVC template on Persistent Disk

  • ❏ D. DaemonSet with hostPath volumes

Question 10

Which native Google Cloud feature can automatically scale Cloud Spanner capacity based on CPU utilization?

  • ❏ A. Cloud Scheduler script to set Spanner capacity on a fixed schedule

  • ❏ B. Cloud Monitoring alert with Cloud Run webhook to adjust Spanner processing units

  • ❏ C. Enable Compute Engine autoscaler on the Spanner instance

Question 11

In Cloud Storage, objects are frequently read for the first 30 days, then rarely accessed for the remainder of the first year, and must be retained for a total of three years. Which Object Lifecycle Management policy minimizes cost while meeting these requirements?

  • ❏ A. Enable Autoclass with a three year retention policy

  • ❏ B. Keep objects in Standard for 30 days then move to Coldline until the object age is one year and finally transition to Archive for the next two years

  • ❏ C. Keep objects in Standard for 30 days then move directly to Archive for three years

Question 12

In Google Kubernetes Engine, a frontend needs a stable in cluster endpoint to connect to backend pods that scale from 6 to 24 replicas even as pods are rescheduled or restarted. What should you configure?

  • ❏ A. GKE Ingress

  • ❏ B. ClusterIP Service

  • ❏ C. NodePort Service

  • ❏ D. Headless Service

Question 13

Which Google Cloud service provides a fully managed cron scheduler to run jobs every 45 minutes and at 01:00 UTC each day?

  • ❏ A. Cloud Tasks

  • ❏ B. Cloud Scheduler

  • ❏ C. Workflows

  • ❏ D. Cloud Functions

Question 14

Which Cloud Storage feature can automatically retain objects for at least 5 years, move objects older than 5 but less than 10 years to Archive Storage, and delete objects older than 10 years?

  • ❏ A. Autoclass

  • ❏ B. Cloud Storage Lifecycle policies

  • ❏ C. Bucket Lock with retention policy

Question 15

An organization needs an automated cost summary that runs every 10 days, groups costs by project, and produces a report that can be shared. What should they implement?

  • ❏ A. Cloud Monitoring dashboards

  • ❏ B. Cloud Billing budget with Pub/Sub and Cloud Functions to BigQuery

  • ❏ C. Export Cloud Billing to BigQuery then schedule aggregations and use Looker Studio auto refresh

  • ❏ D. Cloud Billing reports in the console

Question 16

Which Cloud Storage configuration minimizes cost for PDF files that are frequently accessed during the first 30 days and rarely accessed afterward while maintaining low latency for occasional reads?

  • ❏ A. Enable Autoclass on a bucket

  • ❏ B. Standard with lifecycle to Nearline after 30 days

  • ❏ C. Multi-region then transition to Archive after 30 days

Question 17

In Compute Engine, what is the correct order of steps to create a managed instance group with autoscaling using an instance template?

  • ❏ A. Create an instance template and enable autoscaling on the template, then create a managed instance group

  • ❏ B. Create an instance template, create a managed instance group from the template, then configure autoscaling on the group

  • ❏ C. Create a regional managed instance group directly from a custom image and enable autoscaling

Question 18

A Compute Engine subnetwork has exhausted its available internal IP addresses. What is the least disruptive way to add capacity and create a new VM?

  • ❏ A. Create a new subnetwork in the same VPC and place the VM there

  • ❏ B. Expand the subnet’s primary CIDR using subnet expansion

  • ❏ C. Add a secondary IP range and use alias IPs for the VM

  • ❏ D. Recreate the subnetwork with a larger CIDR and migrate workloads

Question 19

How can you prevent the accidental deletion of a specific Compute Engine virtual machine?

  • ❏ A. Add to a managed instance group with autohealing

  • ❏ B. Enable VM deletion protection

  • ❏ C. Create a custom IAM role without compute.instances.delete

  • ❏ D. Set a project IAM deny for compute.instances.delete

Question 20

Which Google Cloud service provides synchronous replication across multiple regions with strong consistency and automatic regional failover for a transactional database?

  • ❏ A. Compute Engine snapshots

  • ❏ B. Cloud SQL cross region replica

  • ❏ C. Cloud Spanner multi region

  • ❏ D. Bigtable multi cluster replication

GCP Certified Associate Cloud Engineer Exam Answers

Question 1

On Google Cloud, you must grant least privilege access to BigQuery analysts, Compute Engine admins, and Cloud Storage readers while keeping management overhead low. Which IAM approach should you choose?

  • ✓ C. Use Google Groups with predefined roles scoped to datasets, instances, and buckets

The correct option is Use Google Groups with predefined roles scoped to datasets, instances, and buckets. This strategy delivers least privilege with low management effort because you assign tightly scoped roles to groups at the resource level so each persona has only the permissions required.

Grant roles at the dataset level for BigQuery, at the instance or project level for Compute Engine when appropriate, and at the bucket level for Cloud Storage. This keeps access localized to the resources the teams manage and preserves a strong separation of duties. Managing access through groups means you change membership in one place rather than editing many IAM bindings across projects and resources.

For example, BigQuery analysts can receive read and job submission capabilities on only the datasets they query. Compute administrators can be granted administrative permissions only on the instances or environments they support. Storage readers can be limited to viewing objects in specific buckets. This yields least privilege while keeping ongoing administration simple.
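
To make the group-based pattern concrete, here is a minimal sketch using the google-cloud-storage Python client to grant a reader group the predefined Storage Object Viewer role on a single bucket rather than on the whole project. The project, bucket, and group names are hypothetical placeholders.

```python
from google.cloud import storage

# Hypothetical project, bucket, and group names for illustration only
client = storage.Client(project="acme-analytics")
bucket = client.bucket("acme-reports")

# Bind the predefined Storage Object Viewer role to a Google Group
# at the bucket level instead of granting a broad project level role
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {
        "role": "roles/storage.objectViewer",
        "members": {"group:storage-readers@example.com"},
    }
)
bucket.set_iam_policy(policy)
```

Because the binding targets the group, onboarding or offboarding a reader is just a group membership change and the IAM policy itself never needs to be edited again.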

Project level Editor role for all users in one group is overly broad because Editor is a primitive role that grants wide permissions across many services at the project. This violates least privilege and does not respect the different duties of analysts, administrators, and readers.

Share one service account across teams with broad permissions reduces accountability and increases risk because it concentrates access in a single identity and often leads to excessive permissions. It complicates auditing and key management and does not align with role based access for human users.

A single custom role with BigQuery, Compute Engine, and Cloud Storage permissions bundles unrelated permissions and undermines separation of duties. It becomes hard to scope correctly and tends to grow over time. It also increases maintenance because any change requires role updates rather than simple membership changes in a group.

Start by identifying personas and map each to predefined roles at the narrowest resource that meets the need. Prefer granting access to groups so you can change who does the work without rewriting IAM policies.

Question 2

Which Google Cloud service should be used to orchestrate approximately 25 containerized microservices that require reliable service to service communication, regional high availability across three zones, and the ability to scale to 500 instances within five minutes?

  • ✓ C. Google Kubernetes Engine

The correct option is Google Kubernetes Engine.

This service is designed to orchestrate many containerized microservices with mature service discovery and load balancing for reliable service to service communication. You can run a regional cluster that spans three zones to achieve high availability and resilience to zonal failures. Horizontal Pod Autoscaling together with the Cluster Autoscaler can expand capacity quickly so the platform can reach hundreds of replicas in a short time when node pools are sized and configured appropriately.

Cloud Run runs containerized services without requiring you to manage servers, is regional across multiple zones, and can autoscale quickly, yet it is request driven and does not provide full cluster level orchestration or the granular networking and scheduling controls that complex multi service systems often require.

Compute Engine offers virtual machines rather than a managed container orchestrator. You would need to build and operate your own orchestration layer to coordinate 25 services and to achieve cross zone high availability and rapid automated scaling, which makes it a weaker fit.

Cloud Functions targets event driven, single purpose functions and not long running containerized microservices. It does not provide multi service orchestration or reliable service to service networking needed for this scenario.

Map requirements to platform signals. When you see many microservices, explicit service to service communication, and regional high availability across zones, think Kubernetes. If the question stresses orchestration of containers rather than only running them, prefer GKE over serverless or VM options.

Question 3

You own the example.com domain and need public regional hostnames such as us.api.example.com and eu.api.example.com to resolve to their respective regional service IP addresses. What should you configure in Google Cloud?

  • ✓ B. Public Cloud DNS zone with A records for each regional static IP

The correct option is Public Cloud DNS zone with A records for each regional static IP. This provides authoritative public DNS for example.com so that us.api.example.com and eu.api.example.com resolve to their respective regional service IP addresses.

With this configuration you create A records for the us.api and eu.api hostnames that point to the corresponding regional static external IPs you reserved for each service. Because it is a public zone the records are visible to internet clients and they will resolve to the correct regional endpoints.
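
As a rough sketch, the records could be added with the google-cloud-dns Python client against an already created public managed zone, as shown below. The project ID, zone name, and IP addresses are placeholders, and the same records can just as easily be created in the console or with gcloud.

```python
from google.cloud import dns

# Hypothetical project and an existing public managed zone for example.com
client = dns.Client(project="acme-prod")
zone = client.zone("example-com-public", "example.com.")

changes = zone.changes()
# Point each regional hostname at its reserved regional static IP
changes.add_record_set(
    zone.resource_record_set("us.api.example.com.", "A", 300, ["203.0.113.10"])
)
changes.add_record_set(
    zone.resource_record_set("eu.api.example.com.", "A", 300, ["198.51.100.20"])
)
changes.create()  # submit the change set to Cloud DNS
```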

Private Cloud DNS zone with VPC DNS is only resolvable from within the associated VPC networks and any authorized networks, so it does not satisfy a requirement for public hostnames that must resolve on the internet.

External HTTPS Load Balancer with host rules routes HTTP and HTTPS traffic after a single virtual IP is reached, and it does not replace authoritative DNS for your domain. You would still need public DNS records that point hostnames to an address, and a global HTTP load balancer would not give you distinct regional public IPs for each hostname.

Enable Cloud CDN to serve the DNS names is not valid because Cloud CDN accelerates content behind HTTP and HTTPS load balancers or Cloud Storage and it is not a DNS service. It cannot host or resolve your domain’s records.

When the requirement is public name resolution for internet clients, think of a public Cloud DNS zone with the right record types and remember that private zones are only for internal VPC resolution. Load balancers and CDN handle traffic and caching, not authoritative DNS.

Question 4

What is the simplest secure way to grant temporary read access for two hours to a private Cloud Storage object for users who do not have Google accounts?

  • ✓ B. Generate a Cloud Storage signed URL that expires in two hours

The correct option is Generate a Cloud Storage signed URL that expires in two hours.

This method gives time-limited access to a single object using a cryptographically signed request. It works for users without Google accounts because anyone who possesses the URL can read the object until the expiration time, and you can set that time to two hours. It is straightforward to create and requires no changes to IAM bindings or bucket policies, which keeps the approach simple and secure.
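
A minimal sketch with the google-cloud-storage Python client follows. The bucket and object names are placeholders, and the client must be authenticated with credentials that can sign, for example a service account key or IAM-based signing.

```python
from datetime import timedelta
from google.cloud import storage

# Hypothetical bucket and object names
client = storage.Client()
blob = client.bucket("acme-private-docs").blob("design-review.pdf")

# Generate a V4 signed URL that stops working after two hours
url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(hours=2),
    method="GET",
)
print(url)  # share this URL with the external reviewers
```

Anyone holding the URL can read that single object until it expires, so no IAM bindings or bucket policy changes are needed.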

Create and share an HMAC key for the bucket then revoke it after two hours is incorrect because distributing a long-lived secret key to multiple users is insecure and hard to control. HMAC keys grant broad API access rather than a single object read, and revocation timing is manual and imprecise, which is not the simplest secure option.

Add an IAM binding with a time-based condition for the reviewers is incorrect because IAM conditions still require identifiable principals. Users without Google accounts cannot be targeted with an IAM binding, and the setup is more complex than necessary for short-lived, anonymous access.

Grant allUsers read access to the object and add a lifecycle rule to remove it after two hours is incorrect because it makes the object publicly accessible to everyone on the internet. Lifecycle rules manage object retention and deletion, not access revocation, and deleting the object after two hours is destructive and unnecessary.

When access must be temporary and the users do not have Google identities, think signed URL. Use IAM bindings for identified principals and remember that lifecycle rules manage data retention and deletion rather than access control.

Question 5

Which Google Cloud load balancer supports global HTTP and HTTPS traffic management with URL path based routing to GKE services across three regions?

  • ✓ B. External HTTP(S) Load Balancer

External HTTP(S) Load Balancer is correct because it provides global HTTP and HTTPS traffic management and supports URL path based routing to backends such as GKE services that are deployed across multiple regions.

It offers a single anycast IP for worldwide access and uses URL maps for host and path based routing. With GKE you can use multi cluster Ingress to front services in different regions while using this load balancer to distribute traffic globally and match on URL paths.

Internal HTTP(S) Load Balancer is regional and serves traffic on internal IP addresses only. It can do path based routing but it does not provide a global anycast entry point for internet clients and it does not span regions for external traffic.

External TCP/UDP Network Load Balancer operates at the transport layer and is regional. It does not understand HTTP and therefore cannot perform URL path based routing.

External SSL Proxy Load Balancer is for non HTTP encrypted TCP traffic and does not provide HTTP layer features like URL path based routing. It is not intended for routing to HTTP based GKE services by URL path.

Link the requirement for global scope and URL path based routing to the external HTTP and HTTPS load balancer. If you see internal only or layer four only features then eliminate those options quickly.

Question 6

How can you centralize billing governance for all existing Google Cloud projects under one company-owned Cloud Billing account?

  • ✓ B. Use Resource Manager to move projects into your Organization

The correct option is Use Resource Manager to move projects into your Organization.

Moving existing projects into your Organization with Resource Manager places them under a single hierarchy that your company controls. This enables centralized billing governance because you can manage billing account roles at the organization or folder level and apply organization policies that restrict which billing accounts projects may use. Once projects are in the Organization you can link them to the company owned Cloud Billing account and consistently enforce budgets and alerts across all teams.
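
For reference, a sketch of moving one existing project under the Organization with the Resource Manager v3 Python client is shown below. The project ID and organization ID are placeholders, and the caller needs permission to move projects at both the source and the destination.

```python
from google.cloud import resourcemanager_v3

client = resourcemanager_v3.ProjectsClient()

# Hypothetical project and organization identifiers
operation = client.move_project(
    name="projects/acme-legacy-app",
    destination_parent="organizations/123456789012",
)
operation.result()  # wait for the move to complete
```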

Create a new Cloud Billing account and relink each project is insufficient because it focuses on consolidation of spend rather than governance. It does not bring projects under the company hierarchy or let you enforce which billing account must be used, and it relies on manual project by project changes.

Apply labels and use cost allocation reports improves visibility and chargeback but it does not provide control. Labels and reports cannot enforce billing usage or ownership and they do not move projects under the Organization, so they do not meet the goal of centralized governance.

When a question emphasizes governance or control prefer Organization level solutions such as Resource Manager and organization policy rather than project level tools like labels or reports.

Question 7

Which Google Cloud compute option provides the lowest cost for fault tolerant batch jobs scheduled to run during a five hour nightly window?

  • ✓ B. Managed instance group on Spot VMs

The correct option is Managed instance group on Spot VMs because Spot capacity provides the steepest discount and the managed group automatically recreates interrupted instances, which suits a five hour nightly window for fault tolerant batch processing.

A managed instance group gives autoscaling, health checks, and automatic replacement of preempted instances so your batch workload can make progress even when interruptions occur. Using Spot VMs delivers significant cost savings compared to on demand pricing, and the group can spread across multiple zones to improve the chance of obtaining capacity. You can start the job on schedule with Cloud Scheduler or a workflow, let the managed instance group scale during the window, and design the batch tasks to checkpoint or retry when a Spot VM is reclaimed.

Compute Engine scheduled start and stop only controls when standard instances run. It does not use discounted capacity and you still pay full on demand prices while the job runs, so it is not the lowest cost choice for a fault tolerant batch workload.

Cloud Batch on standard VMs provides convenient orchestration for batch, yet standard VMs are billed at on demand rates, which costs more than using discounted Spot capacity. Cloud Batch can target Spot capacity, but this option explicitly uses standard VMs and therefore is not the lowest cost.

Watch for keywords like lowest cost, fault tolerant, and a fixed window. Map these to Spot VMs with a resilient controller such as a managed instance group or a batch service configured to use Spot, rather than standard VMs or pure scheduling features.

Question 8

What is the correct way to migrate an existing App Engine application to a region closer to your users to reduce latency?

  • ✓ B. Create a new project, initialize App Engine in us-central1, then redeploy

The correct option is Create a new project, initialize App Engine in us-central1, then redeploy. This is the only way to change the location of an App Engine application because the region is set when the app is first created and it cannot be changed afterward.

App Engine ties the application location to the project at creation time. The region is immutable and each project can contain only one App Engine application. To move closer to users you must create a fresh project in the desired region, initialize App Engine there, and redeploy your services and data. You can then update DNS or routing to point traffic to the new deployment.

Submit a request to Google Cloud Support is incorrect because support cannot migrate or change an App Engine application’s region. The location is a permanent property and Google Support does not override that constraint.

Create a new App Engine app in the same project in us-central1 is incorrect because a project can have only one App Engine application and its region cannot be changed or recreated within the same project.

When you see App Engine and regions, remember the region is immutable once set and there is one App Engine app per project. If the question asks how to move regions, look for an answer that creates a new project and redeploys.

Question 9

In GKE, which approach ensures that pod data persists across restarts and node rescheduling while scaling a stateful workload to six replicas?

  • ✓ C. StatefulSet with PVC template on Persistent Disk

The correct option is StatefulSet with PVC template on Persistent Disk.

This design gives each replica a stable identity and its own PersistentVolumeClaim that dynamically provisions a Persistent Disk. The disk is independent of the pod lifecycle so data survives pod restarts and the volume can be detached and reattached to another node during rescheduling in the same zone. When you scale to six replicas the volumeClaimTemplates create six distinct PVCs and six Persistent Disks so each replica keeps its own data without interference from others.

The option Filestore CSI with a shared NFS volume shares one network file system across replicas. While the data would persist, the workload would not have per replica isolation and many stateful systems require exclusive block storage with ReadWriteOnce semantics rather than a shared ReadWriteMany volume. This approach also does not give the stable one volume per pod mapping that StatefulSets provide.

The option Local SSDs mounted into pods uses ephemeral node local storage. Data is lost when a node is recreated or when a pod is rescheduled to a different node, so it does not meet the persistence requirement.

The option DaemonSet with hostPath volumes ties data to the node filesystem and schedules one pod per node rather than six controlled replicas. If a pod moves to another node the data will not follow, so it cannot ensure persistence across rescheduling.

When you see words like stateful, per replica, and persists across restarts and rescheduling, map them to a StatefulSet with volumeClaimTemplates and Persistent Disks. Be cautious with shared RWX solutions and node local storage because they often fail per replica persistence requirements.

Question 10

Which native Google Cloud feature can automatically scale Cloud Spanner capacity based on CPU utilization?

  • ✓ B. Cloud Monitoring alert with Cloud Run webhook to adjust Spanner processing units

The correct option is Cloud Monitoring alert with Cloud Run webhook to adjust Spanner processing units.

This approach is native because it uses Monitoring to watch Cloud Spanner CPU utilization metrics and fires an alert when thresholds are crossed. The alert invokes a Cloud Run endpoint through a webhook and that service can call the Cloud Spanner Admin API to update the instance processing units. This creates a feedback loop that reacts to actual load and changes capacity automatically.
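
The webhook side of that loop could look roughly like the sketch below, which uses the Spanner instance admin client to set a new processing unit count. The instance path and target size are hypothetical, and parsing of the Monitoring alert payload is omitted.

```python
from google.cloud import spanner_admin_instance_v1
from google.protobuf import field_mask_pb2


def scale_spanner(processing_units: int) -> None:
    """Resize a Cloud Spanner instance to the requested processing units."""
    client = spanner_admin_instance_v1.InstanceAdminClient()

    instance = spanner_admin_instance_v1.Instance(
        name="projects/acme-prod/instances/orders-db",  # hypothetical instance
        processing_units=processing_units,
    )
    operation = client.update_instance(
        instance=instance,
        field_mask=field_mask_pb2.FieldMask(paths=["processing_units"]),
    )
    operation.result()  # block until the resize finishes


# Example: a Monitoring alert on high CPU could trigger a call like this
scale_spanner(2000)
```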

Cloud Scheduler script to set Spanner capacity on a fixed schedule is not based on real time utilization. A calendar driven change cannot respond to sudden spikes or dips in CPU load and therefore does not provide true automatic scaling based on workload.

Enable Compute Engine autoscaler on the Spanner instance is not applicable because the Compute Engine autoscaler works with managed instance groups and virtual machine instances while Cloud Spanner is a fully managed service that is scaled through its own API and not through the Compute Engine autoscaler.

When a question asks about automatic scaling for a managed service look for a metrics driven workflow. Monitoring alerts plus a webhook to a serverless action that calls the service API is a common native pattern. Be wary of fixed schedules and features that only apply to virtual machine instance groups.

Question 11

In Cloud Storage, objects are frequently read for the first 30 days, then rarely accessed for the remainder of the first year, and must be retained for a total of three years. Which Object Lifecycle Management policy minimizes cost while meeting these requirements?

  • ✓ B. Keep objects in Standard for 30 days then move to Coldline until the object age is one year and finally transition to Archive for the next two years

The correct option is Keep objects in Standard for 30 days then move to Coldline until the object age is one year and finally transition to Archive for the next two years.

This policy matches the access pattern and cost profile. Standard fits the first month of frequent reads. After that, Coldline lowers storage costs while still allowing occasional access during the remainder of the first year, and it satisfies minimum storage duration requirements. Transitioning to Archive for the final two years provides the lowest storage cost when access is rare and the data must simply be retained. You would also set a three year bucket retention policy to enforce the retention requirement in addition to these lifecycle transitions.
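
A sketch of that policy with the google-cloud-storage Python client might look like this, using a hypothetical bucket name. The same rules can be expressed as JSON for gcloud storage or Terraform.

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("acme-archive-docs")  # hypothetical bucket

# Standard for the first 30 days, then Coldline, then Archive after one year
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=30)
bucket.add_lifecycle_set_storage_class_rule("ARCHIVE", age=365)

# Enforce the three year retention requirement at the bucket level
bucket.retention_period = 3 * 365 * 24 * 60 * 60  # seconds

bucket.patch()
```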

Enable Autoclass with a three year retention policy is not an Object Lifecycle Management policy and it adds an Autoclass management fee. Autoclass is best when access is unpredictable, whereas the access pattern here is known, so explicit lifecycle transitions typically minimize total cost. A retention policy is still needed but that is independent of Autoclass or lifecycle rules.

Keep objects in Standard for 30 days then move directly to Archive for three years does not align with the need for occasional access during the rest of the first year. Archive retrieval is more expensive and slower than Coldline, so skipping Coldline would likely increase total cost for those rare reads even though Archive has the lowest storage price.

Map access patterns to classes and check minimum storage durations before defining lifecycle rules. Use a bucket retention policy to enforce how long data must be kept, and choose Autoclass when access is unpredictable but prefer explicit lifecycle transitions when the timeline is known.

Question 12

In Google Kubernetes Engine, a frontend needs a stable in cluster endpoint to connect to backend pods that scale from 6 to 24 replicas even as pods are rescheduled or restarted. What should you configure?

  • ✓ B. ClusterIP Service

The correct option is ClusterIP Service.

A ClusterIP Service provides a stable virtual IP and DNS name that are only reachable inside the cluster. Kubernetes continuously updates the endpoints behind the service as pods are added, removed, or restarted, so the address stays the same while traffic is load balanced across all ready replicas. This exactly matches the need for a single in cluster endpoint while the backend scales from 6 to 24 pods.

GKE Ingress is designed to manage external HTTP and HTTPS traffic and relies on an underlying Service to route to pods. It does not by itself offer a stable in cluster virtual IP for direct service to service communication.

NodePort Service is primarily for external access by opening a port on each node. It is unnecessary for purely internal communication and introduces external exposure that the scenario does not require.

Headless Service does not allocate a cluster IP and instead returns individual pod IPs through DNS. It does not provide a single stable virtual IP, so the client would need to handle discovery and balancing on its own.

Match keywords in the question to the right service type. If you see stable and in cluster endpoint, think ClusterIP. If you see external access or HTTP routing, consider options that expose services outside the cluster.

Question 13

Which Google Cloud service provides a fully managed cron scheduler to run jobs every 45 minutes and at 01:00 UTC each day?

  • ✓ B. Cloud Scheduler

The correct option is Cloud Scheduler because it is the fully managed cron service that can run jobs every 45 minutes and at 01:00 UTC each day.

With this service you define standard cron schedules and it reliably triggers HTTP endpoints or publishes messages to Pub/Sub. You can set one job to run every 45 minutes and another to run daily at the specified time, which meets the requirement. It also lets you choose time zones and configure retries for robust automation.
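
A sketch of creating the daily job with the google-cloud-scheduler Python client is shown below. The project, region, and target URL are placeholders, and the 45 minute cadence would be a second job defined the same way with its own schedule string.

```python
from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
parent = "projects/acme-prod/locations/us-central1"  # hypothetical project and region

daily_job = scheduler_v1.Job(
    name=f"{parent}/jobs/nightly-report",
    schedule="0 1 * * *",  # 01:00 each day
    time_zone="Etc/UTC",
    http_target=scheduler_v1.HttpTarget(
        uri="https://example.com/run-report",  # hypothetical endpoint
        http_method=scheduler_v1.HttpMethod.POST,
    ),
)
client.create_job(parent=parent, job=daily_job)
```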

Cloud Tasks is a managed task queue that delivers asynchronous tasks to workers and it does not provide a cron style scheduler to initiate runs at fixed times.

Workflows orchestrates steps across services and APIs but it does not include a built in scheduler and it typically relies on an external trigger.

Cloud Functions provides event driven compute to run code and it is not a scheduling service and instead is commonly triggered by a scheduler when you need timed execution.

When a question mentions a need for cron or specific run times, map that to the managed scheduling service. Distinguish between queuing and orchestration so you can eliminate services that are not responsible for time based triggers.

Question 14

Which Cloud Storage feature can automatically retain objects for at least 5 years, move objects older than 5 but less than 10 years to Archive Storage, and delete objects older than 10 years?

  • ✓ B. Cloud Storage Lifecycle policies

The correct option is Cloud Storage Lifecycle policies.

With lifecycle policies you define time based rules that automatically manage objects. You can keep objects for at least five years by not applying any delete action before that time, then transition objects older than five years to the Archive Storage class, and finally delete objects once they are older than ten years. This can be configured with an age condition of 1825 days for a SetStorageClass to Archive action and another age condition of 3650 days for a Delete action.

Autoclass automatically changes storage class based on access patterns and cost optimization. It does not let you express rules that move data after a specific number of years and it does not delete objects, so it cannot meet the move at five years and delete at ten years requirement.

Bucket Lock with retention policy enforces immutability by preventing deletion before a retention period expires. It does not transition data to the Archive Storage class and it does not schedule deletions when the period ends, so it cannot implement the required movement and cleanup workflow.

When a question describes time based actions such as move after N years and delete after M years, map it to lifecycle rules. If it emphasizes preventing deletion or immutability think retention policies and Bucket Lock. If it highlights dynamic class changes from access patterns think Autoclass.

Question 15

An organization needs an automated cost summary that runs every 10 days, groups costs by project, and produces a report that can be shared. What should they implement?

  • ✓ C. Export Cloud Billing to BigQuery then schedule aggregations and use Looker Studio auto refresh

The correct option is Export Cloud Billing to BigQuery then schedule aggregations and use Looker Studio auto refresh. It supports an automated cost summary every 10 days grouped by project and it produces a shareable report.

Exporting billing data to BigQuery gives you detailed cost records with project identifiers, which makes grouping by project straightforward with SQL. You can create a scheduled query to aggregate spend every 10 days and write the results to a reporting table. A Looker Studio report connected to that table can automatically refresh and can be shared with links or scheduled emails.
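
The aggregation itself is a straightforward query against the export table, sketched below with the google-cloud-bigquery client. The project, dataset, and table names follow the usual billing export naming pattern but are placeholders, and in practice the query would run as a BigQuery scheduled query whose results feed the Looker Studio report.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical billing export table; replace with your own export destination
query = """
SELECT
  project.id AS project_id,
  ROUND(SUM(cost), 2) AS total_cost
FROM `acme-prod.billing.gcp_billing_export_v1_XXXXXX`
WHERE usage_start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 10 DAY)
GROUP BY project_id
ORDER BY total_cost DESC
"""

for row in client.query(query).result():
    print(row.project_id, row.total_cost)
```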

Cloud Monitoring dashboards focus on infrastructure and application metrics and logs rather than detailed billing line items. They do not run scheduled SQL aggregations and they are not the right tool for a periodic cost report.

Cloud Billing budget with Pub/Sub and Cloud Functions to BigQuery is built for threshold based alerts and notifications rather than time based summaries. This design adds unnecessary complexity and still does not give a reliable 10 day aggregation by project from the full billing dataset.

Cloud Billing reports in the console provide ad hoc visualizations inside the console and they are not built for automated delivery on a 10 day cadence or for broad sharing outside the console. They also lack the flexible transformation that SQL in BigQuery provides.

When you see a need for periodic cost aggregation and a shareable report, map the solution to Billing export to BigQuery with scheduled queries plus a BI report. Budgets signal thresholds and Monitoring is for metrics.

Question 16

Which Cloud Storage configuration minimizes cost for PDF files that are frequently accessed during the first 30 days and rarely accessed afterward while maintaining low latency for occasional reads?

  • ✓ B. Standard with lifecycle to Nearline after 30 days

The correct option is Standard with lifecycle to Nearline after 30 days. This setup aligns with the hot then cool access pattern, keeps reads fast during the first month, and then reduces storage costs once access becomes infrequent while still preserving low latency for the occasional read.

Standard provides the best performance for frequent access during the initial 30 days. A lifecycle rule that moves the objects to Nearline after that period lowers storage cost while keeping millisecond access and high availability. Occasional reads will incur retrieval charges, yet with rare access those charges are small compared to the ongoing storage savings, so this approach minimizes total cost without sacrificing responsiveness.

Enable Autoclass on a bucket is not the best fit when the access pattern and timing are clearly known. That feature optimizes automatically, which can shift data to colder classes based on observed behavior and can introduce unpredictable retrieval costs. An explicit lifecycle rule gives you precise control over when to transition and which class to use, which better satisfies the low latency requirement for occasional reads.

Multi-region then transition to Archive after 30 days increases storage cost without a stated need for global resilience and it uses a class with higher retrieval charges and a long minimum storage duration. Occasional reads from that class can become expensive, so this choice does not minimize cost for the described pattern.

When the access pattern is known, choose an explicit lifecycle that matches the hot window and target class. Pick Nearline when occasional reads must remain low latency and reserve colder classes for extremely rare access where retrieval fees will be negligible.

Question 17

In Compute Engine, what is the correct order of steps to create a managed instance group with autoscaling using an instance template?

  • ✓ B. Create an instance template, create a managed instance group from the template, then configure autoscaling on the group

The correct option is Create an instance template, create a managed instance group from the template, then configure autoscaling on the group.

This sequence matches how Compute Engine works. A managed instance group is created from an instance template because the template holds the VM configuration such as machine type and image. Autoscaling is configured on the managed instance group after it is created because the autoscaler attaches to the group resource and uses signals such as CPU utilization or load balancing capacity.

Create an instance template and enable autoscaling on the template, then create a managed instance group is incorrect because autoscaling is not a property of an instance template. Templates only define VM configuration and the autoscaler can be attached only to a managed instance group.

Create a regional managed instance group directly from a custom image and enable autoscaling is incorrect because you cannot create a managed instance group directly from an image. You must first create an instance template from that image, then create the regional group from the template, and finally configure autoscaling on the group.

When a question mixes templates, groups, and autoscaling, remember that autoscaling belongs to the managed instance group and that the group must come from an instance template. Check that the flow is template to group to autoscaler.

Question 18

A Compute Engine subnetwork has exhausted its available internal IP addresses. What is the least disruptive way to add capacity and create a new VM?

  • ✓ B. Expand the subnet’s primary CIDR using subnet expansion

The correct option is Expand the subnet’s primary CIDR using subnet expansion.

Subnet expansion increases the primary IP range of an existing subnetwork without recreating it and without interrupting running virtual machines. After expansion, new internal addresses become available and you can immediately create a new virtual machine in the same subnet, which will receive an IP from the enlarged range. Because the subnet remains the same object, your routes, firewall rules, peering, and service attachments continue to function, and there is no need to change instance configurations. You only need to choose a larger non-overlapping CIDR that contains the existing range and fits within the network plan.

Create a new subnetwork in the same VPC and place the VM there is not the least disruptive choice because it does not add capacity to the exhausted subnet and it can require new firewall rules or routing adjustments. It also places the new virtual machine in a different subnet which might not align with existing policies or service attachments.

Add a secondary IP range and use alias IPs for the VM is incorrect because a new virtual machine still needs a free primary internal IP from the subnet’s primary range. Secondary ranges and alias IPs provide additional addresses on an interface or support workloads like Google Kubernetes Engine pods but they do not replace the required primary address.

Recreate the subnetwork with a larger CIDR and migrate workloads is the most disruptive approach because it requires rebuilding the subnet, changing IP addresses, and migrating resources, which leads to downtime and broader configuration changes.

When a subnet runs out of internal IPs and the question emphasizes least disruptive changes, look for options that preserve the existing subnet object. Remember that alias ranges do not provide the primary IP needed to create a new virtual machine.

Question 19

How can you prevent the accidental deletion of a specific Compute Engine virtual machine?

  • ✓ B. Enable VM deletion protection

The correct option is Enable VM deletion protection. This sets a flag on the instance that blocks delete operations until it is explicitly cleared, which directly prevents accidental deletion of that specific VM.

This control is enforced by the Compute Engine API and the console, and it applies only to the selected instance. You can enable it when creating the VM or by updating an existing instance, and delete operations will fail until protection is turned off.

Add to a managed instance group with autohealing is not correct because autohealing restores unhealthy instances but it does not prevent intentional or accidental deletion. A group can also delete and recreate instances as part of scaling or repairs, so it does not provide per‑instance protection against deletion.

Create a custom IAM role without compute.instances.delete is not correct because removing the permission from one role does not guarantee protection. Other users or service accounts may still hold roles that include the delete permission, and this approach is not tied to a single VM the way Enable VM deletion protection is.

Set a project IAM deny for compute.instances.delete is not correct because a project-level deny would block deletion across the entire project and can disrupt operations. It is not targeted to one VM, and the simpler and recommended control for a single instance is deletion protection.

When a question focuses on protecting a single VM, favor instance-level features like deletion protection rather than broad IAM changes that affect many resources.

Question 20

Which Google Cloud service provides synchronous replication across multiple regions with strong consistency and automatic regional failover for a transactional database?

  • ✓ C. Cloud Spanner multi region

The correct option is Cloud Spanner multi region because it delivers synchronous replication across multiple regions with strong consistency and automatic regional failover for transactional workloads.

This service commits writes using replicas in different regions so transactions are synchronously replicated and reads are strongly consistent by default. If a region becomes unavailable it automatically fails over to healthy replicas in another region without manual intervention and without data loss. It is a fully managed relational database that supports ACID transactions and horizontal scalability while maintaining consistency.

Compute Engine snapshots are backups of persistent disks and not a transactional database. They do not provide synchronous multi region replication or strong consistency for live database traffic.

Cloud SQL cross region replica uses asynchronous replication for read replicas across regions which is eventually consistent and it does not provide automatic regional failover. High availability is primarily within a region rather than synchronous across regions.

Bigtable multi cluster replication replicates data asynchronously across clusters and is eventually consistent across regions. Bigtable is not a transactional relational database and it does not offer the strong consistency across regions that the question requires.

When a question emphasizes synchronous replication across regions with strong consistency and automatic failover think of Spanner multi region. Cloud SQL replicas are asynchronous and Bigtable multi cluster is eventually consistent.

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified, and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.