Google Cloud Engineer Certification exam dumps and braindumps

All GCP questions are from my Google Engineer Udemy course and certificationexams.pro
Free GCP Certification Exam Topics Tests
Despite the title of this article, this is not a braindump in the traditional sense.
For ethical preparation, begin with authentic resources like Real Associate Cloud Engineer Certification Exam Questions and a comprehensive bank of GCP Associate Cloud Engineer Questions and Answers that reflect real-world scenarios without copying the actual exam.
Every question is written to align with the Associate Cloud Engineer exam objectives and to mirror the tone and depth of realistic Google Cloud tasks.
Use GCP Associate Cloud Engineer Practice Questions and focused GCP Cloud Engineer Associate Sample Questions to build skill in provisioning resources, configuring IAM, setting up networks, deploying applications, and troubleshooting common issues.
Google Cloud Certification Practice Exams
If you want a timed environment, train with the Google Associate Cloud Engineer Exam Simulator and finish with a full-length Google Cloud Engineer Associate Practice Test to measure readiness. If you encounter any exam dump claims, remember the goal is to learn the material properly.
The curated Google Cloud Engineer Associate Certification Braindump style study sets here are built to teach, not to cheat, and focus on the reasoning behind correct answers.
About GCP Exam Dumps
Success on the Associate Cloud Engineer exam comes from understanding how identity, networking, compute, storage, and operations work together on Google Cloud.
With consistent practice, ethical study habits, and high-quality questions, you will be prepared to pass the exam and apply these skills in production environments.
Google Cloud Engineer Associate Exam Dump
Question 1
In the Google Cloud Pricing Calculator, for a GKE workload that requires very high IOPS and uses disk snapshots for recovery, what should you add next to complete the estimate?
❏ A. GPUs and GKE control plane cost
❏ B. Filestore Enterprise
❏ C. Local SSD count plus PD and snapshots
❏ D. Persistent Disk Extreme without snapshots
Question 2
Which Google Cloud design converts each uploaded image into an optimized binary and stores it at low cost with minimal operational overhead while scaling to approximately 120,000 images per day?
❏ A. Cloud Pub/Sub with Cloud Run
❏ B. Filestore with Cloud Functions
❏ C. Cloud Storage with a Cloud Functions finalize trigger
❏ D. Firestore with Cloud Functions
Question 3
Which Google Cloud approach automatically scales to handle seasonal spikes of up to six times while optimizing costs and maintaining resilience?
❏ A. Google Kubernetes Engine
❏ B. Serverless using Cloud Run and Cloud Functions
❏ C. Regional managed instance groups with Cloud Load Balancing
❏ D. Preemptible VMs with managed instance groups
Question 4
What is the best approach to grant analysts query access to BigQuery with least privilege and straightforward administration given that contractors rotate every 60 days?
❏ A. Use a Google Group with BigQuery Metadata Viewer at the project level
❏ B. Use a Google Group with BigQuery Data Viewer on required datasets
❏ C. Use a Google Group with BigQuery Job User at the project level
Question 5
How should you provide private access from on premises networks to Cloud Storage using hybrid connectivity without traversing the public internet while following Google best practices?
❏ A. Deploy Cloud NAT with Private Google Access on a subnet and route on premises traffic through it
❏ B. Use Cloud VPN or Interconnect with Cloud Router, advertise 199.36.153.4/30, and map *.googleapis.com to restricted.googleapis.com
❏ C. Create Private Service Connect for Google APIs and send on premises traffic to it without hybrid connectivity
Question 6
An organization needs to process 5 million clickstream events each hour and ingest a 300 GB partner file every night. The data must be stored durably and support large scale SQL analytics with minimal operational overhead. Which approach should it implement?
❏ A. Cloud SQL as the analytics warehouse
❏ B. BigQuery with Dataflow for streaming and batch ingestion
❏ C. Dataproc with Spark and Cloud Storage
❏ D. Cloud Bigtable with Pub/Sub ingestion
Question 7
An App Engine application uses Cloud Pub/Sub, and its service account already has publish and subscribe permissions, but the Cloud Pub/Sub API is disabled in the project. What should you do?
❏ A. Rely on first request auto enablement at runtime
❏ B. Enable App Engine Admin API instead
❏ C. Enable Cloud Pub/Sub API in APIs and Services
❏ D. Grant Pub/Sub Admin to the service account and enable the API in code
Question 8
Compute Engine VMs are healthy and serving traffic, but their application logs are not appearing in Cloud Logging. What is the most likely cause and how should you fix it?
❏ A. VM service account lacks Logs Writer so grant logging.logWriter
❏ B. Missing or stopped Ops Agent so install and start it
❏ C. An exclusion filter in Log Router drops those logs so remove the exclusion
❏ D. No egress to the Logging API because Private Google Access is disabled so enable it
Question 9
Which Cloud Storage approaches allow temporary write only uploads for 30 minutes, automatically delete the objects after 60 days, and prevent partners from accessing each other’s objects? (Choose 2)
❏ A. Build a scheduled Cloud Run job that deletes files older than 60 days
❏ B. Generate V4 signed URLs for write only uploads for 30 minutes
❏ C. Set a Cloud Storage bucket retention policy to 60 days
❏ D. Use a Cloud Storage lifecycle rule to delete objects after 60 days
❏ E. Use IAM Conditions to grant Storage Object Creator for 30 minutes
Question 10
You must run a 48-hour batch job composed of independent tasks on Google Cloud, and the solution must be low cost and tolerant of interruptions. Which approach ensures reliable completion with minimal cost?
❏ A. Standalone preemptible Compute Engine VMs
❏ B. Google Kubernetes Engine with preemptible nodes
❏ C. Managed instance group with preemptible VM template and CPU autoscaling
❏ D. Dataproc with preemptible workers

Question 11
What is the recommended way to grant external auditors read only browsing access to the Google Cloud organization, folders, and projects while adhering to IAM best practices?
❏ A. Grant roles/browser on each project to auditor accounts
❏ B. Create a Google Group and assign roles/browser at the organization level
❏ C. Assign roles/cloudasset.viewer at the organization level
❏ D. Create a group and assign roles/viewer at the organization level
Question 12
Which fully managed Google Cloud service should be used to ingest approximately ten million events per hour and deliver them to multiple subscribers with low latency?
❏ A. Cloud Dataflow
❏ B. Cloud Pub/Sub
❏ C. Pub/Sub Lite
Question 13
Which export destination supports retaining Cloud Audit Logs for 10 years and performing ad hoc SQL analysis?
❏ A. Cloud Storage
❏ B. BigQuery
❏ C. Cloud Spanner
❏ D. Cloud Bigtable
Question 14
Which tool provides fully automated and consistent provisioning of multiple Google Cloud projects across development, QA, UAT, and production environments?
❏ A. Config Connector with GitOps
❏ B. Workflows orchestrating gcloud
❏ C. Terraform with reusable modules and env variables
❏ D. Deployment Manager with separate configs
Question 15
For GKE batch jobs that use temporary local PersistentVolumes and can be safely interrupted and restarted, which deployment option offers the lowest cost?
❏ A. GKE cluster autoscaler
❏ B. GKE Autopilot
❏ C. GKE node pool with preemptible VMs
❏ D. GKE Vertical Pod Autoscaling
Question 16
An external auditor needs read access to Google Cloud Data Access and Access Transparency logs and must retain copies for 30 months while adhering to least privilege. How should you grant this access and meet the retention requirement?
❏ A. Grant the auditor the Project Viewer role and export logs to BigQuery via a sink
❏ B. Grant roles/iam.securityReviewer and export logs to BigQuery
❏ C. Assign roles/logging.privateLogViewer and export logs to Cloud Storage via a sink
❏ D. Grant roles/logging.viewer and export logs to Cloud Storage via a sink
Question 17
Which configuration of VPC firewall egress rules will allow only traffic to specified destination ports while blocking all other outbound traffic?
❏ A. Use VPC Service Controls
❏ B. Create an egress deny all with priority 65400 and an egress allow for the required ports with priority 200
❏ C. Create only egress allows for the required ports and rely on an implied egress deny
Question 18
A Compute Engine workload has steady traffic but periodically spikes to six times normal for about 30 minutes. How can you maintain performance during these spikes while minimizing costs once traffic returns to normal?
❏ A. Compute Engine capacity reservation
❏ B. Managed instance group autoscaling on metrics with min and max limits
❏ C. Committed use discounts for peak capacity
Question 19
Which Google Cloud service should host a legacy application that requires direct control of the operating system and network stack, supports long lived TCP connections, and cannot be refactored or containerized?
❏ A. App Engine Flexible
❏ B. Compute Engine
❏ C. Google Kubernetes Engine
Question 20
You need to migrate a 120 TB on premises archive to Cloud Storage over a 150 Mbps connection and require a reliable and cost effective initial transfer. Which approach aligns with Google Cloud best practices?
❏ A. Storage Transfer Service
❏ B. Transfer Appliance
❏ C. Partner Interconnect
Question 21
Which gcloud commands list all enabled APIs across every project in your organization within the last 45 days?
❏ A. Run gcloud projects get-list then gcloud services list --available --project PROJECT_ID
❏ B. Run gcloud services list --enabled --organization ORG_ID
❏ C. Run gcloud projects list then for each project run gcloud services list --enabled --project PROJECT_ID
❏ D. Use Cloud Asset Inventory export to BigQuery
Question 22
Which Cloud Storage class offers the lowest cost for data stored for 10 years and accessed only rarely for compliance reviews?
❏ A. Autoclass
❏ B. Coldline
❏ C. Archive class
❏ D. Nearline
Question 23
What should you do to ensure a Compute Engine VM retains the same internal IP address across restarts while keeping costs low?
❏ A. Cloud NAT
❏ B. Reserve a static internal IP and assign it to the VM
❏ C. Ephemeral internal IP only
❏ D. Reserve a static external IP
Question 24
Your company needs a private connection from its on premises network to a Google Cloud VPC with predictable performance and capacity that can scale to 25 Gbps. What should you implement?
❏ A. Cloud VPN
❏ B. Use Cloud Interconnect for private and predictable connectivity
❏ C. Direct Peering
Question 25
In Google Cloud, how should you isolate the production environment from the development environment to ensure there is no connectivity or shared resources between them?
❏ A. Shared VPC for both environments
❏ B. Separate project for production with no cross-project networking
❏ C. New VPC in the existing project without peering
❏ D. VPC Network Peering between prod and dev
GCP Certified Associate Cloud Engineer Braindump Answers

Question 1
In the Google Cloud Pricing Calculator, for a GKE workload that requires very high IOPS and uses disk snapshots for recovery, what should you add next to complete the estimate?
✓ C. Local SSD count plus PD and snapshots
The correct option is Local SSD count plus PD and snapshots.
This choice matches the requirements because very high IOPS is best met by adding local SSDs on the GKE nodes while recovery based on disk snapshots requires persistent disks with snapshot charges included in the estimate. This combination ensures the calculator reflects both peak performance and the ongoing cost of durable storage and snapshots that the workload depends on.
GPUs and GKE control plane cost does not address storage performance or snapshot needs. GPUs accelerate compute and the control plane charge is not what determines IOPS or snapshot pricing, so adding these would not complete a storage estimate for this scenario.
Filestore Enterprise is a managed NFS file service and uses its own backup features rather than persistent disk snapshots. It is not the right fit when the requirement explicitly centers on PD snapshots and block storage performance for very high IOPS.
Persistent Disk Extreme without snapshots can provide strong IOPS for some workloads but the scenario relies on snapshots for recovery. Omitting snapshots would miss an essential cost component and it may still fall short of the peak IOPS that local SSDs provide for the stated need.
Link the requirement words to the storage features. If you see very high IOPS think local SSDs and if you see reliance on snapshots make sure persistent disks and snapshot charges are in the estimate.
Question 2
Which Google Cloud design converts each uploaded image into an optimized binary and stores it at low cost with minimal operational overhead while scaling to approximately 120,000 images per day?
✓ C. Cloud Storage with a Cloud Functions finalize trigger
The correct design is Cloud Storage with a Cloud Functions finalize trigger. This approach runs an event driven function whenever a new object is successfully written, transforms the image, and persists the optimized binary back to low cost object storage while automatically scaling to handle around 120000 images per day with very little operational burden.
Uploads land in a bucket and the finalize event fires when the write completes. The function can optimize or transcode the image and write the result to a destination bucket or a different path. There are no servers to manage and billing is per invocation and storage. The platform scales horizontally to meet spikes in daily volume. Object storage provides inexpensive and durable persistence and integrates natively with this trigger model, which keeps the design simple and cost effective.
The option Cloud Pub/Sub with Cloud Run adds extra moving parts because you must wire bucket notifications to a topic and manage subscriptions and delivery. You still need to choose and operate a storage layer for the resulting binaries. This is not the most minimal or lowest cost path for straightforward image processing on upload.
The option Filestore with Cloud Functions is not suitable because Filestore is an NFS service intended for Compute Engine and GKE and it cannot be mounted by Cloud Functions. This incompatibility prevents using it for event driven image processing.
The option Firestore with Cloud Functions is a poor fit for binary images because Firestore is a document database with strict document size limits and it is not optimized for storing or serving large blobs. Using it would be costly and constrained compared with object storage.
When a requirement says process files on upload with minimal operations prefer an event driven trigger on native storage. Look for a bucket finalize trigger that runs a short function and writes the result back to object storage.
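As a concrete but hedged sketch, a finalize triggered function can be deployed with a single command. The function name, bucket, region, runtime, and entry point below are placeholders rather than values taken from the question.
gcloud functions deploy optimize-image \
  --runtime=python311 \
  --region=us-central1 \
  --trigger-bucket=raw-image-uploads \
  --entry-point=optimize_image
Writing the optimized output to a different bucket than the trigger bucket keeps the function from firing again on its own results.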
Question 3
Which Google Cloud approach automatically scales to handle seasonal spikes of up to six times while optimizing costs and maintaining resilience?
✓ B. Serverless using Cloud Run and Cloud Functions
The correct option is Serverless using Cloud Run and Cloud Functions.
Cloud Run and Cloud Functions scale automatically in response to traffic and events, which allows them to absorb sudden seasonal spikes that are many times above normal load. They scale down to zero when idle and charge primarily for actual usage, which keeps costs efficient during quiet periods while still being able to burst quickly when demand rises.
Both Cloud Run and Cloud Functions are regional managed services that run across multiple zones, which increases resilience against zonal disruptions. They require no capacity planning or instance warm up and they integrate with Google Cloud load balancing and networking so they can deliver reliable and elastic performance with minimal operational effort.
Google Kubernetes Engine can autoscale pods and nodes, yet you still manage clusters and nodes and often maintain a baseline capacity. This adds operational overhead and idle cost, and scaling new nodes can be slower than fully managed serverless, which makes it less ideal for sharp seasonal spikes with strict cost efficiency goals.
Regional managed instance groups with Cloud Load Balancing can scale using VM autoscaling policies and distribute traffic, but instance startup time, minimum instance counts, and capacity buffers commonly lead to higher idle costs and slower reaction to sudden surges compared to serverless. You also carry more responsibility for reliability and operations.
Preemptible VMs with managed instance groups reduce compute price but they can be reclaimed at any time which undermines resilience requirements. This approach is not well suited for unpredictable seasonal bursts where reliability is critical, even though the group can scale.
When you see keywords like seasonal spikes, cost efficiency, and resilience, favor serverless options that scale to zero and scale out automatically without capacity planning.
Question 4
What is the best approach to grant analysts query access to BigQuery with least privilege and straightforward administration given that contractors rotate every 60 days?
✓ B. Use a Google Group with BigQuery Data Viewer on required datasets
The correct option is Use a Google Group with BigQuery Data Viewer on required datasets.
This choice applies the principle of least privilege because it grants read access only to the specific datasets analysts need. Using a Google Group keeps administration simple since you add or remove contractors from the group when they rotate rather than editing many individual bindings. Dataset level scoping avoids project wide access and limits blast radius while still allowing analysts to query the tables they are permitted to read.
Use a Google Group with BigQuery Metadata Viewer at the project level is not appropriate because it only grants access to view metadata such as dataset and table names and schemas and it does not permit reading table data. It is also project wide which is broader than necessary.
Use a Google Group with BigQuery Job User at the project level is insufficient because it allows creating and running jobs but it does not grant permission to read dataset contents. It is also project level which violates least privilege for rotating contractors.
Map the action to the role. Reading data requires a data viewer role on the data resource and not a metadata or job role. Use Google Groups to simplify frequent user turnover and scope access at the dataset level for least privilege.
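One common way to attach the group at the dataset level is to edit the dataset access entries with the bq tool. Treat this as a sketch with placeholder project, dataset, and group names, where the READER access entry corresponds to the BigQuery Data Viewer basic dataset role.
bq show --format=prettyjson my-project:sales_analytics > dataset.json
# add {"role": "READER", "groupByEmail": "analysts@example.com"} to the "access" array
bq update --source dataset.json my-project:sales_analytics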
Question 5
How should you provide private access from on premises networks to Cloud Storage using hybrid connectivity without traversing the public internet while following Google best practices?
✓ B. Use Cloud VPN or Interconnect with Cloud Router, advertise 199.36.153.4/30, and map *.googleapis.com to restricted.googleapis.com
The correct option is Use Cloud VPN or Interconnect with Cloud Router, advertise 199.36.153.4/30, and map *.googleapis.com to restricted.googleapis.com.
This approach establishes private transport over hybrid connectivity with Cloud VPN or Interconnect and uses Cloud Router to advertise the restricted Google APIs VIP so on premises hosts send traffic to Google APIs and Cloud Storage over the private link. Mapping the wildcard domain to restricted.googleapis.com ensures DNS resolution to the restricted VIP and keeps traffic off the public internet while aligning with Google best practices for Private Google Access for on premises.
Deploy Cloud NAT with Private Google Access on a subnet and route on premises traffic through it is incorrect because Private Google Access on a subnet only applies to VMs in that subnet and does not provide private access for on premises hosts. Cloud NAT also does not NAT traffic that originates on premises to reach Google APIs and it is not the mechanism for this use case.
Create Private Service Connect for Google APIs and send on premises traffic to it without hybrid connectivity is incorrect because on premises networks cannot reach a Private Service Connect endpoint in your VPC without a hybrid connection such as Cloud VPN or Interconnect. The option explicitly removes hybrid connectivity which makes it unsuitable for private access from on premises.
When the requirement is on premises to Google APIs without public internet, look for hybrid connectivity with Cloud Router advertising the restricted VIP 199.36.153.4/30 and DNS mapping of *.googleapis.com to restricted.googleapis.com. Avoid answers that rely on Cloud NAT or subnet Private Google Access for on premises traffic.
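The moving parts can be sketched with gcloud roughly as follows. The router, network, and zone names are placeholders and the flags are worth confirming against current documentation before use.
gcloud compute routers update onprem-router --region=us-central1 \
  --advertisement-mode=custom \
  --set-advertisement-groups=ALL_SUBNETS \
  --set-advertisement-ranges=199.36.153.4/30
gcloud dns managed-zones create restricted-apis --visibility=private \
  --networks=my-vpc --dns-name=googleapis.com. \
  --description="Map googleapis.com to the restricted VIP"
gcloud dns record-sets create "*.googleapis.com." --zone=restricted-apis \
  --type=CNAME --ttl=300 --rrdatas="restricted.googleapis.com."
gcloud dns record-sets create "restricted.googleapis.com." --zone=restricted-apis \
  --type=A --ttl=300 --rrdatas=199.36.153.4,199.36.153.5,199.36.153.6,199.36.153.7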
Question 6
An organization needs to process 5 million clickstream events each hour and ingest a 300 GB partner file every night. The data must be stored durably and support large scale SQL analytics with minimal operational overhead. Which approach should it implement?
✓ B. BigQuery with Dataflow for streaming and batch ingestion
The correct approach is BigQuery with Dataflow for streaming and batch ingestion.
BigQuery is a fully managed analytics warehouse that provides durable storage and executes large scale SQL with high concurrency while requiring very little administration. It scales automatically and separates storage from compute which makes it ideal for this volume of clickstream and nightly batch data and it satisfies the need for minimal operations overhead.
Dataflow offers a unified model for both streaming and batch pipelines so it can continuously process about 5 million events per hour from Pub/Sub and load the 300 GB nightly files from Cloud Storage into BigQuery. It provides autoscaling and built in reliability which keeps ingestion resilient and reduces operational effort.
Cloud SQL as the analytics warehouse is not appropriate because it is optimized for transactional workloads and not for very large analytical scans or very high concurrency. You would need ongoing tuning and capacity management which increases operations overhead and it still would not match the scale and performance required for large SQL analytics.
Dataproc with Spark and Cloud Storage can process data but you must size and manage clusters and jobs which adds operational burden. It does not provide a serverless warehouse and SQL would rely on additional components which makes it a poorer fit when minimal operations overhead is required.
Cloud Bigtable with Pub/Sub ingestion is designed for low latency key value or time series workloads and it does not offer native SQL analytics. You would need to move the data into BigQuery for warehousing which adds complexity and does not meet the goal of keeping operations overhead low.
When you see both streaming and batch ingestion with a need for minimal operations overhead and large scale SQL analytics, pair a serverless ingestion service with a serverless warehouse. Think Dataflow for pipelines and BigQuery for analytics.
Question 7
An App Engine application uses Cloud Pub/Sub, and its service account already has publish and subscribe permissions, but the Cloud Pub/Sub API is disabled in the project. What should you do?
✓ C. Enable Cloud Pub/Sub API in APIs and Services
The correct option is Enable Cloud Pub/Sub API in APIs and Services.
Even though the service account already has publish and subscribe permissions, the API must be enabled at the project level for requests to succeed. Enabling the Cloud Pub/Sub API activates the service so the existing IAM permissions can be used by your App Engine application.
Rely on first request auto enablement at runtime is incorrect because Google Cloud does not automatically enable an API the first time your code calls it. Calls to a disabled API will fail until the API is explicitly enabled.
Enable App Engine Admin API instead is incorrect because the App Engine Admin API manages App Engine resources and does not enable or substitute for the Pub/Sub service. Your application still needs the Cloud Pub/Sub API enabled to use Pub/Sub.
Grant Pub/Sub Admin to the service account and enable the API in code is incorrect because granting a broader role does not enable a disabled API. Enabling an API is a project setting that should be done through APIs and Services or via the Service Usage API with appropriate project-level permissions, which is unnecessary here compared to simply enabling the Cloud Pub/Sub API in the console.
When troubleshooting access issues, confirm that the needed service API is enabled even if IAM roles are correct. Check the APIs and Services page first before changing permissions or code.
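The fix is a single command, shown here with a placeholder project ID, followed by a quick check that the service now appears in the enabled list.
gcloud services enable pubsub.googleapis.com --project=my-project-id
gcloud services list --enabled --project=my-project-id | grep pubsub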
Question 8
Compute Engine VMs are healthy and serving traffic, but their application logs are not appearing in Cloud Logging. What is the most likely cause and how should you fix it?
✓ B. Missing or stopped Ops Agent so install and start it
The correct option is Missing or stopped Ops Agent so install and start it. Application logs from Compute Engine require the Ops Agent to collect and send them to Cloud Logging, so if it is absent or not running then the logs will not appear.
The Ops Agent is the unified agent for logs and metrics on Compute Engine. When it is not installed or has stopped, your applications can keep serving traffic normally yet no application logs are shipped to Cloud Logging. Installing and starting the Ops Agent and confirming its configuration for your application log files resolves the issue. The legacy agents have been replaced, so use the Ops Agent going forward.
VM service account lacks Logs Writer so grant logging.logWriter is less likely because most environments grant the needed permission to the instance service account, and a missing role would typically prevent all agent writes and surface clear permission errors rather than only causing missing application logs. The more common cause in this scenario is simply that no agent is running.
An exclusion filter in Log Router drops those logs so remove the exclusion is unlikely without evidence of a custom exclusion because exclusions act at ingestion and usually affect broader log categories. By default there is no exclusion that silently removes only application logs from otherwise healthy instances.
No egress to the Logging API because Private Google Access is disabled so enable it is not the most likely cause because Private Google Access is only required for instances without public egress. Many instances have a public IP or use Cloud NAT, and if there were no egress then other control plane communications and monitoring would often be affected as well.
When only application logs are missing from Compute Engine, check the Ops Agent status first. Permissions or network issues usually affect many log types, while a missing or stopped agent commonly explains why otherwise healthy VMs show no app logs.
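As a rough sketch for a Debian or Ubuntu VM, installation and a status check look like the following. The installer URL reflects Google's documented repository script at the time of writing, so verify it before running, and remember that application log paths are declared in /etc/google-cloud-ops-agent/config.yaml.
curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
sudo bash add-google-cloud-ops-agent-repo.sh --also-install
sudo systemctl status google-cloud-ops-agent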
Question 9
Which Cloud Storage approaches allow temporary write only uploads for 30 minutes, automatically delete the objects after 60 days, and prevent partners from accessing each other’s objects? (Choose 2)
✓ B. Generate V4 signed URLs for write only uploads for 30 minutes
✓ D. Use a Cloud Storage lifecycle rule to delete objects after 60 days
The correct options are Generate V4 signed URLs for write only uploads for 30 minutes and Use a Cloud Storage lifecycle rule to delete objects after 60 days.
With signed URLs you can grant a partner temporary permission to upload using a V4 URL that expires after 30 minutes. The URL authorizes only the specific upload operation with PUT or POST and does not grant any read or list permissions, which prevents access to other partners’ objects.
A lifecycle rule that uses a Delete action with an Age condition of 60 days removes objects automatically when they reach the specified age. This is managed by Cloud Storage and requires no custom code or scheduling.
Build a scheduled Cloud Run job that deletes files older than 60 days is not needed and does not provide a controlled 30 minute write only window or strong isolation between partners. It adds operational overhead compared to a native lifecycle policy.
Set a Cloud Storage bucket retention policy to 60 days prevents deletion before 60 days but does not automatically delete at 60 days and does not help with temporary write only uploads. It can also block early removals that you might need.
Use IAM Conditions to grant Storage Object Creator for 30 minutes is not the recommended pattern for partner uploads because it requires managing time bound IAM bindings and allows uploads anywhere within the granted scope during the window. It also lacks the simple per object distribution and isolation that signed URLs provide.
When you see requirements for temporary upload access think signed URLs for short lived write permissions and think lifecycle rules for automatic deletion. Differentiate these from retention policies which prevent deletion but do not delete objects for you.
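Both pieces fit in a few commands. This sketch assumes a service account key file for signing plus placeholder bucket and object names.
gsutil signurl -m PUT -d 30m partner-uploader-key.json gs://partner-drop/partner-a/report.bin
# lifecycle.json contains {"rule": [{"action": {"type": "Delete"}, "condition": {"age": 60}}]}
gsutil lifecycle set lifecycle.json gs://partner-drop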
Question 10
You must run a 48-hour batch job composed of independent tasks on Google Cloud, and the solution must be low cost and tolerant of interruptions. Which approach ensures reliable completion with minimal cost?
✓ C. Managed instance group with preemptible VM template and CPU autoscaling
The correct option is Managed instance group with preemptible VM template and CPU autoscaling because it provides the lowest price from preemptible instances while a managed group keeps replacing any terminated instances and autoscaling maintains capacity so the batch finishes reliably over the full run.
This approach uses a managed instance group that automatically recreates preempted or failed instances which preserves the desired size of the fleet throughout the 48 hour window. CPU based autoscaling adds or removes instances in response to load so you get enough workers when tasks are pending and you save money when the queue drains. Preemptible instances can be terminated at any time and run for at most 24 hours, so the group quickly recreates them and the independent, interruption tolerant tasks keep making progress with minimal manual effort.
Standalone preemptible Compute Engine VMs are inexpensive but there is no management layer to recreate instances when they are preempted which means manual intervention and gaps in capacity that can delay or jeopardize completion.
Google Kubernetes Engine with preemptible nodes can run interruption tolerant workloads but it adds control plane cost and operational complexity and pods are evicted on preemption which requires extra job controllers and configuration to recover work so it is not the simplest or cheapest way to ensure completion for this use case.
Dataproc with preemptible workers is designed for Hadoop or Spark where preemptible workers are secondary and losing them reduces resources and can slow or destabilize long jobs and you still pay for the masters and primary workers so it is not the minimal cost or most reliable fit for generic independent batch tasks.
When you see interruption tolerant batch work that must run for many hours at low cost look for a managed fleet using preemptible or spot instances with autoscaling. The manager ensures replacement on termination and autoscaling keeps cost in check. If the option lacks management it is usually risky for long runs.
Question 11
What is the recommended way to grant external auditors read only browsing access to the Google Cloud organization, folders, and projects while adhering to IAM best practices?
✓ B. Create a Google Group and assign roles/browser at the organization level
The correct option is Create a Google Group and assign roles/browser at the organization level.
Granting roles/browser at the highest level lets auditors list and view metadata for the entire hierarchy without granting access to application data. Inheritance ensures the permission automatically applies to all existing and future folders and projects. Managing access through a group centralizes membership changes and avoids repeatedly editing IAM bindings, which aligns with least privilege and operational best practices.
Grant roles/browser on each project to auditor accounts creates unnecessary toil and risk because every new project requires a manual grant and a missed project would block auditor visibility. It also grants access directly to user accounts rather than through a group, which is harder to manage and audit.
Assign roles/cloudasset.viewer at the organization level only provides permissions to the Cloud Asset Inventory service and does not include the Resource Manager list and get permissions needed to browse the organization, folders, and projects in the console.
Create a group and assign roles/viewer at the organization level is overly permissive because it includes broad read access to many services and data. Auditors only need to browse metadata, so this violates the principle of least privilege compared to roles/browser.
When a question asks for read-only visibility across the entire hierarchy, think inheritance at the organization level and prefer managing identities with Google Groups. Choose the least privilege predefined role that achieves the task and remember that roles/browser is designed for metadata browsing.
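The resulting binding is a single command at the organization node. The organization ID and group address below are placeholders.
gcloud organizations add-iam-policy-binding 123456789012 \
  --member="group:external-auditors@example.com" \
  --role="roles/browser"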
Question 12
Which fully managed Google Cloud service should be used to ingest approximately ten million events per hour and deliver them to multiple subscribers with low latency?
✓ B. Cloud Pub/Sub
The correct option is Cloud Pub/Sub.
Cloud Pub/Sub is a fully managed global messaging service that ingests high volumes of events and delivers them to many independent subscribers with low latency. It supports multiple subscriptions on a topic for fan out and offers both push and pull delivery patterns, which fits the need to distribute about ten million events per hour efficiently.
Cloud Dataflow is a managed service for building and running stream and batch processing pipelines. It is not a message broker for low latency fan out and typically serves to transform or enrich data rather than act as the primary ingestion and distribution layer.
Pub/Sub Lite is not fully managed because you must provision capacity and manage zones, and its topics are zonal with different availability and management trade offs. The question requires a fully managed service with low latency fan out, so this option does not meet the requirement even though it can handle high throughput.
Scan for keywords like fully managed, global, low latency, and fan out to match the service profile. Pick the messaging broker when the goal is to deliver events to many subscribers and use the processing engine only when the task is to transform or enrich streams.
Question 13
Which export destination supports retaining Cloud Audit Logs for 10 years and performing ad hoc SQL analysis?
✓ B. BigQuery
The correct option is BigQuery because it can retain Cloud Audit Logs for long periods and supports direct ad hoc SQL analysis.
This service is a native Cloud Logging export destination so you can create a sink that routes audit logs into a dataset. You can use time partitioned tables and configure dataset or partition expiration to satisfy a 10 year retention policy while maintaining efficient queries. It provides standard SQL with serverless scaling which is ideal for exploratory analysis of large log volumes.
Cloud Storage can store data for 10 years at low cost but it does not offer a built in SQL engine. You would need external tables or additional pipelines which adds complexity and does not meet the need for straightforward ad hoc SQL analysis.
Cloud Spanner is a transactional relational database and is not a supported Cloud Logging sink. It is intended for OLTP workloads rather than large scale log analytics and flexible ad hoc querying of append only logs.
Cloud Bigtable is a NoSQL wide column store and is not a direct Cloud Logging sink. It lacks native SQL querying which makes it unsuitable for interactive SQL analysis of audit logs.
Match the requirement to the strength of the service. If you see ad hoc SQL on logs choose BigQuery. If you only need long term archival then Cloud Storage is the fit.
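A hedged sketch of the export with placeholder project and dataset names follows. The sink's writer identity still needs the BigQuery Data Editor role on the dataset before entries will flow.
bq --location=US mk --dataset my-project:audit_logs
gcloud logging sinks create audit-to-bq \
  bigquery.googleapis.com/projects/my-project/datasets/audit_logs \
  --log-filter='logName:"cloudaudit.googleapis.com"'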
Question 14
Which tool provides fully automated and consistent provisioning of multiple Google Cloud projects across development, QA, UAT, and production environments?
✓ C. Terraform with reusable modules and env variables
The correct option is Terraform with reusable modules and env variables because it enables fully automated and consistent provisioning of multiple Google Cloud projects across development, QA, UAT, and production.
This approach uses declarative infrastructure as code with reusable modules and input variables so you can standardize project baselines while varying environment specific settings. It maintains state and provides plan and apply workflows that are idempotent which supports repeatability and drift detection across environments. You can integrate it with continuous integration and continuous delivery and with policy controls to enforce consistency and compliance at scale.
Config Connector with GitOps can manage Google Cloud resources declaratively but it relies on a Kubernetes control plane and a reconciliation loop. It is not ideal for bootstrapping and uniformly provisioning many standalone projects because it introduces cluster dependencies and complex cross project permissions which adds operational overhead compared to a dedicated infrastructure as code workflow.
Workflows orchestrating gcloud is an imperative script driven approach that lacks native state management, planning, and reusable modules. It can automate steps but does not reliably ensure idempotent and consistent infrastructure creation across environments.
Deployment Manager with separate configs offers templates but separate configurations per environment reduce reuse and increase the risk of drift. In addition, Deployment Manager with separate configs is in maintenance mode and Google recommends using Terraform which makes it a less likely choice on newer exams.
When the question asks for consistency across many environments, favor tools that provide declarative workflows, reusable modules, and centralized state. If an option lacks a previewable plan or true idempotency, it is usually not the best fit for reliable multi environment provisioning.
Question 15
For GKE batch jobs that use temporary local PersistentVolumes and can be safely interrupted and restarted, which deployment option offers the lowest cost?
✓ C. GKE node pool with preemptible VMs
The correct option is GKE node pool with preemptible VMs.
This approach is the most cost effective for batch jobs that can be safely interrupted and restarted because these preemptible VMs are deeply discounted. Temporary local PersistentVolumes used as scratch space can be recreated when the job restarts, so losing data on preemption is acceptable. A dedicated node pool of preemptible VMs lets you target just the interruption tolerant jobs and maximize savings while keeping other workloads on standard capacity.
GKE cluster autoscaler only adjusts the number of nodes to fit pending pods and it does not change the price of the nodes themselves. It can save cost by scaling down when idle, but it does not provide the steep discounts that interruptible capacity offers.
GKE Autopilot bills per pod resources and does not support preemptible nodes, and it also has restrictions that do not allow Local PersistentVolumes. This makes it less suitable for temporary local storage patterns and less cost effective for interruption tolerant batch jobs.
GKE Vertical Pod Autoscaling rightsizes CPU and memory requests to reduce waste, but it does not leverage low cost interruptible nodes and it does not address the temporary local storage pattern, so it is not the most cost effective choice for this scenario.
When a workload is interruption tolerant and uses temporary storage, think of preemptible or Spot capacity first to capture the largest savings. If a feature like Local PersistentVolumes is required, verify support by the chosen mode such as Autopilot before deciding.

Question 16
An external auditor needs read access to Google Cloud Data Access and Access Transparency logs and must retain copies for 30 months while adhering to least privilege. How should you grant this access and meet the retention requirement?
✓ C. Assign roles/logging.privateLogViewer and export logs to Cloud Storage via a sink
The correct choice is Assign roles/logging.privateLogViewer and export logs to Cloud Storage via a sink. This grants only the permissions needed to read Data Access and Access Transparency entries and it uses a Cloud Storage export to keep copies for 30 months.
The Private Logs Viewer role lets an auditor read all logs in Cloud Logging including private logs. Data Access and Access Transparency logs are private, so this role is the least privilege option that still satisfies the read requirement. It avoids broad project visibility that is unnecessary for an external audit.
Exporting to Cloud Storage with a log sink is the straightforward way to meet a 30 month retention target. You can set a bucket retention policy or lifecycle rules to keep the exported copies for exactly the required duration, which extends well beyond the default Cloud Logging retention windows.
Grant the auditor the Project Viewer role and export logs to BigQuery via a sink is not least privilege because Project Viewer grants read access to many resources beyond logs, and BigQuery is not required for simple long term retention.
Grant roles/iam.securityReviewer and export logs to BigQuery is incorrect because the Security Reviewer role focuses on viewing IAM configuration and does not grant access to private logs such as Data Access or Access Transparency.
Grant roles/logging.viewer and export logs to Cloud Storage via a sink is insufficient because Logs Viewer cannot read private logs, so the auditor would still be unable to read Data Access and Access Transparency entries.
When a question mentions Data Access or Access Transparency and asks for least privilege, look for Private Logs Viewer for read access and think of a Cloud Storage sink with bucket retention or lifecycle policies for long term copies.
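A minimal sketch with placeholder identities and bucket names could look like this, where the retention period approximates 30 months in days.
gcloud projects add-iam-policy-binding my-project \
  --member="user:auditor@example.com" \
  --role="roles/logging.privateLogViewer"
gcloud logging sinks create audit-archive \
  storage.googleapis.com/my-audit-archive-bucket \
  --log-filter='logName:"cloudaudit.googleapis.com%2Fdata_access" OR logName:"cloudaudit.googleapis.com%2Faccess_transparency"'
gsutil retention set 913d gs://my-audit-archive-bucket
As with any log sink, grant the sink's writer identity permission to create objects in the destination bucket.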
Question 17
Which configuration of VPC firewall egress rules will allow only traffic to specified destination ports while blocking all other outbound traffic?
✓ B. Create an egress deny all with priority 65400 and an egress allow for the required ports with priority 200
The correct option is Create an egress deny all with priority 65400 and an egress allow for the required ports with priority 200.
This configuration uses firewall rule priority correctly. Lower numbers are evaluated first. The allow rule with priority 200 matches the required destination ports and permits that traffic. The deny all rule with priority 65400 then blocks all other egress traffic and it overrides the default implied egress allow because it has a higher priority than the implied rule. This results in only the specified destination ports being permitted while everything else is blocked.
Use VPC Service Controls is incorrect because VPC Service Controls protect access to Google managed services and are not used to enforce VM level egress port restrictions in VPC firewall rules.
Create only egress allows for the required ports and rely on an implied egress deny is incorrect because there is no implied egress deny in Google Cloud VPC. There is an implied egress allow by default. Without an explicit deny all rule, all other outbound traffic would continue to be allowed.
Remember that there is an implied egress allow by default. To restrict outbound traffic, add an explicit deny all egress and place your specific allow rules at a lower priority number since lower numbers have higher precedence.
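The two rules from the correct option translate to gcloud roughly as follows, using a placeholder network name and HTTPS as the example allowed port.
gcloud compute firewall-rules create deny-all-egress \
  --network=my-vpc --direction=EGRESS --action=DENY \
  --rules=all --destination-ranges=0.0.0.0/0 --priority=65400
gcloud compute firewall-rules create allow-egress-tcp-443 \
  --network=my-vpc --direction=EGRESS --action=ALLOW \
  --rules=tcp:443 --destination-ranges=0.0.0.0/0 --priority=200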
Question 18
A Compute Engine workload has steady traffic but periodically spikes to six times normal for about 30 minutes. How can you maintain performance during these spikes while minimizing costs once traffic returns to normal?
✓ B. Managed instance group autoscaling on metrics with min and max limits
The correct option is Managed instance group autoscaling on metrics with min and max limits.
This approach monitors CPU or load balancer utilization and automatically adds instances during the 30 minute spikes then removes them when demand falls, which maintains performance and minimizes cost after the surge. You can set a small minimum for steady traffic and a maximum near six times normal so autoscaling expands quickly but never exceeds your budget.
Compute Engine capacity reservation guarantees capacity in a zone but it does not scale on demand and you continue to pay for the reserved cores and memory even when idle, so it is not cost effective for short spikes.
Committed use discounts for peak capacity require a long term commitment that fits steady usage patterns and you pay for the commitment even when not using it, which makes it a poor fit for brief bursts.
When a question mentions spikes and minimizing cost after demand drops, prefer autoscaling with min and max settings. Choose reservations or commitments only for steady usage.
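A sketch of the autoscaling policy for a regional managed instance group, with placeholder names and limits sized for a six times spike, might look like this.
gcloud compute instance-groups managed set-autoscaling web-mig \
  --region=us-central1 \
  --min-num-replicas=3 \
  --max-num-replicas=18 \
  --target-cpu-utilization=0.65 \
  --cool-down-period=120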
Question 19
Which Google Cloud service should host a legacy application that requires direct control of the operating system and network stack, supports long lived TCP connections, and cannot be refactored or containerized?
✓ B. Compute Engine
The correct option is Compute Engine because it gives you full virtual machines with direct operating system and network stack control which is ideal for a legacy application that must maintain long lived TCP connections and cannot be refactored or containerized.
With Compute Engine you can configure the operating system, install agents or drivers, tune kernel and network parameters, and manage persistent connections. You also control networking features such as firewall rules and routing which aligns with applications that expect server style control and stability.
The App Engine Flexible environment is a managed platform that abstracts most of the operating system and networking. It does not offer full control of the underlying virtual machine and it is designed for applications that fit the platform model rather than legacy software that needs deep OS access or custom network tuning.
The Google Kubernetes Engine service requires packaging the application into containers and adopting Kubernetes primitives. This does not fit when you cannot refactor or containerize the workload and it still does not provide direct host level control over the operating system and network stack.
Map the requirement to the level of control. If you see direct OS control or long lived TCP connections with no refactoring possible then choose Compute Engine. If the app can be containerized then consider GKE and if it can fit a managed platform then consider App Engine.
Question 20
You need to migrate a 120 TB on premises archive to Cloud Storage over a 150 Mbps connection and require a reliable and cost effective initial transfer. Which approach aligns with Google Cloud best practices?
✓ B. Transfer Appliance
The correct option is Transfer Appliance because it provides a dependable and cost effective way to seed a large 120 TB dataset into Cloud Storage when you only have a 150 Mbps link.
At 150 Mbps your effective throughput is about 18.75 megabytes per second which is roughly 1.6 terabytes per day under ideal conditions. Moving 120 terabytes would take around two and a half months or longer once you account for overhead and interruptions. Transfer Appliance lets you load the data locally and ship the device to Google for direct ingest into Cloud Storage which shortens the initial migration window and reduces risk while offering encryption and predictable pricing for large transfers.
Storage Transfer Service moves data over the network and can be excellent for ongoing synchronization or for environments with ample bandwidth. For an initial 120 terabyte move over 150 Mbps it would be slow and operationally burdensome which makes it a poor fit for a dependable and cost effective first pass.
Partner Interconnect provides private connectivity to your VPC through a service provider but it is not a data transfer tool by itself. It requires lead time and ongoing costs and even then bulk transfers would still be limited by the provisioned bandwidth which does not solve the initial 120 terabyte ingest efficiently.
Quickly estimate transfer time by dividing data size by usable bandwidth. If the answer is weeks then prefer an offline ingest for the first load and use online tools later for incremental changes.
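The estimate in the explanation can be reproduced with quick shell arithmetic that ignores protocol overhead and retries.
echo $(( 120 * 1000 * 8 * 1000 / 150 / 3600 / 24 ))   # 120 TB at 150 Mbps is roughly 74 days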
Question 21
Which gcloud commands list all enabled APIs across every project in your organization within the last 45 days?
✓ C. Run gcloud projects list then for each project run gcloud services list --enabled --project PROJECT_ID
The correct option is Run gcloud projects list then for each project run gcloud services list --enabled --project PROJECT_ID.
This approach first enumerates all projects that you can access and then queries each project for its enabled services. The services list command with the enabled flag returns only APIs that are currently turned on for that specific project. Since API enablement is a project level setting, iterating projects is the reliable way to cover the entire organization using gcloud.
Run gcloud projects get-list then gcloud services list --available --project PROJECT_ID is incorrect because the projects get-list command does not exist and the services list command with the available flag lists services that could be enabled rather than those that are actually enabled.
Run gcloud services list --enabled --organization ORG_ID is incorrect because the services list command operates at the project level and does not support an organization flag. APIs are enabled per project, so you must specify a project.
Use Cloud Asset Inventory export to BigQuery is incorrect in this context because the question asks for gcloud commands. Cloud Asset Inventory can help with organization wide inventory if you have export feeds set up, but it is not a direct gcloud listing command and its built in history view has limited retention.
When a question asks for organization wide API inventory with gcloud, remember that API enablement is a project level setting. List projects first, then list enabled services for each project.
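In practice the two commands are combined in a small loop such as this sketch, which prints each project followed by its enabled services.
for PROJECT_ID in $(gcloud projects list --format="value(projectId)"); do
  echo "== ${PROJECT_ID} =="
  gcloud services list --enabled --project "${PROJECT_ID}" --format="value(config.name)"
done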
Question 22
Which Cloud Storage class offers the lowest cost for data stored for 10 years and accessed only rarely for compliance reviews?
✓ C. Archive class
The correct option is Archive class because it minimizes storage cost when data is kept for many years and is accessed only for rare compliance reviews.
The Archive class is designed for long term retention with the lowest storage price among Cloud Storage classes. It has higher retrieval and early deletion costs and a one year minimum storage duration, which aligns well with a ten year retention policy. Since access is rare, the occasional retrieval cost remains small compared to the sustained savings in storage price, and objects are available immediately when needed for audits.
The Autoclass option is not a storage class and instead it is a management feature that automatically moves objects between classes based on observed access. When you already know the data will be retained for a decade with very rare reads, selecting Archive class directly simplifies the design and avoids unnecessary transitions and additional management cost.
The Coldline class targets data accessed less than once per quarter and it has a higher storage price than the Archive class. For very infrequent access over ten years, the most cost effective choice is not Coldline.
The Nearline class is intended for data accessed about once per month and it is priced higher than both Coldline and the Archive class, so it does not minimize cost in this scenario.
Match the storage class to the stated access frequency and retention period. If the data is rarely accessed and kept for long-term retention, choose the archival tier and verify minimum storage duration and retrieval fees to confirm the lowest total cost.
Question 23
What should you do to ensure a Compute Engine VM retains the same internal IP address across restarts while keeping costs low?
✓ B. Reserve a static internal IP and assign it to the VM
The correct option is Reserve a static internal IP and assign it to the VM.
This approach reserves a regional internal address and binds it to the instance so the internal address persists across restarts and stop or start cycles. It keeps costs low because internal addresses are not billed in typical usage and you avoid the need for external resources.
Cloud NAT is for providing internet egress for private instances and it does not assign or reserve a VM’s internal address, so it cannot guarantee the same internal IP across restarts.
Ephemeral internal IP only does not meet the requirement because an ephemeral internal address can change when the VM is stopped and started.
Reserve a static external IP controls the external address rather than the internal one and can introduce unnecessary cost, so it does not solve the need to keep the same internal IP.
Confirm whether the requirement is for an internal or external address and whether it must survive a stop and start. Choose static for persistence and match the address scope to the need.
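A sketch of the reservation and assignment with placeholder names and an address drawn from the default us-central1 range follows.
gcloud compute addresses create app-internal-ip \
  --region=us-central1 --subnet=default --addresses=10.128.0.50
gcloud compute instances create app-vm \
  --zone=us-central1-a --subnet=default --private-network-ip=10.128.0.50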
Question 24
Your company needs a private connection from its on premises network to a Google Cloud VPC with predictable performance and capacity that can scale to 25 Gbps. What should you implement?
✓ B. Use Cloud Interconnect for private and predictable connectivity
The correct option is Use Cloud Interconnect for private and predictable connectivity.
This choice gives you private connectivity to your VPC with predictable performance and an availability SLA. It supports link aggregation so you can bundle multiple high speed circuits to meet or exceed 25 Gbps, which satisfies the scalability requirement in a reliable way.
Cloud VPN sends traffic over the public internet, which means performance is not predictable. Even with multiple tunnels, it does not provide the same deterministic capacity and consistency that the requirement calls for.
Direct Peering connects your network to Google at the edge for access to Google public services and it does not provide private IP reachability to your VPC. It is therefore not suitable for private VPC connectivity with predictable capacity.
Map keywords to the product. Choose Cloud Interconnect when you see requirements for private VPC access, predictable performance, and high throughput such as 25 Gbps. Choose Cloud VPN for encrypted connectivity over the internet when variability is acceptable, and use peering options only when accessing Google public services.
Question 25
In Google Cloud, how should you isolate the production environment from the development environment to ensure there is no connectivity or shared resources between them?
✓ B. Separate project for production with no cross-project networking
The correct answer is Separate project for production with no cross-project networking.
This approach places production in its own project which is the strongest isolation boundary on Google Cloud. Projects have distinct IAM policies, service accounts, quotas, billing, and audit logs. When you also avoid any cross project networking, there is no route for connectivity and no ability to share VPC resources which satisfies the requirement of no connectivity or shared resources.
Shared VPC for both environments is incorrect because it centralizes networking in a host project and allows attached projects to use the same VPC. That design intentionally enables shared network resources and connectivity across environments which violates the requirement for full isolation.
New VPC in the existing project without peering is incorrect because it remains within the same project boundary. Even if there is no direct network peering, resources still share the same project level IAM, quotas, and administrative surface which does not meet the requirement to avoid shared resources.
VPC Network Peering between prod and dev is incorrect because peering explicitly creates private connectivity between the two VPC networks. This directly contradicts the requirement that there be no connectivity between production and development.
When you see phrases like no connectivity and no shared resources choose project level isolation and avoid any cross project networking features. Map the requirement strength to the strongest boundary which is usually a separate project.
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.