Google Cloud Architect Certification Practice Exams

Free GCP Certification Exam Topics Tests

Over the past few months, I’ve been helping software developers, solutions architects, DevOps engineers, and even Scrum Masters who have been displaced by AI and ML technologies learn new skills and earn new accreditations by getting certified on technologies that are in critically high demand.

In my opinion, one of the most reputable organizations providing credentials is Google, and one of their most respected designations is that of the Certified Google Cloud Architect Professional.

So how do you get Google certified, and how do you do it quickly? I have a plan that has now helped thousands of people, and it’s a pretty simple strategy.

Google Cloud Certification Practice Exams

First, pick your designation of choice. In this case, it’s Google’s Professional Cloud Architect certification.

Then look up the exam objectives and make sure they match your career goals and competencies. The next step? It’s not buying an online course or study guide. Instead, find a Google Cloud Architect exam simulator or a set of practice questions for the GCP Architect exam.

Yes, find a set of Cloud Architect sample questions first and use them to drive your study.

To start, go through your practice tests and just look at the GCP exam questions and answers.

That will help you get familiar with what you know and what you don’t know.

When you find topics you don’t know, use AI and Machine Learning powered tools like ChatGPT, Cursor, or Claude to write tutorials for you on the topic.

Really take control of your learning and have the new AI and ML tools help you customize your learning experience by writing tutorials that teach you exactly what you need to know to pass the exam. It’s an entirely new way of learning.

About GCP Exam Dumps

And one thing I will say is to try to avoid the Google Cloud Architect Professional exam dumps. You want to get certified honestly, not pass simply by memorizing somebody’s GCP Architect braindump. There’s no integrity in that.

If you do want some real Google Cloud Architect Professional exam questions, I have over a hundred free exam questions and answers on my website, with almost 300 free exam questions and answers if you register. But there are plenty of other great resources available on LinkedIn Learning, Udemy, and even YouTube, so check those resources out as well to help fine-tune your learning path.

The bottom line? Generative AI is changing the IT landscape in disruptive ways, and IT professionals need to keep up. One way to do that is to constantly update your skills.
Get learning, get certified, and stay on top of all the latest trends. You owe it to your future self to stay trained, stay employable, and stay knowledgeable about how to use and apply all of the latest technologies.

Now for the GCP Certified Architect Professional exam questions.

Google Cloud Architect Professional Practice Exams

Question 1

Nimbus Playworks is preparing to launch a new mobile multiplayer title and has rebuilt its backend on Google Compute Engine with a managed NoSQL store to improve global scale and uptime. Before release, the team needs to validate the Android and iOS clients across dozens of OS versions and hundreds of device models while keeping testing costs under control and minimizing operational effort. Which approach should they use to perform broad and efficient device coverage testing across both platforms?

  • ❏ A. Set up Android and iOS virtual machines on Google Cloud and install the app for manual and scripted tests

  • ❏ B. Use Google Kubernetes Engine to run Android and iOS emulators in containers and execute automated UI tests

  • ❏ C. Use Firebase Test Lab to upload the app and run automated and manual tests on a wide range of real and virtual Android and iOS devices

  • ❏ D. Upload app builds with different configurations to Firebase Hosting and validate behavior from hosted links

Question 2

Which Google Cloud architecture uses Cloud Load Balancing to stay low cost at about 200 daily requests yet reliably absorbs bursts up to 60,000 for public HTTP APIs and static content?

  • ❏ A. Use Cloud CDN for static content, run the APIs on App Engine Standard, and use Cloud SQL

  • ❏ B. Serve static assets through Cloud CDN, run the APIs on a Compute Engine managed instance group, and use Cloud SQL

  • ❏ C. Use Cloud CDN for static content, run the APIs on Cloud Run, and use Cloud SQL

  • ❏ D. Put static files in Cloud Storage, deploy the APIs to a regional GKE Autopilot cluster, and use Cloud Spanner

Question 3

LumaPay is moving over 30 internal applications to Google Cloud and the security operations group requires read only visibility across every resource in the entire organization for audit readiness. You already hold the Organization Administrator role through Resource Manager. Following Google recommended practices, which IAM roles should you grant to the security team?

  • ❏ A. Organization administrator, Project browser

  • ❏ B. Security Center admin, Project viewer

  • ❏ C. Organization viewer, Project viewer

  • ❏ D. Organization viewer, Project owner

Question 4

Which business risks should you consider when adopting Google Cloud Deployment Manager for infrastructure automation? (Choose 2)

  • ❏ A. Requires Cloud Build to run deployments

  • ❏ B. Template errors can delete critical resources

  • ❏ C. Cloud Deployment Manager manages only Google Cloud resources

  • ❏ D. Must use a Google APIs service account

Question 5

After CedarPeak Data deployed a custom Linux kernel module to its Compute Engine batch worker virtual machines to speed up the overnight jobs, three days later roughly 60% of the workers failed during the nightly run. You need to collect the most relevant evidence so the developers can troubleshoot efficiently. Which actions should you take first? (Choose 3)

  • ❏ A. Check the activity log for live migration events on the failed instances

  • ❏ B. Use Cloud Logging to filter for kernel and module logs from the affected instances

  • ❏ C. Review the Compute Engine audit activity log through the API or the console

  • ❏ D. Connect to the VM serial console with gcloud or the Google Cloud console to observe kernel and boot logs

  • ❏ E. Narrow the time window in Cloud Logging and Cloud Monitoring to the failure period and examine batch VM metrics

  • ❏ F. Enable Cloud Trace for the batch application and collect traces during the next window

Question 6

Two Google Cloud Shared VPC host networks in separate organizations have some overlapping IP ranges such as 10.60.0.0/16, and you need private connectivity only for nonoverlapping subnets with minimal redesign. What should you do?

  • ❏ A. Private Service Connect with internal load balancers

  • ❏ B. HA VPN with Cloud Router advertising only nonoverlapping prefixes

  • ❏ C. VPC Network Peering with custom route exchange

  • ❏ D. Dedicated Interconnect with custom route advertisements

Question 7

Orchid Outfitters operates a three layer application in a single VPC on Google Cloud. The web, service and database layers scale independently using managed instance groups. Network flows must go from the web layer to the service layer and then from the service layer to the database layer, and there must be no direct traffic from the web layer to the database layer. How should you configure the network to meet these constraints?

  • ❏ A. Place each layer in separate subnetworks within the VPC

  • ❏ B. Configure Cloud Router with custom dynamic routes and use route priorities to force traffic to traverse the service tier

  • ❏ C. Apply network tags to each layer and create VPC firewall rules that allow web to service and service to database while preventing web to database

  • ❏ D. Add tags to each layer and create custom routes to allow only the desired paths

Question 8

Which Google Cloud storage option minimizes cost for telemetry that will be rarely accessed for the next 18 months?

  • ❏ A. Stream telemetry to BigQuery partitions with long term storage pricing

  • ❏ B. Compress telemetry every 30 minutes and store snapshots in Cloud Storage Coldline

  • ❏ C. Compress telemetry every 30 minutes and store snapshots in Cloud Storage Nearline

  • ❏ D. Compress telemetry every 30 minutes and store snapshots in Cloud Storage Archive

Question 9

VerdantCart wants to move a mission critical web application from an on premises facility to Google Cloud and it must continue serving users if an entire region goes offline with automatic traffic failover. How should you design the deployment?

  • ❏ A. Run the application on a single Compute Engine VM and attach an external HTTP(S) Load Balancer to handle failover between instances

  • ❏ B. Place the application in two Compute Engine managed instance groups in two different regions within one project and use a global external HTTP(S) Load Balancer to fail over between the groups

  • ❏ C. Place the application in two Compute Engine managed instance groups in different regions that live in different projects and configure a global HTTP(S) Load Balancer to shift traffic between them

  • ❏ D. Launch two standalone Compute Engine VMs in separate regions within one project and set up an HTTP(S) Load Balancer to route traffic from one VM to the other when needed

Question 10

How should you migrate Windows Server 2022 Datacenter VMs to Google Cloud so you can keep using existing Microsoft volume licenses in compliance?

  • ❏ A. Compute Engine Windows image

  • ❏ B. Import as Windows 2022 Datacenter BYOL and use Sole Tenant Node

  • ❏ C. Migrate to Virtual Machines with license included images

  • ❏ D. Import disk and run on shared tenancy

Question 11

The compliance team at HarborView Insurance needs to retain Cloud VPN log events for 18 months to meet audit obligations. You must configure Google Cloud so these logs are stored appropriately. What should you do?

  • ❏ A. Build a Cloud Logging dashboard that shows Cloud VPN metrics for the past 18 months

  • ❏ B. Create a Cloud Logging sink with a filter for Cloud VPN entries and export them to a Cloud Storage bucket for long term retention

  • ❏ C. Configure a Cloud Logging export that publishes matching entries to Pub/Sub

  • ❏ D. Enable firewall rule logging on the Compute Engine rules that handle VPN traffic

Question 12

An organization must validate disaster recovery every 60 days using only Google Cloud. Which approach enables repeatable full stack provisioning in a secondary region with actionable telemetry for each drill?

  • ❏ A. Terraform and Google Cloud Observability

  • ❏ B. Deployment Manager and Google Cloud Observability

  • ❏ C. gcloud scripts and Cloud Audit Logs

Question 13

SierraForge Industries is migrating telemetry files from field equipment into Cloud Storage and wants to keep each file for one year while reducing storage costs during that time. What lifecycle configuration should you implement?

  • ❏ A. Create one lifecycle rule that transitions objects to Nearline after 30 days and add a second rule that deletes objects when they reach 365 days

  • ❏ B. Create one lifecycle rule that transitions objects to Archive after 60 days and add a second rule that deletes objects when they reach 365 days

  • ❏ C. Create one lifecycle rule that transitions objects to Coldline after 45 days in Standard and create another lifecycle rule that deletes objects when they reach 366 days in Coldline

  • ❏ D. Create one lifecycle rule that transitions objects to Coldline after 180 days and add a second rule that deletes objects when they reach 1095 days

Question 14

In BigQuery what approach enables reliable deletion of a single individual’s health records upon request?

  • ❏ A. Cloud DLP with Data Catalog

  • ❏ B. Set table or partition expiration to 30 days

  • ❏ C. Use a stable user ID and delete rows by that ID

  • ❏ D. BigQuery dynamic data masking

Question 15

Your team at example.com is preparing to run a stateful service on Google Cloud that can scale out across multiple virtual machines. Every instance must read and write to the same POSIX file system and during peak periods the service needs to sustain up to 180 MB per second of write throughput. Which approach should you choose to meet these requirements while keeping the design managed and reliable?

  • ❏ A. Attach an individual persistent disk to each instance

  • ❏ B. Mount a Cloud Storage bucket on each instance using gcsfuse

  • ❏ C. Create a Cloud Filestore instance and mount it on all virtual machines

  • ❏ D. Set up an NFS server on a Compute Engine VM backed by a large SSD persistent disk and mount it from all instances

Question 16

Which Google Cloud services should you use for high velocity time series ingestion, transactional user profiles and game state, and interactive analytics on 30 TB of historical events?

  • ❏ A. Use Cloud Spanner for time series, use Cloud Spanner for transactions, and export to Cloud Storage for historical analytics

  • ❏ B. Use Cloud Pub/Sub for time series ingestion, use AlloyDB for transactions, and use Cloud Dataproc for historical analytics

  • ❏ C. Use Cloud Bigtable for time series, use BigQuery for historical analytics, and use Cloud Spanner for transactions

Question 17

Riverton Media is preparing to switch traffic to a new marketing site on Google Cloud. You created a managed instance group with autoscaling and attached it as a backend to an external HTTP(S) load balancer. After enabling the backend, you observe that the virtual machines are being recreated roughly every 90 seconds. The instances do not have public IP addresses, and you can successfully curl the service from an internal test host. What should you change to ensure the backend is configured correctly?

  • ❏ A. Add a network tag that matches the load balancer name and create a rule that allows sources with that tag to reach the instances

  • ❏ B. Assign a public IP to each VM and open a firewall rule so the load balancer can reach the public addresses

  • ❏ C. Create a firewall rule that allows traffic from Google health check IP ranges to the instance group on the configured health check ports

  • ❏ D. Increase the autoscaler cool down period so instances are not replaced as often

Question 18

Which Google Cloud solutions let development VMs retain data across reboots and provide ongoing spend visibility without manual reporting? (Choose 2)

  • ❏ A. Local SSD on Compute Engine

  • ❏ B. Export Cloud Billing data with labels to BigQuery and Looker Studio

  • ❏ C. Cloud Billing budgets and alerts

  • ❏ D. Compute Engine with persistent disks

Question 19

A travel booking platform at Northstar Tickets processes card payments and wants to reduce the PCI DSS scope to the smallest footprint while keeping the ability to analyze purchase behavior and payment method trends. Which architectural approach should you adopt to satisfy these requirements?

  • ❏ A. Export Cloud Logging to BigQuery and restrict auditor access using dataset ACLs and authorized views

  • ❏ B. Place all components that handle cardholder data into a separate Google Cloud project

  • ❏ C. Implement a tokenization service and persist only tokens in your systems

  • ❏ D. Create dedicated subnetworks and isolate the services that process cardholder data

  • ❏ E. Label every virtual machine that processes PCI data to simplify audit discovery

Question 20

How should you configure BigQuery IAM so analysts can run queries in the project and only read data in their country’s dataset?

  • ❏ A. Grant bigquery.jobUser and bigquery.dataViewer at the project to a global analysts group

  • ❏ B. Grant bigquery.jobUser at the project to a global analysts group and bigquery.dataViewer only on each country dataset to its country group

  • ❏ C. Use one shared dataset with row level security by country and grant bigquery.jobUser to all analysts

Professional GCP Solutions Architect Practice Exam Answers

Question 1

Nimbus Playworks is preparing to launch a new mobile multiplayer title and has rebuilt its backend on Google Compute Engine with a managed NoSQL store to improve global scale and uptime. Before release, the team needs to validate the Android and iOS clients across dozens of OS versions and hundreds of device models while keeping testing costs under control and minimizing operational effort. Which approach should they use to perform broad and efficient device coverage testing across both platforms?

  • ✓ C. Use Firebase Test Lab to upload the app and run automated and manual tests on a wide range of real and virtual Android and iOS devices

The correct approach is Use Firebase Test Lab to upload the app and run automated and manual tests on a wide range of real and virtual Android and iOS devices.

Firebase Test Lab is a managed testing service that offers broad device coverage for both Android and iOS with real and virtual devices, which fits the need to validate across many OS versions and device models. It supports automated frameworks like Espresso, XCTest and Robo, and it can run tests in parallel across a device matrix to speed feedback while reducing operational effort. Because it is managed and offers virtual devices and transparent pricing, it helps control costs while still letting you scale to hundreds of device and OS combinations.
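
As a rough illustration of how little operational effort this takes, a single gcloud command can fan a build out across a device matrix in Firebase Test Lab. The app path, device models, and OS versions below are placeholders rather than values from the scenario.

    # Run an automated Robo test against two Android device and OS combinations
    gcloud firebase test android run \
        --type robo \
        --app app-release.apk \
        --device model=Pixel2,version=30 \
        --device model=redfin,version=33 \
        --timeout 10m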

Set up Android and iOS virtual machines on Google Cloud and install the app for manual and scripted tests is not appropriate because it would be highly manual and would not provide the required breadth of real device and OS coverage. Running iOS simulators requires macOS on Apple hardware, which standard Compute Engine virtual machines do not provide, so it would not meet the cross platform requirement or the goal of minimizing operational effort.

Use Google Kubernetes Engine to run Android and iOS emulators in containers and execute automated UI tests increases complexity and operational overhead without delivering real device coverage. iOS simulators are not supported in typical Linux containers and emulator only testing misses hardware and OEM variances that matter for games, so this does not meet the goal of broad and efficient device coverage across both platforms.

Upload app builds with different configurations to Firebase Hosting and validate behavior from hosted links is incorrect because Firebase Hosting serves web content and does not execute native Android or iOS applications. It does not provide device farms, test orchestration or automated UI testing capabilities.

Look for purpose built managed services when a question emphasizes broad device coverage, both Android and iOS, minimal operations and cost control. Firebase Test Lab is designed for this and keywords like real and virtual devices, device matrix and automated tests are strong indicators.

Question 2

Which Google Cloud architecture uses Cloud Load Balancing to stay low cost at about 200 daily requests yet reliably absorbs bursts up to 60,000 for public HTTP APIs and static content?

  • ✓ C. Use Cloud CDN for static content, run the APIs on Cloud Run, and use Cloud SQL

The correct option is Use Cloud CDN for static content, run the APIs on Cloud Run, and use Cloud SQL.

This design keeps steady state costs very low because Cloud Run scales to zero and bills per request, which matches a workload of about 200 daily requests. It also absorbs sudden spikes to tens of thousands of requests because Cloud Run automatically and quickly scales out with high concurrency. Cloud CDN serves static assets from the edge, which offloads traffic from the origin and helps handle large bursts with low latency. Cloud Load Balancing integrates with Cloud Run through serverless network endpoint groups and with Cloud CDN, which provides global anycast entry, fast TLS termination, and resilient distribution during bursts. Cloud SQL is a good fit for a transactional relational backend at this scale and cost profile.
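
For context, wiring Cloud Run behind Cloud Load Balancing is done with a serverless network endpoint group. The commands below are a minimal sketch that uses hypothetical names for the region, NEG, backend service, and Cloud Run service.

    # Point a serverless NEG at the Cloud Run service
    gcloud compute network-endpoint-groups create api-neg \
        --region=us-central1 \
        --network-endpoint-type=serverless \
        --cloud-run-service=marketing-api

    # Attach the NEG to a global backend service used by the HTTP(S) load balancer
    gcloud compute backend-services create api-backend \
        --global --load-balancing-scheme=EXTERNAL_MANAGED
    gcloud compute backend-services add-backend api-backend \
        --global \
        --network-endpoint-group=api-neg \
        --network-endpoint-group-region=us-central1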

Use Cloud CDN for static content, run the APIs on App Engine Standard, and use Cloud SQL can functionally work, yet it is less cost effective for very low steady traffic with rare large spikes. To avoid cold starts and meet burst needs you often configure minimum instances which adds fixed cost, and its scaling characteristics are not as rapid or cost optimized as Cloud Run for this pattern.

Serve static assets through Cloud CDN, run the APIs on a Compute Engine managed instance group, and use Cloud SQL requires virtual machines to be running even when request volume is low, which raises baseline cost. Instance group scaling also takes longer to add capacity during sudden spikes compared to the near instant scaling of Cloud Run, so this is not the best low cost choice for infrequent large bursts.

Put static files in Cloud Storage, deploy the APIs to a regional GKE Autopilot cluster, and use Cloud Spanner is significantly more expensive and operationally heavier for a workload with only hundreds of daily requests. Spanner is a global, high throughput database with a high minimum cost, and running APIs on GKE Autopilot adds complexity and baseline spend compared to a fully managed serverless platform. This combination does not align with the low cost requirement.

When you see very low steady traffic with rare large bursts, think serverless for APIs and CDN for static content. Avoid options that keep VMs warm or use premium databases like Spanner unless the question clearly requires their features.

Question 3

LumaPay is moving over 30 internal applications to Google Cloud and the security operations group requires read only visibility across every resource in the entire organization for audit readiness. You already hold the Organization Administrator role through Resource Manager. Following Google recommended practices, which IAM roles should you grant to the security team?

  • ✓ C. Organization viewer, Project viewer

The correct option is Organization viewer, Project viewer.

Organization viewer provides read only access to organization level metadata and policies through Resource Manager which gives the security team visibility into the hierarchy and constraints without the ability to change anything. Project viewer grants read only access to project resources so when it is granted at the organization node it inherits down to all folders and projects which provides consistent visibility across every application while still following least privilege.
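
In practice both roles would be granted once at the organization node so they inherit everywhere below it. The organization ID and group address below are placeholders.

    # Grant read only visibility at the organization level to the security group
    gcloud organizations add-iam-policy-binding 123456789012 \
        --member="group:security-ops@example.com" \
        --role="roles/resourcemanager.organizationViewer"
    gcloud organizations add-iam-policy-binding 123456789012 \
        --member="group:security-ops@example.com" \
        --role="roles/viewer"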

Organization administrator, Project browser is incorrect because Organization administrator is a powerful administrative role that allows modifying IAM and resources which violates the read only requirement. Project browser only lists and gets metadata and does not provide broad read access to resource data which is often needed for audits.

Security Center admin, Project viewer is incorrect because Security Command Center admin is an administrative role and it only covers Security Command Center resources. It neither limits the team to read only actions nor guarantees visibility across all Google Cloud services.

Organization viewer, Project owner is incorrect because Project owner grants full control of projects including write and IAM changes which is not appropriate for an audit focused read only use case.

When you see a requirement for read only visibility across an entire organization think about inheritance from the organization node. Pair Organization viewer with Project viewer and avoid any role that includes admin or owner. Remember that Browser is limited to metadata and not full read access.

Question 4

Which business risks should you consider when adopting Google Cloud Deployment Manager for infrastructure automation? (Choose 2)

  • ✓ B. Template errors can delete critical resources

  • ✓ C. Cloud Deployment Manager manages only Google Cloud resources

The correct options are Template errors can delete critical resources and Cloud Deployment Manager manages only Google Cloud resources.

Template errors can delete critical resources is a real business risk because updates reconcile the live environment to what your configuration and templates declare. If a template removes or renames a resource or applies an unintended change then an update can delete or recreate production resources. You can mitigate this with previews and careful review but the risk remains if changes are pushed without safeguards.
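
One common safeguard against that risk is to preview an update before committing it. The deployment and configuration names below are illustrative.

    # Stage the change without touching live resources
    gcloud deployment-manager deployments update prod-stack \
        --config config.yaml --preview

    # After reviewing the planned additions and deletions, commit the preview
    gcloud deployment-manager deployments update prod-stack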

Cloud Deployment Manager manages only Google Cloud resources is also a business risk because it limits you to Google Cloud resource types. This creates vendor lock in for your infrastructure automation and prevents you from using the same toolchain to orchestrate on premises or other cloud providers.

Requires Cloud Build to run deployments is incorrect because Deployment Manager runs through the API and gcloud and it does not require Cloud Build. You can optionally integrate it into Cloud Build pipelines but that is a choice rather than a requirement.

Must use a Google APIs service account is incorrect because you can deploy using your user credentials or a suitable service account with the necessary IAM roles. Google managed service agents may operate behind the scenes for specific services yet you are not forced to adopt a particular Google APIs service account as a business prerequisite.

When options include words like only or must check whether they imply lock in or hard requirements. Distinguish real operational risks such as unintended deletions from optional implementation details such as CI integration.

Question 5

After CedarPeak Data deployed a custom Linux kernel module to its Compute Engine batch worker virtual machines to speed up the overnight jobs, three days later roughly 60% of the workers failed during the nightly run. You need to collect the most relevant evidence so the developers can troubleshoot efficiently. Which actions should you take first? (Choose 3)

  • ✓ B. Use Cloud Logging to filter for kernel and module logs from the affected instances

  • ✓ D. Connect to the VM serial console with gcloud or the Google Cloud console to observe kernel and boot logs

  • ✓ E. Narrow the time window in Cloud Logging and Cloud Monitoring to the failure period and examine batch VM metrics

The correct options are Use Cloud Logging to filter for kernel and module logs from the affected instances, Connect to the VM serial console with gcloud or the Google Cloud console to observe kernel and boot logs, and Narrow the time window in Cloud Logging and Cloud Monitoring to the failure period and examine batch VM metrics.

Use Cloud Logging to filter for kernel and module logs from the affected instances gives developers immediate visibility into errors emitted by the custom kernel module and the Linux kernel. Filtering by instance identifiers and keywords related to kernel, module loading, crashes, or taints focuses the results on what changed and when. This surfaces stack traces, oops messages, and module load or unload failures that directly explain why workers failed.

Connect to the VM serial console with gcloud or the Google Cloud console to observe kernel and boot logs is crucial when VMs become unreachable or reboot during failures. The serial port captures early boot messages and kernel panics that may not reach system logs on disk, so it preserves decisive evidence even if the VM hung or crashed mid run.
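
For reference, both of these evidence sources can be pulled with gcloud. The instance name, zone, and log filter below are examples and would need to match the affected workers.

    # Watch kernel and boot output on an unreachable worker
    gcloud compute connect-to-serial-port batch-worker-17 --zone=us-central1-b

    # Pull recent kernel and module related entries for the fleet
    gcloud logging read \
        'resource.type="gce_instance" AND ("kernel:" OR "Oops" OR "module")' \
        --freshness=3d --limit=200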

Narrow the time window in Cloud Logging and Cloud Monitoring to the failure period and examine batch VM metrics helps correlate log events with system behavior such as restarts, CPU spikes, memory exhaustion, or disk errors. Constraining the time range reduces noise and makes patterns across many workers visible so you can confirm scope and timing of the regression introduced by the kernel module.

Check the activity log for live migration events on the failed instances is not a first action because live migration is designed to be transparent and is unlikely to explain widespread module related kernel failures. Even if live migration occurred it would not provide the kernel level detail needed to debug a custom module issue.

Review the Compute Engine audit activity log through the API or the console focuses on administrative and access operations rather than guest OS behavior. Audit logs do not capture kernel panics or module crashes, so they are not the most relevant starting point for this incident.

Enable Cloud Trace for the batch application and collect traces during the next window targets application latency and request flows rather than OS or kernel events. It would not help diagnose the current failures and it introduces delay since it only gathers data in future runs.

When failures point to the operating system, start closest to the VM with serial console output, kernel logs, and a tight time window in logs and metrics. Correlate errors with restarts and resource spikes before exploring broader platform events.

Question 6

Two Google Cloud Shared VPC host networks in separate organizations have some overlapping IP ranges such as 10.60.0.0/16, and you need private connectivity only for nonoverlapping subnets with minimal redesign. What should you do?

  • ✓ B. HA VPN with Cloud Router advertising only nonoverlapping prefixes

The correct option is HA VPN with Cloud Router advertising only nonoverlapping prefixes. This approach provides private connectivity between the two VPC networks while letting you control which prefixes are exchanged so only nonoverlapping subnets are learned on each side and you avoid conflicts with minimal redesign.

This works because Cloud Router supports custom advertisements so you can advertise only the specific IP ranges that you want the other side to learn. When each side advertises only the nonoverlapping subnets, routing remains private and reachable for those ranges while overlapping ranges are never imported. This setup is supported for VPC to VPC connectivity across projects and even across organizations, and it requires no IP renumbering.
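
A sketch of the custom advertisement, using made up router, region, and prefix values, would look something like this on each side of the HA VPN.

    # Advertise only the nonoverlapping subnets to the peer organization
    gcloud compute routers update vpn-router-a \
        --region=us-east1 \
        --advertisement-mode=custom \
        --set-advertisement-ranges=10.61.0.0/16,10.62.4.0/24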

Private Service Connect with internal load balancers is for publishing and consuming specific services privately rather than providing general purpose network connectivity between arbitrary subnets. It is not intended to create broad bidirectional connectivity between two VPC networks, so it does not meet the requirement.

VPC Network Peering with custom route exchange does not support overlapping IP ranges and it has no route filtering to selectively block conflicting subnets. Overlapping prefixes cause conflicts and the routes are not imported, therefore it cannot satisfy the requirement.

Dedicated Interconnect with custom route advertisements is designed for private connectivity between on premises networks and Google Cloud. Using it to connect two VPC networks in different organizations would require additional infrastructure and complexity, which is not minimal redesign and is unnecessary for this use case.

When prefixes overlap, look for solutions that let you control advertisements with BGP. VPC Network Peering does not support overlapping IP ranges and has no route filtering, so prefer Cloud VPN with Cloud Router or Interconnect when selective exchange is required.

Question 7

Orchid Outfitters operates a three layer application in a single VPC on Google Cloud. The web, service and database layers scale independently using managed instance groups. Network flows must go from the web layer to the service layer and then from the service layer to the database layer, and there must be no direct traffic from the web layer to the database layer. How should you configure the network to meet these constraints?

  • ✓ C. Apply network tags to each layer and create VPC firewall rules that allow web to service and service to database while preventing web to database

The correct option is Apply network tags to each layer and create VPC firewall rules that allow web to service and service to database while preventing web to database.

This configuration uses firewall rules as the enforcement point for lateral traffic inside a VPC. You assign distinct tags to the instances in each managed instance group through their instance templates, then create an ingress allow rule that targets the service tier with a source tag that identifies the web tier. You also create an allow rule that targets the database tier with a source tag that identifies the service tier. To ensure there is no direct web to database path, add an explicit deny from the web tag to the database target with a higher priority than any broader allows, or rely on the absence of an allow so the implied deny blocks it.
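
A minimal version of those rules, assuming hypothetical tag names of web, service, and db on a VPC called app-vpc, might look like the following.

    gcloud compute firewall-rules create allow-web-to-service \
        --network=app-vpc --direction=INGRESS --action=ALLOW \
        --rules=tcp:8443 --source-tags=web --target-tags=service

    gcloud compute firewall-rules create allow-service-to-db \
        --network=app-vpc --direction=INGRESS --action=ALLOW \
        --rules=tcp:5432 --source-tags=service --target-tags=db

    # Explicit high priority deny so no broader rule can open web to database
    gcloud compute firewall-rules create deny-web-to-db \
        --network=app-vpc --direction=INGRESS --action=DENY \
        --rules=all --source-tags=web --target-tags=db --priority=900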

This works because VPC routing is destination based and does not enforce hop by hop traversal. Firewall rules define which sources can reach which targets, and tags make it easy to select entire tiers in managed instance groups with consistent policy.

Place each layer in separate subnetworks within the VPC is insufficient on its own because subnet boundaries do not block traffic. Traffic can flow between subnets in the same VPC whenever firewall rules allow it, so you still need explicit rules to constrain which tiers can talk.

Configure Cloud Router with custom dynamic routes and use route priorities to force traffic to traverse the service tier does not meet the requirement because Cloud Router exchanges routes with peer networks using BGP and does not control instance to instance traffic inside a VPC. Route priority cannot force traffic to take a middle hop between tiers.

Add tags to each layer and create custom routes to allow only the desired paths is incorrect because VPC routes are destination based and cannot express policies that depend on both source and destination. Custom routes cannot prevent a directly reachable destination from being used, so they cannot reliably enforce tier by tier traversal. You need firewall rules for that control.

When you must allow some lateral flows but block others inside a VPC, think of VPC firewall rules targeted by network tags or service accounts. Remember that routes and Cloud Router do not enforce middle tier traversal for instance to instance traffic.

Question 8

Which Google Cloud storage option minimizes cost for telemetry that will be rarely accessed for the next 18 months?

  • ✓ B. Compress telemetry every 30 minutes and store snapshots in Cloud Storage Coldline

The correct option is Compress telemetry every 30 minutes and store snapshots in Cloud Storage Coldline.

Coldline is designed for data that is accessed very infrequently and kept for months to years, which aligns with telemetry that will be rarely accessed over 18 months. It offers lower storage cost than classes intended for more frequent access while still providing immediate access when needed. The minimum storage duration for Coldline is reasonable for long lived telemetry and its retrieval and operation costs are appropriate when access is rare, which makes total cost lower than options optimized for more frequent reads.
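
As a simple illustration, the compressed snapshots could land in a bucket whose default storage class is Coldline so that no per object class needs to be set. The bucket name and location are placeholders.

    gcloud storage buckets create gs://telemetry-snapshots-example \
        --default-storage-class=COLDLINE --location=us-central1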

Stream telemetry to BigQuery partitions with long term storage pricing is not optimal for cost minimization of rarely accessed raw telemetry because BigQuery is an analytics warehouse and not a low cost archival store. Streaming inserts and query based access add ongoing costs and BigQuery long term storage still tends to be more expensive than Google Cloud Storage archival classes for large volumes kept primarily for retention.

Compress telemetry every 30 minutes and store snapshots in Cloud Storage Nearline targets data accessed roughly once a month, which is more frequent than this workload requires. Its storage price is higher than Coldline, so it does not minimize cost for data that will be accessed only rarely over 18 months.

Compress telemetry every 30 minutes and store snapshots in Cloud Storage Archive is optimized for data accessed less than once per year and for long term preservation. For telemetry that may still need occasional investigation within the 18 month window, the higher retrieval and operation costs can outweigh the storage savings compared with Coldline, so it is not the best balance for minimizing total cost in this scenario.

Match the access pattern to the storage class. If data is accessed less than once per quarter use Coldline. If access is monthly use Nearline. Consider Archive only when access is extremely rare and retention is long, and always factor in minimum storage duration and retrieval fees when comparing total cost.

Question 9

VerdantCart wants to move a mission critical web application from an on premises facility to Google Cloud and it must continue serving users if an entire region goes offline with automatic traffic failover. How should you design the deployment?

  • ✓ B. Place the application in two Compute Engine managed instance groups in two different regions within one project and use a global external HTTP(S) Load Balancer to fail over between the groups

The correct option is Place the application in two Compute Engine managed instance groups in two different regions within one project and use a global external HTTP(S) Load Balancer to fail over between the groups.

This design delivers regional resilience because each region runs its own managed instance group and the global external HTTP(S) Load Balancer presents a single anycast IP to users. The load balancer continuously health checks the backends and automatically shifts traffic to the healthy region when a region becomes unavailable. Managed instance groups add autohealing and autoscaling so the application maintains capacity and replaces failed virtual machines without manual action.

Keeping both instance groups in one project keeps configuration, identity and access, logging, and troubleshooting straightforward. The global external HTTP(S) Load Balancer natively supports multi region backends within a single project which is the simplest way to meet the requirement for automatic cross region failover.
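
At the load balancer layer the key piece is a single global backend service with one backend per region. The group, region, and health check names below are hypothetical.

    gcloud compute backend-services create web-backend \
        --global --protocol=HTTP --health-checks=web-hc \
        --load-balancing-scheme=EXTERNAL_MANAGED

    # One regional managed instance group per region behind the same backend service
    gcloud compute backend-services add-backend web-backend --global \
        --instance-group=web-mig-us --instance-group-region=us-central1
    gcloud compute backend-services add-backend web-backend --global \
        --instance-group=web-mig-eu --instance-group-region=europe-west1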

Run the application on a single Compute Engine VM and attach an external HTTP(S) Load Balancer to handle failover between instances is incorrect because a single virtual machine in one region cannot survive a regional outage and an external HTTP(S) Load Balancer needs multiple healthy backends to fail over to. This setup offers neither regional redundancy nor managed recovery.

Place the application in two Compute Engine managed instance groups in different regions that live in different projects and configure a global HTTP(S) Load Balancer to shift traffic between them is unnecessary for the goal. While cross project backends can be made to work with additional features, it adds complexity without improving availability for this scenario and the simpler same project design already meets the requirement.

Launch two standalone Compute Engine VMs in separate regions within one project and set up an HTTP(S) Load Balancer to route traffic from one VM to the other when needed is incorrect because standalone instances do not provide autohealing or autoscaling and the load balancer distributes traffic to backends rather than forwarding from one virtual machine to another. This design is brittle and does not ensure reliable failover.

When you read requirements for regional outage tolerance and automatic failover, think global external HTTP(S) load balancing with backends in different regions and use managed instance groups for autohealing and scaling. Prefer the simplest architecture that meets the goal which often keeps all backends in one project.

Question 10

How should you migrate Windows Server 2022 Datacenter VMs to Google Cloud so you can keep using existing Microsoft volume licenses in compliance?

  • ✓ B. Import as Windows 2022 Datacenter BYOL and use Sole Tenant Node

The correct option is Import as Windows 2022 Datacenter BYOL and use Sole Tenant Node.

This approach lets you bring your existing Microsoft volume licenses while staying compliant because Windows Server BYOL on Google Cloud requires running on dedicated hosts. Using BYOL with Windows Server is supported only when the VMs are placed on dedicated capacity, which is provided by sole tenant nodes. You import the Windows Server 2022 Datacenter image as BYOL and schedule the VMs onto dedicated nodes so your licenses remain isolated and auditable according to Microsoft terms.
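
The dedicated capacity part of this answer is a sole tenant node group that the imported BYOL instances are scheduled onto. The node type, names, and image references below are illustrative only.

    gcloud compute sole-tenancy node-templates create win-byol-template \
        --node-type=n2-node-80-640 --region=us-central1
    gcloud compute sole-tenancy node-groups create win-byol-group \
        --node-template=win-byol-template --target-size=1 --zone=us-central1-a

    # Place the imported BYOL VM on the dedicated node group
    gcloud compute instances create win2022-app-01 \
        --zone=us-central1-a --node-group=win-byol-group \
        --image=imported-win2022-byol --image-project=my-project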

Compute Engine Windows image is incorrect because Google provided images are license included and you would pay for the Windows license through Google Cloud rather than reusing your existing licenses.

Migrate to Virtual Machines with license included images is incorrect because license included choices bundle the Windows Server license and do not allow you to apply your existing volume licenses.

Import disk and run on shared tenancy is incorrect because BYOL for Windows Server is not permitted on shared tenancy and must run on dedicated hosts to meet Microsoft licensing requirements.

When a scenario mentions keeping existing Microsoft licenses, look for BYOL plus dedicated capacity. On Google Cloud that means Sole Tenant Node rather than license included images or shared tenancy.

Question 11

The compliance team at HarborView Insurance needs to retain Cloud VPN log events for 18 months to meet audit obligations. You must configure Google Cloud so these logs are stored appropriately. What should you do?

  • ✓ B. Create a Cloud Logging sink with a filter for Cloud VPN entries and export them to a Cloud Storage bucket for long term retention

The correct option is Create a Cloud Logging sink with a filter for Cloud VPN entries and export them to a Cloud Storage bucket for long term retention.

This approach uses a log sink to route only Cloud VPN log entries so you retain precisely what auditors need. Exporting to Cloud Storage gives durable and cost effective storage. You can set a bucket retention policy to 18 months which prevents early deletion and meets the compliance requirement. This also avoids the default retention limits of Cloud Logging so the logs remain available for the full audit window.
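
A sketch of that setup follows. The bucket name is a placeholder and the log filter is an assumption about how the Cloud VPN entries are labeled, so verify it against the actual entries before relying on it.

    # Route Cloud VPN log entries to a dedicated bucket
    gcloud logging sinks create vpn-audit-sink \
        storage.googleapis.com/harborview-vpn-logs \
        --log-filter='resource.type="vpn_gateway"'
    # Remember to grant the sink's writer identity object creation access on the bucket

    # Lock the bucket contents for 18 months, roughly 548 days
    gsutil retention set 548d gs://harborview-vpn-logs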

Build a Cloud Logging dashboard that shows Cloud VPN metrics for the past 18 months is incorrect because dashboards visualize data and do not store it. They cannot extend log retention and cannot guarantee that raw log entries are kept for 18 months.

Configure a Cloud Logging export that publishes matching entries to Pub/Sub is incorrect because Pub/Sub is a messaging service and not an archival store. Message retention is limited to days and without an additional storage destination it cannot satisfy an 18 month retention requirement.

Enable firewall rule logging on the Compute Engine rules that handle VPN traffic is incorrect because firewall logs record allow and deny decisions for firewall rules rather than Cloud VPN tunnel events. Enabling this does not address the need to retain Cloud VPN logs for 18 months.

When a question emphasizes log retention for months or years, look for an export sink to durable storage such as Cloud Storage with a retention policy. Remember that Pub/Sub is for streaming and dashboards are for visualization.

Question 12

An organization must validate disaster recovery every 60 days using only Google Cloud. Which approach enables repeatable full stack provisioning in a secondary region with actionable telemetry for each drill?

  • ✓ B. Deployment Manager and Google Cloud Observability

The correct option is Deployment Manager and Google Cloud Observability.

Deployment Manager and Google Cloud Observability provides a native and repeatable way to define full stack infrastructure as templates and to deploy it consistently in a secondary region. This meets the requirement to validate disaster recovery on a fixed cadence while remaining entirely within Google Cloud. The observability suite gives you logs, metrics, traces, dashboards, and alerting so teams get actionable telemetry to verify that the drill succeeded and to troubleshoot issues.
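
A drill then becomes a repeatable pair of commands against the same templates, shown here with hypothetical deployment and config names.

    # Stand up the full stack in the secondary region for the drill
    gcloud deployment-manager deployments create dr-drill-2024-06 \
        --config dr-secondary-region.yaml

    # Tear it down once telemetry from the drill has been reviewed
    gcloud deployment-manager deployments delete dr-drill-2024-06 --quiet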

Terraform and Google Cloud Observability is not the best fit because Terraform is a third party tool and the requirement specifies using only Google Cloud. Although it can achieve repeatable provisioning, it does not satisfy the constraint to stay within native services.

gcloud scripts and Cloud Audit Logs does not provide declarative and easily repeatable full stack provisioning. Imperative scripts are harder to standardize and maintain across drills, and Audit Logs focus on administrative activity rather than providing the health metrics, dashboards, and alerts that make telemetry actionable during recovery tests.

When you see a constraint that says only Google Cloud, prefer native services. Map phrases like repeatable full stack provisioning to declarative infrastructure tools and map actionable telemetry to monitoring with metrics, dashboards, and alerts rather than audit logs.

Question 13

SierraForge Industries is migrating telemetry files from field equipment into Cloud Storage and wants to keep each file for one year while reducing storage costs during that time. What lifecycle configuration should you implement?

  • ✓ C. Create one lifecycle rule that transitions objects to Coldline after 45 days in Standard and create another lifecycle rule that deletes objects when they reach 366 days in Coldline

The correct option is Create one lifecycle rule that transitions objects to Coldline after 45 days in Standard and create another lifecycle rule that deletes objects when they reach 366 days in Coldline. This configuration keeps each file for at least one full year and lowers storage costs by shifting to a colder class early while still respecting minimum storage duration requirements.

Coldline is designed for data accessed less than once per quarter and has a 90 day minimum storage duration. Moving after 45 days in Standard and deleting after 366 days means the object remains in Coldline for well over 90 days which avoids early deletion charges. Coldline is also cheaper than Nearline for long term infrequent access so it better meets the goal of reducing costs during the retention period. Using 366 days ensures at least one full year of retention and avoids off by one timing issues.
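
Expressed as a lifecycle configuration, the two rules look roughly like this. The bucket name is a placeholder.

    # Contents of lifecycle.json
    {
      "rule": [
        {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
         "condition": {"age": 45, "matchesStorageClass": ["STANDARD"]}},
        {"action": {"type": "Delete"},
         "condition": {"age": 366}}
      ]
    }

    # Apply the policy to the telemetry bucket
    gsutil lifecycle set lifecycle.json gs://sierraforge-telemetry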

Create one lifecycle rule that transitions objects to Nearline after 30 days and add a second rule that deletes objects when they reach 365 days is not optimal because Nearline is more expensive than Coldline for long term infrequently accessed data. It meets the one year retention goal but does not minimize cost as effectively as Coldline.

Create one lifecycle rule that transitions objects to Archive after 60 days and add a second rule that deletes objects when they reach 365 days is risky for cost because Archive has a 365 day minimum storage duration. Deleting one year from creation would keep the object in Archive for far less than 365 days which triggers early deletion charges. Archive is best when you can keep data in that class for at least a year after transition and expect almost no reads.

Create one lifecycle rule that transitions objects to Coldline after 180 days and add a second rule that deletes objects when they reach 1095 days does not satisfy the requirement to keep data for only one year because it deletes after three years. It also delays the cost savings by waiting 180 days to move to a colder class.

Match the storage class to expected access and respect minimum storage duration rules. For one year retention choose a class that you can keep the object in long enough to avoid early deletion charges and set deletion age to at least one full year. Prioritize access frequency and total cost rather than only the per gigabyte price.

Question 14

In BigQuery what approach enables reliable deletion of a single individual’s health records upon request?

  • ✓ C. Use a stable user ID and delete rows by that ID

The correct option is Use a stable user ID and delete rows by that ID.

Using a durable identifier stored with each record allows you to run a precise DELETE statement with a WHERE clause that targets only the requesting individual. This satisfies per person deletion requirements because it can be executed at any time and removes only the matching rows. It works across partitioned or clustered tables and avoids reliance on data age or masking. It is also auditable and repeatable since you can query for the identifier to validate the deletion.
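
The deletion itself is a standard SQL DML statement, shown here through the bq tool with hypothetical project, dataset, table, and column names.

    bq query --use_legacy_sql=false \
      'DELETE FROM `my-project.health_data.patient_records`
       WHERE patient_id = "usr-2847"'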

Cloud DLP with Data Catalog helps you discover, classify, and tag sensitive data and can de identify or mask content, but it does not perform targeted deletion of a specific subject’s rows in BigQuery.

Set table or partition expiration to 30 days enforces time based retention and deletes data after a set period, which cannot fulfill an on demand request to remove a single person’s records and risks deleting unrelated data.

BigQuery dynamic data masking hides values at query time based on policy, but the underlying data remains stored and therefore it does not satisfy a requirement to actually delete an individual’s records.

When the question asks to remove one person’s data, look for a targeted row level method that uses a stable identifier and supports an actual delete, not options that only classify, mask, or expire data by time.

Question 15

Your team at example.com is preparing to run a stateful service on Google Cloud that can scale out across multiple virtual machines. Every instance must read and write to the same POSIX file system and during peak periods the service needs to sustain up to 180 MB per second of write throughput. Which approach should you choose to meet these requirements while keeping the design managed and reliable?

  • ✓ C. Create a Cloud Filestore instance and mount it on all virtual machines

The correct choice is Create a Cloud Filestore instance and mount it on all virtual machines.

This approach provides a managed NFS file share that multiple virtual machines can mount at the same time which satisfies the requirement for a shared POSIX file system. It is designed for consistent performance and you can choose a tier and capacity that comfortably meets 180 MB per second of sustained write throughput while keeping the solution highly available and managed by Google Cloud.
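
Provisioning and mounting the share is straightforward. The instance name, tier, capacity, and IP address below are placeholders, and the tier and capacity would be sized to sustain the 180 MB per second write target.

    gcloud filestore instances create shared-posix \
        --zone=us-central1-a --tier=BASIC_SSD \
        --file-share=name=share1,capacity=2560GB \
        --network=name=default

    # On each VM, mount the NFS export at the same path
    sudo mount -t nfs 10.12.0.2:/share1 /mnt/shared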

Attach an individual persistent disk to each instance is incorrect because each virtual machine would have its own isolated file system so the service would not read and write to the same file system across instances.

Mount a Cloud Storage bucket on each instance using gcsfuse is incorrect because Cloud Storage is object storage and gcsfuse does not provide full POSIX file system semantics for stateful workloads and it is not suited to sustained high write throughput requirements like this.

Set up an NFS server on a Compute Engine VM backed by a large SSD persistent disk and mount it from all instances is incorrect because this is a self-managed design that introduces operational burden and potential single points of failure whereas the question asks for a managed and reliable solution.

When a question asks for a shared POSIX file system across multiple virtual machines and stresses managed and reliable, prefer Filestore. Avoid gcsfuse for stateful workloads and avoid self-managed NFS unless the question explicitly allows it.

Question 16

Which Google Cloud services should you use for high velocity time series ingestion, transactional user profiles and game state, and interactive analytics on 30 TB of historical events?

  • ✓ C. Use Cloud Bigtable for time series, use BigQuery for historical analytics, and use Cloud Spanner for transactions

The correct option is Use Cloud Bigtable for time series, use BigQuery for historical analytics, and use Cloud Spanner for transactions. This combination aligns each workload with the service designed for it and provides scalability, low latency, and interactive insights.

Cloud Bigtable is optimized for high velocity time series ingestion and retrieval. It provides very high write throughput with low latency and supports efficient schema designs for time ordered data, which makes it a strong fit for event streams and metrics.

BigQuery is built for interactive analytics over large datasets. It handles tens of terabytes and more with standard SQL, automatic scaling, and separation of storage and compute. A 30 TB historical events dataset is well suited for BigQuery where analysts can query it interactively without managing clusters.

Cloud Spanner offers strongly consistent relational transactions with horizontal scalability. It is a good choice for user profiles and game state where correctness, concurrency, and availability are critical.

Use Cloud Spanner for time series, use Cloud Spanner for transactions, and export to Cloud Storage for historical analytics is not a good fit because Spanner is not the most cost effective or performant choice for heavy time series ingestion compared to Bigtable. Exporting to Cloud Storage gives durable objects but not an analytics engine for interactive queries, so you would still need to load the data into a warehouse like BigQuery or run a processing framework, which this option does not include.

Use Cloud Pub/Sub for time series ingestion, use AlloyDB for transactions, and use Cloud Dataproc for historical analytics is not ideal because Pub/Sub is a messaging service rather than a storage or time series database. Dataproc is a managed Spark and Hadoop service that is better for batch processing and cluster based jobs and is not the best choice for fast interactive SQL over 30 TB when BigQuery is available. AlloyDB can serve transactional workloads, yet for globally scalable game state with strong consistency Spanner is typically a better match, and this option still misses the most suitable analytics service for the historical dataset.

Map each workload pattern to the service designed for it. Think Cloud Bigtable for high velocity time series, BigQuery for interactive analytics at scale, and Cloud Spanner for globally consistent transactions. Be cautious when options use Cloud Storage as the analytics layer or Cloud Pub/Sub as a database.

Question 17

Riverton Media is preparing to switch traffic to a new marketing site on Google Cloud. You created a managed instance group with autoscaling and attached it as a backend to an external HTTP(S) load balancer. After enabling the backend, you observe that the virtual machines are being recreated roughly every 90 seconds. The instances do not have public IP addresses, and you can successfully curl the service from an internal test host. What should you change to ensure the backend is configured correctly?

  • ✓ C. Create a firewall rule that allows traffic from Google health check IP ranges to the instance group on the configured health check ports

The correct option is Create a firewall rule that allows traffic from Google health check IP ranges to the instance group on the configured health check ports. This allows the external HTTP(S) load balancer health checkers to reach the VMs on their private IPs so the managed instance group stops autohealing and recreating instances.

Because the instances do not have public IP addresses, Google health checkers probe the backends from specific Google source IP ranges. If the VPC firewall does not permit those ranges to the health check port, the checks fail and the managed instance group considers the VMs unhealthy and recreates them repeatedly. Allowing the documented health checker ranges to the health check port resolves the issue and is the supported configuration.
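
The fix is a single ingress rule from the documented health check ranges, shown here with a hypothetical network, port, and target tag.

    gcloud compute firewall-rules create allow-lb-health-checks \
        --network=default --direction=INGRESS --action=ALLOW \
        --rules=tcp:80 \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --target-tags=marketing-web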

Add a network tag that matches the load balancer name and create a rule that allows sources with that tag to reach the instances is incorrect because the external load balancer and its health checkers are not VMs in your project and cannot carry your network tags. You must allow the documented health checker source IP ranges instead.

Assign a public IP to each VM and open a firewall rule so the load balancer can reach the public addresses is incorrect because external HTTP(S) load balancers reach backends on their private addresses. Backends do not need public IPs and assigning them would not fix failed health checks through the VPC firewall.

Increase the autoscaler cool down period so instances are not replaced as often is incorrect because the recreations are driven by the managed instance group autohealing policy reacting to failed health checks. Autoscaler cool down affects scaling decisions rather than health based replacement.

When instances in a managed instance group keep getting recreated, think about autohealing from failing health checks. Verify a firewall rule that allows the Google health check IP ranges to the configured health check port on the backends.
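As a concrete illustration, here is a minimal sketch that creates such a rule with the google-cloud-compute Python client. The project, network, tag, and port values are hypothetical placeholders, while 130.211.0.0/22 and 35.191.0.0/16 are the documented health checker source ranges for the external HTTP(S) load balancer.

    # pip install google-cloud-compute
    from google.cloud import compute_v1

    PROJECT = "my-project"               # hypothetical project id
    NETWORK = "global/networks/default"  # hypothetical network
    TARGET_TAG = "web-backend"           # tag applied in the instance template
    HEALTH_CHECK_PORT = "80"             # port configured on the health check

    allowed = compute_v1.Allowed()
    allowed.I_p_protocol = "tcp"
    allowed.ports = [HEALTH_CHECK_PORT]

    rule = compute_v1.Firewall()
    rule.name = "allow-google-health-checks"
    rule.network = NETWORK
    rule.direction = "INGRESS"
    rule.source_ranges = ["130.211.0.0/22", "35.191.0.0/16"]  # documented health checker ranges
    rule.target_tags = [TARGET_TAG]
    rule.allowed = [allowed]

    operation = compute_v1.FirewallsClient().insert(project=PROJECT, firewall_resource=rule)
    operation.result()  # block until the rule is created

Once the probes succeed, the managed instance group marks the backends healthy and autohealing stops recreating them.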

Question 18

Which Google Cloud solutions let development VMs retain data across reboots and provide ongoing spend visibility without manual reporting? (Choose 2)

  • ✓ B. Export Cloud Billing data with labels to BigQuery and Looker Studio

  • ✓ D. Compute Engine with persistent disks

The correct options are Export Cloud Billing data with labels to BigQuery and Looker Studio and Compute Engine with persistent disks.

Export Cloud Billing data with labels to BigQuery and Looker Studio enables automatic export of detailed billing data into BigQuery, where labels can be used to attribute costs by team, project, or environment. You can then build Looker Studio dashboards on top of the BigQuery dataset that refresh automatically, which gives ongoing spend visibility without having to assemble manual reports.

Compute Engine with persistent disks provides durable block storage that survives instance reboots and stop or start cycles. The data remains until you delete the disk, which is exactly what development virtual machines need when they must retain their state across reboots.

Local SSD on Compute Engine is ephemeral storage that is tied to the VM host, and its data is lost when the instance is stopped or terminated. While it can survive a soft reboot, it does not reliably meet the requirement to retain data across reboots, and it provides no help with spend visibility.

Cloud Billing budgets and alerts helps you set thresholds and receive notifications when spending crosses those thresholds. It does not provide detailed ongoing visibility, attribution by labels, or dashboards without additional export and analysis, so it does not meet the requirement for ongoing visibility without manual reporting.

Map each requirement to a native capability. If the question asks for data to persist across VM restarts, think persistent disks. If it asks for continuous cost visibility with attribution and no manual work, think Billing export to BigQuery with dashboards in Looker Studio rather than alerts.
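For the billing half, here is a minimal sketch of the kind of query that backs such a dashboard. The project, dataset, and label names are hypothetical, and the export table placeholder follows the standard gcp_billing_export_v1_<BILLING_ACCOUNT_ID> naming pattern.

    # pip install google-cloud-bigquery
    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")  # hypothetical project id

    # Sum cost per value of a hypothetical "team" label in the billing export table.
    query = """
        SELECT l.value AS team, ROUND(SUM(cost), 2) AS total_cost
        FROM `my-project.billing.gcp_billing_export_v1_XXXXXX_XXXXXX_XXXXXX`,
          UNNEST(labels) AS l
        WHERE l.key = 'team'
        GROUP BY team
        ORDER BY total_cost DESC
    """
    for row in client.query(query).result():
        print(row.team, row.total_cost)

Pointing a Looker Studio report at this query, or at a view built from it, keeps the spend picture current without any manual reporting.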

Question 19

A travel booking platform at Northstar Tickets processes card payments and wants to reduce the PCI DSS scope to the smallest footprint while keeping the ability to analyze purchase behavior and payment method trends. Which architectural approach should you adopt to satisfy these requirements?

  • ✓ C. Implement a tokenization service and persist only tokens in your systems

The correct option is Implement a tokenization service and persist only tokens in your systems.

This design replaces primary account numbers with tokens before data enters your applications, which means only the vault or payment gateway retains the sensitive data. Because your databases, logs, and analytics pipelines handle only tokens and derived attributes, they fall outside most PCI DSS controls, which sharply reduces scope. You can still analyze purchase behavior and payment method trends by grouping and joining on consistent tokens that represent the same payment instrument, without exposing the underlying cardholder data.

Export Cloud Logging to BigQuery and restrict auditor access using dataset ACLs and authorized views is not sufficient because controlling access to logs and query results does not remove cardholder data from your environment and it does not minimize which systems store or process it.

Place all components that handle cardholder data into a separate Google Cloud project does not reduce PCI scope because scope follows where cardholder data is stored, processed, or transmitted and connected systems may still be in scope. A project boundary is administrative and does not eliminate cardholder data from your application tier.

Create dedicated subnetworks and isolate the services that process cardholder data can help with segmentation, yet it does not remove systems from scope when they still handle the data. Without removing cardholder data from those services, they remain in scope.

Label every virtual machine that processes PCI data to simplify audit discovery is only an inventory aid and it does not change data flows or the compliance boundary, so it does not reduce scope.

When a question asks how to reduce PCI DSS scope, choose approaches that remove cardholder data from your systems such as tokenization or hosted payment pages, and be wary of answers that only add access controls or network segmentation.
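Purely as an illustration, here is a minimal tokenization sketch with a hypothetical in-memory vault. In a real deployment the vault sits with a payment gateway or a dedicated tokenization service outside your analytics environment, and only the token ever reaches your databases and pipelines.

    import secrets

    # Hypothetical in-memory vault. A real vault is a separate, PCI scoped service.
    _vault: dict[str, str] = {}

    def tokenize(pan: str) -> str:
        """Return a consistent token for a card number so analytics can group by it."""
        for token, stored_pan in _vault.items():
            if stored_pan == pan:
                return token
        token = "tok_" + secrets.token_urlsafe(16)
        _vault[token] = pan  # only the vault ever stores the PAN
        return token

    # Applications, databases, and analytics pipelines store only the token.
    order = {"order_id": 1001, "payment_token": tokenize("4111111111111111"), "amount": 59.00}
    print(order)  # the card number never appears in application data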

Question 20

How should you configure BigQuery IAM so analysts can run queries in the project and only read data in their country’s dataset?

  • ✓ B. Grant bigquery.jobUser at the project to a global analysts group and bigquery.dataViewer only on each country dataset to its country group

The correct choice is Grant bigquery.jobUser at the project to a global analysts group and bigquery.dataViewer only on each country dataset to its country group.

This configuration separates the ability to run query jobs from the permission to read data. Granting project level jobUser lets analysts submit queries and manage their own jobs without granting access to any tables. Granting dataset level dataViewer only to the country specific groups ensures analysts can read tables and views only in their country’s dataset. This meets the requirement for running queries in the project while restricting data access to the appropriate dataset and it follows the principle of least privilege.

Grant bigquery.jobUser and bigquery.dataViewer at the project to a global analysts group is incorrect because giving dataViewer at the project grants read access to all datasets in the project. That would let analysts view every country’s data which violates the need to limit reads to only their country dataset.

Use one shared dataset with row level security by country and grant bigquery.jobUser to all analysts is incorrect because jobUser alone does not grant read access to data. In addition, the requirement is to restrict access by dataset rather than by rows within a single shared dataset. While row level security can filter rows, it does not satisfy the stated dataset level isolation, and this option would not grant the needed dataset permissions.

Map permissions to the right resource level. Give project level permissions for running jobs and dataset level permissions for data access. Prioritize least privilege and check the resource scope of each role.
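Here is a minimal sketch of the dataset level half using the google-cloud-bigquery client, with hypothetical project, dataset, and group names. The project level bigquery.jobUser binding is granted separately through IAM, and at dataset scope the basic READER role is equivalent to bigquery.dataViewer.

    # pip install google-cloud-bigquery
    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")  # hypothetical project id

    # Give the Germany analyst group read access to the Germany dataset only.
    dataset = client.get_dataset("my-project.sales_de")  # hypothetical dataset
    entries = list(dataset.access_entries)
    entries.append(
        bigquery.AccessEntry(
            role="READER",                        # dataset basic role, same as dataViewer
            entity_type="groupByEmail",
            entity_id="analysts-de@example.com",  # hypothetical country group
        )
    )
    dataset.access_entries = entries
    client.update_dataset(dataset, ["access_entries"])

    # The project level job permission is granted once for all analysts, for example:
    #   gcloud projects add-iam-policy-binding my-project \
    #     --member="group:analysts@example.com" --role="roles/bigquery.jobUser"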

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified, and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect, and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel, which trains devs in Java, Spring, AI and ML, has well over 30,000 subscribers.