CNCF Exam Topics KCNA Practice Exams

Kubernetes KCNA Certification Exam Topics

If you want to get certified in the CNCF Kubernetes and Cloud Native Associate (KCNA) exam, you need to do more than just study. You need to practice by completing KCNA practice exams, reviewing cloud native sample questions, and spending time with a reliable KCNA certification exam simulator.

In this quick KCNA practice test tutorial, we will help you get started by providing a carefully written set of KCNA exam questions and answers. These questions mirror the tone and difficulty of the actual KCNA exam, giving you a clear sense of how prepared you are for the test.

KCNA Certified Associate Practice Questions

Study thoroughly, practice consistently, and gain hands-on familiarity with Kubernetes fundamentals, containers, open source tooling, and cloud native terminology. With the right preparation, you will be ready to pass the KCNA certification exam with confidence.

Git, GitHub & GitHub Copilot Certification Made Easy

Want to get certified on the most popular AI, ML & DevOps technologies of the day? These five resources will help you get GitHub certified in a hurry.

Get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Certification Sample Questions

In a cloud native platform for a fintech startup called AtlasApps, what practice should engineers avoid when defining how microservices communicate with each other?

  • ❏ A. Use Cloud Pub/Sub for decoupled asynchronous messaging

  • ❏ B. Rely on direct synchronous coupling between services

  • ❏ C. Use an API gateway for centralized ingress and routing

  • ❏ D. Adopt HTTP/2 or gRPC for efficient service communication

Why should you prefer managing replica sets with higher-level controllers such as Deployments or StatefulSets rather than launching individual pods directly on cluster nodes?

  • ❏ A. GKE Autopilot

  • ❏ B. They guarantee that workloads will never experience downtime

  • ❏ C. They provide automated scaling, self-healing, and coordinated rolling updates for pods

  • ❏ D. They update running container processes in place to apply new versions

What security risk arises from the default practice of keeping Kubernetes Secrets unencrypted in etcd, the API server's backing datastore?

  • ❏ A. Secrets are encrypted automatically when they are written to etcd

  • ❏ B. Cloud KMS

  • ❏ C. Only cluster administrators can access Secrets stored in etcd

  • ❏ D. Anyone with access to etcd or to the cluster can read or alter the Secrets stored there

In a cluster run by HarborTech, what responsibility does the control plane API server have when Custom Resource Definitions are introduced to the system?

  • ❏ A. Distributes CRD definitions to each node’s kubelet

  • ❏ B. Stores instances of custom resources in etcd through the API server

  • ❏ C. Automatically provisions a controller process for every new CRD

  • ❏ D. Executes custom controller logic when a custom resource is applied

When a new Pod is scheduled in a Kubernetes cluster, which node component is responsible for obtaining the container images required for that Pod?

  • ❏ A. kube-apiserver

  • ❏ B. container runtime

  • ❏ C. kube-scheduler

  • ❏ D. kubelet

Which container runtime did the Kubernetes project first rely on to schedule and run containers?

  • ❏ A. containerd

  • ❏ B. gVisor

  • ❏ C. Docker Engine

  • ❏ D. CRI-O

You operate a Kubernetes cluster across several worker nodes and you need to schedule a monitoring agent pod on every node for visibility and metrics collection. Which Kubernetes resource type is most appropriate for this requirement?

  • ❏ A. StatefulSet

  • ❏ B. Job

  • ❏ C. Deployment

  • ❏ D. DaemonSet

You manage a Kubernetes cluster running mission critical services for a retail startup called Shoppivot. You must deploy updates while keeping the previous and new releases available at the same time to avoid outages, and you also need to limit network access to the updated instances during the rollout. What should you use? (Choose 2)

  • ❏ A. Readiness probes on application containers

  • ❏ B. Resource quota limits in the namespace

  • ❏ C. NetworkPolicy to restrict traffic to the updated pods

  • ❏ D. Recreate deployment strategy

  • ❏ E. Rolling update deployment strategy

You operate a Kubernetes cluster for a digital invoicing platform at LedgerWorks and several microservices are critical for the system to function. Recently, customers have experienced intermittent delays in transaction processing and the root cause is unclear. You want to gather detailed telemetry such as latency measurements, request success and failure rates, and a service dependency map to help diagnose and optimize performance. Which service mesh capability will best provide this level of observability?

  • ❏ A. OpenTelemetry tracing integration for end to end request traces

  • ❏ B. Sidecar proxies that emit per request metrics and construct service dependency graphs

  • ❏ C. Policy enforcement for access control between services

  • ❏ D. Service discovery that lists active services and their endpoints

A development team at RedRock maintains a Kubernetes cluster for their service. Which statements about the Kubernetes control plane are correct? (Choose 3)

  • ❏ A. Control plane nodes can be physical servers, virtual machines, or cloud compute instances

  • ❏ B. The kubelet process is a component of the control plane

  • ❏ C. Run three or seven control plane instances to achieve high availability

  • ❏ D. Distribute control plane instances across separate failure domains such as zones or racks to reduce correlated failures

What name is most commonly used to identify the kubectl configuration file in a user environment?

  • ❏ A. kubectl.yml

  • ❏ B. gcloud

  • ❏ C. kubecttle

  • ❏ D. kubeconfig

A mid-sized software firm is planning to migrate some microservices to a serverless platform to improve scalability and lower operational burden. Which one of the following services would be the best candidate to migrate to a serverless environment?

  • ❏ A. A batch data transformation task that runs for up to six hours

  • ❏ B. A relational database that requires persistent storage and complex transactions

  • ❏ C. A user authentication API that handles sporadic bursts of very high traffic

  • ❏ D. A real time analytics dashboard that must maintain persistent WebSocket connections to users

You operate a Kubernetes cluster that hosts three namespaces named devbox, qa and production. Each namespace must be permitted to pull container images from its own private registry. What is the most efficient method to manage image pull secrets for those namespaces?

  • ❏ A. Create a custom service account in each namespace and associate the registry pull secret with it

  • ❏ B. Manually add the image pull secret to every pod specification in each namespace

  • ❏ C. Create a ClusterRole that contains the pull secret and bind that role across namespaces

  • ❏ D. Attach the registry pull secret to each namespace’s default service account

In a Kubernetes cluster what does a “Job” resource represent and how is it commonly used?

  • ❏ A. Cloud Run

  • ❏ B. A container image stored in a registry such as example.com

  • ❏ C. A Kubernetes object that runs finite batch workloads until a specified number of pod runs succeed

  • ❏ D. A resource used to define a long running scalable service like a web frontend

On each worker node, the kubelet performs several tasks to maintain pod state and container lifecycle. Which of the following is not a responsibility of the kubelet?

  • ❏ A. Aggregates and exposes container logs for troubleshooting

  • ❏ B. Assigns and schedules Pods across the cluster nodes

  • ❏ C. Retrieves container images from registries and provides them to the runtime

  • ❏ D. Instructs the container runtime to start and stop container instances

You have prepared a Kubernetes manifest called frontend.yaml to roll out a group of frontend servers for your team at Clearwave Tech. Which kubectl command should you run to apply that manifest to your cluster?

  • ❏ A. kubectl create -f frontend.yaml

  • ❏ B. gcloud deploy apply frontend.yaml

  • ❏ C. kubectl apply -f frontend.yaml

  • ❏ D. kubectl replace -f frontend.yaml

How would you best describe the Twelve-Factor App approach when applied to cloud native software?

  • ❏ A. A checklist for selecting a cloud provider

  • ❏ B. A set of conventions for organizing mobile applications

  • ❏ C. A framework of twelve principles for designing scalable and maintainable cloud native applications

  • ❏ D. A collection of patterns for traditional monolithic architecture

In a containerized infrastructure, what is the reason to prefer the Cluster Autoscaler rather than the Vertical Pod Autoscaler?

  • ❏ A. To provide ongoing metrics collection and alerting

  • ❏ B. To automatically adjust container CPU and memory requests

  • ❏ C. To automatically adjust the number of cluster nodes

  • ❏ D. To ensure a fixed count of pod replicas regardless of demand

In a GitOps process for continuous delivery and continuous integration, where should the authoritative record of application code configuration and the desired cluster state be maintained?

  • ❏ A. Kubernetes manifests applied to the cluster without being stored in version control

  • ❏ B. Cloud Build trigger configuration

  • ❏ C. A version controlled Git repository containing the declarative manifests and configuration

  • ❏ D. Argo CD application configuration

How would you define Prometheus and what typical role does it serve for monitoring and observability in cloud native environments at a company like example.com?

  • ❏ A. Argo CD

  • ❏ B. An open source monitoring and alerting toolkit that collects metrics from various services and offers querying and alerting

  • ❏ C. Google Cloud Storage

  • ❏ D. A programming language for Kubernetes development

You must make a containerized application running in a Pod reachable from outside the cluster. Which Kubernetes Service type allocates an external IP address automatically from the cloud provider’s pool of addresses?

  • ❏ A. NodePort

  • ❏ B. Ingress

  • ❏ C. ClusterIP

  • ❏ D. LoadBalancer

In a Kubernetes cluster, which component on each worker node handles pulling container images and launching container processes?

  • ❏ A. scheduler

  • ❏ B. API server

  • ❏ C. container runtime

  • ❏ D. kubelet

In a distributed cloud native environment, what technology is commonly used to link hundreds to thousands of services that run across multiple clusters while providing load balancing, traffic management, and observability?

  • ❏ A. Ingress Controller

  • ❏ B. StatefulSet

  • ❏ C. Service Mesh

  • ❏ D. Kubernetes Deployment

What is the main role of Kubernetes when operating containerized services within a cluster of machines?

  • ❏ A. Prometheus

  • ❏ B. Cloud Build

  • ❏ C. A platform for scheduling and managing containerized applications across a cluster of machines

  • ❏ D. Cloud Run

In a Kubernetes cluster, how do pods communicate by default when no network policy resources are present?

  • ❏ A. Network firewall or VPC controls block pod traffic until explicit rules are added

  • ❏ B. Pods can both send and receive network traffic

  • ❏ C. Outbound connections are allowed but inbound connections are blocked

  • ❏ D. All inbound and outbound traffic is prevented for every pod

In a Kubernetes cluster, which resource is the minimal deployable unit that an administrator can create and manage?

  • ❏ A. Node

  • ❏ B. API server

  • ❏ C. Kubelet

  • ❏ D. Pod

What benefits do engineering teams get from using a managed Kubernetes offering like Azure AKS or Google Kubernetes Engine rather than running Kubernetes themselves?

  • ❏ A. They always result in lower total cost of ownership compared to self managed clusters

  • ❏ B. They centralize cluster upkeep and handle routine maintenance tasks

  • ❏ C. They provide unrestricted access to the underlying nodes and control plane details

  • ❏ D. They automatically remove the need to connect to external logging and monitoring tools

At Nimbus Labs we operate a cloud native microservices platform and we plan to place an API gateway in front of our services. What is the primary benefit of adding an API gateway to this architecture?

  • ❏ A. Data caching

  • ❏ B. Cloud Load Balancing

  • ❏ C. Data transformation

  • ❏ D. Centralized client entry and request routing

You manage Kubernetes clusters for a software team at scrumtuous.com and you often operate across several namespaces. What is the most efficient method to set a default namespace so you do not have to include the namespace flag each time you run kubectl?

  • ❏ A. Export an environment variable in your shell session that records the current namespace

  • ❏ B. Attempt to use a kubectl switch-namespace command to change namespaces temporarily

  • ❏ C. Configure a kubeconfig context that includes the target namespace using kubectl config set-context

  • ❏ D. Add the --namespace flag to every kubectl invocation to specify the namespace explicitly

On a Kubernetes cluster running Istio which control plane component updates Envoy sidecar configuration to steer traffic between service revisions during canary deployments?

  • ❏ A. Traffic Director

  • ❏ B. Istio Mixer

  • ❏ C. Istiod

  • ❏ D. Istio Citadel

You operate a stateful service for HarborData on a Kubernetes cluster and the database requires strict single-writer semantics. Which Kubernetes primitives should you use to ensure that only one database instance mounts and writes to the persistent storage at any time? (Choose 2)

  • ❏ A. Deployment with a persistent volume claim

  • ❏ B. PodDisruptionBudget

  • ❏ C. NetworkPolicy to limit access

  • ❏ D. StatefulSet using a persistent volume claim

  • ❏ E. DaemonSet with a persistent volume claim

A payments startup called NimbusPay operates a public API and they want a service level indicator that best reflects the experience of their customers. Which metric below most closely represents a user facing SLI?

  • ❏ A. memory utilization percentage

  • ❏ B. Cloud Monitoring uptime check success rate

  • ❏ C. client facing error rate

  • ❏ D. database replication lag

In a production Kubernetes cluster for a retail site at example.com, what does the Taints and Tolerations feature mainly control?

  • ❏ A. Horizontal Pod Autoscaler

  • ❏ B. Node resource quota management

  • ❏ C. Container Registry

  • ❏ D. Control which pods may be scheduled onto specific nodes

Your team at Meridian Dataworks operates primarily on Kubernetes and needs to add serverless data transformation functions for event driven workloads. What is the best way to run serverless functions within the existing Kubernetes environment?

  • ❏ A. Schedule the function using a Kubernetes CronJob

  • ❏ B. Migrate the workloads to Cloud Run and stop using Kubernetes

  • ❏ C. Use Kubeless to run functions on the current Kubernetes cluster

  • ❏ D. Deploy the code as a Deployment and handle pod scaling manually

A fintech team at NovaStream runs Kubernetes and they need a Persistent Volume that one node can mount read write while many other nodes can mount it read only. Which Persistent Volume access mode provides that behavior?

  • ❏ A. ReadWriteMany

  • ❏ B. ReadOnlyMany

  • ❏ C. ReadWriteOncePod

  • ❏ D. ReadWriteOnce

Certification Sample Questions Answered

In a cloud native platform for a fintech startup called AtlasApps, what practice should engineers avoid when defining how microservices communicate with each other?

  • ✓ B. Rely on direct synchronous coupling between services

The correct answer is Rely on direct synchronous coupling between services. Engineers should avoid designing microservices that depend on tight synchronous calls because that pattern creates brittle, hard-to-scale systems.

Rely on direct synchronous coupling between services leads to cascading failures and latency propagation when one service slows or fails. It also prevents teams from deploying and scaling services independently and makes fault isolation and testing much harder.

Prefer decoupling patterns such as asynchronous messaging, event driven flows, retries with backoff, and resilience patterns like circuit breakers and bulkheads. These approaches reduce the blast radius of failures and improve scalability without requiring tight coordination between services.

Use Cloud Pub/Sub for decoupled asynchronous messaging is not the correct choice to avoid because it represents a recommended way to decouple services rather than a harmful practice. Messaging systems are an example of how to avoid synchronous coupling.

Use an API gateway for centralized ingress and routing is useful for managing external traffic and enforcing cross cutting concerns, but it does not answer the question about which practice to avoid. An API gateway helps with routing and security but it does not by itself create the internal coupling described in the correct answer.

Adopt HTTP/2 or gRPC for efficient service communication can improve performance and reduce overhead for service calls, but it does not eliminate the fundamental problem of synchronous tight coupling. Using efficient protocols is not the same as decoupling services and is not what you should avoid in this context.

When a question asks what practice to avoid, look for the option that increases tight coupling or creates single points of failure. Eliminate answers that describe tools or patterns that reduce coupling or improve resilience.

Why should you prefer managing replica sets with higher-level controllers such as Deployments or StatefulSets rather than launching individual pods directly on cluster nodes?

  • ✓ C. They provide automated scaling, self-healing, and coordinated rolling updates for pods

The correct option is They provide automated scaling, self-healing, and coordinated rolling updates for pods.

Higher-level controllers reconcile the desired state. Deployments manage ReplicaSets while StatefulSets manage their pods directly, and both can automatically scale pods, replace failed pods, and perform coordinated rolling updates with controlled pod termination and startup to minimize disruption.

Deployments are the common choice for stateless workloads because they create and manage ReplicaSets for rolling updates and scaling. StatefulSets are used when pods need stable network identities or ordered deployment and updates.

GKE Autopilot is not an alternative controller. It is a GKE operating mode that manages cluster infrastructure and node configuration and it does not replace the need for controllers to manage pod lifecycles.

They guarantee that workloads will never experience downtime is incorrect because no controller can promise zero downtime. Rolling updates and proper probes reduce disruption, but you still need good configuration and architecture for high availability.

They update running container processes in place to apply new versions is incorrect because controllers typically create new pods with the updated images and then terminate the old pods. They do not patch processes inside an existing container in place.

On the exam, look for answers that describe lifecycle management features such as scaling, self-healing, and rolling updates. Be wary of absolute words like never or guarantee because they often indicate a wrong choice.

What security risk arises from the default practice of keeping Kubernetes Secrets unencrypted in etcd, the API server's backing datastore?

  • ✓ D. Anyone with access to etcd or to the cluster can read or alter the Secrets stored there

The correct answer is Anyone with access to etcd or to the cluster can read or alter the Secrets stored there.

This is correct because Kubernetes writes Secret objects to etcd, the API server's backing datastore, in plaintext by default. That means anyone who can access etcd directly or who can access the cluster control plane or backups can read the Secret values and can also alter them if they have write access.

To mitigate this risk, enable encryption at rest for Secrets with an API server EncryptionConfiguration and consider a key management service such as Cloud KMS to protect the encryption keys. A KMS protects the keys rather than the data itself, so enabling encryption and controlling access to etcd and its backups remain essential.
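
As a rough illustration, an EncryptionConfiguration for Secrets might look like the sketch below. The key name and placeholder value are hypothetical, and the file is passed to the kube-apiserver through its --encryption-provider-config flag.

    apiVersion: apiserver.config.k8s.io/v1
    kind: EncryptionConfiguration
    resources:
      - resources:
          - secrets
        providers:
          - aescbc:
              keys:
                - name: key1                          # hypothetical key name
                  secret: <base64-encoded-32-byte-key>
          - identity: {}                              # fallback so existing plaintext data stays readable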

Secrets are encrypted automatically when they are written to etcd is wrong because encryption at rest is not enabled by default and must be explicitly configured on the API server.

Cloud KMS is wrong as a choice for the security risk because it is a tool for managing encryption keys and not the vulnerability itself. It can be used to reduce the risk when properly integrated but it does not describe the risk of unencrypted Secrets in etcd.

Only cluster administrators can access Secrets stored in etcd is wrong because access is not limited to cluster admins. Anyone with access to etcd, the control plane, node snapshots, or backups can potentially read or modify Secrets if proper protections are not in place.

Remember that by default Kubernetes stores Secrets unencrypted in etcd and that exam answers usually reflect default behavior unless the question states a configuration change.

In a cluster run by HarborTech, what responsibility does the control plane API server have when Custom Resource Definitions are introduced to the system?

  • ✓ B. Stores instances of custom resources in etcd through the API server

The correct option is Stores instances of custom resources in etcd through the API server.

When a Custom Resource Definition is added, the API server extends the Kubernetes API and accepts objects of the new kind. The API server persists custom resource objects in etcd and exposes them through the standard API endpoints so controllers and users can read and modify them.
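
For illustration, a minimal CRD might look like the hypothetical widgets.example.com sketch below. Registering it only teaches the API server the new kind; Widget objects created afterwards are persisted in etcd like any built-in resource.

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.example.com          # hypothetical custom resource
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: widgets
        singular: widget
        kind: Widget
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object               # minimal structural schema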

Distributes CRD definitions to each node’s kubelet is incorrect because kubelets do not receive or host API type definitions. The API server provides discovery and stores the CRD metadata and schema centrally in etcd rather than pushing definitions to node agents.

Automatically provisions a controller process for every new CRD is incorrect because Kubernetes does not spawn controller processes automatically. Controllers are separate components or deployments and must be installed or run by operators or tooling if reconciliation logic is required.

Executes custom controller logic when a custom resource is applied is incorrect because the API server only stores and serves resource objects and enforces validation. Controller logic runs in controller processes that watch the API server and then reconcile state, and that execution is not performed by the API server itself.

When answering CRD questions, remember to separate who stores and serves API objects from who runs reconciliation logic.

When a new Pod is scheduled in a Kubernetes cluster, which node component is responsible for obtaining the container images required for that Pod?

  • ✓ D. kubelet

The correct option is kubelet.

The kubelet runs on each node and is the node agent that implements the PodSpec stored by the control plane. It watches the API server for Pods scheduled to its node and coordinates container lifecycle on that node. The kubelet uses the Container Runtime Interface to request image pulls and instructs the local runtime to download images and create containers.

The container runtime is the component that actually performs the network download and stores the image on disk, but it acts on requests from the kubelet and does not by itself decide or manage Pod lifecycle. That is why the runtime is not the correct answer in this question.

The kube-apiserver is a control plane component that exposes the Kubernetes API and stores cluster state. It does not run on every node and it does not perform image pulls for Pods.

The kube-scheduler chooses which node a Pod should run on but it is not involved in pulling container images or managing container lifecycle on the node.

When deciding which component handles a node task, remember that control plane components schedule and store state while node components run Pods. The kubelet is the node agent that requests image pulls from the container runtime via CRI.

Which container runtime did the Kubernetes project first rely on to schedule and run containers?

  • ✓ C. Docker Engine

The correct option is Docker Engine.

Docker Engine was the container runtime that Kubernetes originally integrated with and used to schedule and run containers. The kubelet talked to Docker through a shim called dockershim which translated Kubernetes container operations into Docker API calls so Docker Engine was the first runtime relied upon by the project.

Dockershim was later deprecated and removed from the Kubernetes code base and Kubernetes moved to the Container Runtime Interface which lets runtimes such as containerd and CRI-O be used directly. This historical note explains why Docker Engine was first but why newer exam items often emphasize CRI and CRI-compatible runtimes.

containerd is a CRI-compatible runtime that grew out of the Docker ecosystem and it became widely used later. It was not the runtime Kubernetes first relied on because the original integration was with Docker Engine through dockershim.

gVisor is a sandboxing runtime from Google that focuses on additional isolation. It is an optional secure runtime and it was never the default or the original runtime for Kubernetes.

CRI-O implements the Container Runtime Interface to run OCI containers for Kubernetes without Docker. It was developed after the CRI design and so it was not the first runtime that Kubernetes relied on.

Remember that Kubernetes originally used Docker Engine via dockershim and that modern Kubernetes uses the Container Runtime Interface. When studying, focus on the CRI and how runtimes like containerd and CRI-O differ from the historical Docker integration.

You operate a Kubernetes cluster across several worker nodes and you need to schedule a monitoring agent pod on every node for visibility and metrics collection. Which Kubernetes resource type is most appropriate for this requirement?

  • ✓ D. DaemonSet

The correct option is DaemonSet.

DaemonSet ensures that a copy of a pod runs on every eligible node in the cluster which makes it the right choice for deploying monitoring agents and log collectors that must provide node level visibility.

When new nodes join the cluster, a DaemonSet automatically schedules the pod onto them, and you can use node selectors and tolerations to target or exclude specific nodes. DaemonSets are intended for long running single node agents rather than batch jobs or scalable stateless services.
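
A minimal sketch of such a DaemonSet is shown below, assuming a hypothetical agent image; the toleration illustrates how the agent could also be allowed onto control plane nodes.

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: metrics-agent                # hypothetical agent
      namespace: monitoring
    spec:
      selector:
        matchLabels:
          app: metrics-agent
      template:
        metadata:
          labels:
            app: metrics-agent
        spec:
          tolerations:
            - key: node-role.kubernetes.io/control-plane
              operator: Exists
              effect: NoSchedule         # also run on tainted control plane nodes
          containers:
            - name: agent
              image: registry.example.com/metrics-agent:1.4   # hypothetical image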

StatefulSet is designed for stateful applications that require stable network identities and persistent storage and it manages ordered deployment and scaling. It does not ensure one pod per node and is not appropriate for node level monitoring agents.

Job runs pods to completion for batch processing tasks and then stops, so it is unsuitable for continuous monitoring agents that must run on every node.

Deployment manages replica sets for scalable stateless applications and ensures a desired number of replicas. A Deployment does not guarantee placement of one pod per node and replicas may be scheduled onto a subset of nodes.

When a question asks for a pod on every node, look for DaemonSet and compare it to controllers that manage replica counts or batch runs.

You manage a Kubernetes cluster running mission critical services for a retail startup called Shoppivot. You must deploy updates while keeping the previous and new releases available at the same time to avoid outages, and you also need to limit network access to the updated instances during the rollout. What should you use? (Choose 2)

  • ✓ C. NetworkPolicy to restrict traffic to the updated pods

  • ✓ E. Rolling update deployment strategy

The correct options are NetworkPolicy to restrict traffic to the updated pods and Rolling update deployment strategy.

A Rolling update deployment strategy updates pods gradually so that old and new releases can coexist during the rollout. This strategy lets you control how many pods are updated at once so you can maintain availability while new replicas start.

A NetworkPolicy to restrict traffic to the updated pods allows you to limit which sources can reach the new pods during the rollout. You can use pod selectors and ingress rules to stage traffic to the updated instances while keeping broader access to the previous release.
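
A rough sketch of how the two pieces fit together follows. The names, labels, and image are hypothetical; the extra version label is carried only by the updated pod template, so the NetworkPolicy can single out the new pods during the rollout.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: checkout                     # hypothetical service
    spec:
      replicas: 4
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1                    # add one new pod at a time
          maxUnavailable: 0              # keep every old pod serving until replaced
      selector:
        matchLabels:
          app: checkout
      template:
        metadata:
          labels:
            app: checkout
            version: v2                  # label only the updated pods carry
        spec:
          containers:
            - name: checkout
              image: registry.example.com/checkout:2.0   # hypothetical image
    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: restrict-v2-ingress
    spec:
      podSelector:
        matchLabels:
          app: checkout
          version: v2                    # applies only to the updated pods
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  role: smoke-test       # hypothetical test client label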

Readiness probes on application containers indicate when a pod is ready to receive traffic but they do not orchestrate a staged replacement of versions or enforce network restrictions. Readiness probes alone will not manage rollout concurrency or network access policies.

Resource quota limits in the namespace control consumption of CPU, memory and other resources across a namespace. They do not provide a mechanism to perform rolling updates or to restrict traffic to a subset of pods during deployment.

Recreate deployment strategy replaces all old pods by first terminating them and then creating new ones so it cannot keep both releases available at the same time. That strategy will cause downtime for workloads that require simultaneous old and new instances.

When you need zero downtime, combine a Rolling update deployment strategy with a NetworkPolicy to stage traffic to new pods and restrict access during the rollout.

You operate a Kubernetes cluster for a digital invoicing platform at LedgerWorks and several microservices are critical for the system to function. Recently, customers have experienced intermittent delays in transaction processing and the root cause is unclear. You want to gather detailed telemetry such as latency measurements, request success and failure rates, and a service dependency map to help diagnose and optimize performance. Which service mesh capability will best provide this level of observability?

  • ✓ B. Sidecar proxies that emit per request metrics and construct service dependency graphs

Sidecar proxies that emit per request metrics and construct service dependency graphs is correct because it directly provides the per request telemetry you need such as latency measurements, request success and failure rates, and an automatically constructed service dependency map.

Sidecar proxies run alongside each application pod and observe every incoming and outgoing request so they can emit fine grained metrics for each request and aggregate those into service level metrics. They can also propagate trace context and feed dependency information into the control plane so you get an end to end view of call paths and latency between services which is exactly what you need to diagnose intermittent delays.

OpenTelemetry tracing integration for end to end request traces is useful for detailed distributed traces and it helps show request flows, but tracing alone typically focuses on traces and sampled spans. It does not by itself provide the full set of per request aggregated metrics and mesh level dependency graphs unless it is combined with sidecar telemetry and metric exporters.

Policy enforcement for access control between services is about enforcing who can communicate with whom and it improves security and compliance. It does not gather latency metrics or build service dependency graphs so it does not address the observability requirement in this question.

Service discovery that lists active services and their endpoints helps locate services and endpoints at runtime and it is important for routing. It only tells you what is running and where it lives and it does not emit per request metrics or assemble dependency maps needed for deep performance diagnosis.

When a question asks for detailed observability, look for answers that mention per request metrics, dependency graphs, or sidecar telemetry because those indicate the mesh capability that captures and exports the required data.

A development team at RedRock maintains a Kubernetes cluster for their service. Which statements about the Kubernetes control plane are correct? (Choose 3)

  • ✓ A. Control plane nodes can be physical servers, virtual machines, or cloud compute instances

  • ✓ C. Run three or seven control plane instances to achieve high availability

  • ✓ D. Distribute control plane instances across separate failure domains such as zones or racks to reduce correlated failures

The correct answers are Control plane nodes can be physical servers, virtual machines, or cloud compute instances, Run three or seven control plane instances to achieve high availability, and Distribute control plane instances across separate failure domains such as zones or racks to reduce correlated failures.

Control plane nodes can be physical servers, virtual machines, or cloud compute instances is correct because the control plane consists of processes that run on nodes and those nodes may be implemented on bare metal, on virtual machines, or as cloud compute instances. Operators commonly choose VMs or cloud instances for flexibility and they also use physical servers for on premise clusters.

Run three or seven control plane instances to achieve high availability is correct because etcd requires a quorum of members and running an odd number of control plane instances helps maintain quorum during failures. Three is a common minimal HA configuration and seven is used for larger clusters where more redundancy is needed.

Distribute control plane instances across separate failure domains such as zones or racks to reduce correlated failures is correct because placing control plane members in different availability zones or racks reduces the risk that a single infrastructure failure will take down a quorum. Spreading instances across failure domains improves resilience for the control plane and for etcd.

The kubelet process is a component of the control plane is incorrect because the kubelet runs on each node and acts as a node level agent that manages pods on that node. The kubelet is not part of the control plane and it does not provide cluster wide control plane functions.

When deciding answers, remember to separate control plane components from node agents and to think about etcd quorum and failure domains when the question is about high availability.

What name is most commonly used to identify the kubectl configuration file in a user environment?

  • ✓ D. kubeconfig

kubeconfig is the most commonly used name to identify the kubectl configuration file in a user environment.

By convention, kubectl looks for configuration in the file referenced by the KUBECONFIG environment variable or at the default path $HOME/.kube/config, and that file is commonly called a kubeconfig.

A kubeconfig file contains entries for clusters, users, and contexts so kubectl can authenticate and select the correct cluster and user for commands.
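
A stripped-down kubeconfig showing those three sections might look like the sketch below, with placeholder names and a placeholder server address.

    apiVersion: v1
    kind: Config
    clusters:
      - name: demo-cluster               # placeholder cluster entry
        cluster:
          server: https://203.0.113.10:6443
    users:
      - name: demo-user                  # placeholder user entry
        user: {}
    contexts:
      - name: demo-context               # ties a cluster, user, and namespace together
        context:
          cluster: demo-cluster
          user: demo-user
          namespace: default
    current-context: demo-context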

kubectl.yml is incorrect because kubectl does not use that conventional filename for its configuration and YAML files with similar names are usually used for resource manifests rather than client configuration.

gcloud is incorrect because that name refers to the Google Cloud SDK command line tool and not to the kubectl configuration file.

kubecttle is incorrect because it is a misspelling of the kubectl command and it is not used to name the configuration file.

When you see questions about the kubectl configuration file, look for the term kubeconfig and remember the default location is $HOME/.kube/config or the file pointed to by the KUBECONFIG environment variable.

A mid-sized software firm is planning to migrate some microservices to a serverless platform to improve scalability and lower operational burden. Which one of the following services would be the best candidate to migrate to a serverless environment?

  • ✓ C. A user authentication API that handles sporadic bursts of very high traffic

A user authentication API that handles sporadic bursts of very high traffic is the correct choice.

The authentication API is typically stateless and request-driven, and serverless platforms provide automatic scaling and pay-per-use pricing, which makes them well suited for sporadic bursty traffic and for reducing operational overhead.

A batch data transformation task that runs for up to six hours is not a good fit because most serverless function runtimes impose execution time limits and long running jobs are better handled by batch or container based services designed for long tasks.

A relational database that requires persistent storage and complex transactions is not appropriate to migrate to serverless functions because databases require persistent storage, stable connections, and transactional support and those needs are met by managed database services or dedicated instances rather than ephemeral functions.

A real time analytics dashboard that must maintain persistent WebSocket connections to users is not well suited because serverless functions are ephemeral and do not naturally hold long lived connections, so a stateful service or connection oriented infrastructure is a better choice for persistent WebSocket workloads.

Focus on whether a workload is stateless, short-lived, and benefits from automatic scaling when choosing serverless. If it needs persistent storage or long-lived connections, think about other managed services instead.

You operate a Kubernetes cluster that hosts three namespaces named devbox, qa and production. Each namespace must be permitted to pull container images from its own private registry. What is the most efficient method to manage image pull secrets for those namespaces?

  • ✓ D. Attach the registry pull secret to each namespace’s default service account

Attach the registry pull secret to each namespace’s default service account is correct because it makes the secret available to all pods in that namespace without modifying individual pod specs.

Attaching the image pull secret to the namespace default service account stores the credentials on that account so pods that use the default service account automatically inherit the imagePullSecrets. This requires creating the secret once per namespace and then patching or updating the default service account, which is efficient and reduces configuration drift.
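
Concretely, the patched default service account in each namespace would resemble the sketch below, where regcred is a hypothetical docker-registry secret created beforehand in that same namespace. The same change can also be made with kubectl patch instead of editing the manifest.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: default
      namespace: production              # repeat for devbox and qa
    imagePullSecrets:
      - name: regcred                    # hypothetical registry secret in this namespace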

Create a custom service account in each namespace and associate the registry pull secret with it is less efficient because you must update every pod or deployment to use the custom service account or replace the default, which adds operational overhead.

Manually add the image pull secret to every pod specification in each namespace is inefficient and error prone because it requires editing all pod or workload manifests and repeating the change whenever credentials rotate.

Create a ClusterRole that contains the pull secret and bind that role across namespaces is incorrect because RBAC objects like ClusterRole do not contain or grant secrets for image pulls. Roles control permissions and do not distribute imagePullSecrets to pods.

When all pods in a namespace need the same registry credentials, attach the image pull secret to the default service account for that namespace so pods inherit the secret without changing pod manifests.

In a Kubernetes cluster what does a “Job” resource represent and how is it commonly used?

  • ✓ C. A Kubernetes object that runs finite batch workloads until a specified number of pod runs succeed

The correct answer is A Kubernetes object that runs finite batch workloads until a specified number of pod runs succeed.

A Job creates one or more Pods that run to completion and it tracks successful pod terminations until the configured number of completions is reached. It is intended for finite batch tasks such as migrations, data processing, and one time work rather than for services that remain running.

The Job controller supports fields such as completions, parallelism and backoffLimit and it respects Pod restart policies like OnFailure or Never. When the required number of successful runs is achieved the Job is marked complete and it stops creating new Pods.
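
A small Job using those fields might look like the sketch below; the name and image are hypothetical.

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: report-batch                 # hypothetical job
    spec:
      completions: 5                     # total successful pod runs required
      parallelism: 2                     # pods allowed to run at the same time
      backoffLimit: 4                    # retries before the Job is marked failed
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: worker
              image: registry.example.com/report-worker:1.0   # hypothetical image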

Cloud Run is a managed serverless container platform for running stateless services and it is not a Kubernetes object that manages finite batch pods. It is therefore not the correct choice for describing a Kubernetes Job.

A container image stored in a registry such as example.com describes an image artifact rather than a Kubernetes resource. A Job may use a container image to run its Pods but the image itself is not a Job.

A resource used to define a long running scalable service like a web frontend describes objects such as Deployments or Services and not a Job. A Job is for finite work and it does not manage ongoing scaled replicas in the same way a Deployment does.

When the question mentions run to completion or one-off work, think Job. If it mentions long running or scalable, think Deployment or another controller.

On each worker node, the kubelet performs several tasks to maintain pod state and container lifecycle. Which of the following is not a responsibility of the kubelet?

  • ✓ B. Assigns and schedules Pods across the cluster nodes

Assigns and schedules Pods across the cluster nodes is the correct answer.

Scheduling is performed by the kube-scheduler which runs in the control plane. The kubelet runs on each worker node and implements the scheduler’s decisions by creating, monitoring, and reporting pod and container status. The kubelet does not assign pods across the cluster.

Aggregates and exposes container logs for troubleshooting is not the answer because the kubelet does provide access to container logs on its node, which is how commands like kubectl logs retrieve them. Centralized aggregation across the cluster is handled by a logging agent or external system, but per node log exposure is part of the kubelet's job.

Retrieves container images from registries and provides them to the runtime is incorrect because the kubelet is responsible for ensuring images are available on the node. The kubelet uses the container runtime interface to instruct the runtime to pull images from registries so the runtime can run containers.

Instructs the container runtime to start and stop container instances is incorrect because the kubelet directly interacts with the container runtime to create, start, stop, and manage container lifecycles on the node. Managing container lifecycle is a core kubelet responsibility.

When you must distinguish responsibilities, think in terms of control plane versus node. The kube-scheduler assigns pods and the kubelet runs and manages them on each node.

You have prepared a Kubernetes manifest called frontend.yaml to roll out a group of frontend servers for your team at Clearwave Tech. Which kubectl command should you run to apply that manifest to your cluster?

  • ✓ C. kubectl apply -f frontend.yaml

The correct option is kubectl apply -f frontend.yaml.

kubectl apply -f frontend.yaml is the declarative command that reads the manifest file and creates the resources if they do not exist or updates them if they do. The command sends the desired state to the Kubernetes API server so the cluster can reconcile actual state to match the provided manifest.
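
For instance, frontend.yaml could hold a small Deployment like the hypothetical sketch below. Running kubectl apply -f frontend.yaml creates it the first time and updates it on every later run.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: frontend                     # hypothetical deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: frontend
      template:
        metadata:
          labels:
            app: frontend
        spec:
          containers:
            - name: web
              image: registry.example.com/frontend:1.2   # hypothetical image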

kubectl create -f frontend.yaml is incorrect because create only attempts to create resources and will fail if the resource already exists. It does not perform an update of existing resources the way apply does.

gcloud deploy apply frontend.yaml is incorrect because it is not a standard kubectl command and it is not the correct way to submit Kubernetes manifests to a cluster with kubectl. The gcloud CLI uses different commands for Google Cloud services and is not the direct replacement for kubectl apply.

kubectl replace -f frontend.yaml is incorrect because replace requires the target resource to already exist and it performs a full replacement. It will not create missing resources and it does not provide the same declarative merge behavior that makes apply suitable for rollouts.

When asked how to apply a manifest, choose kubectl apply -f <file> for declarative updates and remember that kubectl create and kubectl replace have different behaviors that can cause failures on existing or missing resources.

How would you best describe the Twelve-Factor App approach when applied to cloud native software?

  • ✓ C. A framework of twelve principles for designing scalable and maintainable cloud native applications

The correct option is A framework of twelve principles for designing scalable and maintainable cloud native applications.

The Twelve-Factor approach is a set of pragmatic principles originally articulated for building software as a service. It describes practices for codebase management, explicit dependency declaration, strict separation of config from code, treating backing services as attached resources, stateless processes, port binding, disposability for fast startup and graceful shutdown, and keeping development and production environments close. These principles make applications easier to scale and maintain in cloud native environments which is why the option is correct.

A checklist for selecting a cloud provider is incorrect because the Twelve-Factor App is about application design and runtime practices rather than criteria for choosing an infrastructure provider. It does not guide vendor selection.

A set of conventions for organizing mobile applications is incorrect because the principles target web and service oriented applications and deployment on cloud platforms. The guidance focuses on processes and deployment rather than mobile specific UI and platform concerns.

A collection of patterns for traditional monolithic architecture is incorrect because the Twelve-Factor principles encourage stateless, portable, and process oriented designs that map well to microservices and cloud deployment. It is not a catalog of patterns meant to reinforce traditional large monoliths.

Look for answers that describe principles for app behavior in the cloud rather than provider choice or a specific device type. Twelve-Factor is about how applications are built and run in cloud environments.

In a containerized infrastructure, what is the reason to prefer the Cluster Autoscaler rather than the Vertical Pod Autoscaler?

  • ✓ C. To automatically adjust the number of cluster nodes

The correct option is To automatically adjust the number of cluster nodes.

To automatically adjust the number of cluster nodes describes the Cluster Autoscaler. The Cluster Autoscaler adds nodes when pods cannot be scheduled due to resource constraints and it removes nodes when they become underutilized and their pods can be moved. This autoscaler operates at the cluster node pool level and not at the pod resource or replica level.

To provide ongoing metrics collection and alerting is incorrect because autoscalers do not act as monitoring or alerting systems. Metrics and alerts are provided by monitoring tools such as Prometheus and Alertmanager and those systems feed data to autoscalers if needed.

To automatically adjust container CPU and memory requests is incorrect because changing pod resource requests is the role of the Vertical Pod Autoscaler. The Vertical Pod Autoscaler adjusts CPU and memory requests for containers and it does not add or remove cluster nodes.

To ensure a fixed count of pod replicas regardless of demand is incorrect because that describes a static deployment. Autoscalers change counts or resources based on demand and they do not enforce a fixed replica count regardless of load.

When a question refers to changing the size of the node pool, choose Cluster Autoscaler. Remember that VPA adjusts resource requests and HPA adjusts pod replica counts.

In a GitOps process for continuous delivery and continuous integration, where should the authoritative record of application code configuration and the desired cluster state be maintained?

  • ✓ C. A version controlled Git repository containing the declarative manifests and configuration

A version controlled Git repository containing the declarative manifests and configuration is the authoritative record of application code configuration and the desired cluster state in a GitOps process.

The Git repository acts as the single source of truth and it provides history, auditability, and a clear change approval flow through commits and pull requests. Storing declarative manifests and configuration in version control enables reproducible environments, easy rollbacks, and automated reconciliation by GitOps operators.

In a GitOps workflow the repository holds the desired state and continuous delivery agents reconcile the cluster to match that state. The repository also integrates with CI pipelines so that built artifacts and configuration changes are coordinated and traceable back to specific commits.

Kubernetes manifests applied to the cluster without being stored in version control is incorrect because applying manifests only to the cluster does not provide a versioned, auditable source of truth. Without the repository you lose repeatability and a clear history of changes.

Cloud Build trigger configuration is incorrect because build triggers are part of the CI execution and they do not represent the declarative desired state of the cluster. They may start pipelines but they are not the authoritative record of manifests and configuration.

Argo CD application configuration is incorrect as the authoritative record because Argo CD is a synchronization tool that uses a source repository as the truth. Argo CD application objects can describe how to sync from Git, but the recommended GitOps pattern keeps the declarative manifests in the version controlled repository as the source of truth.

When you see the phrase authoritative record or single source of truth in a GitOps question, choose the option that points to a version controlled Git repository holding declarative manifests.

How would you define Prometheus and what typical role does it serve for monitoring and observability in cloud native environments at a company like example.com?

  • ✓ B. An open source monitoring and alerting toolkit that collects metrics from various services and offers querying and alerting

An open source monitoring and alerting toolkit that collects metrics from various services and offers querying and alerting is the correct option.

Prometheus collects time series metrics from instrumented applications and exporters using a pull model and it stores those metrics so you can run ad hoc and continuous queries. It provides a powerful query language called PromQL that you can use to build dashboards and drive alerting rules, and it integrates with Alertmanager to manage notification workflows.

In cloud native environments at a company like example.com Prometheus typically serves as the central metrics store for service health and performance monitoring. Teams use it to define alerts, drive dashboards in tools like Grafana, and support SLO and capacity planning work.
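
To give a flavor of PromQL driving an alert, the hedged rule sketch below pages when the error ratio stays above five percent for ten minutes. The metric name http_requests_total is a common instrumentation convention and is assumed here, not something Prometheus ships by default.

    groups:
      - name: availability
        rules:
          - alert: HighErrorRate         # hypothetical alert
            expr: sum(rate(http_requests_total{status=~"5.."}[5m])) / sum(rate(http_requests_total[5m])) > 0.05
            for: 10m                     # ratio must hold before the alert fires
            labels:
              severity: page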

Argo CD is incorrect because it is a GitOps continuous delivery tool for deploying applications to Kubernetes and it does not provide Prometheus style metric collection or alerting.

Google Cloud Storage is incorrect because it is an object storage service and it does not perform monitoring or alerting functions.

A programming language for Kubernetes development is incorrect because Prometheus is a monitoring system and not a programming language.

When a question asks about a monitoring system, watch for words like metrics, querying, and alerting rather than deployment or storage features.

You must make a containerized application running in a Pod reachable from outside the cluster. Which Kubernetes Service type allocates an external IP address automatically from the cloud provider’s pool of addresses?

  • ✓ D. LoadBalancer

The correct option is LoadBalancer.

A Service of type LoadBalancer instructs the cloud provider integration to provision an external load balancer and it automatically allocates an external IP address from the provider’s pool and routes traffic to the Service endpoints that back the Pod.
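
A minimal Service of that type looks like the sketch below; the selector and ports are placeholders, and on a supported cloud the provider fills in the external address once the load balancer is provisioned.

    apiVersion: v1
    kind: Service
    metadata:
      name: web-public                   # hypothetical service
    spec:
      type: LoadBalancer
      selector:
        app: web                         # placeholder pod label
      ports:
        - port: 80                       # port exposed on the external address
          targetPort: 8080               # container port behind it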

NodePort exposes a static port on each cluster node so clients can reach pods by contacting node IPs and the assigned port. It does not request or receive an external cloud IP automatically and typically requires an external load balancer or other network configuration to be reachable from outside the cluster.

Ingress is an API object for HTTP and HTTPS routing and it is not itself a Service type. An Ingress requires an Ingress Controller to implement the rules and that controller may be exposed via a LoadBalancer or other mechanism. The Ingress resource alone does not automatically allocate a cloud provider external IP.

ClusterIP is the default Service type and it provides internal cluster connectivity only. It does not expose the Service outside the cluster and it does not allocate any external IP address.

When a question asks which Service type automatically gets an external cloud IP, look for LoadBalancer as the Service that provisions a cloud load balancer and obtains the address.

In a Kubernetes cluster, which component on each worker node handles pulling container images and launching container processes?

  • ✓ C. container runtime

The correct answer is container runtime.

container runtime on each worker node is responsible for pulling container images and starting the container processes. The runtime implements the Container Runtime Interface and performs low level functions such as image management, container creation, and process isolation. Common examples are containerd and CRI-O and they run on the node to manage containers when the kubelet requests actions.

kubelet runs on every node and acts as the node agent that watches the API server and enforces pod specifications. It does not itself perform the low level image pulls and process creation. Instead kubelet instructs the container runtime to carry out those actions and then monitors the container state.

scheduler runs in the control plane and decides which node should host a pod based on constraints and resource availability. It does not run on worker nodes and it does not perform image pulls or start processes on the node.

API server is the central control plane component that exposes the Kubernetes API and stores cluster state. It handles API requests and validation but it does not pull container images or launch processes on worker nodes.

Remember that the kubelet coordinates pods while the container runtime actually pulls images and runs container processes. When in doubt, pick the component that performs image management and container execution on the node.

In a distributed cloud native environment, what technology is commonly used to link hundreds to thousands of services that run across multiple clusters while providing load balancing, traffic management, and observability?

  • ✓ C. Service Mesh

Service Mesh is correct because it is the technology specifically designed to connect and manage communication between hundreds or thousands of services across clusters while providing load balancing, traffic management and observability.

This approach deploys lightweight proxies alongside application containers to handle east west traffic and to collect telemetry without changing application code. A control plane programs those proxies and enables features such as fine grained routing, retries, circuit breaking, mutual TLS and distributed tracing across multiple clusters which together provide the traffic management and observability described in the question.

Ingress Controller is focused on north south traffic entering a cluster and on routing external requests to services inside a cluster, and it does not provide the mesh style sidecar proxies or the full interservice observability and cross cluster traffic control that a service mesh provides.

StatefulSet is a workload API used to manage stateful applications that require stable network identities and persistent storage, and it is not a networking layer for service to service traffic management or observability.

Kubernetes Deployment manages rollout, scaling and updates for stateless pods, and it does not provide advanced traffic routing, distributed telemetry or security features for interservice communication that are offered by a service mesh.

When a question mentions linking many services across clusters with observability and advanced traffic control think service mesh and look for features like sidecar proxies, a control plane and multi cluster support.

What is the main role of Kubernetes when operating containerized services within a cluster of machines?

  • ✓ C. A platform for scheduling and managing containerized applications across a cluster of machines

A platform for scheduling and managing containerized applications across a cluster of machines is correct. Kubernetes is the system that provides this scheduling and management functionality across a set of machines.

Kubernetes places containers on appropriate nodes based on resource needs and policies, maintains the declared desired state for deployments, scales workloads up and down, and performs health checks and restarts to provide resilience. It also provides service discovery and load balancing to connect containers across the cluster, which matches the description in the correct option.

Prometheus is incorrect because it is a monitoring and alerting toolkit and not a scheduler or orchestrator for running containers across a cluster.

Cloud Build is incorrect because it is a continuous integration and delivery service that builds and tests code and does not manage container scheduling across cluster nodes.

Cloud Run is incorrect because it is a managed serverless platform for running containers and focuses on serving containerized services without managing a user visible cluster scheduler like Kubernetes.

When a question mentions scheduling, scaling, and managing containers across machines think Kubernetes and not monitoring or CI/CD tools.

In a Kubernetes cluster how do pods communicate by default when no network policy resources are present?

  • ✓ B. Pods can both send and receive network traffic

Pods can both send and receive network traffic is correct.

Kubernetes provides a flat pod network where each pod gets its own IP address, and pods can reach one another by default. Traffic flows freely until NetworkPolicy resources or external network controls are added to restrict it.

NetworkPolicies are used to restrict traffic and they are not enforced unless a policy exists and the cluster network plugin supports the policies. In other words traffic is allowed by default and rules must be added to deny or limit connectivity.

Network firewall or VPC controls block pod traffic until explicit rules are added is incorrect because Kubernetes itself does not block inter-pod traffic by default. Cloud provider firewalls or node security groups can affect traffic at the node level but that is separate from the Kubernetes default pod networking model.

Outbound connections are allowed but inbound connections are blocked is incorrect because there is no built in asymmetric block like that for pods. By default pods can both initiate and receive connections from other pods unless a NetworkPolicy restricts them.

All inbound and outbound traffic is prevented for every pod is incorrect because the default policy is permissive and not deny all. Traffic is only restricted when you create network policies or when external network controls block it.

Remember that Kubernetes networking is permissive by default and that NetworkPolicy objects are required to restrict pod to pod communication.
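To see how the permissive default changes once a policy exists, here is a common default-deny sketch with an illustrative namespace name. The empty podSelector selects every pod in the namespace, and listing Ingress as a policy type blocks all inbound pod traffic unless another policy allows it.

```yaml
# Default-deny ingress policy (illustrative namespace name).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}    # an empty selector matches all pods in the namespace
  policyTypes:
    - Ingress        # all inbound traffic is denied unless another policy allows it
```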

In a Kubernetes cluster which resource is the minimal deployable unit that an administrator can create and manage?

  • ✓ D. Pod

The correct answer is Pod.

Pod is the smallest and simplest Kubernetes object that an administrator can create and manage to run containerized applications. A Pod can contain one or more containers that share the same network namespace and storage and it is the unit that the scheduler places onto a node.

Node is a worker machine in the cluster that hosts Pods and it is not the minimal deployable unit. You do not create a Node to run containers in the same way you create a Pod.

API server is a control plane component that exposes the Kubernetes API and it validates and processes object definitions. It is not an object you deploy to run application containers.

Kubelet is the agent that runs on each node and ensures that containers described in a Pod are running. It is an operational component and not the deployable resource you create for your workload.

When asked about the minimal deployable unit think Pod and remember that pods hold one or more containers and are the objects you create and manage directly for workloads.
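For reference, this is about as small as a deployable workload gets. The names and image below are illustrative.

```yaml
# A minimal single-container Pod (illustrative names and image).
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: hello
      image: nginx:1.25
      ports:
        - containerPort: 80
```

Applying this manifest with kubectl apply creates the Pod directly, although in practice you would usually manage pods through a higher level controller such as a Deployment.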

What benefits do engineering teams get from using a managed Kubernetes offering like Azure AKS or Google Kubernetes Engine rather than running Kubernetes themselves?

  • ✓ B. They centralize cluster upkeep and handle routine maintenance tasks

They centralize cluster upkeep and handle routine maintenance tasks is correct. Managed Kubernetes offerings centralize responsibilities such as control plane management, upgrades, security patching and other routine upkeep so engineering teams can spend more of their time on application development.

Managed services typically run and patch the control plane, offer automated or simplified node upgrades, provide integration with cloud networking and identity, and surface maintenance events through the provider console or APIs. These features reduce the operational burden on engineering teams and standardize lifecycle operations across clusters.

They always result in lower total cost of ownership compared to self managed clusters is incorrect because cost outcomes depend on many factors. Managed services reduce operational overhead but they also introduce provider charges and the net TCO depends on scale, team skills and workload characteristics.

They provide unrestricted access to the underlying nodes and control plane details is incorrect because providers commonly restrict control plane access and limit low level control for reliability and security. You may get node pool access or options for privileged workloads, but direct unrestricted control plane access is not typically offered.

They automatically remove the need to connect to external logging and monitoring tools is incorrect because managed offerings often include built in telemetry and native integrations, but teams still need to configure logging, monitoring and alerting. Many organizations connect additional or external observability tools for longer retention, advanced analysis and cross service correlation.

When answering, look for options that describe reduced operational work or routine maintenance rather than absolute claims about cost or full control. Managed services are about shifting operational burden not removing visibility or choice.

At Nimbus Labs we operate a cloud native microservices platform and we plan to place an API gateway in front of our services. What is the primary benefit of adding an API gateway to this architecture?

  • ✓ D. Centralized client entry and request routing

Centralized client entry and request routing is correct because an API gateway gives you a single, unified entry point for clients and it routes incoming requests to the appropriate backend services.

An API gateway implements cross cutting concerns such as authentication and authorization, SSL termination, rate limiting, and request routing so that individual microservices do not need to duplicate these responsibilities. By acting as the central entry point the gateway decouples clients from the internal service topology and it can also provide API versioning and request aggregation to simplify client interactions.

Data caching is not the primary benefit. A gateway can optionally perform caching but caching is a specific feature and not the main architectural reason to add a gateway.

Cloud Load Balancing is not correct because load balancing is focused on distributing traffic across instances and is often handled by cloud load balancers or service mesh components. The gateway focuses on routing and API concerns rather than replacing a dedicated load balancer.

Data transformation is not the primary benefit even though some gateways can transform payloads or mediate protocols. Transformation is a useful feature but it is a secondary capability compared to providing a centralized client entry and routing layer.

When an exam question asks about the main role of an API gateway look for answers that mention a single entry point or request routing and not individual features like caching or transformation.
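In Kubernetes terms, the closest built in analogue to this pattern is an Ingress resource, which gives one entry point that fans requests out to backend Services. The sketch below uses illustrative host and service names; a real cluster would also need an ingress controller and usually an ingressClassName.

```yaml
# Ingress sketch: one entry point routes requests to backend services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-entry
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /payments
            pathType: Prefix
            backend:
              service:
                name: payments-svc
                port:
                  number: 80
          - path: /accounts
            pathType: Prefix
            backend:
              service:
                name: accounts-svc
                port:
                  number: 80
```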

You manage Kubernetes clusters for a software team at scrumtuous.com and you often operate across several namespaces. What is the most efficient method to set a default namespace so you do not have to include the namespace flag each time you run kubectl?

  • ✓ C. Configure a kubeconfig context that includes the target namespace using kubectl config set-context

The correct option is Configure a kubeconfig context that includes the target namespace using kubectl config set-context.

Configure a kubeconfig context that includes the target namespace using kubectl config set-context sets a default namespace inside the kubeconfig context so kubectl uses that namespace when you do not pass a namespace flag. This persists across shell sessions and makes it easy to switch workspaces by changing contexts rather than repeating the namespace on every command.

You apply this by setting the namespace on a context with the set-context action and then making that context active. Once the context is active kubectl will use the configured namespace by default and you will not need to add the namespace flag to each command.

Export an environment variable in your shell session that records the current namespace is incorrect because kubectl does not read a standard environment variable for the active namespace. You can create aliases or wrappers, but that is not a built in, reliable kubectl mechanism and it will not automatically work across different shells unless you persist it in your shell startup files.

Attempt to use a kubectl switch-namespace command to change namespaces temporarily is incorrect because there is no built in kubectl command with that name. There are third party utilities such as kubens that help switch namespaces, but they are not part of kubectl itself and so are not the expected answer on the exam.

Add the --namespace flag to every kubectl invocation to specify the namespace explicitly is technically valid but it is inefficient and error prone when you work across many commands and contexts. The question asks for the most efficient method and setting the namespace on the kubeconfig context is the preferred solution.

Use kubectl config set-context to configure a namespace on a context and then switch contexts with kubectl config use-context to avoid repeatedly typing the namespace flag.
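In practice this looks like the commands below. The namespace and context names are illustrative.

```shell
# Set a default namespace on the current context.
kubectl config set-context --current --namespace=team-a

# Or set it on a named context and switch to it.
kubectl config set-context dev-context --namespace=team-a
kubectl config use-context dev-context

# Verify which namespace the active context will use.
kubectl config view --minify --output 'jsonpath={..namespace}'
```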

On a Kubernetes cluster running Istio which control plane component updates Envoy sidecar configuration to steer traffic between service revisions during canary deployments?

  • ✓ C. Istiod

The correct option is Istiod.

Istiod is the Istio control plane component that manages configuration distribution and service discovery for Envoy sidecars. It programs Envoy proxies with routing rules and traffic shifting so you can steer traffic between service revisions during canary deployments.

Traffic Director is a Google Cloud managed service for controlling proxy based traffic in GCP environments but it is not part of the Istio control plane and it does not perform Istio specific routing for canaries.

Istio Mixer was the policy and telemetry component in older Istio releases but it did not program Envoy routing and it has been deprecated and removed in recent Istio versions. That is why it is not responsible for steering traffic on modern Istio installations.

Istio Citadel provided certificate authority and workload identity functions for mTLS. Its responsibilities were merged into Istiod and it did not handle routing rules, so it is not used to steer traffic during canary deployments.

When a question asks which component pushes configuration to Envoy focus on the component that serves xDS and manages routing instead of components that handle telemetry or certificates.
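A traffic shifting rule of the kind Istiod pushes to the sidecars looks roughly like the VirtualService below. The host, subset names, and weights are illustrative, and a DestinationRule defining the v1 and v2 subsets is assumed to exist.

```yaml
# VirtualService sketch for a 90/10 canary split.
# Istiod distributes this routing configuration to the Envoy sidecars.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: payments
spec:
  hosts:
    - payments
  http:
    - route:
        - destination:
            host: payments
            subset: v1   # stable revision
          weight: 90
        - destination:
            host: payments
            subset: v2   # canary revision
          weight: 10
```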

You operate a stateful service for HarborData on a Kubernetes cluster and the database requires strict single writer semantics. Which Kubernetes primitives should you use to ensure that only one database instance mounts and writes to the persistent storage at any time? (Choose 2)

  • ✓ B. PodDisruptionBudget

  • ✓ D. StatefulSet using a persistent volume claim

The correct options are PodDisruptionBudget and StatefulSet using a persistent volume claim.

A StatefulSet using a persistent volume claim gives each pod a stable identity and a stable, dedicated PVC binding. When you use a PVC with an access mode such as ReadWriteOnce and run a single replica the volume can only be mounted read write by one node and that enforces single writer semantics. The stable naming and stable volume binding of a StatefulSet also prevents PVC ownership from moving unexpectedly between interchangeable pods.

A PodDisruptionBudget complements the StatefulSet and PVC access mode by protecting the writer pod from voluntary disruptions during maintenance. A PDB prevents evictions that could create a window where another pod might try to start and compete for the volume, and that helps maintain the single writer guarantee during cluster operations.

Deployment with a persistent volume claim is incorrect because Deployments treat pods as interchangeable replicas and do not provide stable, per-pod PVC bindings. That means a Deployment can more easily lead to multiple pods attempting to mount or contend for storage or to PVC to pod mismatches.

NetworkPolicy to limit access is incorrect because NetworkPolicy controls network traffic only. It cannot prevent multiple pods or nodes from mounting the same persistent volume and it does not enforce storage access modes or pod identity.

DaemonSet with a persistent volume claim is incorrect because a DaemonSet runs a pod on every node and would therefore try to mount the volume on many nodes. That behavior conflicts with single writer requirements and is not appropriate for a single instance database.

When the question asks about single writer semantics focus on storage access modes and pod identity. Use ReadWriteOnce with a StatefulSet limited to one replica and protect the writer with a PodDisruptionBudget.
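A minimal sketch of this pattern follows. The names, image, and storage size are illustrative, and the storage class is assumed to support ReadWriteOnce.

```yaml
# Single-replica StatefulSet with a ReadWriteOnce claim, plus a PDB that
# blocks voluntary eviction of the writer pod.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: harbordata-db
spec:
  serviceName: harbordata-db
  replicas: 1                 # a single writer
  selector:
    matchLabels:
      app: harbordata-db
  template:
    metadata:
      labels:
        app: harbordata-db
    spec:
      containers:
        - name: db
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]   # single-node read write mount
        resources:
          requests:
            storage: 10Gi
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: harbordata-db-pdb
spec:
  minAvailable: 1             # with one replica, voluntary eviction is blocked
  selector:
    matchLabels:
      app: harbordata-db
```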

A payments startup called NimbusPay operates a public API and they want a service level indicator that best reflects the experience of their customers. Which metric below most closely represents a user facing SLI?

  • ✓ C. client facing error rate

The correct answer is client facing error rate.

A user facing SLI should measure what customers actually see and experience. The client facing error rate reflects failed API requests and payment errors and it therefore maps directly to end user impact. That makes it easy to convert into an SLO and to use for incident prioritization and rollback decisions.

memory utilization percentage is an internal resource metric and it does not directly indicate whether customers are succeeding or failing when they use the API. High memory use may correlate with problems but it is not a user facing SLI.

Cloud Monitoring uptime check success rate is a synthetic probe and it can be useful as a health signal. It is not as precise a measure of real customer experience as actual client error rates because probes may not exercise the same code paths or workloads as real users.

database replication lag is an internal operational metric that can affect correctness and latency in some cases. It is not a direct measure of user observed errors and so it is not the best choice for a user facing SLI.

Pick metrics that reflect what a real customer experiences. Focus on end to end outcomes like successful transactions and error rates rather than internal resource or infrastructure counters.
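If request metrics are collected with a Prometheus style stack, a client facing error rate SLI is often expressed as the ratio of failed requests to total requests. The metric and label names below are illustrative, not standard.

```
# Fraction of requests returning 5xx over the last 5 minutes
# (illustrative metric and label names, Prometheus-style query).
sum(rate(http_requests_total{code=~"5.."}[5m]))
/
sum(rate(http_requests_total[5m]))
```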

In a production Kubernetes cluster for a retail site at example.com what does the Taints and Tolerations feature mainly control?

  • ✓ D. Control which pods may be scheduled onto specific nodes

The correct answer is Control which pods may be scheduled onto specific nodes.

Taints are applied to nodes to mark them as unsuitable for most pods and tolerations are placed on pods to allow them to be scheduled onto tainted nodes. This combination lets cluster operators isolate or reserve nodes for special workloads and ensures that only pods that explicitly tolerate a node will be scheduled there.

The taint and toleration mechanism is evaluated by the scheduler during placement and it is focused on node level placement rules rather than on scaling, resource quota enforcement, or image storage.

Horizontal Pod Autoscaler is incorrect because that feature adjusts the number of pod replicas based on metrics and it does not control which nodes pods run on.

Node resource quota management is incorrect because resource quotas limit aggregate resource consumption within namespaces and they do not selectively prevent pods from being scheduled onto specific nodes in the way taints do.

Container Registry is incorrect because a registry is used to store and retrieve container images and it has no role in scheduling decisions.

When a question mentions node level placement look for words like taint or toleration and connect them to scheduling restrictions rather than to scaling or image management.
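The mechanics look like the commands and fragment below. The node name, taint key, and value are illustrative.

```shell
# Taint a node so that only tolerating pods can schedule there.
kubectl taint nodes node-1 dedicated=payments:NoSchedule

# Remove the taint later with the trailing dash.
kubectl taint nodes node-1 dedicated=payments:NoSchedule-
```

A pod that should be allowed onto that node carries the matching toleration in its spec:

```yaml
# Pod spec fragment: the toleration that matches the taint above.
tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "payments"
    effect: "NoSchedule"
```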

Your team at Meridian Dataworks operates primarily on Kubernetes and needs to add serverless data transformation functions for event driven workloads. What is the best way to run serverless functions within the existing Kubernetes environment?

  • ✓ C. Use Kubeless to run functions on the current Kubernetes cluster

The correct option is Use Kubeless to run functions on the current Kubernetes cluster.

Kubeless is a Kubernetes native serverless framework that lets you deploy functions as Kubernetes resources and reuse existing cluster primitives. It integrates with event sources and the cluster scheduler so functions can be invoked on demand and scaled without moving workloads out of the cluster.

Note that Kubeless has since been archived and is no longer actively developed, so newer exams and current production guidance may prefer alternatives such as Knative and KEDA. The exam question as given still expects a Kubernetes native function framework and that makes Use Kubeless to run functions on the current Kubernetes cluster the best match.

Schedule the function using a Kubernetes CronJob is incorrect because CronJobs are for scheduled, time based tasks rather than event driven, on demand serverless functions and they do not provide automatic event based scaling for arbitrary event sources.

Migrate the workloads to Cloud Run and stop using Kubernetes is incorrect because that requires moving workloads to a managed service outside the existing cluster. The question asks to add serverless functions within the current Kubernetes environment so migrating off cluster does not meet the requirement.

Deploy the code as a Deployment and handle pod scaling manually is incorrect because manual scaling and long running deployments are not serverless. Deployments do not provide built in, event driven invocation and autoscaling of short lived functions without additional tooling.

When a question asks to add serverless functionality inside an existing Kubernetes cluster look for answers that mention Kubernetes native serverless frameworks or CRD based solutions and not options that require migration or manual scaling. Confirm the answer runs in the cluster and supports event driven scaling.
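For reference, deploying a function with the Kubeless CLI looked like the sketch below. The function and handler names are illustrative and the syntax follows the project's historical documentation, since Kubeless is now archived.

```shell
# Deploy a Python function as a Kubernetes resource via the kubeless CLI
# (illustrative names; kubeless is archived, syntax per its historical docs).
kubeless function deploy transform \
  --runtime python3.7 \
  --from-file transform.py \
  --handler transform.handler

# Invoke the function once to test it.
kubeless function call transform --data '{"event": "sample"}'
```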

A fintech team at NovaStream runs Kubernetes and they need a Persistent Volume that one node can mount read write while many other nodes can mount it read only. Which Persistent Volume access mode provides that behavior?

  • ✓ D. ReadWriteOnce

The correct option is ReadWriteOnce.

ReadWriteOnce means the Persistent Volume can be mounted read write by a single node and it prevents concurrent read write mounts from multiple nodes. That fits the requirement of one node having read write access while other nodes must not be able to write at the same time.

Whether additional nodes can also mount the same volume read only depends on the storage backend, but the access mode that enforces single-node write is ReadWriteOnce.

ReadWriteMany is incorrect because that mode allows many nodes to mount the volume read write simultaneously, so it does not restrict write access to a single node.

ReadOnlyMany is incorrect because that mode only allows multiple nodes to mount the volume read only and it cannot provide a read write mount for the single node.

ReadWriteOncePod is incorrect because that mode restricts the volume to a single Pod across the whole cluster rather than expressing the node level write restriction described in this question.

Remember the access mode meanings. ReadWriteOnce limits write to one node, ReadWriteMany allows multi node write, and ReadOnlyMany allows multi node read only. Focus on the access mode semantics rather than specific driver behavior during the exam.
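Access modes are requested on the claim. Here is a minimal, illustrative PVC asking for a single-node read write mount; the storage size is a placeholder and whether other nodes can additionally mount the volume read only depends on the storage backend.

```yaml
# PVC sketch requesting a single-node read write mount.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce   # read write by a single node
  resources:
    requests:
      storage: 5Gi
```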
