Kubernetes Cloud Native Associate Exam Dumps and Braindumps for KCNA
All exam questions come from my KCNA Udemy Course and certificationexams.pro
Kubernetes KCNA Certification Exam Topics
Despite the title of this article, this is not a KCNA exam braindump in the traditional sense.
I do not believe in cheating.
Traditionally, the term braindump referred to someone taking an exam, memorizing the questions, and sharing them online for others to use.
That practice is unethical and violates the certification agreement. It offers no integrity, no genuine learning, and no professional growth.
This is not a Kubernetes Certification exam dump.
All of these questions come from my KCNA study materials and from the certificationexams.pro website, which offers hundreds of free KCNA practice questions.
Real KCNA Sample Questions
Each question has been carefully written to align with the official Kubernetes and Cloud Native Associate exam objectives. They reflect the tone, logic, and technical depth of real Kubernetes-focused scenarios, but none are copied from the actual test.
Every question is designed to help you learn, reason, and study KCNA certification exam concepts such as container orchestration, cloud native architecture, observability, microservices, and the Kubernetes ecosystem.
KCNA Certified Associate Practice Questions
If you can answer these questions and understand why the incorrect options are wrong, you will not only pass the real KCNA exam but also gain the foundational knowledge needed to work confidently with Kubernetes and cloud native technologies.
So if you want to call this your KCNA exam dump, that is fine, but remember that every question here is built to teach the KCNA exam objectives, not to cheat.
Certification Exam Simulator Questions
What is the main purpose of a Kubernetes “Node” within a cluster environment?
-
❏ A. Schedule pods onto machines based on resource requests
-
❏ B. Host and run containerized workloads
-
❏ C. Offer long term persistent storage for pods
-
❏ D. Handle cluster wide logging and monitoring
You contribute to an Open Cloud Foundation project and you have been asked to evaluate a proposal that would introduce significant architectural changes to the codebase. According to community governance best practices what should you do first?
-
❏ A. Apply the changes in a private fork and merge them without public review
-
❏ B. Deploy the modifications behind a feature flag and run internal validation
-
❏ C. Initiate a public Request For Comments process on the project mailing list
-
❏ D. Refer the matter immediately to the project technical oversight committee for a ruling
When you are launching a cloud native service to a Kubernetes cluster for a small software firm which practice should be avoided?
-
❏ A. Automating builds and rollouts with Cloud Build
-
❏ B. Adopting a GitOps model for declarative application delivery
-
❏ C. Editing live Kubernetes objects in the cluster without version control
-
❏ D. Using Kubernetes native resources such as Deployments and StatefulSets for workload management
Which of the following is not an advantage of employing a pull based monitoring model like Prometheus for observing an application?
-
❏ A. Provides a single authoritative list of endpoints to monitor
-
❏ B. Reduces the chance of the metrics backend being overloaded by many incoming connections
-
❏ C. Makes setting up an event driven pipeline for logs and traces easier
-
❏ D. Makes it straightforward to discover when a target is not responding to scrapes
On a Kubernetes cluster that spans several nodes at HexaCloud you observe a subset of microservices showing increased response times after you have ruled out network congestion and application bugs. What other likely cause could explain these latency symptoms?
-
❏ A. Some pods are stuck in the Pending state
-
❏ B. Cloud Load Balancer misconfiguration
-
❏ C. A StatefulSet was deployed without an associated Service
-
❏ D. The kube-scheduler is unevenly assigning workloads across nodes
When running services that need stable network identities and durable storage which Kubernetes controller manages pods so they retain their identity and attached storage across restarts?
-
❏ A. Deployment
-
❏ B. PersistentVolumeClaim
-
❏ C. StatefulSet
-
❏ D. DaemonSet
What advantages result from adopting a plugin model for Kubernetes subsystems such as the Container Storage Interface, the Container Networking Interface, and the Service Mesh Interface?
-
❏ A. Creates vendor lock in so clusters can only use supported providers
-
❏ B. Makes it necessary to use managed cloud services such as GKE add ons for proper plugin operation
-
❏ C. Adds communication overhead between plugins and backends which can increase request latency
-
❏ D. Allows operators to pick different storage and networking implementations and avoid vendor lock in
On a Kubernetes worker node what responsibility does the kubelet have in relation to the node’s container runtime?
-
❏ A. To schedule Pods onto the node
-
❏ B. To route service traffic on the node using kube-proxy
-
❏ C. To act as the node agent that manages container lifecycles and communicates with the container runtime
-
❏ D. To authenticate node metrics to Cloud Monitoring for collection
At Atlas Systems you are setting up user authentication for a Kubernetes cluster and you are evaluating several approaches. Why is relying on static password or token files typically discouraged for authenticating users to the cluster?
-
❏ A. Static token or password files are stored in readable form and can be copied and shared, weakening access controls
-
❏ B. Static files by themselves cannot enforce multifactor authentication
-
❏ C. Using static credential files requires manual updates when user access changes which increases administrative overhead
-
❏ D. Static credential files do not integrate with external identity systems such as Cloud IAM or OpenID Connect
What is the usual cadence for publishing new Kubernetes minor releases and how frequently are patches issued?
-
❏ A. Once each week
-
❏ B. Two releases each month
-
❏ C. Roughly every 120 days
-
❏ D. Every six months
Which Kubernetes ingress implementation uses Envoy as its data plane to forward incoming HTTP and HTTPS requests to backend services?
-
❏ A. HAProxy Ingress Controller
-
❏ B. Kong Ingress Controller
-
❏ C. Traefik Ingress Controller
-
❏ D. Istio Ingress Gateway
Acme Cloud operates a stateful service that needs durable storage in a Kubernetes cluster, and the team wants provisioning to happen automatically. Which Kubernetes resource should they create to request dynamically provisioned storage?
-
❏ A. StorageClass
-
❏ B. ConfigMap
-
❏ C. Filestore
-
❏ D. PersistentVolumeClaim
Your team operates a Kubernetes cluster that hosts a microservice which exports custom health metrics to Prometheus. Which component should you use to generate alerts from those metrics?
-
❏ A. Grafana
-
❏ B. Cloud Monitoring
-
❏ C. Fluentd
-
❏ D. Alertmanager
NimbusTek enforces tight runtime restrictions. You must make sure that no container in any pod can obtain Linux capabilities that allow root level network operations. Which Kubernetes mechanisms should you configure? (Choose 2)
-
❏ A. NetworkPolicy
-
❏ B. Pod security policy
-
❏ C. GKE Workload Identity
-
❏ D. Role Based Access Control
-
❏ E. Security context
You are a release engineer working on a cloud native project under the governance of the Cloud Native Computing Foundation and the community is updating its governance model. Which of the following is not a typical component of CNCF style open source project governance?
-
❏ A. Contributor License Agreements
-
❏ B. Proprietary licensing model
-
❏ C. Technical Steering Committee
-
❏ D. Maintainer roles and responsibilities
-
❏ E. Code of Conduct
What are two ways to create pods on a worker node by providing a YAML or JSON manifest file? (Choose 2)
-
❏ A. Copy the manifest into the kube scheduler configuration folder
-
❏ B. Run kubectl create or kubectl apply with the manifest file against the cluster API
-
❏ C. Place the pod manifest into the kubelet static manifest directory at /etc/kubernetes/manifests
-
❏ D. Write the manifest directly into the etcd datastore so the API server will pick it up
Which three components run on worker nodes and provide Pod lifecycle management, network proxying, and container execution? (Choose 3)
-
❏ A. GKE control plane
-
❏ B. kube-proxy
-
❏ C. kubelet agent
-
❏ D. Container runtime environment
A payments startup named BrightWave Labs wants its engineers to deploy short lived server side code packaged in containers that are fully managed by a cloud provider. Which cloud native approach best matches this requirement?
-
❏ A. Kubernetes managed clusters
-
❏ B. Edge computing
-
❏ C. Serverless function platforms
-
❏ D. Virtual machines
In a Kubernetes cluster what is the main purpose of using the API watch verb?
-
❏ A. Create a new resource instance
-
❏ B. Cloud Pub/Sub
-
❏ C. Establish a continuous stream to receive immediate notifications of resource changes
-
❏ D. Modify the current state of a resource
Why might a platform team adopt Open Policy Agent within a Kubernetes cluster?
-
❏ A. Cloud Monitoring
-
❏ B. Directly modify Kubernetes resources and configurations
-
❏ C. Enforce custom admission and governance policies for Kubernetes resources
-
❏ D. Build container images for deployment
A platform engineer at NovaSoft applied a taint to one of the cluster worker nodes so only select workloads would run there, and they added a toleration to a storefront pod that matches that taint, yet the pod appears on a different node in the cluster. Why did the taint and toleration not cause the pod to be scheduled onto the tainted node?
-
❏ A. Taints and tolerations are only enforced after pods have already been scheduled
-
❏ B. You needed to place the taint on the pod and the toleration on the node
-
❏ C. Taints and tolerations do not instruct a pod to go to a specific host and they only indicate whether a node will accept pods that tolerate its taints
-
❏ D. Use node affinity or a nodeSelector to target a pod to a particular node
You are leading a migration of a legacy monolithic application to a cloud native platform for a startup named MapleTech and the team requires deployments that do not interrupt users. Which deployment approach should you choose?
-
❏ A. Use a rolling update of application pods
-
❏ B. Use Cloud Run traffic splitting to shift requests gradually
-
❏ C. Adopt a blue green deployment and switch traffic between environments
-
❏ D. Scale the monolith and replace components with microservices over time
How can you ensure that two pods are never placed on the same node?
-
❏ A. Assign the same hostPort value to both pods
-
❏ B. Create a PodAffinity rule with topologyKey set to “kubernetes.io/hostname” and operator set to “NotIn”
-
❏ C. Use a NodeSelector that targets different labeled nodes for each pod
-
❏ D. Define a PodAntiAffinity rule that sets topologyKey to “kubernetes.io/hostname” and operator to “In”
TechHarbor operates a Kubernetes hosted microservices platform and uses Prometheus for monitoring. You need to alert the on call team when a service error rate rises above a defined threshold and you must avoid sending notifications during scheduled maintenance windows. How should you configure Prometheus alerts and notification silencing to meet these needs?
-
❏ A. Create the alert in Grafana and use Grafana scheduling to mute notifications during maintenance windows
-
❏ B. Export a maintenance_mode metric and change the Prometheus alert expression to only fire when maintenance_mode indicates the service is not under maintenance
-
❏ C. Define a Prometheus alert rule for the error rate and configure Alertmanager with scheduled silences to automatically suppress notifications during maintenance windows
-
❏ D. Create the alert in Prometheus and then create silences manually in Alertmanager for each maintenance window
In a Kubernetes cluster what is the correct architectural relationship between pods and containers?
-
❏ A. Containers are independently scheduled and run beside pods on each node
-
❏ B. A pod contains one or more containers that share the same network namespace storage and IP and are scheduled together
-
❏ C. A controller specification runs within a container which in turn executes inside a pod
-
❏ D. Controllers such as ReplicaSet or Deployment define pods which then encapsulate one or more containers
You are building a cloud native serverless service for a small media startup that processes user uploads. Traffic patterns are unpredictable and sometimes surge suddenly while at other times the service receives almost no requests. You want a design that keeps costs low while still scaling to meet peak demand. Which design approach best achieves this?
-
❏ A. Deploy the service on a fixed number of Kubernetes pods
-
❏ B. Deploy on Cloud Run and set a high minimum number of instances
-
❏ C. Use HTTP triggered serverless functions that automatically scale with incoming requests
-
❏ D. Keep serverless functions running continuously to handle new requests
Your team at AuroraCloud runs both stateful and stateless cloud native workloads. You already use Horizontal Pod Autoscaling for the stateless workloads and you need a solution that provides predictable and controlled scaling for stateful workloads. Which option should you evaluate?
-
❏ A. Vertical Pod Autoscaler (VPA)
-
❏ B. Cluster Autoscaler
-
❏ C. Custom Resource Definitions
-
❏ D. Kubernetes Operators
How should you write a node affinity rule so that a Pod is scheduled only onto nodes labeled tier=web?
-
❏ A. Use preferredDuringSchedulingIgnoredDuringExecution under spec.affinity.nodeAffinity with a weight and a match for tier=web
-
❏ B. Taint the target nodes and add a matching toleration to the Pod to place it on tier=web nodes
-
❏ C. Set requiredDuringSchedulingIgnoredDuringExecution under spec.affinity.nodeAffinity in the Pod manifest
-
❏ D. Add a nodeSelector entry in the Pod spec with tier=web
Your Kubernetes cluster contains some nodes that run Docker and other nodes that use containerd, and a legacy service requires Docker to operate correctly. How can you guarantee that this service’s pods are scheduled onto the nodes that run Docker?
-
❏ A. Label the Docker nodes and specify a nodeSelector in the pod specification to target them
-
❏ B. Set the runtimeClassName field in the pod specification to a Docker runtime handler
-
❏ C. Apply a taint to the nodes running Docker and add a matching toleration to the application pods
-
❏ D. Configure nodeAffinity in the pod spec to require nodes labeled as running Docker
Which Kubernetes node component is responsible for configuring the node level networking rules so that pods can communicate with each other across the node?
-
❏ A. Cloud Load Balancing
-
❏ B. CNI plugins
-
❏ C. kubelet
-
❏ D. kube-proxy
An infrastructure engineer is creating a cloud native continuous delivery pipeline for a Kubernetes hosted web application. Which architectural principle is most important to ensure the application can be deployed reliably and scaled in a cloud native environment?
-
❏ A. Package logging and metrics collectors inside the application container for convenience
-
❏ B. Design the application to be stateless and shift session state to an external datastore
-
❏ C. Run each replica with its own PersistentVolumeClaim and manage it with a StatefulSet for local storage
-
❏ D. Store secrets in a managed secret service such as Secret Manager
A DevOps team at Bluefin Systems needs to roll out a stateless service across a Kubernetes cluster with several worker nodes. The service must scale automatically when CPU load increases and remain highly available. You also must ensure that newly created Pods are distributed across different nodes so that no single node becomes overloaded. Which Kubernetes objects should you configure to meet these requirements?
-
❏ A. StatefulSet with a HorizontalPodAutoscaler and Cluster Autoscaler
-
❏ B. Deployment with a HorizontalPodAutoscaler and NodeAffinity
-
❏ C. CronJob with a VerticalPodAutoscaler and NodeAffinity
-
❏ D. Deployment with a HorizontalPodAutoscaler and PodAntiAffinity
Which practice can weaken the GitOps principle that the ‘Source of Truth’ for a Kubernetes deployment should reside in a single repository?
-
❏ A. Maintaining several Git repositories that each hold parts of the same cluster configuration
-
❏ B. Enforcing Git commit signatures to improve repository security
-
❏ C. Applying configuration changes straight to the cluster from a developer workstation
-
❏ D. Using a dedicated branch for each environment to separate staging and production manifests
In a Kubernetes cluster at Nimbus Systems which built in component handles DNS name resolution?
-
❏ A. kube-proxy
-
❏ B. dnsmasq
-
❏ C. CoreDNS
-
❏ D. etcd
While evaluating reliability targets like SLOs and SLAs engineers commonly use SLIs to quantify service behavior. What does the acronym SLI stand for?
-
❏ A. Cloud Monitoring
-
❏ B. Service Level Indicator
-
❏ C. Software License Integration
-
❏ D. Server Latency Index
Answers to the Certification Exam Simulator Questions
What is the main purpose of a Kubernetes “Node” within a cluster environment?
-
✓ B. Host and run containerized workloads
The correct answer is Host and run containerized workloads.
A Kubernetes node is a worker machine that provides CPU, memory, networking and local storage to run pods. It hosts the container runtime and the kubelet which manages pod lifecycle, and it is therefore responsible for running containerized workloads on behalf of the cluster.
Schedule pods onto machines based on resource requests is incorrect because scheduling is performed by the control plane scheduler which decides placement based on resource requests and node capacity and not by the node itself.
Offer long term persistent storage for pods is incorrect because persistent storage is provided by PersistentVolumes, storage classes and external storage systems. Nodes may provide ephemeral local storage but they do not serve as the cluster wide persistent storage solution.
Handle cluster wide logging and monitoring is incorrect because logging and monitoring are implemented by cluster level add ons and services such as Fluentd and Prometheus. Nodes can run agents for those systems but they do not own cluster wide logging and monitoring responsibilities.
When you see a question about a Node remember it is a worker machine. The cluster Scheduler places pods and the node actually runs them. Think separately about storage and logging because those are provided by other components.
You contribute to an Open Cloud Foundation project and you have been asked to evaluate a proposal that would introduce significant architectural changes to the codebase. According to community governance best practices what should you do first?
-
✓ C. Initiate a public Request For Comments process on the project mailing list
Initiate a public Request For Comments process on the project mailing list is the correct choice.
Starting a public Request For Comments on the project mailing list ensures that significant architectural changes are visible to the whole community and that maintainers, contributors, and users can provide feedback. This public process creates an archival record of the discussion, documents trade offs, and helps build consensus before code is merged.
Apply the changes in a private fork and merge them without public review is wrong because it bypasses community review and transparency and it prevents stakeholders from raising concerns before the change is adopted.
Deploy the modifications behind a feature flag and run internal validation is insufficient as the first step because testing and rollout mechanisms do not replace an open proposal and community discussion for major architecture decisions. Feature flags help with validation but they do not establish consensus or a public record of the design.
Refer the matter immediately to the project technical oversight committee for a ruling is not the best first action because most open source governance models prefer open discussion and an attempt to reach consensus before formal escalation. The oversight committee is usually consulted when the community cannot reach agreement or when an authoritative decision is required.
For large design changes start with a public discussion such as an RFC on the mailing list to gather transparent feedback and build consensus before implementing code.
When you are launching a cloud native service to a Kubernetes cluster for a small software firm which practice should be avoided?
-
✓ C. Editing live Kubernetes objects in the cluster without version control
Editing live Kubernetes objects in the cluster without version control is the practice that should be avoided when launching a cloud native service to a Kubernetes cluster for a small software firm.
This approach leads to configuration drift and it removes an auditable history of changes which makes rollbacks and reproducing environments difficult. Small teams benefit from reproducible, reviewable changes and from automation that ties deployments to version controlled artifacts, and editing live objects in the cluster bypasses those safety nets.
Automating builds and rollouts with Cloud Build is not something to avoid because automating the pipeline reduces human error and ensures consistent artifacts and controlled rollouts.
Adopting a GitOps model for declarative application delivery is not something to avoid because GitOps keeps the desired state in version control and enables reconciliation, auditing, and reproducible deployments which are valuable for small firms.
Using Kubernetes native resources such as Deployments and StatefulSets for workload management is not something to avoid because those controllers provide built in lifecycle, update, and scaling semantics that are the correct primitives for managing stateless and stateful workloads in Kubernetes.
When a question contrasts editing live objects versus version controlled workflows pick the answer that emphasizes declarative configuration and version control. Those keywords usually indicate the safer, production ready practice.
Which of the following is not an advantage of employing a pull based monitoring model like Prometheus for observing an application?
-
✓ C. Makes setting up an event driven pipeline for logs and traces easier
The correct answer is Makes setting up an event driven pipeline for logs and traces easier.
This statement is not an advantage of a pull based monitoring model because Prometheus is designed to periodically scrape metrics from targets and it does not provide an event driven delivery mechanism for logs or traces. Prometheus excels at time series metrics gathered on a schedule and it is not intended to replace push based collectors or agents that stream events for logs and traces.
Pull architectures are good for maintaining a central list of targets and for allowing the monitoring server to control scrape timing and concurrency. Those properties help with discovery and with managing connection load but they do not make building event driven pipelines for logs and traces any easier.
Provides a single authoritative list of endpoints to monitor is incorrect because Prometheus service discovery and scrape configuration give you a central, authoritative view of scrape targets and that is a benefit of the pull model.
Reduces the chance of the metrics backend being overloaded by many incoming connections is incorrect because a pull model lets the server schedule and control scrapes so the backend avoids being overwhelmed by many clients pushing data at once. That control over scrape timing is an advantage.
Makes it straightforward to discover when a target is not responding to scrapes is incorrect because failed scrapes are observable in Prometheus and you can alert on absent metrics or on scrape failures. That makes detecting unresponsive targets easier with a pull model.
When a question contrasts pull and push think about who initiates collection and what kind of data is involved. Metrics are usually pulled on a schedule while logs and traces are commonly pushed in event driven pipelines.
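To make the pull model concrete, here is a minimal sketch of a Prometheus scrape configuration in which the server owns the target list and decides when to scrape; the job name and endpoints are hypothetical.

```yaml
# prometheus.yml fragment. The server pulls metrics on its own schedule.
scrape_configs:
  - job_name: orders-service          # hypothetical job name
    scrape_interval: 30s              # Prometheus controls scrape timing and concurrency
    static_configs:
      - targets:
          - orders-1.example.internal:9100   # hypothetical metrics endpoints
          - orders-2.example.internal:9100
```

A failed scrape also shows up as the built in up metric dropping to 0 for that target, which is why detecting unresponsive targets is straightforward in a pull model.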
On a Kubernetes cluster that spans several nodes at HexaCloud you observe a subset of microservices showing increased response times after you have ruled out network congestion and application bugs. What other likely cause could explain these latency symptoms?
-
✓ D. The kube-scheduler is unevenly assigning workloads across nodes
The kube-scheduler is unevenly assigning workloads across nodes is the correct option.
An uneven placement by the kube-scheduler can concentrate many pods on a small set of nodes and cause CPU and memory contention as well as CPU throttling, longer disk waits, and cache eviction. When nodes are overloaded the pods running on them will exhibit increased response times even though networking and application code are fine. The scheduler decides placement using resource requests and limits, node labels, taints, tolerations, and affinity rules, and misconfigured requests or skewed affinity policies can lead to an unbalanced distribution that produces the latency you observe.
Some pods are stuck in the Pending state is unlikely because pending pods are not scheduled and the typical symptom is missing or reduced capacity rather than running pods becoming slower. Pending state usually produces unavailable replicas or failed requests instead of a subset of running services showing degraded latency.
Cloud Load Balancer misconfiguration would generally affect routing or external ingress and cause connection failures or broad latency across traffic through that balancer. Since network congestion was ruled out and the problem appears tied to node-local performance it is a less likely explanation.
A StatefulSet was deployed without an associated Service impacts stable network identity and service discovery and would more likely make those pods unreachable or harder to address rather than cause slower responses for pods that are running on overloaded nodes.
When you see increased latency check node CPU and memory usage and the pod distribution across nodes with kubectl and the metrics server before assuming application changes.
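Accurate resource requests are what give the scheduler the information it needs to spread load sensibly, so they are worth checking when distribution looks skewed. A minimal sketch of a container spec with requests and limits, using hypothetical names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: checkout                          # hypothetical workload
spec:
  containers:
    - name: app
      image: registry.example.com/checkout:1.4   # hypothetical image
      resources:
        requests:                         # used by the scheduler for placement decisions
          cpu: "250m"
          memory: "256Mi"
        limits:                           # enforced by the kubelet at runtime
          cpu: "500m"
          memory: "512Mi"
```

You can then compare requests against real usage and placement with kubectl top nodes and kubectl get pods -o wide.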
When running services that need stable network identities and durable storage which Kubernetes controller manages pods so they retain their identity and attached storage across restarts?
-
✓ C. StatefulSet
The correct answer is StatefulSet.
A StatefulSet is the Kubernetes controller designed for workloads that need stable network identities and durable storage. It gives each pod a stable, unique network identity and preserves the association between a pod and its PersistentVolumeClaim so the same storage is reattached to the same pod across restarts and rescheduling. A StatefulSet also provides ordered and graceful scaling and updates which helps stateful services maintain consistency.
Deployment manages stateless ReplicaSets for scaling and rolling updates and it does not guarantee stable per pod network identities or persistent per pod volumes. That makes Deployment unsuitable for workloads that require durable per pod storage and stable identities.
PersistentVolumeClaim is a resource that requests storage and it is not a controller that manages pod lifecycle or identity. A PVC provides storage but it does not by itself ensure pods retain identity across restarts.
DaemonSet ensures a copy of a pod runs on each node or a selected set of nodes and it is used for node level daemons. A DaemonSet does not provide stable per pod network identities or per pod durable storage semantics required by stateful services.
When a question mentions stable network identity and per pod durable storage think of StatefulSet rather than Deployments or DaemonSets.
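A minimal StatefulSet sketch showing the pieces that provide stable identity and per pod storage; the names, image, and sizes are illustrative only.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: orders-db                    # pods are named orders-db-0, orders-db-1, ...
spec:
  serviceName: orders-db             # headless Service that gives each pod a stable DNS name
  replicas: 2
  selector:
    matchLabels:
      app: orders-db
  template:
    metadata:
      labels:
        app: orders-db
    spec:
      containers:
        - name: db
          image: registry.example.com/orders-db:2.1   # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /var/lib/data
  volumeClaimTemplates:              # each replica gets its own PVC that follows it across restarts
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```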
What advantages result from adopting a plugin model for Kubernetes subsystems such as the Container Storage Interface, the Container Networking Interface, and the Service Mesh Interface?
-
✓ D. Allows operators to pick different storage and networking implementations and avoid vendor lock in
The correct option is Allows operators to pick different storage and networking implementations and avoid vendor lock in.
A plugin model defines stable, well documented interfaces such as CSI, CNI, and SMI so that multiple vendors and projects can implement compatible drivers and controllers. This separation of interface and implementation lets operators choose the best implementation for their needs and replace one provider with another without changing workloads or core Kubernetes behavior.
Creates vendor lock in so clusters can only use supported providers is incorrect because the whole point of a plugin API is to prevent vendor lock in by enabling many implementations to coexist and be swapped.
Makes it necessary to use managed cloud services such as GKE add ons for proper plugin operation is incorrect because plugins can run in any conformant Kubernetes cluster and are not inherently tied to managed cloud add ons. Managed services may provide convenience but they are not required.
Adds communication overhead between plugins and backends which can increase request latency is incorrect as a general statement because well designed plugins use efficient RPCs and the interface itself does not impose prohibitive latency. Specific implementations may have overhead depending on architecture but that is not a reason to reject the plugin model.
When you see questions about plugin models look for answers that mention interchangeability or avoidance of vendor lock in. The exam usually rewards understanding of abstraction and standard interfaces.
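In practice the interchangeability often comes down to a single field. A StorageClass, for example, simply names the CSI driver that should provision volumes, so switching storage vendors means pointing at a different provisioner; the driver name and parameters below are hypothetical.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: csi.vendor-a.example.com   # hypothetical CSI driver, swap this to change vendors
parameters:
  type: ssd                             # driver specific settings
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```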
On a Kubernetes worker node what responsibility does the kubelet have in relation to the node’s container runtime?
-
✓ C. To act as the node agent that manages container lifecycles and communicates with the container runtime
The correct option is To act as the node agent that manages container lifecycles and communicates with the container runtime.
The kubelet is the primary node agent that runs on every worker node and it ensures that containers described in PodSpecs are running and healthy. It communicates with the container runtime through the Container Runtime Interface to start and stop containers, it monitors container status and resource usage, and it reports the node and Pod status back to the control plane. The kubelet also handles image pulling and respects liveness and readiness probes to maintain the desired state of workloads.
To schedule Pods onto the node is incorrect because scheduling is performed by the kube-scheduler and not by the kubelet. The scheduler decides which node should receive a Pod and the kubelet on that node then implements the PodSpec.
To route service traffic on the node using kube-proxy is incorrect because networking and service proxying are the responsibility of kube-proxy or other CNI components. The kubelet does not perform service routing or proxying.
To authenticate node metrics to Cloud Monitoring for collection is incorrect because metrics collection and exporting to cloud monitoring systems are handled by metrics agents or the metrics-server and by separate cloud integrations. The kubelet may expose metrics, but it is not responsible for authenticating and shipping them to external monitoring services.
When you see a component name think about where it runs. If the task is about enforcing PodSpec or managing containers on the node then kubelet is likely responsible. If the task is about selecting nodes or routing network traffic then look to the scheduler or kube-proxy.
At Atlas Systems you are setting up user authentication for a Kubernetes cluster and you are evaluating several approaches. Why is relying on static password or token files typically discouraged for authenticating users to the cluster?
-
✓ A. Static token or password files are stored in readable form and can be copied and shared, weakening access controls
The correct answer is Static token or password files are stored in readable form and can be copied and shared, weakening access controls.
Storing credentials as static files means the secret material exists in a readable form on disk or in simple files and it can be copied or redistributed without trace. This makes it difficult to enforce accountability and it is hard to revoke access quickly because any copied file continues to work until it is rotated or removed. For these reasons relying on static credential files weakens access controls and increases the risk of unauthorized cluster access.
Static files by themselves cannot enforce multifactor authentication is incorrect because although static files do not implement MFA on their own the question is asking for the primary reason static files are discouraged and that primary reason is their readability and copyability which directly weakens access controls.
Using static credential files requires manual updates when user access changes which increases administrative overhead is incorrect because the operational overhead of manual updates is a valid drawback but it is not the central security concern emphasized by the correct answer which focuses on easily copied readable credentials.
Static credential files do not integrate with external identity systems such as Cloud IAM or OpenID Connect is incorrect because lack of integration is an interoperability limitation but it does not by itself capture the core security problem that static files can be copied and shared and therefore weaken access controls.
When choosing between authentication answers pick the option that highlights concrete security risks such as credentials that can be copied or cannot be revoked quickly. Treat operational or integration drawbacks as secondary unless the question explicitly asks about them.
What is the usual cadence for publishing new Kubernetes minor releases and how frequently are patches issued?
-
✓ C. Roughly every 120 days
The correct answer is Roughly every 120 days.
A new minor Kubernetes release is planned roughly three times a year, which works out to approximately every four months, so a minor version ships Roughly every 120 days. Patch releases for supported branches are issued much more frequently and are published as needed to address critical fixes and security issues, typically about once a month for each actively maintained branch.
Once each week is incorrect because neither cadence is weekly. Minor releases follow a multi month cycle and even patch releases are normally cut about once a month rather than every week.
Two releases each month is incorrect because that implies a biweekly minor release schedule which is much faster than the established roughly four month cycle.
Every six months is incorrect because the project aims for about three minor releases per year rather than a semiannual cadence.
When a question asks about release cadence look for words like roughly 120 days or about every four months and remember that patches are released as needed rather than on the same cadence as minors.
Which Kubernetes ingress implementation uses Envoy as its data plane to forward incoming HTTP and HTTPS requests to backend services?
-
✓ D. Istio Ingress Gateway
The correct answer is Istio Ingress Gateway.
Istio Ingress Gateway uses Envoy as its data plane. The ingress gateway is deployed as an Envoy proxy in Kubernetes and it receives HTTP and HTTPS traffic then forwards that traffic to backend services according to Istio routing rules. The Istio control plane programs Envoy proxies with routing, TLS, and policy via the xDS APIs so Envoy is the actual forwarding data plane.
HAProxy Ingress Controller is not correct because it uses HAProxy as the proxy implementation rather than Envoy. It therefore does not rely on Envoy as the data plane for forwarding.
Kong Ingress Controller is not correct because Kong uses the Kong Gateway or its own proxy components as the data plane by default. Some deployments can integrate Kong with Envoy but that is not the typical Kubernetes ingress controller setup.
Traefik Ingress Controller is not correct because Traefik runs the Traefik proxy as its own data plane. It does not use Envoy for forwarding in the normal configuration.
When a question asks which ingress uses Envoy as the data plane think of Istio and its Ingress Gateway because Istio deploys Envoy proxies to handle ingress traffic.
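As a rough sketch, traffic enters through the Envoy based ingress gateway pods and an Istio Gateway resource tells those proxies which ports and hosts to accept; the hostname is a placeholder and a VirtualService would then route the requests to backend services.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web-gateway
spec:
  selector:
    istio: ingressgateway          # selects the Envoy based ingress gateway pods
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "shop.example.com"       # placeholder hostname
```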
Acme Cloud operates a stateful service that needs durable storage in a Kubernetes cluster, and the team wants provisioning to happen automatically. Which Kubernetes resource should they create to request dynamically provisioned storage?
-
✓ D. PersistentVolumeClaim
The correct option is PersistentVolumeClaim.
PersistentVolumeClaim is a namespaced object that requests storage for a pod and represents the consumer side of the storage allocation. When a claim references a StorageClass and dynamic provisioning is available the control plane will create a matching PersistentVolume and bind it to the claim so pods can mount durable storage for stateful workloads.
StorageClass is incorrect because it defines how volumes are provisioned and the parameters for the provisioner but it is not the resource you create to actually request storage from a pod. Claims reference a StorageClass but the claim is the request.
ConfigMap is incorrect because it stores configuration data as key value pairs and is not used for persistent block or file storage for stateful application data.
Filestore is incorrect because it is not a standard Kubernetes object to request dynamic storage. It usually refers to a cloud provider managed file service and is not the in cluster resource you create to request a PersistentVolume.
When you see the words durable and dynamically provisioned map them to creating a PersistentVolumeClaim which will trigger provisioning via a StorageClass.
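A minimal PersistentVolumeClaim sketch that triggers dynamic provisioning through a StorageClass; the claim name, class, and size are examples only.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-data                # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard       # an existing StorageClass that supports dynamic provisioning
  resources:
    requests:
      storage: 20Gi
```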
Your team operates a Kubernetes cluster that hosts a microservice which exports custom health metrics to Prometheus. Which component should you use to generate alerts from those metrics?
-
✓ D. Alertmanager
The correct option is Alertmanager.
Prometheus evaluates custom health metrics against alerting rules and then sends firing alerts to Alertmanager. Alertmanager is the component that groups and silences alerts and routes them to notification receivers such as email, PagerDuty, or chat systems.
You author alerting rules in Prometheus or via the Prometheus Operator so that metrics crossing thresholds become alerts, and those alerts are managed and dispatched by Alertmanager.
Grafana is primarily a visualization and dashboarding tool. It can display metrics and it also offers alerting features, but it is not the standard component used to receive and centrally manage Prometheus alert notifications in the typical Prometheus architecture.
Cloud Monitoring is a cloud provider monitoring service and it may ingest metrics or provide its own alerting, but it is not the Prometheus alert routing and notification component that handles Prometheus alert delivery and silencing.
Fluentd is a log collection and forwarding agent and it is not used to evaluate metrics or manage alert notifications, so it is not the correct component for generating or routing Prometheus alerts.
Remember that Prometheus evaluates alerting rules and that Alertmanager handles grouping, silencing and sending notifications. Look for wording about routing and notifications to identify Alertmanager on the exam.
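A rough sketch of a Prometheus alerting rule; Prometheus evaluates the expression and hands the firing alert to Alertmanager for grouping and routing. The metric name, job label, and threshold are hypothetical.

```yaml
groups:
  - name: service-health
    rules:
      - alert: ServiceUnhealthy
        expr: app_health_status{job="payments"} == 0   # hypothetical custom health metric
        for: 5m                                         # must stay unhealthy for five minutes
        labels:
          severity: page
        annotations:
          summary: "Payments service has reported unhealthy for 5 minutes"
```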
NimbusTek enforces tight runtime restrictions. You must make sure that no container in any pod can obtain Linux capabilities that allow root level network operations. Which Kubernetes mechanisms should you configure? (Choose 2)
-
✓ B. Pod security policy
-
✓ E. Security context
The correct answer is Pod security policy and Security context.
Security context is configured on pods and containers and it controls Linux capabilities and user settings. You can explicitly drop capabilities such as NET_ADMIN and NET_RAW or prevent containers from running as root to stop root level network operations at the runtime level.
Pod security policy is a cluster admission control that enforces pod level security constraints and it can require that containers do not request or add forbidden capabilities. This makes it useful to prevent any pod from obtaining network related capabilities across the cluster. Note that Pod security policy has been deprecated in upstream Kubernetes and removed in recent releases so newer clusters and exams often prefer Pod Security Admission or policy controllers like OPA Gatekeeper for the same purpose.
NetworkPolicy controls network traffic flow between pods and namespaces and it does not control Linux capabilities or whether a container can gain NET_ADMIN or NET_RAW. It cannot prevent a container from adding kernel capabilities.
GKE Workload Identity maps Kubernetes service accounts to Google service accounts to manage cloud permissions and it does not manage container runtime capabilities or kernel level network privileges.
Role Based Access Control governs API access and who can create or modify Kubernetes objects and it does not enforce runtime constraints inside containers such as dropping capabilities.
When a question asks about preventing containers from obtaining kernel capabilities focus on container securityContext for immediate enforcement and on cluster admission controllers or policy engines for cluster wide enforcement. Remember that Pod security policy is deprecated and newer exams may reference Pod Security Admission or policy controllers instead.
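A minimal sketch of a container securityContext that drops the network related capabilities discussed above; the pod and image names are placeholders, and cluster wide enforcement would come from Pod Security Admission or a policy controller.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: storefront                          # hypothetical pod
spec:
  containers:
    - name: app
      image: registry.example.com/storefront:3.2   # hypothetical image
      securityContext:
        runAsNonRoot: true                  # refuse to start if the image runs as root
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - NET_ADMIN                     # no root level network administration
            - NET_RAW                       # no raw socket access
```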
You are a release engineer working on a cloud native project under the governance of the Cloud Native Computing Foundation and the community is updating its governance model. Which of the following is not a typical component of CNCF style open source project governance?
-
✓ B. Proprietary licensing model
Proprietary licensing model is the correct answer because CNCF style projects are by definition open source and the governance model expects open license choices rather than a proprietary licensing model.
Contributor License Agreements are commonly used in CNCF and other open source projects to clarify contributor rights and to make license compliance and maintenance easier for the project.
Technical Steering Committee or similar technical governing bodies are a typical governance component and they provide technical direction, approve proposals, and resolve disputes.
Maintainer roles and responsibilities are part of normal project governance so that code ownership, reviews, and release duties are clearly assigned and managed.
Code of Conduct is an expected part of community governance because it sets behavioral norms and helps keep the project welcoming and productive for contributors.
When answering governance questions look for items that support an open and collaborative community and rule out options that imply closed or proprietary control.
What are two ways to create pods on a worker node by providing a YAML or JSON manifest file? (Choose 2)
-
✓ B. Run kubectl create or kubectl apply with the manifest file against the cluster API
-
✓ C. Place the pod manifest into the kubelet static manifest directory at /etc/kubernetes/manifests
Run kubectl create or kubectl apply with the manifest file against the cluster API and Place the pod manifest into the kubelet static manifest directory at /etc/kubernetes/manifests are correct.
Run kubectl create or kubectl apply with the manifest file against the cluster API is correct because kubectl sends the manifest to the API server which validates and persists the object to the cluster datastore and then controllers and the scheduler ensure the pod is scheduled and the kubelet on a worker node starts the containers.
Place the pod manifest into the kubelet static manifest directory at /etc/kubernetes/manifests is correct because the kubelet watches that directory and will create and manage a static pod from the file on that node without first submitting it through the normal API create path.
Copy the manifest into the kube scheduler configuration folder is wrong because the kube-scheduler only makes scheduling decisions and does not create pods from manifest files placed in its config folder.
Write the manifest directly into the etcd datastore so the API server will pick it up is wrong because etcd is the cluster backing store and you must use the API server to create objects. Directly modifying etcd is unsupported and can break validation and cluster integrity.
When in doubt choose the option that uses the API server or the kubelet static manifest directory. Never modify etcd directly and prefer kubectl apply for idempotent manifest changes.
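A minimal pod manifest sketch with illustrative names; the same file can be sent through the API with kubectl apply -f or dropped into /etc/kubernetes/manifests on a node, where the kubelet will run it as a static pod.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-web                  # hypothetical pod name
  labels:
    app: hello-web
spec:
  containers:
    - name: web
      image: nginx:1.27            # example image
      ports:
        - containerPort: 80
```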
Which three components run on worker nodes and provide Pod lifecycle management, network proxying, and container execution? (Choose 3)
-
✓ B. kube-proxy
-
✓ C. kubelet agent
-
✓ D. Container runtime environment
The correct options are kube-proxy, kubelet agent, and Container runtime environment.
kubelet agent runs on every worker node and it is responsible for Pod lifecycle management. The kubelet watches the API server for PodSpecs for that node and ensures containers are started, healthy, and restarted when needed.
kube-proxy runs on each worker node and it provides network proxying for Services. It programs the node networking rules and forwards traffic to the appropriate Pod endpoints so cluster Services are reachable from inside and sometimes outside the cluster.
Container runtime environment runs on worker nodes and it is the component that actually executes container images. Runtimes such as containerd or CRI-O implement the Container Runtime Interface and allow the kubelet to create and manage container processes on the node.
GKE control plane is incorrect because it refers to the managed control plane components that run outside of the worker nodes. The control plane hosts the API server, scheduler, and controller manager and it does not perform node local duties like container execution or node level networking.
When deciding which components run on worker nodes ask if the component is node local or part of the control plane. Remember that the kubelet is the node agent, kube-proxy handles node networking, and the container runtime actually runs containers.
A payments startup named BrightWave Labs wants its engineers to deploy short lived server side code packaged in containers that are fully managed by a cloud provider. Which cloud native approach best matches this requirement?
-
✓ C. Serverless function platforms
The correct option is Serverless function platforms.
Serverless function platforms let engineers deploy short lived server side code without managing servers or clusters. They provide automatic scaling and pay per execution and many providers accept container images so packaging code in containers is compatible with the requirement. The provider manages runtime, scaling, and infrastructure so the team does not need to operate virtual machines or orchestrate a cluster.
Kubernetes managed clusters run containers at scale but even managed control planes still require deployment, configuration, and operational oversight. They are better suited to long lived services and complex orchestration than to tiny ephemeral functions that must be fully managed by the cloud provider.
Edge computing focuses on running workloads close to users to reduce latency and distribute processing, and it does not specifically match the requirement for a fully managed platform for short lived server side code. Some edge offerings can be serverless but that is not the primary fit for the question.
Virtual machines provide full operating system instances that must be provisioned and managed and they do not offer the same automatic, pay per invocation model for ephemeral workloads. VMs are generally used for longer lived services rather than function style, short lived code.
When a question stresses short lived execution and fully managed infrastructure think about function or serverless platforms and look for wording about automatic scaling or pay per execution in the answers.
In a Kubernetes cluster what is the main purpose of using the API watch verb?
-
✓ C. Establish a continuous stream to receive immediate notifications of resource changes
The correct option is Establish a continuous stream to receive immediate notifications of resource changes.
Establishing a continuous stream to receive immediate notifications of resource changes is exactly what the watch verb provides. It lets clients open a long lived connection to the API server so they receive events when resources are added, modified, or deleted. The API server sends discrete event types such as ADDED, MODIFIED, and DELETED, and clients can use resourceVersion to resume or continue a watch stream reliably.
Create a new resource instance is incorrect because creating resources uses POST or apply operations and not the watch verb which only observes changes.
Cloud Pub/Sub is incorrect because that is a separate messaging service from Google and it is not a Kubernetes API verb or feature for observing resource changes.
Modify the current state of a resource is incorrect because changing a resource uses update or patch operations and not watch which only reports events about changes.
When an option mentions streaming or immediate notifications think watch rather than create or modify verbs. Try kubectl get -w to see watch behavior in a live cluster.
Why might a platform team adopt Open Policy Agent within a Kubernetes cluster?
-
✓ C. Enforce custom admission and governance policies for Kubernetes resources
Enforce custom admission and governance policies for Kubernetes resources is correct. Open Policy Agent is used inside clusters to evaluate admission requests and to enforce governance rules before resources are admitted into the API server.
OPA provides policy as code with the Rego language and it integrates with Kubernetes through admission webhooks or through projects like Gatekeeper. This lets platform teams codify rules about allowed images, required labels, resource limits, and other constraints so that noncompliant resources are blocked at admission time.
Cloud Monitoring is incorrect because monitoring systems collect metrics and logs and they do not act as a policy engine that evaluates and enforces admission or governance rules.
Directly modify Kubernetes resources and configurations is incorrect because OPA is primarily a decision point that evaluates requests and returns allow or deny outcomes. It is not a general configuration management tool that performs arbitrary changes across cluster resources.
Build container images for deployment is incorrect because creating container images is the responsibility of build and CI tools and not a policy engine. Image building is handled by tools like Docker, Kaniko, or CI pipelines and not by OPA.
When you see words like admission or governance think policy as code and admission controllers rather than build or monitoring tools.
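As a rough illustration, with Gatekeeper installed and the commonly published K8sRequiredLabels ConstraintTemplate loaded from its policy library, a constraint like the following would reject namespaces that lack a team label at admission time; treat the names as examples.

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-team
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["team"]               # every Namespace must carry a team label
```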
A platform engineer at NovaSoft applied a taint to one of the cluster worker nodes so only select workloads would run there, and they added a toleration to a storefront pod that matches that taint, yet the pod appears on a different node in the cluster. Why did the taint and toleration not cause the pod to be scheduled onto the tainted node?
-
✓ C. Taints and tolerations do not instruct a pod to go to a specific host and they only indicate whether a node will accept pods that tolerate its taints
The correct answer is Taints and tolerations do not instruct a pod to go to a specific host and they only indicate whether a node will accept pods that tolerate its taints.
This is correct because taints are a node side mechanism that repels pods which lack matching tolerations. A toleration on a pod simply allows the scheduler to consider that pod for a tainted node. It does not act as an instruction to place the pod on that node and the scheduler still evaluates other constraints such as resource availability, selectors, and affinity rules.
Taints and tolerations are only enforced after pods have already been scheduled is wrong because taints and tolerations are evaluated during scheduling to prevent or allow placement on nodes and they can also cause eviction when a taint with the appropriate effect is added to a node.
You needed to place the taint on the pod and the toleration on the node is wrong because the model is the opposite. Taints are applied to nodes and tolerations are specified on pods so that the pod can tolerate a node’s taint.
Use node affinity or a nodeSelector to target a pod to a particular node is not the right answer to why the toleration did not force placement. While node affinity or a nodeSelector are the correct tools to direct a pod to a particular node, the question asked why the taint and toleration did not by themselves cause the pod to land on the tainted node.
When you see questions about taints and tolerations remember that tolerations make a pod eligible for a tainted node but they do not force placement. Use nodeSelector or node affinity when you need to pin a pod to specific nodes.
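A sketch of the two halves of the mechanism, with the taint key, node name, and image purely illustrative; the toleration only makes the pod eligible for the tainted node, so it is paired here with a nodeSelector to actually pin the pod there.

```yaml
# Node side, applied with: kubectl taint nodes worker-3 dedicated=storage:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: storefront
spec:
  tolerations:
    - key: dedicated
      operator: Equal
      value: storage
      effect: NoSchedule           # allows scheduling onto tainted nodes, does not force it
  nodeSelector:
    dedicated: storage             # optional label based pinning onto the intended nodes
  containers:
    - name: app
      image: registry.example.com/storefront:3.2   # hypothetical image
```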
You are leading a migration of a legacy monolithic application to a cloud native platform for a startup named MapleTech and the team requires deployments that do not interrupt users. Which deployment approach should you choose?
-
✓ C. Adopt a blue green deployment and switch traffic between environments
The correct choice is Adopt a blue green deployment and switch traffic between environments.
Adopt a blue green deployment and switch traffic between environments works because you deploy the new version into a separate but identical environment and then shift user traffic from the old environment to the new one through the load balancer or service router. This approach minimizes or eliminates user-visible interruption because the new environment is fully warmed and health checked before any traffic is routed to it. It also provides an immediate and safe rollback path by switching traffic back to the previous environment if a problem is detected.
Adopt a blue green deployment and switch traffic between environments is especially useful for migrating a monolith because you can deploy a complete copy of the application, validate it under real traffic patterns, and then cut over without in place upgrades that risk disrupting sessions or compatibility.
Use a rolling update of application pods is not the best answer because rolling updates replace pods in place and can still cause interruptions for stateful or sessionful monoliths and they do not provide the same instant cutover and rollback separation that blue green provides.
Use Cloud Run traffic splitting to shift requests gradually is platform specific and is about shifting traffic between revisions in Cloud Run. It can be useful for controlled rollouts, but it is not a general blue green cutover method and may not suit a legacy monolith without refactoring or moving to that specific platform.
Scale the monolith and replace components with microservices over time is a migration strategy rather than a deployment technique for zero downtime. Scaling and gradual refactoring help long term, but they do not by themselves guarantee interruption free deployments during a release.
When a question asks about no interruption look for answers that mention switching whole environments or instant cutover and rollback. Blue green and canary strategies are deployment patterns while approaches like refactoring are migration plans.
How can you ensure that two pods are never placed on the same node?
-
✓ D. Define a PodAntiAffinity rule that sets topologyKey to “kubernetes.io/hostname” and operator to “In”
The correct option is Define a PodAntiAffinity rule that sets topologyKey to “kubernetes.io/hostname” and operator to “In”. This explicitly tells the scheduler to avoid placing a pod on any node that already runs a matching pod on the same hostname.
A PodAntiAffinity term with operator “In” and topologyKey “kubernetes.io/hostname” expresses an exclusion rule based on hostnames. When used as a requiredDuringSchedulingIgnoredDuringExecution rule the scheduler will refuse to schedule a pod onto a node that already hosts a pod that matches the term and labels. That enforces that the two pods are never colocated on the same node.
Assign the same hostPort value to both pods is incorrect because hostPort is a network binding and not a declarative scheduling primitive. It can lead to runtime port conflicts or failed container start up but it is not the supported way to express placement intent to the scheduler.
Create a PodAffinity rule with topologyKey set to “kubernetes.io/hostname” and operator set to “NotIn” is incorrect because PodAffinity is intended for colocating pods and not for excluding placement. Using an affinity term with NotIn does not convey the same exclusion semantics as a PodAntiAffinity rule with operator In.
Use a NodeSelector that targets different labeled nodes for each pod is incorrect because NodeSelector enforces static label based placement and requires manual label management. It can be brittle and does not express the dynamic intent to avoid colocating pods the way PodAntiAffinity does.
When you need hard separation use requiredDuringSchedulingIgnoredDuringExecution in a PodAntiAffinity with topologyKey set to kubernetes.io/hostname so the scheduler will block placement on nodes that already host matching pods.
TechHarbor operates a Kubernetes hosted microservices platform and uses Prometheus for monitoring. You need to alert the on call team when a service error rate rises above a defined threshold and you must avoid sending notifications during scheduled maintenance windows. How should you configure Prometheus alerts and notification silencing to meet these needs?
-
✓ C. Define a Prometheus alert rule for the error rate and configure Alertmanager with scheduled silences to automatically suppress notifications during maintenance windows
The correct option is Define a Prometheus alert rule for the error rate and configure Alertmanager with scheduled silences to automatically suppress notifications during maintenance windows.
Define a Prometheus alert rule for the error rate and configure Alertmanager with scheduled silences to automatically suppress notifications during maintenance windows is correct because Prometheus is responsible for evaluating the error rate and Alertmanager is designed to handle notification routing and suppression. You keep detection and firing in Prometheus and use Alertmanager silences to stop notifications during planned maintenance windows.
Alertmanager silences support start and end times and they can be created or updated programmatically so your scheduled maintenance windows can be enforced automatically without losing the alert history. This preserves the alert state while preventing on call noise during maintenance and it still lets you review alerts after maintenance ends.
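As an illustration, a hedged sketch of the two pieces follows. The metric names, job label, and Alertmanager address are assumptions, and the silence is created through the Alertmanager v2 API, which is the kind of call a maintenance scheduler could automate:

```yaml
# Prometheus alerting rule for a 5xx error-rate threshold (metric names assumed).
groups:
  - name: service-errors
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{job="checkout", status=~"5.."}[5m]))
            /
          sum(rate(http_requests_total{job="checkout"}[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "Checkout error rate is above 5%"
```

```shell
# Create a scheduled silence covering the maintenance window.
curl -X POST http://alertmanager.example.internal:9093/api/v2/silences \
  -H 'Content-Type: application/json' \
  -d '{
        "matchers": [{"name": "alertname", "value": "HighErrorRate", "isRegex": false}],
        "startsAt": "2025-06-01T02:00:00Z",
        "endsAt": "2025-06-01T04:00:00Z",
        "createdBy": "maintenance-automation",
        "comment": "Scheduled maintenance window"
      }'
```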
Create the alert in Grafana and use Grafana scheduling to mute notifications during maintenance windows is incorrect because the environment uses Prometheus for monitoring and the standard, reliable pattern is to use Prometheus alert rules with Alertmanager for suppression and routing. Using Grafana alerts and schedules duplicates responsibilities and it is not the native mechanism for suppressing Prometheus alerts.
Export a maintenance_mode metric and change the Prometheus alert expression to only fire when maintenance_mode indicates the service is not under maintenance is incorrect because embedding maintenance state into every alert expression is fragile and hard to maintain. It can lead to missed alerts if the metric is not set correctly and it spreads maintenance logic into many rules instead of centralizing suppression in Alertmanager.
Create the alert in Prometheus and then create silences manually in Alertmanager for each maintenance window is incorrect because manual silences are error prone and do not scale for scheduled windows. The question requires avoiding notifications during scheduled maintenance automatically and scheduled silences or automated silence creation are the proper solution.
Keep alerting logic in Prometheus alert rules and manage suppression with Alertmanager silences. Automate silence creation with the Alertmanager API or a scheduler to avoid manual errors.
In a Kubernetes cluster what is the correct architectural relationship between pods and containers?
-
✓ B. A pod contains one or more containers that share the same network namespace storage and IP and are scheduled together
The correct answer is A pod contains one or more containers that share the same network namespace storage and IP and are scheduled together.
This is correct because a pod is the smallest deployable unit in Kubernetes and it groups one or more containers that are colocated and scheduled onto the same node. Containers in the same pod share the pod network namespace so they share the same IP address and can communicate over localhost, and they can also share storage volumes that are mounted into the pod.
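A minimal sketch of that relationship, with two hypothetical containers sharing a volume and the pod network:

```yaml
# One pod, two containers: they share the pod IP, localhost, and the shared-logs volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-agent
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
```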
Containers are independently scheduled and run beside pods on each node is wrong because scheduling happens at the pod level and containers run inside pods rather than running independently beside them.
A controller specification runs within a container which in turn executes inside a pod is wrong because a controller specification is an API object in the control plane and does not itself run inside a container. Controller logic is implemented by controller processes that may run in the control plane or as controller pods, but the spec is an object not an executable container image.
Controllers such as ReplicaSet or Deployment define pods which then encapsulate one or more containers is misleading and marked incorrect here because controllers define pod templates and manage the creation of pod replicas rather than describing the direct relationship between pods and containers. The essential architectural relationship the exam expects is that pods encapsulate containers and are the schedulable unit.
On exam questions focus on the fact that pods are the smallest schedulable unit and that containers run inside pods and share network and storage within that pod.
You are building a cloud native serverless service for a small media startup that processes user uploads. Traffic patterns are unpredictable and sometimes surge suddenly while at other times the service receives almost no requests. You want a design that keeps costs low while still scaling to meet peak demand. Which design approach best achieves this?
-
✓ C. Use HTTP triggered serverless functions that automatically scale with incoming requests
The correct option is Use HTTP triggered serverless functions that automatically scale with incoming requests.
Use HTTP triggered serverless functions that automatically scale with incoming requests is the best fit because the functions scale out automatically when traffic surges and they scale to zero when there is no traffic, so you only pay for actual usage. This behavior keeps costs low for unpredictable workloads while still providing rapid scaling to meet peak demand.
Use HTTP triggered serverless functions that automatically scale with incoming requests also removes the operational overhead of managing server capacity and manual scaling rules, and it is a good match for short lived, event driven processing like user uploads and transformations.
Deploy the service on a fixed number of Kubernetes pods is incorrect because a fixed number of pods cannot absorb sudden traffic spikes and it wastes cost during quiet periods. You would need autoscaling and additional management to handle variable load which defeats the simplicity and cost advantage described in the question.
Deploy on Cloud Run and set a high minimum number of instances is incorrect because setting a high minimum keeps instances running even when idle and that increases cost. Cloud Run can scale to zero by default and raising the minimum negates the cost benefit for unpredictable workloads.
Keep serverless functions running continuously to handle new requests is incorrect because keeping functions always warm defeats the serverless model and causes unnecessary cost. The advantage of serverless is automatic scaling and billing based on usage rather than maintaining always on resources.
When you see options that mention keeping instances always on or a high minimum think about cost trade offs. For unpredictable bursts prefer solutions that scale to zero and let you pay per execution.
Your team at AuroraCloud runs both stateful and stateless cloud native workloads. You already use Horizontal Pod Autoscaling for the stateless workloads and you need a solution that provides predictable and controlled scaling for stateful workloads. Which option should you evaluate?
-
✓ D. Kubernetes Operators
Kubernetes Operators is the correct option.
Kubernetes Operators encapsulate application specific operational knowledge and implement custom controllers that can perform ordered and predictable scaling for stateful workloads. Operators can manage StatefulSets and external resources together and they can enforce policies such as ordered startup, graceful shutdown, and coordinated configuration changes while scaling. That level of application aware control is required when you need predictable and controlled scaling for stateful services.
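For example, an Operator typically exposes a custom resource like the purely hypothetical one below, and its controller reconciles the declared replica count with application aware ordering rather than blindly adding pods:

```yaml
# Hypothetical custom resource; the API group, kind, and fields are illustrative
# and depend entirely on the Operator that defines them.
apiVersion: databases.example.com/v1
kind: PostgresCluster
metadata:
  name: orders-db
spec:
  replicas: 3        # scaled by the Operator with ordered joins and safe shutdowns
  version: "16"
  storage:
    size: 50Gi
```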
Vertical Pod Autoscaler (VPA) is incorrect because it adjusts pod resource requests and limits rather than the number of replicas. Vertical Pod Autoscaler (VPA) may require pod restarts to change resources and it does not provide ordered or application aware replica lifecycle management needed for complex stateful systems.
Cluster Autoscaler is incorrect because it scales cluster nodes in response to pending pods and not the application replicas themselves. Cluster Autoscaler provides node capacity but it cannot implement application specific, ordered, or safe scaling policies for stateful workloads.
Custom Resource Definitions is incorrect because CRDs are a mechanism to extend the Kubernetes API and they do not by themselves implement scaling logic. Custom Resource Definitions are often used as the API surface for an Operator, but the CRD alone is just the data model and not the controller that enforces predictable scaling.
When asked about controlled scaling for stateful applications think about controllers that encode operational knowledge such as Operators rather than node level autoscalers or resource request tuning.
How should you write a node affinity rule so that a Pod is scheduled only onto nodes labeled tier=web?
-
✓ C. Set requiredDuringSchedulingIgnoredDuringExecution under spec.affinity.nodeAffinity in the Pod manifest
The correct answer is Set requiredDuringSchedulingIgnoredDuringExecution under spec.affinity.nodeAffinity in the Pod manifest.
Using requiredDuringSchedulingIgnoredDuringExecution under spec.affinity.nodeAffinity creates a hard node affinity constraint that the scheduler must satisfy. When you define a selector that matches nodes labeled tier=web the scheduler will only place the Pod on nodes that match. This enforces that the Pod is scheduled only onto tier=web nodes.
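A minimal Pod manifest expressing that hard constraint (the pod and container names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: tier
                operator: In
                values:
                  - web
  containers:
    - name: app
      image: nginx:1.25
```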
Use preferredDuringSchedulingIgnoredDuringExecution under spec.affinity.nodeAffinity with a weight and a match for tier=web is incorrect because it is a soft preference. The scheduler can place the Pod on other nodes if no matching nodes are available.
Taint the target nodes and add a matching toleration to the Pod to place it on tier=web nodes is incorrect because taints and tolerations are a different mechanism that admits or repels Pods from nodes rather than expressing a node affinity rule.
Add a nodeSelector entry in the Pod spec with tier=web is incorrect in this context because the question asked specifically about writing a node affinity rule. NodeSelector would also restrict placement but it is not the node affinity construct the question requests.
For a strict placement requirement choose requiredDuringSchedulingIgnoredDuringExecution in nodeAffinity. Remember that preferredDuringSchedulingIgnoredDuringExecution is only a preference and nodeSelector is not expressed as node affinity.
Your Kubernetes cluster contains some nodes that run Docker and other nodes that use containerd, and a legacy service requires Docker to operate correctly. How can you guarantee that this service’s pods are scheduled onto the nodes that run Docker?
-
✓ C. Apply a taint to the nodes running Docker and add a matching toleration to the application pods
Apply a taint to the nodes running Docker and add a matching toleration to the application pods is correct.
This approach uses Kubernetes scheduling primitives that let you repel all pods from a node unless they explicitly tolerate the taint. Taint the Docker nodes so ordinary pods will not be scheduled there and give the legacy service pods a matching toleration so they are allowed to run on those nodes. That combination effectively reserves the Docker nodes for that service, and in practice you pair the toleration with a node label and a nodeSelector or node affinity so the pods are also steered onto those nodes rather than merely permitted there.
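A hedged sketch of that setup, assuming the Docker nodes are named and labeled as shown (the runtime=docker key and value are assumptions):

```shell
# Taint and label each node that still runs Docker so ordinary pods are kept off it.
kubectl taint nodes docker-node-1 runtime=docker:NoSchedule
kubectl label nodes docker-node-1 runtime=docker
```

```yaml
# In the legacy service's pod template: tolerate the taint, and optionally
# pair it with a nodeSelector so the pods are also steered onto those nodes.
tolerations:
  - key: "runtime"
    operator: "Equal"
    value: "docker"
    effect: "NoSchedule"
nodeSelector:
  runtime: docker
```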
Label the Docker nodes and specify a nodeSelector in the pod specification to target them is incorrect because a nodeSelector only directs pods to nodes with matching labels but it does not prevent other pods from also being scheduled onto those nodes. NodeSelector alone does not provide the reservation or isolation that taints and tolerations do.
Set the runtimeClassName field in the pod specification to a Docker runtime handler is incorrect because runtimeClassName selects a container runtime handler configured on the node and is not a reliable way to choose nodes by their underlying runtime. In addition the dockershim has been deprecated and removed so Docker is not the typical CRI runtime on newer clusters.
Configure nodeAffinity in the pod spec to require nodes labeled as running Docker is incorrect because required nodeAffinity can ensure pods prefer or require nodes with certain labels but it still does not prevent other pods from being scheduled to those nodes. NodeAffinity controls placement but it does not reserve nodes in the way taints and tolerations do.
When you need to “reserve” nodes for a specific workload look for taints and tolerations rather than only using labels. Taints repel pods and tolerations opt workloads back in.
Which Kubernetes node component is responsible for configuring the node level networking rules so that pods can communicate with each other across the node?
-
✓ D. kube-proxy
The correct option is kube-proxy.
kube-proxy is the node component that watches Services and Endpoints and programs the node level networking rules so that service IPs and ports are routed to the appropriate pod backends. It implements those rules using iptables or IPVS on the node which ensures that traffic addressed to a Service is forwarded to the correct pod across the node.
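If the cluster runs kube-proxy in iptables mode, those rules can be inspected directly on a node, for example (chain names vary by proxy mode and version):

```shell
# List the Service NAT rules kube-proxy has programmed on this node.
sudo iptables -t nat -L KUBE-SERVICES -n | head -20
```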
Cloud Load Balancing is incorrect because it refers to an external or cloud provider load balancer and not the Kubernetes node process that programs local service routing rules.
CNI plugins are incorrect for this specific question because they install pod network interfaces and take care of pod IP addressing and connectivity. They do help enable pod to pod communication but they do not manage the Service proxy rules on the node that kube-proxy programs.
kubelet is incorrect because it manages the pod and container lifecycle on the node and reports status to the control plane. It does not configure the node level service routing rules that kube-proxy applies.
When the question mentions node level networking rules or Service proxying think of kube-proxy. If the question is about pod interfaces or IP allocation think of CNI plugins.
An infrastructure engineer is creating a cloud native continuous delivery pipeline for a Kubernetes hosted web application. Which architectural principle is most important to ensure the application can be deployed reliably and scaled in a cloud native environment?
-
✓ B. Design the application to be stateless and shift session state to an external datastore
The correct answer is Design the application to be stateless and shift session state to an external datastore.
Designing the app to be stateless and shifting session state to an external datastore ensures that individual pod instances are ephemeral and interchangeable. This lets the orchestrator create, replace, or scale replicas without losing session data and it enables reliable rolling updates and autoscaling because state is not tied to any single instance.
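As a rough illustration, a stateless Deployment can hand session state to an external store through configuration alone, so any replica can serve any request. The image and Redis service name below are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: app
          image: registry.example.com/web-frontend:1.4.2
          env:
            - name: SESSION_STORE_URL
              value: redis://session-store:6379   # external session store
```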
Package logging and metrics collectors inside the application container for convenience is incorrect because bundling observability agents with the app breaks separation of concerns and makes images harder to update. Observability is better handled with sidecars or platform level agents so the application remains focused on business logic.
Run each replica with its own PersistentVolumeClaim and manage it with a StatefulSet for local storage is incorrect because StatefulSets and per-replica volumes are meant for workloads that need stable identity and local storage. Coupling a web front end to local volumes prevents flexible scheduling and simple horizontal scaling which undermines cloud native deployment and scaling goals.
Store secrets in a managed secret service such as Secret Manager is not the best answer for this question because secure secret storage is important but it does not by itself enable the reliable deploy and scale properties the question asks about. Using a managed secret service is recommended practice but it is not the primary architectural principle for scalable, cloud native deployment.
When choosing architecture answers look for options that enable horizontal scaling and ephemeral instances. The word stateless or phrases about moving session data to external stores are strong indicators of the correct choice.
A DevOps team at Bluefin Systems needs to roll out a stateless service across a Kubernetes cluster with several worker nodes. The service must scale automatically when CPU load increases and remain highly available. You also must ensure that newly created Pods are distributed across different nodes so that no single node becomes overloaded. Which Kubernetes objects should you configure to meet these requirements?
-
✓ D. Deployment with a HorizontalPodAutoscaler and PodAntiAffinity
The correct answer is: Deployment with a HorizontalPodAutoscaler and PodAntiAffinity.
A Deployment with a HorizontalPodAutoscaler and PodAntiAffinity is appropriate because a Deployment manages stateless replicas and supports rolling updates while the HorizontalPodAutoscaler adjusts the number of pod replicas automatically based on CPU load. The PodAntiAffinity rules ensure that new pods are scheduled on different nodes when possible so no single node becomes overloaded and the service remains highly available.
In practice you declare resource requests on the containers so the HorizontalPodAutoscaler can use CPU metrics effectively and you apply either preferred or required PodAntiAffinity depending on how strictly you need pods separated across nodes.
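A sketch of the autoscaler half, assuming the Deployment is named web-api and its containers declare CPU requests (both are assumptions):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-api
  minReplicas: 3
  maxReplicas: 12
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The PodAntiAffinity half lives in the Deployment's pod template and follows the same pattern shown in the earlier anti-affinity question.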
StatefulSet with a HorizontalPodAutoscaler and Cluster Autoscaler is incorrect because a StatefulSet is intended for workloads that need stable network identities and persistent storage and it is not needed for a stateless service. The Cluster Autoscaler scales nodes rather than pods and does not by itself guarantee that pods will be dispersed across nodes.
Deployment with a HorizontalPodAutoscaler and NodeAffinity is incorrect because NodeAffinity is used to prefer or require nodes with certain labels and that can concentrate pods onto specific nodes. Node affinity does not ensure spreading across different nodes the way pod anti affinity does.
CronJob with a VerticalPodAutoscaler and NodeAffinity is incorrect because a CronJob runs scheduled batch jobs rather than a continuously running service. A VerticalPodAutoscaler adjusts container resource requests rather than scaling the number of replicas based on CPU load and it can conflict with horizontal scaling strategies.
Read the workload type keywords. If the question says stateless and asks for spreading pods across nodes pick a Deployment with a HorizontalPodAutoscaler and PodAntiAffinity. Also make sure pods declare resource requests so the HPA can use CPU metrics.
Which practice can weaken the GitOps principle that the ‘Source of Truth’ for a Kubernetes deployment should reside in a single repository?
-
✓ C. Applying configuration changes straight to the cluster from a developer workstation
The correct answer is Applying configuration changes straight to the cluster from a developer workstation.
This practice weakens the GitOps principle because it bypasses the repository that should act as the single source of truth. Making changes directly on the cluster means those changes are not recorded in the Git history and they can diverge from the declared state in the repository. GitOps reconciliation controllers expect the repository to contain the canonical desired state and they will either overwrite manual changes or fail to detect intentional drift when the repo is not updated.
Maintaining several Git repositories that each hold parts of the same cluster configuration is not necessarily wrong. Many organizations use a multi-repo approach and preserve a single logical source of truth by clearly scoping repositories and using automation to assemble or link the canonical configuration. Proper conventions and tooling can keep the source of truth intact even when manifests are split across repos.
Enforcing Git commit signatures to improve repository security does not weaken the source of truth. Signed commits actually strengthen trust in the repository by verifying authorship and integrity. This practice supports GitOps goals rather than undermining them.
Using a dedicated branch for each environment to separate staging and production manifests is also not inherently weakening. Branch-per-environment is a common pattern and it still keeps the declared state under version control. The pattern can lead to complexity if branches drift, but when branches are kept up to date and reconciled they preserve a clear source of truth.
When you see GitOps source of truth questions look for answers that bypass version control such as direct edits to the cluster. Changes should be made in Git so the reconciliation loop can manage drift.
In a Kubernetes cluster at Nimbus Systems which built in component handles DNS name resolution?
-
✓ C. CoreDNS
The correct answer is CoreDNS.
CoreDNS runs as the cluster DNS add on and provides name resolution for Pods and Services. It watches the Kubernetes API and serves DNS records so that service names resolve to the correct cluster IPs and endpoints.
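A quick way to see this in action is to resolve a Service name from inside the cluster, for example:

```shell
# Run a throwaway pod and query the cluster DNS for the API server Service.
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local
```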
kube-proxy is incorrect because it implements Service networking and node level packet forwarding rules and does not perform DNS name resolution.
dnsmasq is incorrect because it is a general purpose DNS forwarder and cache that may be used in some environments but it is not the built in Kubernetes cluster DNS implementation.
etcd is incorrect because it is the cluster key value store for persistent state and configuration and it does not handle DNS lookups.
When a question asks which component provides DNS think of the cluster DNS add on and answer CoreDNS rather than networking or storage components.
While evaluating reliability targets like SLOs and SLAs engineers commonly use SLIs to quantify service behavior. What does the acronym SLI stand for?
-
✓ B. Service Level Indicator
The correct answer is Service Level Indicator.
Service Level Indicator is the established term for a measurable metric that reflects a specific aspect of a service’s performance or reliability. Engineers use SLIs to quantify things like latency, availability, throughput, and error rates so that SLOs and SLAs can be evaluated against concrete data.
Cloud Monitoring is incorrect because it is the name of a monitoring product and not the expansion of the SLI acronym.
Software License Integration is incorrect because it refers to licensing concerns and not to service performance metrics.
Server Latency Index is incorrect because it suggests a single latency-focused index while SLI is a general term for any service level metric and not a specific latency index.
When a question mentions SLOs or SLAs look for the choice that names a measurable metric. Remember that Service Level Indicator is the term used for those metrics.
| Jira, Scrum & AI Certification |
|---|
| Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.
You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today. |
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.
