KCNA Study Guide and Certification Questions
All exam questions come from my KCNA Udemy Course and certificationexams.pro
Kubernetes KCNA Certification Exam Topics
Over the past few months, I have been helping software developers, solutions architects, DevOps engineers, and even Scrum Masters who want to learn cloud native technologies gain the skills and certifications needed to stay competitive in a rapidly evolving industry.
One of the most respected entry-level cloud native certifications available today is the CNCF Kubernetes and Cloud Native Associate (KCNA).
So how do you pass the KCNA certification? You practice by using KCNA exam simulators, going over sample KCNA test questions, and taking online KCNA practice exams like this one.
Keep practicing until you can consistently answer Kubernetes and cloud native questions with confidence.
KCNA Certified Associate Practice Questions
In helping students prepare for this exam, I have identified a number of commonly misunderstood KCNA topics that tend to appear in practice questions, which is why this set of KCNA questions and answers was created. If you can answer these correctly, you are well on your way to passing the exam.
One important note: these are not KCNA exam dumps. There are plenty of CNCF braindump websites that focus on cheating, but there is no value in earning a certification without real knowledge. These questions are representative of the KCNA exam style and subjects but are not duplicates of real exam content.
Now here are the KCNA practice questions and answers. Good luck!
Certification Practice Exam Questions
In a cloud native microservice environment at a payments startup, what is the primary function of an anti-corruption layer that sits between different service domains?
- ❏ A. Apigee
- ❏ B. To block SQL injection attacks
- ❏ C. To translate and isolate differing domain models
- ❏ D. To validate that incoming data matches a schema
An engineering group at Nimbus Systems is composing a NetworkPolicy manifest for their Kubernetes cluster and they need to declare allowed traffic. Which field name would not be valid to define network rules in a Kubernetes NetworkPolicy object?
- ❏ A. podSelector
- ❏ B. ingress
- ❏ C. replicaCount
- ❏ D. egress
Your team operates one Kubernetes cluster that runs workloads for multiple groups across the company and you need to separate user access and workloads so teams do not interfere with one another. Which Kubernetes feature can you use to provide simple and secure isolation?
- ❏ A. Ingress controllers
- ❏ B. Pods
- ❏ C. Kubernetes Namespaces
- ❏ D. NetworkPolicy
You operate a Kubernetes cluster that runs separate development, testing, and production environments, and you need each environment to receive distinct ConfigMaps while keeping Pod templates unchanged. Which Kubernetes features would you use to accomplish that? (Choose 2)
- ❏ A. Environment variables
- ❏ B. Config Connector
- ❏ C. Helm chart templates with values
- ❏ D. Dynamic admission controllers
- ❏ E. Kustomize overlays
In a Kubernetes installation what responsibility does the Controller Manager primarily carry out?
- ❏ A. To expose the Kubernetes API to external clients and handle API requests
- ❏ B. To decide which Nodes should run Pods based on resource requirements and constraints
- ❏ C. To run and manage the lifecycle and reconciliation loops of the various controllers running in the cluster
- ❏ D. To coordinate interactions with cloud platforms and manage external cloud resources on behalf of the cluster
NexCart is developing a cloud native system to process incoming customer purchases in near real time. Separate microservices handle payment validation, inventory adjustments, and shipment notifications, each triggered by events. You want a serverless approach that scales automatically with demand and runs code only when events occur. Which serverless solution best fits this architecture?
- ❏ A. Google Cloud Run
- ❏ B. AWS Lambda or Azure Functions
- ❏ C. Kubernetes Deployments with ReplicaSets
- ❏ D. Long running containers in a Kubernetes cluster
Which statement correctly describes the roles of liveness and readiness probes in Kubernetes?
- ❏ A. Liveness probes run only once when a Pod initializes while readiness probes run periodically
- ❏ B. Liveness probes trigger the kubelet to restart a container when they fail while readiness probes mark whether the Pod is part of a Service’s endpoints and therefore accepts traffic
- ❏ C. Both liveness and readiness probes are intended to restart a Pod when they detect a problem
- ❏ D. Liveness probes prevent a Pod from receiving traffic while readiness probes cause the kubelet to restart containers
Your team at a fintech startup is moving toward cloud native infrastructure and needs a single solution for collecting and analyzing logs from distributed services. What is the recommended approach to build a unified logging pipeline for cloud native applications?
- ❏ A. Capture request and response logs at the API gateway and forward them to a central store
- ❏ B. Deploy a log collector such as Fluent Bit or Fluentd to forward application logs to a centralized log store
- ❏ C. Write logs from each microservice directly to a shared persistent volume mounted across pods
- ❏ D. Use the native logging service offered by each cloud provider where workloads run
You operate a Kubernetes cluster that uses a four node etcd ensemble. One etcd member failed and you replaced it with a fresh instance, yet the ensemble shows inconsistent data across members. What is the most likely cause?
- ❏ A. The new member was not added to the cluster using etcdctl member add
- ❏ B. You did not update the --initial-cluster setting in the etcd startup options to include the new member
- ❏ C. The etcd data directory on the replacement node has incorrect filesystem permissions
- ❏ D. The kube-scheduler is not aware of the new etcd member
What is the main role of a service account in a Kubernetes cluster?
- ❏ A. Define network traffic rules for pods and services
- ❏ B. Use Google Cloud IAM to manage external user access
- ❏ C. Encrypt communication between the control plane and worker nodes
- ❏ D. Authenticate and authorize workloads inside pods to access the Kubernetes API
Which file format is typically used to author the manifest that defines a Kubernetes Pod for deployment?
- ❏ A. JSON
- ❏ B. HCL
- ❏ C. XML
- ❏ D. YAML
Within the Kubernetes community how do Special Interest Groups and Working Groups differ in their focus and responsibilities? (Choose 2)
- ❏ A. Special Interest Groups concentrate on particular technical components like networking, storage, and scheduling
- ❏ B. A Special Interest Group is primarily a documentation resource that publishes procedural guides for contributors
- ❏ C. Working Groups coordinate initiatives that span multiple Special Interest Groups or that affect the entire project
- ❏ D. Working Groups maintain code and manage day-to-day merges for a single component
Your team at StellarApps manages a Kubernetes cluster and a mission critical pod occasionally crashes due to an intermittent bug. The application should be restarted automatically when it fails but it must not restart after the container exits successfully. Which restart policy will restart the pod only when it fails and not after a successful completion?
- ❏ A. Set restartPolicy to Always
- ❏ B. Set restartPolicy to Never
- ❏ C. Set restartPolicy to OnFailure
- ❏ D. Use a completion only restart policy
Which Kubernetes resource should you use to declare access rules for API objects that are confined to a single namespace?
- ❏ A. ClusterRole
- ❏ B. RoleBinding
- ❏ C. ServiceAccount
- ❏ D. Role
Which declarative GitOps tool was first created by a financial software firm and is a graduated CNCF project?
- ❏ A. Spinnaker
- ❏ B. Flux
- ❏ C. Kubernetes
- ❏ D. Argo
Which container runtime does Kubernetes support out of the box and which has traditionally been the default implementation?
- ❏ A. CRI-O
- ❏ B. LXC
- ❏ C. containerd runtime
- ❏ D. Docker Engine
In a microservices platform that uses serverless functions, which characteristic is generally not associated with a Function as a Service offering?
- ❏ A. Automatic scaling
- ❏ B. Stateful behavior
- ❏ C. Short lived execution times
- ❏ D. Event driven invocation
Your Kubernetes cluster hosts several squads at NovaApps that each develop separate microservices. You need to enforce per-team resource limits while allowing teams to temporarily overcommit resources during traffic spikes. What two methods will best accomplish this? (Choose 2)
- ❏ A. Horizontal Pod Autoscaler using custom metrics
- ❏ B. Create ResourceQuota objects with hard CPU and memory limits per namespace
- ❏ C. GKE Autopilot
- ❏ D. Apply LimitRange to set default requests and permissive limits for containers in each namespace
- ❏ E. Enable Vertical Pod Autoscaler
At a medium sized company called Mariner Tech you are investigating high end-to-end response times in a Kubernetes hosted microservice platform. Which of the following checks is least likely to help identify the root cause of request latency?
- ❏ A. Measuring network round trip times between service endpoints
- ❏ B. Monitoring load on the Kubernetes API server
- ❏ C. Inspecting pod initialization and readiness durations
- ❏ D. Reviewing the refresh frequency of the client side interface
Which Kubernetes component acts as a node level agent on each worker machine and is responsible for ensuring containers described in Pod specifications are started and monitored?
- ❏ A. etcd
- ❏ B. docker
- ❏ C. kubectl
- ❏ D. kubelet
A team at Marlin Apps is deploying a multi component web platform to a Kubernetes cluster and they want a single tool to simplify installation, upgrades, and ongoing management for the entire application. Which tool would you use to make deploying and maintaining the platform easier?
- ❏ A. gcloud
- ❏ B. kube-proxy
- ❏ C. Helm
- ❏ D. kubectl
How do taints applied to nodes interact with pod tolerations when the Kubernetes scheduler decides which nodes can accept a pod?
- ❏ A. Taints act like node label selectors and tolerations act like pod label selectors
- ❏ B. Taints on nodes define resource limits and tolerations on pods declare resource requests
- ❏ C. Taints on nodes prevent pods without matching tolerations from being scheduled and tolerations permit pods to be placed on nodes that have those taints
- ❏ D. GKE node pools decide where pods run and tolerations only apply to node pool membership
What principles help you design a fault tolerant application that can recover itself and keep its intended state over time? (Choose 4)
- ❏ A. Rely on manual configuration changes and one off fixes
- ❏ B. Define the application as code to express the desired state
- ❏ C. Continuously observe application health and performance
- ❏ D. Use a reconciliation loop to detect drift and apply corrective actions
- ❏ E. Automatically adjust resources based on traffic using autoscaling
A platform team at Nimbus Cloud is adding custom resources to their Kubernetes cluster by using the API aggregation layer. Which API server components participate in implementing the aggregation layer? (Choose 2)
- ❏ A. cloud-controller-manager
- ❏ B. kube-controller-manager
- ❏ C. kube-apiserver
- ❏ D. kube-scheduler
- ❏ E. extension-apiserver
In a Kubernetes Deployment manifest for a new microservice at NimbusTech, which Deployment field allows you to limit how many Pods may be unavailable at a time during a rolling update?
- ❏ A. replicas field
- ❏ B. maxSurge field
- ❏ C. strategy.type field
- ❏ D. maxUnavailable field
A fintech startup called Borealis Commerce is deciding between push based and pull based deployment models for its Kubernetes clusters. What is the main difference between these two deployment styles?
- ❏ A. Pull based deployments run an in cluster agent that fetches desired state from a Git repository and applies changes locally
- ❏ B. Push based deployments require an external CI pipeline to connect to the cluster and push updates which may require storing cluster credentials externally
- ❏ C. Pull based approaches always avoid exposing credentials to any external system so they are intrinsically more secure
- ❏ D. Cloud Build
Which situations could cause a Pod to be evicted because of resource shortages on a node? (Choose 2)
- ❏ A. The node has run out of ephemeral storage and the kubelet evicts Pods due to disk pressure
- ❏ B. The node is experiencing memory pressure and the container has hit its memory limit
- ❏ C. The Pod declares no resource requests or limits
- ❏ D. The Pod requests more CPU than any node can provide at scheduling time
- ❏ E. The Pod sets a CPU limit that is lower than its CPU request
What is the full name represented by the CNCF initialism?
- ❏ A. Anthos
- ❏ B. Cloud Native Computing Foundation
- ❏ C. Continuous Network Connectivity Framework
- ❏ D. Cloud Native Compliance Framework
In a Kubernetes cluster which component is chiefly responsible for allowing pods on separate nodes to reach one another?
- ❏ A. kube-proxy
- ❏ B. Google Cloud VPC
- ❏ C. Container Network Interface plugins
- ❏ D. kube-scheduler
When an exporter stops sending a time series to a Prometheus server how does Prometheus treat the now stale metric?
- ❏ A. It marks the series with a tombstone and waits for retention policies to drop it
- ❏ B. It represents the metric value as NaN to indicate staleness
- ❏ C. It retains the last observed value indefinitely
- ❏ D. It immediately removes the time series from the database
At NovaCloud Solutions a platform team needs a Pod to run only on nodes within a particular availability zone of their cluster. Which field in the Pod specification should they configure to target nodes in that specific availability zone?
- ❏ A. affinity
- ❏ B. annotations
- ❏ C. nodeSelector
- ❏ D. tolerations
An SRE at Solaris Labs needs to access the Kubernetes control plane from a laptop without making the API server publicly reachable. What is the main purpose of the kubectl proxy command?
- ❏ A. Open a web based dashboard for cluster monitoring
- ❏ B. Create a proxy to route external client traffic directly to Kubernetes services
- ❏ C. Start a local HTTP proxy that forwards requests to the cluster API server for secure local access
- ❏ D. Establish a direct port forward connection to a specific pod for troubleshooting
Within a Kubernetes cluster how do controllers ensure that resources remain aligned with the declared configuration in resource manifests?
- ❏ A. Controllers act mainly as passive observers that report status to the Kubernetes API server
- ❏ B. Controllers directly update manifest files in version control to enforce desired states
- ❏ C. Controllers implement a declarative reconciliation loop that compares the actual cluster state to the desired state and makes corrections as needed
- ❏ D. Controllers perform occasional polls and then execute manual fixes when they detect discrepancies
In a cloud native deployment why do engineers prefer designing services that do not hold local state?
- ❏ A. Cloud Run
- ❏ B. They reduce the reliance on persistent storage
- ❏ C. They can be scaled horizontally and recover with less complexity
- ❏ D. Troubleshooting services is simpler without local state
Nimbus Systems runs a Kubernetes environment and uses etcd as the cluster data store. What is the primary function of that etcd data store?
- ❏ A. Host container images in a registry
- ❏ B. Cloud Pub/Sub
- ❏ C. Store and synchronize cluster configuration and state data
- ❏ D. Manage network packet routing between pods
Certification Practice Exam Questions Answered
In a cloud native microservice environment at a payments startup, what is the primary function of an anti-corruption layer that sits between different service domains?
- ✓ C. To translate and isolate differing domain models
To translate and isolate differing domain models is correct. An anti-corruption layer sits between service domains to translate concepts and data from one domain into the internal domain and to isolate your core model from the design and terminology used by other domains.
The anti-corruption layer implements adapters and translators that map external contracts and structures into the internal domain model, and it enforces boundaries so that changes in one domain do not leak into another. This preserves the integrity of your bounded context and keeps domain logic clean and consistent across microservices.
Apigee is incorrect. Apigee is an API management product that provides gateways, routing, analytics, and policy enforcement, and it is not the conceptual pattern for translating and isolating differing domain models.
To block SQL injection attacks is incorrect. Preventing SQL injection is a security responsibility handled by input sanitization, parameterized queries, and web application firewalls, and it is not the primary purpose of an anti-corruption layer.
To validate that incoming data matches a schema is incorrect. Schema validation is a contract or data quality concern. It can be part of an anti-corruption layer, but it does not capture the primary role, which is semantic mapping and isolation between differing domain models.
Look for answers that mention model mapping or bounded context separation when identifying an anti-corruption layer and remember that security controls and schema validation are related but separate responsibilities.
An engineering group at Nimbus Systems is composing a NetworkPolicy manifest for their Kubernetes cluster and they need to declare allowed traffic. Which field name would not be valid to define network rules in a Kubernetes NetworkPolicy object?
- ✓ C. replicaCount
The correct option is replicaCount.
replicaCount is not a valid field in a Kubernetes NetworkPolicy manifest because NetworkPolicy objects describe which traffic is allowed to and from selected pods and they do not include workload scaling parameters. Replica counts belong to workload controllers such as Deployments or ReplicaSets and not to network policy resources.
podSelector is valid because NetworkPolicy uses a podSelector to choose the pods the policy applies to and it is a core part of the NetworkPolicy spec.
ingress is valid because NetworkPolicy can list ingress rules to declare allowed inbound traffic to the selected pods.
egress is valid because NetworkPolicy can list egress rules to declare allowed outbound traffic from the selected pods.
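To see those three valid fields together, here is a minimal sketch of a NetworkPolicy manifest. The names and labels are hypothetical, and the rules simply allow inbound traffic from frontend pods and outbound traffic to database pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: demo-policy          # hypothetical name
  namespace: payments        # hypothetical namespace
spec:
  podSelector:               # selects the pods this policy applies to
    matchLabels:
      app: api
  policyTypes:
    - Ingress
    - Egress
  ingress:                   # allowed inbound traffic
    - from:
        - podSelector:
            matchLabels:
              app: frontend
  egress:                    # allowed outbound traffic
    - to:
        - podSelector:
            matchLabels:
              app: database
```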
When you see fields that control scaling or replica numbers think of workload manifests rather than NetworkPolicy and focus on podSelector, ingress, and egress for questions about network policies.
Your team operates one Kubernetes cluster that runs workloads for multiple groups across the company and you need to separate user access and workloads so teams do not interfere with one another. Which Kubernetes feature can you use to provide simple and secure isolation?
- ✓ C. Kubernetes Namespaces
The correct option is Kubernetes Namespaces.
Kubernetes Namespaces provide simple logical separation inside a single cluster so teams get their own scope for names and resources. You can create RoleBindings scoped to a namespace to limit who can create or modify objects and you can apply ResourceQuota and LimitRange to control and isolate resource consumption for each team.
Namespaces are a lightweight and secure way to separate workloads because they work with Kubernetes RBAC and admission controls. For network level isolation you can combine Namespaces with NetworkPolicy so network traffic is restricted between namespaces when needed.
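As a minimal sketch of this pattern, assuming a hypothetical team named team-alpha, you could pair a Namespace with a namespace-scoped RoleBinding that grants the built in edit ClusterRole only inside that namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-alpha           # hypothetical team namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-alpha-editors   # hypothetical binding name
  namespace: team-alpha      # scoped to the team namespace only
subjects:
  - kind: Group
    name: team-alpha-devs    # hypothetical group from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                 # built-in aggregate role for editing namespace objects
  apiGroup: rbac.authorization.k8s.io
```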
Ingress controllers are responsible for routing external HTTP and HTTPS traffic into the cluster. They do not provide user or resource isolation across teams and so they are not the correct feature for separating workloads and access.
Pods are the smallest deployable units that run containers and they are not a mechanism for multi team isolation. Pods live inside namespaces and do not control user permissions or resource scoping across teams.
NetworkPolicy controls network traffic between pods and namespaces and it is useful for isolation. It does not by itself separate user access or resource ownership and you still need Namespaces and RBAC to provide simple and secure team separation.
When you need logical isolation think Namespaces and pair them with RBAC and ResourceQuota to enforce access and limits per team.
You operate a Kubernetes cluster that runs separate development, testing, and production environments, and you need each environment to receive distinct ConfigMaps while keeping Pod templates unchanged. Which Kubernetes features would you use to accomplish that? (Choose 2)
- ✓ C. Helm chart templates with values
- ✓ E. Kustomize overlays
The correct options are Helm chart templates with values and Kustomize overlays.
Using Helm chart templates with values lets you keep the same Pod and resource templates while supplying different values files for development testing and production. You can store environment specific values files that produce distinct ConfigMaps at install or upgrade time without modifying the templates themselves.
Using Kustomize overlays lets you define a common base and then create overlays for each environment that add or patch ConfigMaps. Overlays allow you to produce environment specific ConfigMaps while leaving the base manifests and Pod templates unchanged.
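For example, a Kustomize overlay for a hypothetical production environment might look like the sketch below, with the shared base untouched and only the ConfigMap data varying. Applying it with kubectl apply -k overlays/production renders the environment specific ConfigMap:

```yaml
# overlays/production/kustomization.yaml (hypothetical layout and values)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base               # shared base with unchanged Pod templates
configMapGenerator:
  - name: app-config         # hypothetical ConfigMap name
    literals:
      - LOG_LEVEL=warn       # production-specific setting
      - DB_HOST=prod-db.internal
```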
The option Environment variables is incorrect because environment variables describe how Pods consume configuration but they do not provide a mechanism to generate or select different ConfigMaps per environment without changing manifests or using a templating or overlay tool.
The option Config Connector is incorrect because it is a Google Cloud tool for managing cloud resources via Kubernetes style CRDs and it does not provide the templating or overlay functionality used to supply different ConfigMaps per environment.
The option Dynamic admission controllers is incorrect because although admission webhooks can mutate or inject configuration at admission time, that is an advanced runtime mutation approach rather than the declarative templating and overlay mechanism the question asks for. The standard, simpler solutions for environment specific ConfigMaps are Helm values and Kustomize overlays.
When a question asks about different configuration per environment look for answers that separate a shared base from environment specific data and prefer tools that use values files or overlays.
In a Kubernetes installation what responsibility does the Controller Manager primarily carry out?
- ✓ C. To run and manage the lifecycle and reconciliation loops of the various controllers running in the cluster
The correct answer is To run and manage the lifecycle and reconciliation loops of the various controllers running in the cluster.
The kube-controller-manager hosts a set of controllers that continuously watch the API server and take actions to drive the current cluster state toward the desired state. Each controller implements a reconciliation loop that detects drift and creates or updates resources to fix it. The controller manager coordinates these controller processes and provides leader election and shared utilities when running in a highly available configuration.
To expose the Kubernetes API to external clients and handle API requests is incorrect because that responsibility belongs to the kube-apiserver which serves the REST API and handles authentication and authorization.
To decide which Nodes should run Pods based on resource requirements and constraints is incorrect because scheduling decisions are made by the kube-scheduler which selects target nodes for new Pods based on constraints and resource availability.
To coordinate interactions with cloud platforms and manage external cloud resources on behalf of the cluster is incorrect because cloud provider integrations are handled by the cloud-controller-manager or cloud provider specific controllers which run separately from the main controller manager in modern deployments.
When you read component roles focus on the core action word. The controller-manager runs reconciliation loops. The kube-apiserver serves the API and the kube-scheduler assigns Pods to nodes.
NexCart is developing a cloud native system to process incoming customer purchases in near real time. Separate microservices handle payment validation, inventory adjustments, and shipment notifications, each triggered by events. You want a serverless approach that scales automatically with demand and runs code only when events occur. Which serverless solution best fits this architecture?
- ✓ B. AWS Lambda or Azure Functions
The correct answer is AWS Lambda or Azure Functions.
Serverless function platforms are designed for event driven workloads and they scale automatically to meet demand while you only pay for actual execution time. This makes them a natural fit for a near real time purchase processing pipeline where microservices are triggered by events and code should run only when those events occur.
Google Cloud Run is not the best choice here because it runs containers rather than lightweight functions and it is typically used for HTTP driven or containerized workloads. While Cloud Run can handle event triggers, it generally has higher startup overhead compared to function platforms and it is not the minimal function execution model that the question emphasizes.
Kubernetes Deployments with ReplicaSets are not a serverless solution because they require you to manage the cluster and the pods are typically kept running unless you add autoscaling. This approach does not provide the automatic scale to zero or the per invocation billing model that serverless functions offer.
Long running containers in a Kubernetes cluster are also wrong because they keep containers running continuously and they do not run code only when events occur. They require constant resource allocation and do not naturally provide the event driven, pay per execution behavior sought in this scenario.
When a question emphasizes event driven and running code only when events occur look for answers that describe function style serverless offerings rather than always on containers.
Which statement correctly describes the roles of liveness and readiness probes in Kubernetes?
- ✓ B. Liveness probes trigger the kubelet to restart a container when they fail while readiness probes mark whether the Pod is part of a Service’s endpoints and therefore accepts traffic
Liveness probes trigger the kubelet to restart a container when they fail while readiness probes mark whether the Pod is part of a Service’s endpoints and therefore accepts traffic is correct.
Liveness probes are used by the kubelet to detect when a container is unhealthy and to restart that container to attempt recovery. Readiness probes are used to signal whether a Pod should be included in a Service’s endpoints so that only Pods that are ready receive traffic.
Liveness probes run only once when a Pod initializes while readiness probes run periodically is incorrect because probes run periodically after any configured initial delay and liveness checks are not one time events.
Both liveness and readiness probes are intended to restart a Pod when they detect a problem is incorrect because readiness probes do not trigger restarts and they only control traffic routing, while liveness probes can cause the kubelet to restart containers.
Liveness probes prevent a Pod from receiving traffic while readiness probes cause the kubelet to restart containers is incorrect because the behaviors are reversed. Readiness controls whether a Pod receives traffic and liveness can lead to container restarts.
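A minimal sketch of both probes in a Pod spec, with hypothetical paths, ports, and image, looks like this. The liveness probe drives restarts and the readiness probe gates Service traffic:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo            # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      livenessProbe:          # failure tells the kubelet to restart the container
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 10
      readinessProbe:         # failure removes the Pod from Service endpoints
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5
```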
When you see questions about probes remember that readiness controls traffic routing and liveness tells the kubelet when to restart unhealthy containers.
Your team at a fintech startup is moving toward cloud native infrastructure and needs a single solution for collecting and analyzing logs from distributed services. What is the recommended approach to build a unified logging pipeline for cloud native applications?
- ✓ B. Deploy a log collector such as Fluent Bit or Fluentd to forward application logs to a centralized log store
The correct option is Deploy a log collector such as Fluent Bit or Fluentd to forward application logs to a centralized log store.
Choosing Deploy a log collector such as Fluent Bit or Fluentd to forward application logs to a centralized log store gives you a consistent, resilient pipeline that fits cloud native patterns. Collectors are typically deployed as DaemonSets so they run on every node and capture container stdout and stderr without changing application code. They can parse and enrich structured logs, buffer and retry on network failures, and forward to many backends for search and analysis.
Centralizing logs with this approach makes it easier to correlate traces and metrics and to apply uniform retention and access controls. It also supports multi-cluster and hybrid environments since the collectors act as a standard ingestion layer that can forward to Elasticsearch, Loki, managed logging services, or other destinations.
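A skeleton of the DaemonSet pattern might look like the sketch below. It mounts the node's /var/log directory so the collector can read container logs; the namespace, image tag, and everything else about the setup are assumptions, and a real deployment also needs a Fluent Bit configuration and RBAC for Kubernetes metadata enrichment:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit            # hypothetical name
  namespace: logging          # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:2.2   # pin and verify a version for your cluster
          volumeMounts:
            - name: varlog               # node-level container log files
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```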
Capture request and response logs at the API gateway and forward them to a central store is not sufficient because gateway logs only show ingress requests and responses. That option misses internal service logs and background processing, and it cannot replace full application level logging for troubleshooting.
Write logs from each microservice directly to a shared persistent volume mounted across pods is incorrect because shared volumes do not work reliably as a distributed logging bus. Shared volumes introduce concurrency, performance, and portability problems and they break the principle of immutable, ephemeral containers.
Use the native logging service offered by each cloud provider where workloads run is not the best single solution because it fragments your pipeline across vendors. Using provider native services can lead to vendor lock in and inconsistent features across environments, and it does not provide a single, portable collection layer for multi-cloud or hybrid deployments.
When an answer mentions lightweight collectors like Fluent Bit or a centralized forwarding pipeline, prefer it. Think about capturing stdout from containers with a DaemonSet and forwarding to a single store for correlation and analysis.
You operate a Kubernetes cluster that uses a four node etcd ensemble. One etcd member failed and you replaced it with a fresh instance, yet the ensemble shows inconsistent data across members. What is the most likely cause?
- ✓ B. You did not update the --initial-cluster setting in the etcd startup options to include the new member
The correct answer is You did not update the --initial-cluster setting in the etcd startup options to include the new member.
When you replace a failed etcd node and start it with an empty data directory the node will consult its bootstrap configuration to decide whether to join the existing ensemble or to form a new cluster. If the --initial-cluster value is not updated to reflect the current membership or the new node is not listed then the fresh instance can bootstrap separately and create a split brain. That leads to inconsistent data across members because two different clusters will accept different writes.
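As a sketch, using etcd's YAML configuration file (the keys mirror the command line flags) with hypothetical member names and addresses, the replacement member should list the full membership and join as an existing cluster:

```yaml
# etcd config file for the replacement member (names and IPs hypothetical)
name: etcd-4
initial-cluster: etcd-1=https://10.0.0.1:2380,etcd-2=https://10.0.0.2:2380,etcd-3=https://10.0.0.3:2380,etcd-4=https://10.0.0.4:2380
initial-cluster-state: existing   # join the running ensemble instead of bootstrapping a new one
initial-advertise-peer-urls: https://10.0.0.4:2380
listen-peer-urls: https://10.0.0.4:2380
```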
The new member was not added to the cluster using etcdctl member add is not the best answer because although using etcdctl member add is the recommended way to register a new member it is a mechanism to generate the correct startup parameters. The direct cause of the symptom described is a mismatched bootstrap configuration like an incorrect --initial-cluster setting rather than the mere omission of running the command.
The etcd data directory on the replacement node has incorrect filesystem permissions is unlikely to produce the described symptom because permission problems normally prevent the member from starting or produce clear I/O errors in the logs. That will not cause the rest of the ensemble to diverge in their data.
The kube-scheduler is not aware of the new etcd member is incorrect because the kube-scheduler is a Kubernetes control plane component that schedules pods and it does not participate in etcd clustering. Scheduler awareness does not affect etcd member consistency.
When bringing up a fresh etcd member with an empty data directory always verify the --initial-cluster value or use etcdctl member add and follow its startup instructions so the node joins the existing ensemble instead of bootstrapping a new one.
What is the main role of a service account in a Kubernetes cluster?
- ✓ D. Authenticate and authorize workloads inside pods to access the Kubernetes API
The correct answer is Authenticate and authorize workloads inside pods to access the Kubernetes API.
A service account is a Kubernetes identity that represents processes running in pods. It provides credentials such as tokens that pods use to authenticate to the Kubernetes API and it works with Role Based Access Control to determine what the workload is authorized to do.
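A minimal sketch of the pattern, with hypothetical names, creates a ServiceAccount and attaches it to a Pod through serviceAccountName so the workload's API requests carry that identity:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: billing-reader        # hypothetical service account name
  namespace: payments         # hypothetical namespace
---
apiVersion: v1
kind: Pod
metadata:
  name: billing-worker        # hypothetical Pod name
  namespace: payments
spec:
  serviceAccountName: billing-reader   # the workload authenticates to the API as this identity
  containers:
    - name: worker
      image: registry.example.com/billing:1.0   # hypothetical image
```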
Define network traffic rules for pods and services is incorrect because network policies and service resources control traffic and not service accounts. Service accounts do not configure network rules.
Use Google Cloud IAM to manage external user access is incorrect because that describes cloud provider IAM and external user management. Kubernetes service accounts are cluster identities and are separate from Google Cloud IAM service accounts even though they can be integrated.
Encrypt communication between the control plane and worker nodes is incorrect because encryption and TLS between components are handled by certificates and API server and kubelet configuration. Service accounts do not perform that encryption.
When a question mentions workloads in pods or access to the Kubernetes API favor answers about service accounts. If it mentions network controls or encryption instead look for answers related to NetworkPolicy or TLS.
Which file format is typically used to author the manifest that defines a Kubernetes Pod for deployment?
- ✓ D. YAML
YAML is the correct option.
YAML is the typical file format used to author Kubernetes Pod manifests because it is human readable and it maps directly to the same structured data that the API accepts as JSON. Most official examples and tutorials show manifests in YAML and kubectl commonly applies files written in YAML. YAML also supports comments which makes long manifests easier to document and maintain.
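For reference, a minimal Pod manifest in YAML looks like the sketch below (the name and image are arbitrary). You would apply it with kubectl apply -f pod.yaml:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod             # hypothetical name
  labels:
    app: hello
spec:
  containers:
    - name: hello
      image: nginx:1.25       # any container image works here
      ports:
        - containerPort: 80
```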
JSON is technically supported by the Kubernetes API and you can submit manifests as JSON. It is not the typical authoring format on exams and examples because it is less convenient to write by hand than YAML.
HCL is the HashiCorp Configuration Language used by Terraform and other HashiCorp tools. It is not used to author Kubernetes Pod manifests and kubectl does not accept HCL files as manifests.
XML is not a supported manifest format for Kubernetes Pod definitions. The Kubernetes API expects structured data expressed as YAML or JSON and XML is not used in normal workflows.
When you see a question about Kubernetes manifests pick YAML because most examples and exam scenarios use it. Remember that the API can accept JSON so do not confuse what is typical with what is technically possible.
Within the Kubernetes community how do Special Interest Groups and Working Groups differ in their focus and responsibilities? (Choose 2)
- ✓ A. Special Interest Groups concentrate on particular technical components like networking, storage, and scheduling
- ✓ C. Working Groups coordinate initiatives that span multiple Special Interest Groups or that affect the entire project
The correct options are Special Interest Groups concentrate on particular technical components like networking, storage, and scheduling and Working Groups coordinate initiatives that span multiple Special Interest Groups or that affect the entire project.
Special Interest Groups concentrate on particular technical components like networking, storage, and scheduling are focused teams that organize contributors around a specific area of the code base or ecosystem. They drive the technical design, API changes, implementation, and ongoing maintenance for that component and they coordinate SIG-level meetings and subproject work.
Working Groups coordinate initiatives that span multiple Special Interest Groups or that affect the entire project handle cross-cutting efforts that require collaboration across multiple SIGs or that touch project wide processes. Working groups are often temporary or chartered to address a broad initiative such as release tooling, security audits, or multi SIG feature work and they emphasize coordination rather than being the primary code owner for a single component.
A Special Interest Group is primarily a documentation resource that publishes procedural guides for contributors is incorrect because SIGs are not limited to documentation. SIGs own technical design and implementation for specific areas and they may produce documentation as part of that work but documentation is not their sole or primary role.
Working Groups maintain code and manage day-to-day merges for a single component is incorrect because working groups focus on cross SIG coordination and project wide initiatives. Day-to-day code maintenance and merges for a single component are typically handled by the SIG that owns that component.
When you see a phrase like particular technical components think SIG and when you see span multiple SIGs or project wide think WG. Focus on whether the scope is component level or cross-cutting.
Your team at StellarApps manages a Kubernetes cluster and a mission critical pod occasionally crashes due to an intermittent bug. The application should be restarted automatically when it fails but it must not restart after the container exits successfully. Which restart policy will restart the pod only when it fails and not after a successful completion?
- ✓ C. Set restartPolicy to OnFailure
Set restartPolicy to OnFailure is correct because this setting restarts the pod only when the container exits with a failure and it does not restart after a successful completion.
With Set restartPolicy to OnFailure the kubelet will attempt to restart containers that exit with a non zero status code. It will not restart a container that exits with status zero which is treated as a successful completion. This behavior matches the requirement to retry on crashes but not restart after normal exits.
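A minimal sketch of the setting in a Pod spec, with a hypothetical name and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: retry-on-crash        # hypothetical name
spec:
  restartPolicy: OnFailure    # restart only on non-zero exit codes, not on success
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
```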
Set restartPolicy to Always is incorrect because Set restartPolicy to Always causes containers to be restarted regardless of exit status including successful exits so it would restart after a normal completion.
Set restartPolicy to Never is incorrect because Set restartPolicy to Never prevents any restarts so a crashed container would not be restarted.
Use a completion only restart policy is incorrect because there is no built in “completion only” restartPolicy in Kubernetes. The valid restartPolicy values are Always, OnFailure, and Never, and workloads that run to completion typically use Jobs with appropriate completion settings and a compatible restartPolicy.
Remember that restartPolicy is defined in the Pod spec and valid values are Always, OnFailure, and Never. Match the desired restart behavior to the exact policy name when answering exam questions.
Which Kubernetes resource should you use to declare access rules for API objects that are confined to a single namespace?
- ✓ D. Role
The correct answer is Role.
Role is a namespaced RBAC object that declares a set of permissions for API resources within a single namespace. A Role contains rules that list verbs and resources and it is the resource you use when you want to limit permissions to that namespace only. To grant the Role to users or service accounts you create a RoleBinding that references the Role.
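As a sketch with hypothetical names, a namespaced Role that allows reading Pods and a RoleBinding that grants it to a service account look like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # hypothetical Role name
  namespace: team-alpha       # hypothetical namespace
rules:
  - apiGroups: [""]           # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods             # hypothetical binding name
  namespace: team-alpha
subjects:
  - kind: ServiceAccount
    name: ci-runner           # hypothetical service account
    namespace: team-alpha
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```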
ClusterRole is cluster scoped and it is intended for cluster wide permissions. Although a ClusterRole can be bound into namespaces, it is not the resource that defines rules confined to a single namespace.
RoleBinding does not declare permissions. It attaches a Role or a ClusterRole to subjects so it is used to grant access rather than to define the rules themselves.
ServiceAccount is an identity for pods and workloads and it does not declare access rules. It is used as a subject that can be granted roles but it is not the RBAC object that specifies permissions.
When a question asks about permissions limited to one namespace think Role and remember that a RoleBinding is used to grant that role to subjects.
Which declarative GitOps tool was first created by a financial software firm and is a graduated CNCF project?
- ✓ D. Argo
The correct option is Argo.
Argo is a Kubernetes native declarative GitOps family of tools that includes Argo CD for GitOps style continuous delivery. Argo CD was originally developed at Intuit which is a financial software company and the Argo project later joined the Cloud Native Computing Foundation and achieved graduated status, so it matches both parts of the question.
Spinnaker is a multi cloud continuous delivery platform that originated at Netflix, not at a financial software firm, so it is incorrect.
Flux is indeed a declarative GitOps tool but it was created by Weaveworks rather than a financial software company, so it does not match the origin described in the question.
Kubernetes is a container orchestration system originally developed by Google and it is not a declarative GitOps delivery tool, so it is not the correct answer.
When a question asks about a project origin check the company that originally developed it and the CNCF project status. Look for the vendor or project documentation to confirm who created the tool and whether it is graduated.
Which container runtime does Kubernetes support out of the box and which has traditionally been the default implementation?
- ✓ D. Docker Engine
The correct option is Docker Engine.
Kubernetes historically used Docker Engine as the default container runtime because the kubelet included a dockershim that spoke the Docker Engine API to create and manage containers. That made it the de facto runtime on the majority of clusters for many releases.
Over time the project moved toward the Container Runtime Interface and the dockershim was deprecated and eventually removed in newer Kubernetes releases. This change means that while Docker Engine is the traditional answer, newer exams often focus on CRI compatible runtimes instead.
CRI-O is a CRI implementation designed to work with Kubernetes and it is a supported and popular alternative. It was not the traditional default however so it is not the correct choice for a question about the historical default.
LXC is a Linux container technology but it is not a standard Kubernetes container runtime and Kubernetes does not use LXC as its default runtime. That makes LXC incorrect for this question.
containerd runtime is a modern, CRI compatible runtime and it is widely used today and often recommended for new clusters. It was not the traditional default historically so containerd runtime is not the correct answer to a question asking which runtime was the traditional default.
Pay attention to the wording about historical default versus current or recommended runtimes. If the question asks about the traditional default choose Docker Engine. If it asks about modern or supported runtimes look for answers mentioning containerd or CRI-O.
In a microservices platform that uses serverless functions, which characteristic is generally not associated with a Function as a Service offering?
- ✓ B. Stateful behavior
The correct option is Stateful behavior. Function as a Service offerings are generally designed to be stateless and not to retain in memory state across separate invocations.
Serverless functions are usually short lived and ephemeral so they are started and stopped frequently and scaled out concurrently. Because of that, functions rely on external systems such as databases, caches, or object storage to hold persistent data and to share state across invocations. Designing functions to be stateless makes scaling simple and prevents issues with consistency when many instances run at the same time.
Automatic scaling is not the correct choice because FaaS platforms commonly provide automatic scaling. Functions are typically scaled up and down in response to incoming requests or events without manual intervention.
Short lived execution times is not the correct choice because serverless functions are commonly expected to have short execution durations and many providers enforce or encourage time limits for function runs.
Event driven invocation is not the correct choice because FaaS is often triggered by events such as HTTP requests, messages from queues, or changes in storage, and that event driven model is a core characteristic of serverless functions.
When a question asks what is not associated with FaaS think about persistence and lifetime. Remember that serverless functions are ephemeral and designed to be stateless so any long lived or in memory state is a sign that the choice is incorrect.
Your Kubernetes cluster hosts several squads at NovaApps that each develop separate microservices. You need to enforce per-team resource limits while allowing teams to temporarily overcommit resources during traffic spikes. What two methods will best accomplish this? (Choose 2)
- ✓ B. Create ResourceQuota objects with hard CPU and memory limits per namespace
- ✓ D. Apply LimitRange to set default requests and permissive limits for containers in each namespace
Create ResourceQuota objects with hard CPU and memory limits per namespace and Apply LimitRange to set default requests and permissive limits for containers in each namespace are the correct choices.
Create ResourceQuota objects with hard CPU and memory limits per namespace enforces a namespace level cap on total CPU and memory consumption so each team has a hard budget and cannot exceed it over the long term. This is the right mechanism to guarantee per team limits across all pods in a namespace.
Apply LimitRange to set default requests and permissive limits for containers in each namespace provides sensible defaults for requests so pods schedule predictably while allowing permissive limits that let teams temporarily overcommit during spikes. Combined with a ResourceQuota this approach balances scheduling behavior with an overall cap.
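A sketch of the combination for a hypothetical team-alpha namespace follows. The quota caps the namespace budget while the LimitRange sets modest default requests and deliberately higher default limits so containers can burst during spikes:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota            # hypothetical name
  namespace: team-alpha       # hypothetical namespace
spec:
  hard:
    requests.cpu: "8"         # hard budget across the whole namespace
    requests.memory: 16Gi
    limits.cpu: "16"          # permissive ceiling allows bursting above requests
    limits.memory: 32Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: team-defaults         # hypothetical name
  namespace: team-alpha
spec:
  limits:
    - type: Container
      defaultRequest:         # what pods are scheduled against by default
        cpu: 100m
        memory: 128Mi
      default:                # default limit, set higher to permit temporary overcommit
        cpu: 500m
        memory: 512Mi
```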
Horizontal Pod Autoscaler using custom metrics is about scaling replica counts based on metrics and it does not provide namespace level caps or per team resource budgets. It helps with handling load but it is not a mechanism to enforce team quotas.
GKE Autopilot is a managed cluster mode that changes how resources and requests are handled at the node and control plane level and it is not a configuration method to implement per namespace hard quotas plus permissive defaults. Choosing Autopilot changes the operational model rather than directly providing the requested quota behavior.
Enable Vertical Pod Autoscaler adjusts container resource requests over time to match observed usage and it can conflict with horizontal scaling strategies. It is not a substitute for namespace level ResourceQuota and does not enforce hard per team limits.
Use ResourceQuota for hard, per namespace limits and pair it with LimitRange to set defaults that allow short lived overcommit while keeping overall budgets enforced.
At a medium sized company called Mariner Tech you are investigating high end-to-end response times in a Kubernetes hosted microservice platform. Which of the following checks is least likely to help identify the root cause of request latency?
- ✓ B. Monitoring load on the Kubernetes API server
The correct answer is Monitoring load on the Kubernetes API server.
Monitoring load on the Kubernetes API server is the least likely to directly identify the root cause of end-to-end request latency because the API server is a control plane component and it does not sit in the data path for most service to service requests. Control plane load can affect scheduling and management operations but it rarely increases per-request network or application processing times unless the control plane is so degraded that it causes node or pod churn.
Measuring network round trip times between service endpoints is not the right answer because network latency directly contributes to end-to-end response times. Measuring RTT between pods and services can reveal congestion, overlay issues, or misconfigured network policies that increase request latency.
Inspecting pod initialization and readiness durations is not the right answer because slow startups and delayed readiness can cause cold starts or queuing that raise perceived latency. Those timings point to resource pressure or image pull delays that impact responsiveness.
Reviewing the refresh frequency of the client side interface is not the right answer because client behavior can create extra load or make latency appear worse. A client polling too often or not handling caching properly can generate excess requests and amplify latency symptoms.
When a question asks about request latency think about whether the component sits in the data plane or the control plane. Prioritize checks of network, pod readiness, and client behavior before looking at control plane metrics.
Which Kubernetes component acts as a node level agent on each worker machine and is responsible for ensuring containers described in Pod specifications are started and monitored?
- ✓ D. kubelet
The correct answer is kubelet.
The kubelet runs on every worker node as the node level agent and it watches the API server for Pod specifications and ensures the containers described in those Pods are started, monitored, and reported back to the control plane. The kubelet talks to the node’s container runtime through the Container Runtime Interface and performs health checks and lifecycle management for containers on that node.
etcd is the cluster key value store that holds cluster state and configuration and it does not act as a node agent or start containers.
docker is a container runtime and not the Kubernetes node agent that monitors Pod specifications and manages container lifecycle. Note that the built in dockershim was removed in newer Kubernetes releases and runtimes now implement the CRI, so Docker is not the component responsible for the behavior described.
kubectl is a client command line tool used to interact with the API server and it does not run on worker nodes to manage or monitor containers.
When a question asks which component runs on each worker node and enforces Pod specs look for kubelet as the only component that both runs on every node and manages container lifecycles.
A team at Marlin Apps is deploying a multi component web platform to a Kubernetes cluster and they want a single tool to simplify installation, upgrades, and ongoing management for the entire application. Which tool would you use to make deploying and maintaining the platform easier?
- ✓ C. Helm
The correct answer is Helm.
Helm is a package manager for Kubernetes that packages multi component applications into charts and it lets teams install, upgrade, and manage the entire platform as a unit. Helm provides templating for manifests, dependency handling, release history, upgrades, and rollbacks which simplifies ongoing management of complex applications.
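As a sketch, a chart's values.yaml might expose the knobs that vary between installs (everything below is hypothetical), and you would install or upgrade the release with helm install platform ./platform-chart -f values.yaml or helm upgrade platform ./platform-chart -f values.yaml:

```yaml
# values.yaml sketch for a hypothetical chart
replicaCount: 3
image:
  repository: registry.example.com/platform-web   # hypothetical image
  tag: "2.4.1"
ingress:
  enabled: true
  host: shop.example.com      # hypothetical hostname
```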
gcloud is the Google Cloud command line tool for managing cloud resources and it is not a Kubernetes package manager so it does not handle chart based installs or release management for multi component apps.
kube-proxy is a cluster networking component that implements service load balancing and network rules on nodes and it is not used to deploy, package, or manage application releases.
kubectl is the Kubernetes command line for interacting with the API and it can apply manifests and perform imperative operations but it does not provide chart packaging, templating, dependency management, or release upgrade and rollback features that Helm offers.
When a question mentions installing, upgrading, or managing a multi component application look for words like install, upgrade, rollback, or chart. Those clues usually point to a package manager such as Helm.
How do taints applied to nodes interact with pod tolerations when the Kubernetes scheduler decides which nodes can accept a pod?
- ✓ C. Taints on nodes prevent pods without matching tolerations from being scheduled and tolerations permit pods to be placed on nodes that have those taints
The correct answer is Taints on nodes prevent pods without matching tolerations from being scheduled and tolerations permit pods to be placed on nodes that have those taints.
A taint applied to a node marks that node so the scheduler will not place pods that do not explicitly tolerate the taint. Taints are defined with a key and optional value and an effect such as NoSchedule, PreferNoSchedule, or NoExecute.
A pod toleration is a declaration in the pod spec that it can tolerate a taint. Tolerations do not force the scheduler to place a pod on a tainted node and they do not bypass other scheduling constraints. They only allow the pod to be considered for nodes that have matching taints.
The NoExecute effect will evict existing pods that do not tolerate the taint and it also prevents new pods without a matching toleration from being scheduled on that node.
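As a sketch, assuming a node tainted with kubectl taint nodes node-1 dedicated=payments:NoSchedule, a pod that should be allowed onto that node declares a matching toleration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payments-worker       # hypothetical name
spec:
  tolerations:
    - key: dedicated          # matches the taint key on the node
      operator: Equal
      value: payments
      effect: NoSchedule      # tolerate NoSchedule taints with this key and value
  containers:
    - name: app
      image: registry.example.com/payments:1.0   # hypothetical image
```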
Taints act like node label selectors and tolerations act like pod label selectors is incorrect because taints and tolerations are not label selectors. Node selectors and node affinity use labels to select nodes and that is a different mechanism than taints and tolerations.
Taints on nodes define resource limits and tolerations on pods declare resource requests is incorrect because resource requests and limits are defined in the pod spec resources section and taints do not set resource capacities or requests.
GKE node pools decide where pods run and tolerations only apply to node pool membership is incorrect because node pools are a provider grouping of nodes and do not replace taints and tolerations. Tolerations apply to taints on nodes regardless of node pool membership and you can use labels or node selectors to target specific pools.
When you see taints and tolerations think of an allow or block rule. Use kubectl describe node to view node taints and inspect the pod spec for tolerations when troubleshooting scheduling problems.
What principles help you design a fault tolerant application that can recover itself and keep its intended state over time? (Choose 4)
- ✓ B. Define the application as code to express the desired state
- ✓ C. Continuously observe application health and performance
- ✓ D. Use a reconciliation loop to detect drift and apply corrective actions
- ✓ E. Automatically adjust resources based on traffic using autoscaling
The correct answers are Define the application as code to express the desired state, Continuously observe application health and performance, Use a reconciliation loop to detect drift and apply corrective actions, and Automatically adjust resources based on traffic using autoscaling.
Define the application as code to express the desired state is important because declaring the desired state makes the system auditable and reproducible and it lets controllers and automation know what to maintain over time.
Continuously observe application health and performance is required so that the system can detect failures and performance regressions and provide the signals needed for corrective actions.
Use a reconciliation loop to detect drift and apply corrective actions describes the controller pattern that compares actual state to desired state and then takes actions to return the system to the desired state.
Automatically adjust resources based on traffic using autoscaling ensures the application can handle variable load and recover capacity quickly without manual intervention.
Rely on manual configuration changes and one off fixes is incorrect because manual changes are slow and error prone and they do not provide a reliable way to maintain the intended state over time or to recover automatically from faults.
When you see choices that mention desired state, reconciliation, observability, or autoscaling prefer them over answers that rely on manual work.
A platform team at Nimbus Cloud is adding custom resources to their Kubernetes cluster by using the API aggregation layer. Which API server components participate in implementing the aggregation layer? (Choose 2)
- ✓ C. kube-apiserver
- ✓ E. extension-apiserver
The correct options are kube-apiserver and extension-apiserver.
The kube-apiserver implements the aggregation layer and acts as the aggregator that accepts requests for aggregated API groups and proxies them to registered backend servers using APIService objects. It also handles cluster wide concerns such as authentication and authorization so that aggregated servers can focus on serving their API groups.
The extension-apiserver is an example of an aggregated API server that implements additional API groups or custom resources and registers with the aggregator via APIService records. It typically runs as a separate process or pod and the aggregator routes matching API calls to it.
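Registration happens through an APIService object. The sketch below is illustrative only, assuming an extension server exposed by a Service named custom-metrics-apiserver in kube-system and a made-up API group:

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.example.com
spec:
  group: metrics.example.com        # hypothetical API group
  version: v1beta1
  service:
    name: custom-metrics-apiserver  # assumed backing Service
    namespace: kube-system
  groupPriorityMinimum: 100
  versionPriority: 15
  # caBundle: <base64 CA used by the aggregator to verify the extension server>
```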
cloud-controller-manager is responsible for cloud provider specific controllers and it does not act as an API server so it is not part of the aggregation layer.
kube-controller-manager runs built in controllers that reconcile cluster state and it does not host or register API endpoints for aggregation.
kube-scheduler is responsible for placing pods onto nodes and it does not provide or register API endpoints so it does not participate in API aggregation.
When you encounter aggregation layer questions look for components that are actual API servers and that register via APIService. Remember that controllers and the scheduler manage cluster behavior and scheduling and they do not serve aggregated APIs.
In a Kubernetes Deployment manifest for a new microservice at NimbusTech which Deployment field allows you to limit how many Pods may be unavailable at a time during a rolling update?
- ✓ D. maxUnavailable field
maxUnavailable field is the correct option because it defines how many Pods may be unavailable at a time during a rolling update of a Deployment.
The maxUnavailable field lives under spec.strategy.rollingUpdate in the Deployment spec and it accepts either an absolute number or a percentage. Kubernetes uses that value to cap how many Pods can be terminated or not ready while new Pods are being created, so the update maintains the intended availability.
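For example, a minimal Deployment strategy sketch, with illustrative names and image, might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout          # illustrative name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod may be down during the rollout
      maxSurge: 25%       # up to one extra Pod above the desired count
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
        - name: app
          image: registry.example.com/checkout:1.2.0   # illustrative image
```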
The replicas field is incorrect because it only sets the desired total number of Pods for the Deployment and it does not control rolling update availability or how many Pods may be unavailable during an update.
The maxSurge field is incorrect because it controls how many extra Pods may be created above the desired replicas during an update to speed rollout. It is complementary to the maxUnavailable field but it does not limit unavailable Pods.
The strategy.type field is incorrect because it selects the update strategy such as RollingUpdate or Recreate and it does not itself set counts for unavailable Pods. When RollingUpdate is selected the controller uses settings like the maxUnavailable field and the maxSurge field to control the update behavior.
When asked about how many pods can be down during an update remember that maxUnavailable controls allowed downtime and maxSurge controls extra temporary Pods created during the rollout.
A fintech startup called Borealis Commerce is deciding between push based and pull based deployment models for its Kubernetes clusters. What is the main difference between these two deployment styles?
- ✓ A. Pull based deployments run an in cluster agent that fetches desired state from a Git repository and applies changes locally
Pull based deployments run an in cluster agent that fetches desired state from a Git repository and applies changes locally is the correct option.
With a pull based model, an agent such as Flux or Argo CD runs inside the cluster and continuously reconciles the cluster against the desired state stored in Git. The agent performs the apply operations locally, so external CI systems do not need direct access to the cluster and you do not have to distribute long lived cluster credentials from a central pipeline to every cluster.
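As a hedged sketch of the pull model, assuming the Flux controllers are already installed and using an illustrative repository URL and path, the in cluster agent could be configured like this:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-config
  namespace: flux-system
spec:
  interval: 5m                                  # how often to fetch Git
  url: https://github.com/example/app-config    # illustrative repository
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 10m            # how often to reconcile cluster state
  path: ./deploy           # assumed manifest directory in the repo
  prune: true              # delete resources that were removed from Git
  sourceRef:
    kind: GitRepository
    name: app-config
```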
Push based deployments require an external CI pipeline to connect to the cluster and push updates which may require storing cluster credentials externally is incorrect for this question because it describes an implementation detail rather than the defining distinction. Push pipelines often do connect to the cluster and push changes, but the key difference is where the reconciler runs and which side initiates the change.
Pull based approaches always avoid exposing credentials to any external system so they are intrinsically more secure is wrong because pull approaches reduce the need to distribute credentials but they do not automatically eliminate credential exposure. The in cluster agent still needs appropriate permissions and you must secure Git access and any secrets used by the agent to keep the system secure.
Cloud Build is incorrect because it names a specific CI product and does not describe the conceptual difference between push and pull deployment models.
Focus on where the reconciler runs when you see push versus pull. Remember that pull means an in cluster agent reconciles state and push means an external pipeline applies changes.
Which situations could cause a Pod to be evicted because of resource shortages on a node? (Choose 2)
- ✓ B. The node is experiencing memory pressure and the container has hit its memory limit
- ✓ C. The Pod declares no resource requests or limits
The correct options are The node is experiencing memory pressure and the container has hit its memory limit and The Pod declares no resource requests or limits.
The node is experiencing memory pressure and the container has hit its memory limit is correct because the kubelet can evict pods to relieve node memory pressure, and a container that exceeds its memory limit is terminated by the out of memory killer, so both conditions can remove the Pod from the node.
The Pod declares no resource requests or limits is correct because a pod without requests or limits falls into the BestEffort quality of service class and BestEffort pods are the first candidates for eviction when node resources are scarce.
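To make the QoS distinction concrete, here is a sketch contrasting a BestEffort Pod with a Burstable one. Names, image, and resource values are illustrative.

```yaml
# BestEffort: no requests or limits, so it is evicted first under node pressure
apiVersion: v1
kind: Pod
metadata:
  name: besteffort-pod
spec:
  containers:
    - name: app
      image: nginx
---
# Burstable: requests are set, so it outranks BestEffort Pods during eviction
apiVersion: v1
kind: Pod
metadata:
  name: burstable-pod
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          memory: 128Mi
          cpu: 100m
        limits:
          memory: 256Mi
```

You can confirm the assigned class with kubectl get pod besteffort-pod -o jsonpath='{.status.qosClass}'.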
The node has run out of ephemeral storage and the kubelet evicts Pods due to disk pressure is not among the expected answers here, even though disk pressure and ephemeral storage thresholds can cause evictions in Kubernetes. For this particular question it serves as a distractor, since the scenarios asked about focus on memory pressure and QoS class.
The Pod requests more CPU than any node can provide at scheduling time is incorrect because that situation prevents the pod from being scheduled in the first place and does not cause an eviction on a node.
The Pod sets a CPU limit that is lower than its CPU request is incorrect because requests must not exceed limits and a pod with a request greater than its limit is invalid or will be rejected rather than being evicted for resource shortage on a node.
Remember that QoS class affects eviction order so pods with no requests are BestEffort and are evicted first under node pressure.
What is the full name represented by the CNCF initialism?
- ✓ B. Cloud Native Computing Foundation
The correct answer is Cloud Native Computing Foundation.
Cloud Native Computing Foundation is the nonprofit organization that supports and sustains the cloud native ecosystem. The initials C N C F map directly to the words Cloud Native Computing Foundation and the organization hosts and incubates open source projects such as Kubernetes, Prometheus, and Envoy which demonstrate its role in cloud native computing.
Anthos is a Google Cloud product for hybrid and multi cloud application management and it is not the expansion of the CNCF initialism.
Continuous Network Connectivity Framework is not a recognized name for the CNCF and it does not reflect the foundation and open source focus represented by the actual organization.
Cloud Native Compliance Framework sounds plausible but it is incorrect because the CNCF stands for Cloud Native Computing Foundation and not a compliance framework.
When expanding an initialism match each letter to the words in the options and eliminate choices that are product names or that use different key words. Focus on the exact words used in each option to avoid traps.
In a Kubernetes cluster which component is chiefly responsible for allowing pods on separate nodes to reach one another?
- ✓ C. Container Network Interface plugins
The correct option is Container Network Interface plugins.
The Container Network Interface plugins implement the pod network model and they configure the network interfaces, IP addresses, and routing that let pods communicate across nodes. CNI plugins are the pieces that create the overlay or route topology for pods and integrate with the kubelet to ensure each pod can be reached from other nodes.
The kube-proxy component programs service IPs and packet forwarding rules on each node and it handles Service traffic. It does not implement the underlying pod-to-pod network that provides cross node connectivity.
The Google Cloud VPC is the cloud provider network and it can carry traffic between nodes in a cluster, but it is not the Kubernetes component that configures pod interfaces or implements the pod networking model. Kubernetes still relies on a CNI plugin to set up pod networking.
The kube-scheduler assigns pods to nodes based on resource and constraint evaluation and it does not manage network configuration or enable pod-to-pod connectivity.
When a question asks who makes pods able to talk to each other think of the component that configures interfaces and routes. Remember that CNI plugins implement pod networking while kube-proxy only handles Service proxying.
All exam questions come from my KCNA Udemy Course and certificationexams.pro
When an exporter stops sending a time series to a Prometheus server how does Prometheus treat the now stale metric?
- ✓ B. It represents the metric value as NaN to indicate staleness
The correct answer is It represents the metric value as NaN to indicate staleness.
Prometheus emits a special sample that has the value NaN to mark that a series has become stale and to signal that the metric is effectively missing for that time range. The NaN marker ensures that graphs show a gap and that query operations treat the data point as absent instead of silently using an old value.
It marks the series with a tombstone and waits for retention policies to drop it is incorrect because tombstones are used when series are explicitly deleted through the deletion API and not for normal exporter stoppage. Automatic staleness does not create tombstones.
It retains the last observed value indefinitely is incorrect because Prometheus does not keep serving the last value when an exporter stops. Instead it inserts the NaN stale marker so that the series is treated as missing rather than presenting a stale constant value.
It immediately removes the time series from the database is incorrect because Prometheus does not instantly delete the series when an exporter disappears. The series metadata and recent samples are retained according to TSDB retention and compaction rules and deletion only occurs by retention expiry or explicit deletion.
When a question mentions Prometheus staleness remember that Prometheus uses NaN markers to indicate missing series and that queries and graphing treat those markers as absent data rather than valid numeric values.
At NovaCloud Solutions a platform team needs a Pod to run only on nodes within a particular availability zone of their cluster. Which field in the Pod specification should they configure to target nodes in that specific availability zone?
- ✓ C. nodeSelector
The correct option is nodeSelector.
nodeSelector is a simple map of key value pairs in the Pod spec that tells the scheduler to place the Pod only on nodes that have matching labels. To target a specific availability zone you set the zone label such as topology.kubernetes.io/zone as a key in the selector so the Pod will run only on nodes in that zone.
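A minimal sketch, using an illustrative zone value, looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zonal-worker
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east1-b   # illustrative zone label value
  containers:
    - name: app
      image: nginx
```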
affinity is incorrect because affinity is a broader field that includes nodeAffinity and offers richer matching rules and preferences, but the question asks which Pod spec field to configure to directly target nodes in a specific zone and the simplest direct field is nodeSelector.
annotations are incorrect because annotations are metadata for storing non functional information and they do not influence the scheduler when choosing nodes.
tolerations are incorrect because tolerations allow Pods to tolerate node taints and they do not by themselves select nodes by label or zone. Tolerations work with taints but do not target availability zones.
When a question asks for a direct Pod spec field to pin pods to nodes by label look for nodeSelector for simple equality matches and prefer nodeAffinity when you need operators or weighted preferences.
A SRE at Solaris Labs needs to access the Kubernetes control plane from a laptop without making the API server publicly reachable. What is the main purpose of the kubectl proxy command?
- ✓ C. Start a local HTTP proxy that forwards requests to the cluster API server for secure local access
The correct option is Start a local HTTP proxy that forwards requests to the cluster API server for secure local access.
Running kubectl proxy starts a small HTTP server on the laptop and forwards requests to the Kubernetes API server, so you can access the control plane without making the API publicly reachable. The proxy relies on your kubeconfig context and the API server's authentication and authorization, so local tools and web UIs can call the API securely through the local endpoint.
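For example:

```shell
# Start the local proxy (8001 is the default port)
kubectl proxy --port=8001

# In a second terminal, query the API through the local endpoint
curl http://localhost:8001/api/v1/namespaces/default/pods
```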
The Open a web based dashboard for cluster monitoring option is incorrect because kubectl proxy does not itself provide a dashboard though it can be used to access a dashboard that is running in the cluster.
The Create a proxy to route external client traffic directly to Kubernetes services option is incorrect because kubectl proxy only forwards HTTP requests to the API server and does not proxy arbitrary external traffic to in-cluster services. For exposing services you would use an ingress, LoadBalancer, NodePort, or a service proxy solution.
The Establish a direct port forward connection to a specific pod for troubleshooting option is incorrect because that describes kubectl port-forward which maps local ports to pod ports. Port forwarding targets individual pod ports and does not act as a general HTTP proxy to the API server.
When a question asks about accessing the API locally remember that kubectl proxy runs an HTTP server on localhost to forward API requests and keeps the API server private. Do not confuse this with kubectl port-forward which targets pod ports.
Within a Kubernetes cluster how do controllers ensure that resources remain aligned with the declared configuration in resource manifests?
- ✓ C. Controllers implement a declarative reconciliation loop that compares the actual cluster state to the desired state and makes corrections as needed
The correct option is Controllers implement a declarative reconciliation loop that compares the actual cluster state to the desired state and makes corrections as needed.
Kubernetes controllers continuously watch resources through the API server and run a reconciliation loop that detects drift between the declared manifests and the actual cluster state. When differences are found the controller issues API operations to create, update, or delete objects so the cluster converges to the declared state. This automated loop is the core of the declarative model and provides self healing behavior.
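You can observe reconciliation directly: delete a Deployment managed Pod and the controller immediately creates a replacement. The Pod name below is illustrative.

```shell
kubectl delete pod checkout-5d8f7b9c4-abcde   # illustrative Pod name
kubectl get pods --watch                      # a replacement Pod appears
```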
Controllers act mainly as passive observers that report status to the Kubernetes API server is incorrect because controllers do more than observe and report. They actively reconcile and take actions to bring the cluster toward the desired state.
Controllers directly update manifest files in version control to enforce desired states is incorrect because controllers operate against the Kubernetes API and do not modify your version control system. Source of truth in Git is usually managed by separate GitOps tools rather than in-cluster controllers themselves.
Controllers perform occasional polls and then execute manual fixes when they detect discrepancies is incorrect because controllers perform continuous reconciliation and apply automated corrections. They do not rely on manual intervention to enforce desired state.
When a question asks how Kubernetes keeps resources aligned look for the phrase reconciliation loop or desired vs actual state as those indicate controller driven reconciliation.
In a cloud native deployment why do engineers prefer designing services that do not hold local state?
- ✓ C. They can be scaled horizontally and recover with less complexity
The correct option is They can be scaled horizontally and recover with less complexity.
Designing services to avoid local state makes individual instances interchangeable which lets orchestration and autoscaling add or remove replicas without complex data synchronization. This is why They can be scaled horizontally and recover with less complexity is the best answer.
Cloud Run is incorrect because it names a platform rather than explaining a design benefit. The question asks why engineers prefer a stateless design and not which service can run that design.
They reduce the reliance on persistent storage is misleading because stateless services typically move state to external durable stores rather than removing the need for persistence. The design shifts where data is stored and does not inherently eliminate persistent storage.
Troubleshooting services is simpler without local state is not the primary reason and can be false in practice. Removing local state can improve reproducibility but debugging can still be complex and local state can sometimes aid diagnosis.
When you see questions about state and scaling choose answers that mention horizontal scaling or easier recovery rather than product names or vague claims about storage.
Nimbus Systems runs a Kubernetes environment and uses etcd as the cluster data store. What is the primary function of that etcd data store?
- ✓ C. Store and synchronize cluster configuration and state data
Store and synchronize cluster configuration and state data is correct. etcd is the distributed key value store that the Kubernetes control plane uses to persist and synchronize the desired state and the actual state of the cluster.
etcd is a strongly consistent, distributed key value database and the kube-apiserver reads from and writes to it. The control plane stores objects such as nodes, pods, deployments, ConfigMaps, and Secrets in etcd so that all control plane components can observe and reconcile cluster state.
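As a hedged sketch, on a kubeadm style control plane node you could list the keys Kubernetes stores in etcd like this. The certificate paths reflect kubeadm defaults and may differ in your cluster.

```shell
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/deployments --prefix --keys-only
```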
Host container images in a registry is incorrect because container images are stored in container registries such as Docker Hub, Harbor or a cloud registry and not in etcd.
Cloud Pub/Sub is incorrect because that is a messaging service model and not the function of the Kubernetes cluster data store.
Manage network packet routing between pods is incorrect because pod networking and packet routing are handled by CNI plugins and kube proxy or other network components and not by etcd.
When a question asks about where Kubernetes keeps cluster state think of etcd and the kube-apiserver together. Remember that image registries and networking are separate concerns from the cluster data store.
| Jira, Scrum & AI Certification |
|---|
| Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.
You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today. |
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel, which trains developers in Java, Spring, AI and ML, has well over 30,000 subscribers.
