AZ-104 Exam Dumps and Azure Administrator Associate Braindumps
All Azure questions are from my AZ-104 Udemy Course and certificationexams.pro
Microsoft AZ-104 Certification Exam Topics
Despite the title of this article, this is not an AZ-104 exam braindump in the traditional sense.
I do not believe in cheating.
Traditionally, the term braindump referred to someone taking an exam, memorizing the questions, and sharing them online for others to use.
That practice is unethical and violates the certification agreement. It offers no genuine learning or professional development.
This is not an Azure certification exam dump.
All of these questions come from my AZ-104 study materials and from the certificationexams.pro website, which offers hundreds of free AZ-104 practice questions.
Real AZ-104 Sample Questions
Each question has been carefully written to align with the official Microsoft Azure Administrator exam objectives. They reflect the tone, logic, and practical scenarios of real Azure administration tasks, but none are copied from the actual test.
Every question is designed to help you learn, reason, and study AZ-104 certification concepts such as identity management, RBAC, virtual networking, storage, compute, governance, and monitoring.
AZ-104 Administrator Practice Questions
If you can answer these questions and understand why the incorrect options are wrong, you will not only pass the real AZ-104 exam but also gain the foundational knowledge needed to work confidently as a Microsoft Azure administrator.
So if you want to call this your AZ-104 exam dump, that is up to you, but remember that every question here is built to teach the AZ-104 exam objectives, not to cheat.
AZ-104 Certification Sample Questions
All Azure questions are from my AZ-104 Udemy Course and certificationexams.pro
If a client successfully authenticates to a Kubernetes API server but does not have permission to carry out the requested operation, what response does the API server return?
❏ A. The request is placed in a queue for manual review by a cluster operator
❏ B. The server processes the call but only returns non sensitive public information
❏ C. The API server denies the operation and returns an HTTP “403 Forbidden” response
❏ D. The request is recorded in Cloud Audit Logs for later review
In system design, what does the term scale out mean when discussing ways to increase system capacity?
❏ A. Merging several virtual servers into a single more powerful host to reduce operational overhead
❏ B. Refactoring application code to reduce resource usage and speed up execution
❏ C. Provisioning additional servers or instances so that traffic and workloads are distributed across multiple hosts
❏ D. Upgrading a single machine by increasing its CPU, memory, or storage to handle greater demand
A payments startup runs workloads on a managed Kubernetes platform and needs to ensure GPU heavy jobs are scheduled only on a designated set of nodes. What is the recommended cloud native approach to enforce this using node pools?
❏ A. Label pods with gpu=true and rely on the scheduler to match them to GPU nodes
❏ B. Provision a dedicated GPU node pool and apply node taints with matching pod tolerations
❏ C. Use node affinity only to guide GPU workloads to specific nodes
❏ D. Enable GKE node auto provisioning and install the Kubernetes device plugin for GPUs
A team at TidalApps is deploying a service that requires a common filesystem accessible by multiple Pods, which might run on different worker nodes. Which type of storage is most appropriate for this requirement?
❏ A. GCE Persistent Disk
❏ B. HostPath
❏ C. NFS share
❏ D. Local Persistent Volume
In a pod manifest, where do you declare CPU and memory requests and limits for an individual container?
❏ A. A LimitRange object declared in the namespace
❏ B. The resources attribute under the metadata section
❏ C. The resources field inside the containers definition
❏ D. A limits field directly under the spec section
For a team at NimbusSoft using Kubernetes manifests, how do those YAML or JSON files express how resources should be configured, and how does the cluster ensure the actual state matches that configuration?
❏ A. GKE Config Connector
❏ B. Manifests list explicit procedural steps that the cluster must execute to set up resources
❏ C. Manifest files declare the desired end state for Kubernetes objects and the control plane reconciles the live state to reach that declared state
❏ D. Manifests run imperative commands directly on individual nodes so that administrators can control each action
You are responsible for designing identity and access flows for a cloud native platform and you must adopt open standards for authentication and authorization. Which technology should you choose to provide standardized identity federation and authentication?
❏ A. API Key
❏ B. Basic authentication
❏ C. OpenID Connect (OIDC)
❏ D. Google Cloud IAM
How would you describe a Kubernetes cluster in terms of the machines it contains and the roles those machines perform?
❏ A. A group of linked servers used mainly for serving websites
❏ B. A set of nodes that cooperate to run containerized applications under Kubernetes control
❏ C. A collection of containers that all use the same IP address
❏ D. Google Kubernetes Engine
A team at Rivet Analytics runs a distributed processing workload that must execute on the same nodes as the data for optimal throughput, and the pods need to be scheduled only on nodes that have local NVMe SSDs attached. Which Kubernetes features help ensure pods land on those SSD equipped nodes? (Choose 2)
❏ A. StatefulSet
❏ B. Taints and tolerations
❏ C. DaemonSet
❏ D. PodSecurityPolicy
❏ E. Node affinity
Which of the following would not be considered an advantage of using a GitOps workflow for deploying and managing applications?
❏ A. Enhanced team productivity
❏ B. Greater system reliability
❏ C. Longer time to market
❏ D. Accelerated feature delivery
❏ E. Consistent and standardized deployments
❏ F. Clearer audit trails
How does a Prometheus server obtain metrics from the services it monitors?
❏ A. By running a local agent on each monitored host that pushes metrics to the server
❏ B. By relying on a managed monitoring agent such as Cloud Monitoring agent
❏ C. By scraping application endpoints that serve metrics over HTTP
❏ D. By querying host hardware interfaces directly to extract system metrics
In a Kubernetes cluster, what is the key distinction between a ConfigMap and a Secret when deciding where to place application settings and credentials?
❏ A. Secrets are automatically encrypted with Cloud KMS while ConfigMaps remain unencrypted
❏ B. ConfigMaps are intended for non sensitive configuration data while Secrets are used to hold sensitive information
❏ C. ConfigMaps are stored as versioned objects while Secrets are not
❏ D. ConfigMaps can only be mounted as files and Secrets may only be consumed as environment variables
In a cloud native Kubernetes deployment, which category of workload is generally not appropriate to run as a Serverless Function as a Service?
❏ A. Cloud Run
❏ B. Continuous real time streaming pipelines
❏ C. Transactions that run for extended durations and require persistent state
❏ D. Batch analytics jobs
A payment processing startup deploys a log collector to run on every Kubernetes host and requires exactly one agent per node. Why would they select a DaemonSet rather than a Deployment?
❏ A. Horizontal Pod Autoscaler
❏ B. StatefulSet
❏ C. Cluster Autoscaler
❏ D. A DaemonSet runs one copy of the Pod on each node in the cluster
You operate a Kubernetes cluster for a startup called CloudHarbor, and a development group deploys a pod into the default namespace without declaring resource requests or limits in the pod specification. What happens to the pod’s CPU and memory limits?
❏ A. Kubernetes applies a LimitRange default of 1 CPU and 512 MiB memory to the pod
❏ B. The pod will inherit any resource caps configured at the node level
❏ C. GKE Autopilot
❏ D. The pod will run without any explicit CPU or memory limits in the default namespace
When assigning a pod to a node in a cluster, what scheduling phases does the kube-scheduler run to choose the most suitable node?
❏ A. Prioritization and container image availability
❏ B. Preemption and PodDisruptionBudget checks
❏ C. Filtering and scoring stages
❏ D. Sharding and certificate signing
Your team must assign the same permissions to users and ServiceAccounts across several namespaces in a Kubernetes cluster. Which Kubernetes object should you create?
❏ A. Role
❏ B. ClusterRoleBinding
❏ C. RoleBinding
❏ D. ClusterRole
How would you describe a “full stack” developer position within a software engineering group?
❏ A. A person who can consume an entire stack of pancakes
❏ B. An engineer who focuses on both IPv4 and IPv6 network protocols
❏ C. A specialist who designs and secures large database systems that serve dual stack applications
❏ D. A practitioner who develops both frontend user interfaces and backend services
A platform team at Nimbus Systems is seeing pods fail to start because the node runs out of disk space when the container runtime attempts to create containers. The application does not need to keep large files and pods should not accumulate large logs or temporary data. What action should you take to resolve the immediate failures and to prevent this from happening again?
❏ A. Switch all writable data to emptyDir volumes to avoid using node disk
❏ B. Use Google Cloud Logging retention and export to reduce local log storage
❏ C. Apply a LimitRange in the namespace to enforce ephemeral storage defaults and maximums
❏ D. Implement log rotation and define ephemeral storage requests and limits for pods
Which statement best explains what a Kubernetes Service provides within a cluster?
❏ A. A collection of rules that governs ingress and egress connectivity between pods
❏ B. An object for running batch or finite tasks that start, complete and then stop
❏ C. A stable network endpoint that load balances connections to a set of pods
❏ D. A resource that ensures a workload runs with the desired number of replicas
A team at Bluewave Tech has built a new container image for their service. When they follow a GitOps process using ArgoCD, what must they do to deploy the updated image to the Kubernetes cluster?
❏ A. Rely on the ArgoCD operator to detect the new image and update the cluster automatically
❏ B. Push the updated image to Artifact Registry and configure a Cloud Build trigger to modify manifests in Git
❏ C. Update the Helm chart or Kubernetes manifest to reference the new image tag then commit the change and open a pull request
❏ D. Open the ArgoCD web UI and edit the application to change the image reference manually
When running Prometheus across several clusters, what is the primary purpose of setting up federation between servers?
❏ A. Cloud Monitoring
❏ B. To combine separate Prometheus time series databases into a single storage
❏ C. To let one Prometheus server collect metrics from another Prometheus server
❏ D. To run a high availability pair of Prometheus servers
Instead of launching a container by itself, what abstraction does Kubernetes schedule to run that container?
❏ A. Cloud Run
❏ B. a Helm chart
❏ C. a Deployment
❏ D. a Pod
Your operations team runs a containerized service on Kubernetes in a public cloud for a company called Meridian Labs, and costs have risen because of oversized CPU and memory allocations and suboptimal scaling practices. You must adopt cost optimization measures that lower cloud spend without degrading application performance. Which approach is the most effective way to manage costs in this Kubernetes environment?
❏ A. Use the Kubernetes Horizontal Pod Autoscaler and configure it to scale out on memory metrics and scale in on CPU metrics
❏ B. Disable pod resource limits and rely on the cloud provider autoscaling to allocate capacity dynamically
❏ C. Assign static resource requests and limits for every pod that match the highest expected peak usage
❏ D. Deploy the Kubernetes Vertical Pod Autoscaler to continuously update pod resource requests based on observed usage over time
Which of the following would not be a benefit of moving applications to a cloud native architecture on a Kubernetes cluster?
❏ A. Faster release cycles and deployment velocity
❏ B. Cloud Run
❏ C. Vendor lock in
❏ D. Improved fault tolerance and uptime
A fintech startup is deploying a three tier application that includes a web UI and an API layer. The web UI must automatically locate and route requests only to healthy API instances. Which cloud native networking feature would enable that behavior?
❏ A. eBPF
❏ B. Service registry
❏ C. Cloud Load Balancing
❏ D. Automatic service discovery
Instead of embedding sensitive credentials like API keys or passwords inside a manifest file, where should you keep that confidential information so it is not exposed in the resource definition?
❏ A. A Kubernetes ConfigMap
❏ B. Google Secret Manager
❏ C. Environment variables placed directly in the manifest file
❏ D. A Kubernetes Secret object
What is the main goal of applying topology spread constraints in a Kubernetes cluster?
❏ A. To require pods with matching labels to be co-located on the same node for performance reasons
❏ B. To distribute replicas of a workload evenly across failure domains and nodes to improve load distribution
❏ C. To prevent selected pods from running on the same node in order to increase fault isolation
❏ D. Taints and Tolerations
In a software firm, how do the day-to-day responsibilities of a DevOps engineer differ from those of a Site Reliability Engineer in practice?
❏ A. DevOps engineers focus on automating builds, releases, and developer workflows
❏ B. Site Reliability Engineers concentrate on operational reliability and are measured against SLOs, SLIs, and SLAs
❏ C. DevOps roles generally command higher salaries because they are more sought after
❏ D. A DevOps engineer only handles deploying applications to a public cloud account
Which of these does not operate as a core control plane process on every control plane host in a Kubernetes cluster?
❏ A. etcd
❏ B. kubelet
❏ C. API server
❏ D. scheduler
At a cloud native startup, you run a Kubernetes cluster that must host stateful services that require durable volumes, and the available storage backends differ in performance and features. You need a method to ensure each service is paired with the correct storage tier. How can you accomplish this in Kubernetes?
❏ A. Filestore
❏ B. Develop a custom scheduler that is aware of storage characteristics
❏ C. Define StorageClasses and request them with PersistentVolumeClaims
❏ D. Label nodes and use nodeSelector in Pod specifications to target nodes with the proper storage
Which practice is not recommended when managing container images for a cloud native application running on a Kubernetes cluster?
❏ A. Use an image registry that provides automated vulnerability scanning
❏ B. Store images in a private registry with access controls
❏ C. Deploy production images using the “latest” tag
❏ D. Tag images using semantic versioning for each release
Which Prometheus component coordinates alert handling and delivers notifications to administrators and external channels such as email, Slack, and webhook endpoints?
❏ A. Cloud Monitoring
❏ B. Pushgateway
❏ C. Alertmanager
❏ D. Exporters
Why would a team create namespaces in a Kubernetes cluster, and what primary role do those namespaces perform?
❏ A. Segment resources to apply per team resource quotas and network policies
❏ B. Keep individual nodes isolated so they do not share underlying resources
❏ C. Divide pods, services, and other resources into distinct virtual clusters inside a single physical cluster
❏ D. Enforce cluster wide authentication and authorization at the top level
Which practice most strongly supports continuous availability and fault tolerance for the Kubernetes control plane in a production cluster?
❏ A. Reducing the number of control plane nodes to simplify management
❏ B. Running redundant control plane components on several separate nodes
❏ C. Isolating control plane nodes at the network level
❏ D. Google Kubernetes Engine
Practice AZ-104 Questions Answered
All Azure questions are from my AZ-104 Udemy Course and certificationexams.pro
If a client successfully authenticates to a Kubernetes API server but does not have permission to carry out the requested operation, what response does the API server return?
✓ C. The API server denies the operation and returns an HTTP “403 Forbidden” response
The correct option is The API server denies the operation and returns an HTTP “403 Forbidden” response.
This is correct because Kubernetes treats authentication and authorization as separate steps. A client can successfully authenticate and still lack permission to perform an action. In that case the authorization layer such as RBAC evaluates the request and the API server returns 403 Forbidden to indicate the caller is authenticated but not authorized to perform the requested operation.
The request is placed in a queue for manual review by a cluster operator is incorrect because the API server does not queue requests for manual approval when authorization fails. The server evaluates and rejects the request immediately and returns an HTTP error response.
The server processes the call but only returns non sensitive public information is incorrect because the API server does not partially process an unauthorized request and selectively return public data. The request is denied and the client receives an error rather than a filtered successful response.
The request is recorded in Cloud Audit Logs for later review is incorrect as the immediate API response is not to record and defer the decision. Audit logging may be configured separately and may record attempts in some environments, and Cloud Audit Logs is a GCP service rather than a general Kubernetes response. The presence of audit logs does not replace the API server returning 403 Forbidden for unauthorized requests.
Remember that authentication verifies identity and authorization grants permissions. If an authenticated user lacks permission the API server will usually return an HTTP 403 Forbidden error.
In system design, what does the term scale out mean when discussing ways to increase system capacity?
✓ C. Provisioning additional servers or instances so that traffic and workloads are distributed across multiple hosts
Provisioning additional servers or instances so that traffic and workloads are distributed across multiple hosts is correct.
Scale out refers to increasing capacity by adding more machines or instances so that work can be distributed across multiple hosts. This approach increases throughput by parallelizing traffic and workloads and it also improves fault tolerance because load is not concentrated on a single machine.
Scale out is commonly implemented with load balancers and autoscaling groups so new instances can be added or removed automatically based on demand. It works best with stateless services or with systems that replicate state so that any instance can handle a portion of the load.
Merging several virtual servers into a single more powerful host to reduce operational overhead is not scale out. That option describes consolidation or moving to a larger host and it reduces the number of machines rather than increasing them.
Refactoring application code to reduce resource usage and speed up execution is a performance optimization and it can reduce cost or improve efficiency. It does not increase capacity by adding hosts and so it is not what scale out means.
Upgrading a single machine by increasing its CPU, memory, or storage to handle greater demand is not scale out. That option describes scale up or vertical scaling and it focuses on making one machine more powerful instead of adding more machines.
When a question mentions adding more instances or distributing load think scale out. If it mentions making one machine bigger think scale up.
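In Kubernetes specifically, scale out is often automated with a HorizontalPodAutoscaler. Here is a minimal sketch, assuming a hypothetical Deployment named web and a 70 percent CPU target:

```yaml
# Hypothetical HPA that scales a Deployment named "web" out and in
# between 2 and 10 replicas based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```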
A payments startup runs workloads on a managed Kubernetes platform and needs to ensure GPU heavy jobs are scheduled only on a designated set of nodes. What is the recommended cloud native approach to enforce this using node pools?
✓ B. Provision a dedicated GPU node pool and apply node taints with matching pod tolerations
The correct option is Provision a dedicated GPU node pool and apply node taints with matching pod tolerations.
This approach creates a separate set of nodes that contain the GPU hardware and it uses taints to prevent non GPU pods from being scheduled there unless they explicitly tolerate the taint. Pod tolerations on GPU workloads allow only those pods to use the GPU node pool and that enforces exclusivity while keeping scheduling native to Kubernetes.
Label pods with gpu=true and rely on the scheduler to match them to GPU nodes is incorrect because pod labels alone do not prevent other pods from running on GPU nodes and they require corresponding node labels to work. Labels help selection but they do not reserve nodes or enforce isolation.
Use node affinity only to guide GPU workloads to specific nodes is incorrect because affinity can steer placement but it does not reserve nodes or stop other pods from being scheduled there. Affinity can be part of the solution but it needs taints and tolerations to guarantee exclusivity.
Enable GKE node auto provisioning and install the Kubernetes device plugin for GPUs is incorrect because the device plugin is needed to expose GPUs and auto provisioning can create GPU nodes but neither feature by itself guarantees that GPU heavy jobs will be scheduled only on a designated node pool. You still need a dedicated node pool and taints with matching tolerations to enforce that policy.
When a question asks about reserving nodes look for answers that mention dedicated node pools together with taints and tolerations because those provide both placement and enforcement.
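As a minimal sketch of that pattern, the taint below reserves the pool and the toleration plus node selector let GPU pods land there. The taint key, pool label, and image are hypothetical:

```yaml
# Reserve the pool first, for example:
#   kubectl taint nodes <each-gpu-node> dedicated=gpu:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job
spec:
  tolerations:
    - key: dedicated          # matches the hypothetical taint above
      operator: Equal
      value: gpu
      effect: NoSchedule
  nodeSelector:
    cloud.google.com/gke-nodepool: gpu-pool   # hypothetical pool name
  containers:
    - name: trainer
      image: example.com/trainer:1.0          # hypothetical image
      resources:
        limits:
          nvidia.com/gpu: 1                   # request one GPU device
```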
A team at TidalApps is deploying a service that requires a common filesystem accessible by multiple Pods, which might run on different worker nodes. Which type of storage is most appropriate for this requirement?
✓ C. NFS share
The correct option is NFS share.
NFS share is a network file system that can be mounted by multiple Pods on different worker nodes and it supports the Kubernetes access mode ReadWriteMany so multiple consumers can read and write the same filesystem concurrently.
GCE Persistent Disk is block storage that is typically attached to a single node for read and write access and it does not provide a POSIX shared filesystem across nodes in standard Kubernetes setups, so it does not meet the requirement for a common filesystem accessible by Pods on different nodes.
HostPath mounts a directory from the node running the Pod and it is node local so Pods on other nodes cannot access the same host path concurrently, which makes it unsuitable for a cluster wide shared filesystem.
Local Persistent Volume binds storage to a specific node and exposes raw local disks or directories to Pods on that node only, so it cannot serve as a shared filesystem across multiple worker nodes even though it is useful for high performance local storage.
When a question asks for storage accessible by multiple Pods across nodes look for solutions that support ReadWriteMany or network mounted filesystems and exclude node local options.
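A minimal sketch of a statically provisioned NFS volume and a ReadWriteMany claim follows. The server address and export path are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany            # mountable read-write from many nodes
  storageClassName: ""
  nfs:
    server: 10.0.0.20          # hypothetical NFS server
    path: /exports/shared      # hypothetical export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 50Gi
```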
In a pod manifest, where do you declare CPU and memory requests and limits for an individual container?
✓ C. The resources field inside the containers definition
The resources field inside the containers definition is the correct answer.
You declare CPU and memory requests and limits per container inside the pod manifest under the container’s resources field. Values go under resources.requests and resources.limits for each entry in spec.containers so the scheduler and kubelet know how to allocate and enforce resources.
A LimitRange object declared in the namespace is not where you put the resource lines in a pod manifest. LimitRange can provide defaults or enforce minimum and maximum values in a namespace but it does not replace the per container resources section in the pod spec.
The resources attribute under the metadata section is incorrect because metadata stores name, labels, and annotations. The resources field belongs to the container specification and not to metadata.
A limits field directly under the spec section is wrong because limits must be nested inside each container’s resources field under spec.containers. A top level limits entry under spec is not the correct place for per container requests and limits.
When you are asked where to set resource requests and limits check inside each container at spec.containers[].resources and then look at the requests and limits entries.
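For reference, a sketch of where that block sits in a pod manifest. The image and values are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: example.com/api:1.2   # hypothetical image
      resources:
        requests:                  # what the scheduler reserves
          cpu: 250m
          memory: 256Mi
        limits:                    # what the kubelet enforces
          cpu: 500m
          memory: 512Mi
```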
For a team at NimbusSoft using Kubernetes manifests, how do those YAML or JSON files express how resources should be configured, and how does the cluster ensure the actual state matches that configuration?
✓ C. Manifest files declare the desired end state for Kubernetes objects and the control plane reconciles the live state to reach that declared state
Manifest files declare the desired end state for Kubernetes objects and the control plane reconciles the live state to reach that declared state is correct.
Manifest files are declarative YAML or JSON documents that describe the desired configuration of Kubernetes objects such as Pods, Deployments, and Services. The control plane stores those object definitions and controllers continuously compare the actual cluster state with the declared state and then take actions to reconcile any differences until the desired state is reached.
GKE Config Connector is incorrect because it is a Google Cloud add on that lets you manage GCP resources through Kubernetes custom resources and it does not describe the fundamental Kubernetes declarative model used by manifests.
Manifests list explicit procedural steps that the cluster must execute to set up resources is incorrect because manifests are not procedural runbooks. They declare what the end state should be and the control plane figures out the steps to get there.
Manifests run imperative commands directly on individual nodes so that administrators can control each action is incorrect because manifests are applied to the API server and acted on by controllers and kubelets. Manifests do not contain commands that are executed directly on nodes.
When you see questions about manifests favor answers that mention declarative configuration and the control plane reconciling state. Choices that mention step by step actions or running commands on nodes are usually wrong.
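As a small declarative sketch, the Deployment below states only the desired end state and is handed to the API server with kubectl apply, leaving the reconciliation steps to the controllers. The name and image are hypothetical:

```yaml
# Applied with: kubectl apply -f web-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                        # desired state, not a procedure
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:2.0   # hypothetical image
```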
You are responsible for designing identity and access flows for a cloud native platform and you must adopt open standards for authentication and authorization. Which technology should you choose to provide standardized identity federation and authentication?
✓ C. OpenID Connect (OIDC)
The correct answer is OpenID Connect (OIDC).
OpenID Connect (OIDC) is an identity layer built on top of OAuth 2.0 and it provides standardized identity federation and authentication across different identity providers. It defines ID tokens, common claims, discovery endpoints and standard flows that allow cloud native platforms to delegate authentication to external providers in a consistent way.
API Key is incorrect because API keys are simple bearer tokens for service access and they do not provide federated user authentication or the standardized identity claims and flows that OIDC offers.
Basic authentication is incorrect because it is a legacy method that sends credentials directly and it does not support modern federated identity or token based claims. Many services consider basic authentication deprecated for user logins and prefer stronger, federated methods like OIDC.
Google Cloud IAM is incorrect because it is a provider specific access control and identity system and it is not an open standard for federation. Google Cloud IAM can integrate with OIDC providers but it is not itself the cross provider standard that the question asks for.
When a question asks about standardized identity federation or single sign on look for mentions of OpenID Connect or OIDC and rule out provider specific answers and simple token methods like API keys.
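As one hedged illustration for self managed clusters, the API server can be told to trust an external OIDC provider through flags like these in its static pod manifest. The issuer URL and claim names are placeholder values, and managed platforms configure this differently:

```yaml
# Fragment of the kube-apiserver command arguments in its static pod
# manifest. All values are placeholders.
- --oidc-issuer-url=https://login.example.com
- --oidc-client-id=kubernetes
- --oidc-username-claim=email
- --oidc-groups-claim=groups
```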
How would you describe a Kubernetes cluster in terms of the machines it contains and the roles those machines perform?
✓ B. A set of nodes that cooperate to run containerized applications under Kubernetes control
A set of nodes that cooperate to run containerized applications under Kubernetes control is the correct description of a Kubernetes cluster.
The phrase a set of nodes that cooperate to run containerized applications under Kubernetes control captures the core idea because a cluster is made of machines or virtual machines that act as nodes and they work together to host pods and containers under the management of the control plane and kubelet agents.
A Kubernetes cluster includes the control plane components that schedule work and maintain cluster state and it includes worker nodes that run the containerized workloads and provide networking and storage resources for pods.
A group of linked servers used mainly for serving websites is incorrect because a Kubernetes cluster is not defined by the type of applications it runs or by a single purpose such as serving websites.
A collection of containers that all use the same IP address is incorrect because a cluster is about the nodes and orchestration rather than a group of containers sharing an IP address, and containers in Kubernetes run inside pods which have their own networking model.
Google Kubernetes Engine is incorrect because it is a managed Kubernetes service offered by Google and not a definition of what a cluster is. It is a product that creates and manages clusters rather than being the conceptual description of a cluster.
When you see answer choices mention nodes and containerized applications you should think of Kubernetes core concepts. Focus on whether the option describes orchestration and cooperating machines rather than a specific product or a narrow use case.
A team at Rivet Analytics runs a distributed processing workload that must execute on the same nodes as the data for optimal throughput, and the pods need to be scheduled only on nodes that have local NVMe SSDs attached. Which Kubernetes features help ensure pods land on those SSD equipped nodes? (Choose 2)
✓ C. DaemonSet
✓ E. Node affinity
The correct answers are DaemonSet and Node affinity.
Node affinity lets you require that pods be scheduled only on nodes that have specific labels, so you can label NVMe SSD equipped nodes and ensure the scheduler places your pods on those nodes when they request that affinity.
DaemonSet ensures a copy of the pod runs on each node that matches its node selector or affinity, which is ideal for running per-node data local processing so the workload runs on the same nodes as the local NVMe storage.
StatefulSet manages pods that need stable network identities and persistent storage but it does not itself guarantee scheduling to specific hardware or ensure a pod runs on every SSD node.
Taints and tolerations control which pods are allowed to schedule onto nodes by repelling pods without matching tolerations. They can help protect or reserve nodes but they do not by themselves provide the straightforward selection of SSD nodes that node labels plus node affinity provide.
PodSecurityPolicy is a security admission control that restricts pod capabilities and it does not control scheduling. It is also deprecated and removed from newer Kubernetes releases so it is less likely to appear as a correct choice on current exams.
When a question asks to schedule pods onto specific hardware look for answers that mention using node labels with Node affinity or running one pod per node with a DaemonSet.
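A minimal sketch that combines the two features: label the SSD nodes, for example with disktype=nvme, then require that label through node affinity inside a DaemonSet pod template. The label key, app name, and image are hypothetical:

```yaml
# Label the SSD nodes first, for example:
#   kubectl label nodes <ssd-node> disktype=nvme
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: data-local-worker
spec:
  selector:
    matchLabels:
      app: data-local-worker
  template:
    metadata:
      labels:
        app: data-local-worker
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: disktype          # hypothetical node label
                    operator: In
                    values: ["nvme"]
      containers:
        - name: worker
          image: example.com/worker:1.0    # hypothetical image
```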
Which of the following would not be considered an advantage of using a GitOps workflow for deploying and managing applications?
✓ C. Longer time to market
Longer time to market is the correct answer because it is not an advantage of using a GitOps workflow for deploying and managing applications.
GitOps automates deployments, records changes in Git, and enables automated reconciliation and rollbacks. Those capabilities typically reduce manual effort and accelerate delivery rather than create Longer time to market.
Enhanced team productivity is an advantage because storing desired state and deployment intent in Git reduces manual steps and makes collaboration and code review workflows natural for operators and developers.
Greater system reliability is an advantage because declarative manifests and continuous reconciliation help ensure the running cluster matches the desired state and recover from configuration drift.
Accelerated feature delivery is an advantage because GitOps enables continuous delivery patterns and repeatable deployments which let teams ship features faster and with more confidence.
Consistent and standardized deployments is an advantage because version controlled manifests and automated pipelines enforce the same deployment process across environments and reduce configuration variance.
Clearer audit trails is an advantage because every change is a Git commit that records who changed what and when which simplifies auditing and troubleshooting.
When a question uses the word not look for the choice that contradicts core GitOps benefits. Remember that GitOps aims to automate and standardize deployments so answers claiming slower delivery are usually wrong.
How does a Prometheus server obtain metrics from the services it monitors?
✓ C. By scraping application endpoints that serve metrics over HTTP
The correct option is By scraping application endpoints that serve metrics over HTTP.
Prometheus operates by periodically polling HTTP endpoints that expose metrics in the Prometheus exposition format or in OpenMetrics. Applications and exporters expose metrics on an HTTP path and Prometheus is configured with scrape targets and service discovery so it can collect those metrics at the configured intervals.
The pull model makes it easy to discover services and to control scrape frequency. For host and system metrics there are exporters such as node_exporter that read system information and then expose those metrics over HTTP for Prometheus to scrape.
By running a local agent on each monitored host that pushes metrics to the server is incorrect because Prometheus uses a pull based scrape workflow by default. There is a Pushgateway for short lived jobs that cannot be scraped but that is an optional exception rather than the normal collection method.
By relying on a managed monitoring agent such as Cloud Monitoring agent is incorrect because those agents are used to send metrics to specific cloud monitoring services and are not how Prometheus itself obtains metrics by default. You can integrate with other agents or exporters but Prometheus still scrapes endpoints it can reach over HTTP.
By querying host hardware interfaces directly to extract system metrics is incorrect because Prometheus does not directly query hardware interfaces. Instead exporters or local services collect hardware and OS metrics and expose them over HTTP for Prometheus to scrape.
When you see questions about Prometheus collection methods remember the key word scrape and think pull model versus push model. That will quickly point you to the right answer.
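A minimal prometheus.yml sketch with one static scrape target. The job name and target address are hypothetical, and /metrics is the conventional default path:

```yaml
global:
  scrape_interval: 30s          # how often targets are polled
scrape_configs:
  - job_name: payments-api      # hypothetical job
    metrics_path: /metrics      # the conventional default
    static_configs:
      - targets: ["payments-api.default.svc:8080"]   # hypothetical endpoint
```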
In a Kubernetes cluster, what is the key distinction between a ConfigMap and a Secret when deciding where to place application settings and credentials?
✓ B. ConfigMaps are intended for non sensitive configuration data while Secrets are used to hold sensitive information
ConfigMaps are intended for non sensitive configuration data while Secrets are used to hold sensitive information is correct.
This is correct because ConfigMaps are designed to hold configuration such as feature flags, endpoints, and other non-confidential settings, while Secrets are intended to hold passwords, tokens, and keys that require restricted access. The cluster and platform determine how Secrets are protected and you should treat Secrets as sensitive even though basic encoding is used by default.
Secrets are automatically encrypted with Cloud KMS while ConfigMaps remain unencrypted is incorrect. Encryption using a cloud key management system is a cluster or provider feature and is not automatically applied to the Secret resource by Kubernetes itself.
ConfigMaps are stored as versioned objects while Secrets are not is incorrect. Both ConfigMaps and Secrets are regular Kubernetes API objects stored in etcd and both carry metadata such as resourceVersion so they can be updated and managed by the same API mechanisms.
ConfigMaps can only be mounted as files and Secrets may only be consumed as environment variables is incorrect. Both ConfigMaps and Secrets can be mounted as files or supplied as environment variables and there are additional consumption methods available through volumes and CSI drivers.
When deciding which to use focus on whether the data is sensitive and needs restricted access. Prefer Secrets for sensitive values and enable encryption at rest and proper RBAC on the cluster.
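A side by side sketch with hypothetical keys and values. Note that Secret data is only base64 encoded at rest unless the cluster enables encryption:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info                        # non sensitive setting
  API_ENDPOINT: https://api.example.com  # hypothetical endpoint
---
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
type: Opaque
stringData:                  # plain text here, stored base64 encoded
  DB_PASSWORD: change-me     # hypothetical credential
```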
In a cloud native Kubernetes deployment, which category of workload is generally not appropriate to run as a Serverless Function as a Service?
✓ C. Transactions that run for extended durations and require persistent state
Transactions that run for extended durations and require persistent state is the correct option.
Serverless functions are designed to be short lived and stateless. They scale to zero when idle and often have execution time limits and ephemeral local storage, so long running transactions that must maintain persistent state do not fit the FaaS execution model and should be run as stateful services or long running containers instead.
Cloud Run is not the correct choice because it is itself a serverless container platform that hosts stateless containerized workloads and is commonly used for serverless deployments. It is an example of a serverless hosting option rather than a workload category that would be excluded.
Continuous real time streaming pipelines is not the correct choice because many streaming tasks can be decomposed into small event driven functions and processed by serverless platforms when each processing step is short lived and stateless. If a pipeline requires long lived connections or heavy state then other patterns are needed, but streaming by itself is not generally excluded from serverless use.
Batch analytics jobs is not the correct choice because batch work is often composed of discrete, short lived tasks that can be mapped to serverless functions or serverless containers and benefit from automatic scaling. Very large or very long running analytics may need dedicated resources, but batch as a category is not the best choice for exclusion.
Focus on execution time and statefulness when choosing between FaaS and other deployment models. Long running or stateful workloads are usually the red flag that points away from serverless functions.
A payment processing startup deploys a log collector to run on every Kubernetes host and requires exactly one agent per node. Why would they select a DaemonSet rather than a Deployment?
✓ D. A DaemonSet runs one copy of the Pod on each node in the cluster
A DaemonSet runs one copy of the Pod on each node in the cluster is correct because the startup needs exactly one log collection agent per host.
A DaemonSet guarantees that the Pod is scheduled on every node and it will add or remove the Pod automatically as nodes join or leave the cluster so you end up with exactly one agent per node.
Horizontal Pod Autoscaler is wrong because it adjusts the number of Pod replicas based on metrics such as CPU or custom metrics and it does not enforce one Pod per node.
StatefulSet is wrong because it is designed for stateful applications that need stable network identities and persistent volumes and it does not ensure a Pod runs on every node.
Cluster Autoscaler is wrong because it scales the number of nodes in the cluster based on pending Pods and resource demands and it does not control Pod placement to guarantee one Pod per node.
When a question asks for exactly one Pod per node think DaemonSet and when it asks about scaling by load think Horizontal Pod Autoscaler.
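A minimal DaemonSet sketch for such a per node log agent. The image and mounted host path are hypothetical:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
        - name: agent
          image: example.com/log-agent:1.4   # hypothetical image
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log                   # read node logs in place
```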
You operate a Kubernetes cluster for a startup called CloudHarbor, and a development group deploys a pod into the default namespace without declaring resource requests or limits in the pod specification. What happens to the pod’s CPU and memory limits?
✓ D. The pod will run without any explicit CPU or memory limits in the default namespace
The pod will run without any explicit CPU or memory limits in the default namespace is correct.
When a pod manifest omits resource requests and limits Kubernetes does not automatically create cgroup limits for that pod. Without requests or limits the container runs without explicit CPU or memory caps from the scheduler or kubelet. Such pods usually receive the BestEffort quality of service when no requests are set and they are more likely to be throttled in CPU contention and killed first under memory pressure.
Kubernetes applies a LimitRange default of 1 CPU and 512 MiB memory to the pod is wrong because a LimitRange only affects pods if a LimitRange object has been created in the namespace. There is no built in cluster default that assigns those specific values to every pod.
The pod will inherit any resource caps configured at the node level is wrong because node capacity and kubelet eviction thresholds influence scheduling and node stability but they do not automatically set per pod cgroup limits for pods that did not declare requests or limits.
GKE Autopilot is wrong in this question because Autopilot is a specific GKE operational mode that enforces and validates resource requests and limits. The question describes a generic cluster and does not state it is running in GKE Autopilot mode.
Assume no admission controllers or LimitRange objects are present unless the question says they exist and remember that Kubernetes will not enforce per pod limits unless they are declared or enforced by a policy.
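If you later want namespace defaults so that such pods cannot run unbounded, a LimitRange like this hypothetical sketch would apply values at admission time:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: default
spec:
  limits:
    - type: Container
      defaultRequest:          # applied when a container omits requests
        cpu: 100m
        memory: 128Mi
      default:                 # applied when a container omits limits
        cpu: 500m
        memory: 512Mi
```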
When assigning a pod to a node in a cluster, what scheduling phases does the kube-scheduler run to choose the most suitable node?
✓ C. Filtering and scoring stages
The correct option is Filtering and scoring stages.
Filtering and scoring stages are the core phases the kube-scheduler uses to choose a node for a pod. In the filtering phase the scheduler removes nodes that cannot run the pod because of constraints like node selectors, taints and tolerations, resource requests, and affinity rules. In the scoring phase the scheduler ranks the remaining nodes using built in scoring functions such as resource balancing and topology awareness and then chooses the highest scoring node.
Prioritization and container image availability is incorrect because prioritization is an older term that maps to scoring and container image availability is validated by the kubelet or admission controllers rather than being a scheduler phase.
Preemption and PodDisruptionBudget checks is incorrect because preemption is a mechanism to make room for higher priority pods and it is not one of the primary node selection phases, and PodDisruptionBudgets govern voluntary disruptions and eviction policies rather than the scheduler scoring process.
Sharding and certificate signing is incorrect because those concepts are unrelated to pod scheduling. Sharding refers to data partitioning and certificate signing is part of cluster security and neither influences the scheduler node selection phases.
When you study scheduler behavior remember the scheduler first filters out unsuitable nodes and then scores the remaining nodes to pick the best fit.
Your team must assign the same permissions to users and ServiceAccounts across several namespaces in a Kubernetes cluster. Which Kubernetes object should you create?
✓ D. ClusterRole
The correct answer is ClusterRole.
A ClusterRole is a cluster scoped RBAC object that defines a reusable set of permissions which can be applied across namespaces. You create the ClusterRole once to capture the permission set and then bind it to users or ServiceAccounts in each target namespace with namespaced bindings. This avoids duplicating the same Role in every namespace.
You can pair a ClusterRole with a ClusterRoleBinding to grant those permissions across the entire cluster, but that grants access to all namespaces and to cluster scoped resources which may be broader than you intend.
Role is namespaced and only holds permissions for a single namespace. To use a Role across several namespaces you would need to create one per namespace, so it does not satisfy the requirement to define the permissions once for multiple namespaces.
ClusterRoleBinding is a binding and not a permission definition. It associates subjects with a role at the cluster level and can grant access cluster wide, so it is not the object you create when you want to define a reusable permission set.
RoleBinding is a namespaced binding that associates a Role or a ClusterRole with subjects in a single namespace. You must create a separate RoleBinding in each namespace to apply the permissions there, so the binding itself is not the single object that defines the permissions.
When you need the same permission set in multiple namespaces, think in two steps. Create a ClusterRole to define the permissions once and then use per-namespace RoleBindings to assign those permissions where needed, unless you truly need cluster wide access.
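A sketch of that two step pattern. The role name, verbs, namespace, and subject are hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader             # defined once for the whole cluster
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: team-a            # repeat this binding in each target namespace
subjects:
  - kind: ServiceAccount
    name: ci-bot               # hypothetical subject
    namespace: team-a
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```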
How would you describe a “full stack” developer position within a software engineering group?
✓ D. A practitioner who develops both frontend user interfaces and backend services
The correct option is A practitioner who develops both frontend user interfaces and backend services.
This option describes a full stack developer because the role covers both the client side and the server side of an application. Such a developer builds user interfaces, implements server logic and APIs, integrates with data stores, and often works with deployment and testing tools to deliver end to end features.
A person who can consume an entire stack of pancakes is incorrect because it is a humorous phrase and not a description of software engineering skills or responsibilities.
An engineer who focuses on both IPv4 and IPv6 network protocols is incorrect because that definition fits a network engineer. It emphasizes protocol expertise rather than building frontend interfaces and backend services.
A specialist who designs and secures large database systems that serve dual stack applications is incorrect because it describes a database or security specialist. The phrase dual stack usually refers to networking and not to the combined frontend and backend application development that defines a full stack role.
When you see a role described as handling both frontend and backend responsibilities that is a clear sign of a full stack position. Look for answers that mention both sides of the application.
A platform team at Nimbus Systems is seeing pods fail to start because the node runs out of disk space when the container runtime attempts to create containers. The application does not need to keep large files and pods should not accumulate large logs or temporary data. What action should you take to resolve the immediate failures and to prevent this from happening again?
✓ D. Implement log rotation and define ephemeral storage requests and limits for pods
Implement log rotation and define ephemeral storage requests and limits for pods is the correct action to resolve the immediate failures and to prevent this from happening again.
Enabling log rotation stops container logs from growing without bound on the node so the container runtime can create containers and the node does not run out of disk. Configure either application level rotation or kubelet/container runtime rotation settings to cap log file size and file count.
Defining ephemeral storage requests and limits gives the scheduler and kubelet visibility and enforcement of per-pod disk usage. Requests help scheduling decisions and limits let the kubelet evict pods that exceed their allowed ephemeral storage before the node becomes fully saturated.
Switch all writable data to emptyDir volumes to avoid using node disk is incorrect because emptyDir uses the node filesystem by default unless you explicitly use a memory backed emptyDir. Moving writable data to emptyDir without changing the medium will still consume node disk and can worsen disk pressure.
Use Google Cloud Logging retention and export to reduce local log storage is incorrect because cloud logging retention and export control what is kept in the central logging system and do not remove the local container log files that the container runtime and kubelet write on the node. This does not fix the immediate node disk exhaustion.
Apply a LimitRange in the namespace to enforce ephemeral storage defaults and maximums is incorrect as the sole remedy because LimitRange can set defaults for new pods but it does not retroactively change running pods and it does not implement log rotation or manage other node level disk usage. LimitRange is a useful guardrail but not sufficient to stop immediate failures.
Think about both an immediate remediation and a long term guardrail. Log rotation fixes current disk consumption and ephemeral-storage requests and limits prevent recurrence.
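A sketch of per container ephemeral storage requests and limits. The sizes and image are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
    - name: worker
      image: example.com/worker:1.0    # hypothetical image
      resources:
        requests:
          ephemeral-storage: 1Gi       # informs scheduling decisions
        limits:
          ephemeral-storage: 2Gi       # kubelet evicts the pod beyond this
```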
Which statement best explains what a Kubernetes Service provides within a cluster?
✓ C. A stable network endpoint that load balances connections to a set of pods
A stable network endpoint that load balances connections to a set of pods is correct because that succinctly describes the primary purpose of a Kubernetes Service within a cluster.
A Service provides a stable virtual IP and a DNS name that applications can use even when the underlying pods are created and destroyed. The Service selects matching pods and forwards traffic to their current IPs, and it also performs simple load balancing across those endpoints so clients do not have to track pod IPs themselves.
A collection of rules that governs ingress and egress connectivity between pods is incorrect because that describes a NetworkPolicy, which controls allowed traffic but does not provide a stable endpoint or load balancing.
An object for running batch or finite tasks that start, complete and then stop is incorrect because that describes a Job, which is used for one off or batch workloads and not for providing network endpoints.
A resource that ensures a workload runs with the desired number of replicas is incorrect because that describes a ReplicaSet or Deployment, which manage replica counts and rollout behavior but do not present a stable network endpoint or perform load balancing.
Look for keywords like stable endpoint and load balances when the question is about networking. Those phrases point to a Service rather than policies, jobs, or replica controllers.
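A minimal Service sketch that selects pods labeled app=web and forwards a stable port to them. The names and ports are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # endpoints are pods carrying this label
  ports:
    - port: 80          # stable port on the Service's ClusterIP
      targetPort: 8080  # container port on the selected pods
```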
A team at Bluewave Tech has built a new container image for their service. When they follow a GitOps process using ArgoCD, what must they do to deploy the updated image to the Kubernetes cluster?
✓ C. Update the Helm chart or Kubernetes manifest to reference the new image tag then commit the change and open a pull request
Update the Helm chart or Kubernetes manifest to reference the new image tag then commit the change and open a pull request is correct.
This is because ArgoCD follows the GitOps model and treats the Git repository as the source of truth. You deploy a new image by updating the desired state in Git, for example by changing the image tag in the Helm chart or Kubernetes manifest and then creating a pull request. When that change is merged ArgoCD will detect the updated manifest and sync the cluster according to its configured sync policy.
Rely on the ArgoCD operator to detect the new image and update the cluster automatically is incorrect because ArgoCD does not by default scan container registries and replace image tags. Detecting and updating images in registries requires an additional tool or an image updater component and is not the built in behavior of ArgoCD.
Push the updated image to Artifact Registry and configure a Cloud Build trigger to modify manifests in Git is incorrect because although you can automate manifest edits with a CI trigger this describes extra CI configuration and is not the immediate GitOps step. The essential requirement is that the manifest or chart in Git must change so ArgoCD can apply the new desired state.
Open the ArgoCD web UI and edit the application to change the image reference manually is incorrect because making manual edits in the ArgoCD UI changes cluster state without updating Git. That breaks the GitOps practice of using Git as the single source of truth and such UI edits will be overwritten on the next sync from the repository.
Remember that ArgoCD expects the desired state in Git, so answers that mention making a commit and pull request to update manifests are usually the right choice.
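For context, a hedged sketch of an ArgoCD Application that watches a Git path and syncs merged changes to the cluster. The repository URL, path, and namespaces are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-config.git  # hypothetical repo
    targetRevision: main
    path: payments              # manifests here are changed via pull request
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated: {}               # sync whenever the Git state changes
```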
When running Prometheus across several clusters, what is the primary purpose of setting up federation between servers?
✓ C. To let one Prometheus server collect metrics from another Prometheus server
To let one Prometheus server collect metrics from another Prometheus server is the correct option.
Federation configures a Prometheus server to scrape another Prometheus server so a central instance can gather selected time series from remote instances. It works by targeting the other server’s /federate endpoint and by applying label filters to pull only the metrics you need.
This approach creates a hierarchical or global view across clusters without merging underlying storage. Each Prometheus still retains its own TSDB and federation simply makes metrics available to be scraped by another server for cross cluster queries and alerting.
Cloud Monitoring is incorrect because that option names a managed monitoring service and not the purpose of Prometheus federation. Federation is an internal Prometheus scraping mechanism and not a reference to cloud vendor monitoring products.
To combine separate Prometheus time series databases into a single storage is incorrect because federation does not merge TSDBs. Local storage remains independent and you would use remote write to send data to a common long term store if you want unified storage.
To run a high availability pair of Prometheus servers is incorrect because high availability is accomplished by running replicas that scrape the same targets and by designing alerting and external storage appropriately rather than by using federation as a replication mechanism.
When you see the word federation think of one Prometheus scraping another and not of merged storage or built in HA. Contrast federation with remote_write and with running replicas for availability.
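A sketch of the central server's federation scrape job. The match selector and remote address are hypothetical:

```yaml
scrape_configs:
  - job_name: federate
    honor_labels: true              # keep labels from the source server
    metrics_path: /federate
    params:
      "match[]":
        - '{job="kubernetes-pods"}' # hypothetical selector
    static_configs:
      - targets: ["prometheus-east.example.com:9090"]  # hypothetical remote
```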
Instead of launching a container by itself, what abstraction does Kubernetes schedule to run that container?
✓ D. a Pod
The correct answer is a Pod.
A Pod is the smallest deployable unit in Kubernetes and it can host one or more containers. The Kubernetes scheduler assigns Pod*s to nodes and the node kubelet is responsible for starting the containers that belong to the *Pod. Controllers and higher level objects create and manage Pod*s but the actual scheduling is performed at the *Pod level.
Cloud Run is a managed serverless service on Google Cloud that runs containers outside of the Kubernetes scheduling primitives and it is not the object the kube-scheduler places on a node.
a Helm chart is a packaging and templating mechanism used to install Kubernetes resources. It generates manifests that may include *Pods* and controllers but it is not itself scheduled by Kubernetes.
a Deployment is a controller that declares desired state and manages ReplicaSets and *Pods*. It ensures the correct number of *Pods* exist but the scheduler does not schedule a Deployment directly.
When you are asked what Kubernetes schedules look for the smallest deployable unit or the object that gets placed on a node. That usually points to the Pod.
Your operations team runs a containerized service on Kubernetes in a public cloud for a company called Meridian Labs and costs have risen because of oversized CPU and memory allocations and suboptimal scaling practices. You must adopt cost optimization measures that lower cloud spend without degrading application performance. Which approach is the most effective way to manage costs in this Kubernetes environment?
-
✓ D. Deploy the Kubernetes Vertical Pod Autoscaler to continuously update pod resource requests based on observed usage over time
Deploy the Kubernetes Vertical Pod Autoscaler to continuously update pod resource requests based on observed usage over time is the correct option.
Deploy the Kubernetes Vertical Pod Autoscaler to continuously update pod resource requests based on observed usage over time is effective because it rightsizes the actual resource requests that the scheduler uses, and that directly reduces wasted CPU and memory allocation without degrading performance. The Vertical Pod Autoscaler observes real usage and can recommend or apply adjusted requests for CPU and memory so pods are not permanently oversized.
The Vertical Pod Autoscaler can operate in recommendation mode or in automatic mode where it updates requests and evicts pods when needed so the changes take effect. This automated rightsizing removes the need to guess at peak usage and it lowers cloud spend by preventing long term overprovisioning.
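As a hedged example, assuming the VPA components are installed in the cluster and a Deployment named payments-api exists, a VerticalPodAutoscaler object might look like this:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: payments-api-vpa
spec:
  targetRef:               # the workload whose requests VPA manages
    apiVersion: apps/v1
    kind: Deployment
    name: payments-api     # placeholder Deployment name
  updatePolicy:
    updateMode: "Auto"     # "Off" yields recommendations without evictions
```

Starting in recommendation mode is a common way to validate the suggested requests before letting VPA evict and resize pods automatically.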
Use the Kubernetes Horizontal Pod Autoscaler and configure it to scale out on memory metrics and scale in on CPU metrics is wrong because the Horizontal Pod Autoscaler changes the number of replicas rather than the per pod resource requests. Replica autoscaling does not fix oversized requests and can increase costs if each pod is still allocated too much CPU or memory.
Disable pod resource limits and rely on the cloud provider autoscaling to allocate capacity dynamically is wrong because removing resource limits allows pods to consume unbounded resources and can cause noisy neighbor problems and instability. Cloud provider autoscaling typically adds or removes nodes and does not replace proper per pod rightsizing, so costs and performance risk can increase.
Assign static resource requests and limits for every pod that match the highest expected peak usage is wrong because that approach leads to persistent overprovisioning for most of the time. Reserving peak capacity for every pod drives up infrastructure costs and defeats dynamic optimization based on actual usage.
When optimizing Kubernetes costs focus on actual resource requests and use automated rightsizing like the Vertical Pod Autoscaler rather than manual peak allocation.
Which of the following would not be a benefit of moving applications to a cloud native architecture on a Kubernetes cluster?
-
✓ C. Vendor lock in
The correct answer is Vendor lock in. This option names a drawback rather than a benefit, so it is the one outcome that moving applications to a cloud native architecture on a Kubernetes cluster would not deliver.
Kubernetes and cloud native practices are designed to increase portability and reduce dependence on any single cloud provider. Containers, standard APIs, and open tooling make it easier to move workloads between environments, and continuous delivery patterns and infrastructure as code reduce coupling to provider specific features. For that reason vendor lock in is not a benefit and is the correct choice as the undesirable outcome.
Faster release cycles and deployment velocity is incorrect because moving to a cloud native architecture on Kubernetes typically enables faster, more frequent releases. Declarative manifests, rolling updates, health checks, and integration with CI CD pipelines all increase deployment velocity and reduce release risk.
Cloud Run is incorrect because it is a managed serverless product rather than a disadvantage. It is related to running containerized workloads and can complement cloud native approaches, so it is not the right choice when the question asks for something that would not be a benefit.
Improved fault tolerance and uptime is incorrect because Kubernetes provides primitives for replication, automatic restarts, self healing, and load balancing. Those features directly contribute to improved fault tolerance and higher availability when applications are designed for cloud native operation.
Read the question wording carefully and watch for negative phrasing. If the question asks for what would not be a benefit eliminate options that describe common Kubernetes advantages such as portability, faster releases, and self healing.
A fintech startup is deploying a three tier application that includes a web UI and an API layer. The web UI must automatically locate and route requests only to healthy API instances. Which cloud native networking feature would enable that behavior?
-
✓ D. Automatic service discovery
The correct answer is Automatic service discovery.
Automatic service discovery is the cloud native networking feature that lets a client such as the web UI dynamically find available API instances and route requests only to those that are healthy. It exposes service endpoints through DNS, APIs, or sidecar proxies and integrates with health checks so clients do not send traffic to failed backends.
Automatic service discovery is commonly implemented by platform primitives and service meshes. These implementations use a backing Service registry to record instances and health state and then present that information to clients so they can resolve and load balance to healthy targets.
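In Kubernetes terms this is a sketch of the idea. A Service gives the web UI a stable DNS name and only pods that pass their readiness probes are kept in the Service endpoints. The names and port numbers below are illustrative.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api            # resolvable in cluster as api.<namespace>.svc
spec:
  selector:
    app: api           # matches the API tier pod labels
  ports:
    - port: 80
      targetPort: 8080 # container port on the API pods
```

The web UI simply calls http://api and the platform resolves that name to whichever API pods are currently healthy.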
eBPF is incorrect because it is a kernel level technology for packet processing, observability, and custom networking logic. It does not by itself provide a service level mechanism for automatically locating healthy service instances.
Service registry is incorrect as a standalone choice because it is a component used by discovery systems to store instance metadata. A registry supports discovery but the feature that performs dynamic resolution and client routing is the broader service discovery capability.
Cloud Load Balancing is incorrect because load balancers distribute traffic and can perform health checks at the network edge. They are not the same as the client side or platform feature that enables the web UI to automatically discover and resolve only healthy API instances, although load balancers may be used in combination with service discovery.
When a question mentions clients finding services and avoiding unhealthy instances look for answers about dynamic resolution or service discovery. If an option names a component like a registry or a kernel technology ask whether that item by itself provides automatic discovery.
Instead of embedding sensitive credentials like API keys or passwords inside a manifest file, where should you keep that confidential information so it is not exposed in the resource definition?
-
✓ D. A Kubernetes Secret object
The correct answer is A Kubernetes Secret object.
A Kubernetes Secret object keeps sensitive credentials out of manifest files by storing them in a dedicated Kubernetes resource that can be mounted into pods as files or injected as environment variables. You can restrict access with RBAC and enable encryption at rest so the sensitive data is not exposed in the resource definition itself.
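A minimal sketch of the pattern, with placeholder names and values. The Secret is created out of band so the value never lands in version controlled manifests, and the Pod references it by name.

```yaml
# Created out of band, for example:
#   kubectl create secret generic api-credentials --from-literal=API_KEY=replace-me
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: example/api:1.0     # placeholder image
      env:
        - name: API_KEY
          valueFrom:
            secretKeyRef:        # the manifest names the Secret but not the value
              name: api-credentials
              key: API_KEY
```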
A Kubernetes ConfigMap is designed for non sensitive configuration data and is stored in plain text, so it does not provide the confidentiality guarantees needed for credentials.
Google Secret Manager is an external cloud secret store and it can be used alongside Kubernetes, but the question is asking for the Kubernetes mechanism to avoid embedding secrets in manifest files and the expected answer is the native Secret object rather than an external service.
Environment variables placed directly in the manifest file embed secrets in plain text inside the resource definition and so they are exposed and should be avoided.
For questions about avoiding exposed credentials choose the Kubernetes native secret mechanism and remember that a ConfigMap and environment variables placed directly in the manifest are not secure.
What is the main goal of applying topology spread constraints in a Kubernetes cluster?
-
✓ B. To distribute replicas of a workload evenly across failure domains and nodes to improve load distribution
The correct option is To distribute replicas of a workload evenly across failure domains and nodes to improve load distribution.
Topology spread constraints instruct the scheduler to balance pod replicas across topology domains such as nodes and zones to avoid concentration of pods. They use a label selector and a topology key and the scheduler evaluates the configured skew to decide where to place new pods so that load and failure risk are spread more evenly.
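A sketch of the mechanics, with illustrative labels, image, and replica count:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 6
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                                # tolerate at most one replica of imbalance
          topologyKey: topology.kubernetes.io/zone  # the failure domain to spread across
          whenUnsatisfiable: DoNotSchedule          # hard constraint; ScheduleAnyway makes it soft
          labelSelector:
            matchLabels:
              app: api
      containers:
        - name: api
          image: example/api:1.0                    # placeholder image
```

With three zones and six replicas this keeps roughly two replicas in each zone.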
The option To require pods with matching labels to be co-located on the same node for performance reasons is incorrect because that describes co-location which is handled by pod affinity or node selection, not by spread constraints that aim to balance rather than force co-location.
The option To prevent selected pods from running on the same node in order to increase fault isolation is incorrect because strict exclusion is primarily the domain of pod anti-affinity. Topology spread constraints focus on even distribution across domains and are not mainly intended as a pod exclusion mechanism.
The option Taints and Tolerations is incorrect because taints and tolerations are used to repel or allow pods on particular nodes and to control node level placement. They do not express distribution requirements across topology domains.
Read the wording carefully and map it to the scheduling primitives. Think spread or balance for topology spread constraints. Think together for affinity and apart for anti-affinity. Taints and tolerations control node level exclusions.
In a software firm how do the day to day responsibilities of a DevOps engineer differ from those of a Site Reliability Engineer in practice?
-
✓ B. Site Reliability Engineers concentrate on operational reliability and are measured against SLOs SLIs and SLAs
Site Reliability Engineers concentrate on operational reliability and are measured against SLOs SLIs and SLAs is the correct option.
Site Reliability Engineers are explicitly tasked with defining measurable indicators of service health and with keeping systems within agreed targets. They design monitoring and alerting, run incident response and postmortems, perform capacity planning and apply engineering practices to improve reliability over time.
DevOps engineers focus on automating builds releases and developer workflows is incorrect. DevOps engineers do work heavily on automation and developer workflows but DevOps is a set of cultural practices and toolchains rather than a role that is defined by operational reliability metrics.
DevOps roles generally command higher salaries because they are more sought after is incorrect. Compensation varies by region skills and seniority and it is not a reliable way to distinguish the day to day responsibilities of DevOps versus SRE.
A DevOps engineer only handles deploying applications to a public cloud account is incorrect. A DevOps engineer typically works across CI CD, infrastructure as code, automation, and deployments in multiple environments, so the statement is an overly narrow description of the role.
When choosing between SRE and DevOps answers look for language about measurable reliability objectives. SLO SLI and SLA terminology is a strong clue for SRE.
Which of these does not operate as a core control plane process on every control plane host in a Kubernetes cluster?
-
✓ B. kubelet
kubelet is the correct option because it is not considered a core control plane process that must operate on every control plane host in a Kubernetes cluster.
The kubelet is the node agent that manages pods and containers on an individual node and it belongs to the node components rather than the control plane. Core control plane components include etcd, the API server, and the scheduler and those are the processes that are expected to run as control plane processes on control plane hosts.
etcd is wrong because it is the cluster key value store that holds cluster state and it runs as a core control plane component on control plane hosts to provide the backing store for the API server.
API server is wrong because it is the central control plane process that exposes the Kubernetes API and it must run on control plane hosts for the cluster to function.
scheduler is wrong because it is the control plane component that assigns pods to nodes and it runs on control plane hosts to make scheduling decisions.
When in doubt think about the component’s role. If it manages node level containers it is likely kubelet and not a control plane process.
At a cloud native startup you run a Kubernetes cluster that must host stateful services which require durable volumes and the available storage backends differ in performance and features. You need a method to ensure each service is paired with the correct storage tier. How can you accomplish this in Kubernetes?
-
✓ C. Define StorageClasses and request them with PersistentVolumeClaims
The correct option is Define StorageClasses and request them with PersistentVolumeClaims.
Define StorageClasses and request them with PersistentVolumeClaims lets you create distinct storage tiers and lets applications request the exact tier they need by creating a PersistentVolumeClaim that references a StorageClass.
Using StorageClasses enables dynamic provisioning so the control plane will create volumes on the chosen backend when a matching PersistentVolumeClaim is submitted. You can encode performance characteristics and features as parameters in each StorageClass and control binding behavior and reclaim policy without coupling pods to specific nodes.
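A sketch of the pairing, assuming GKE's Persistent Disk CSI driver as the backend. The provisioner and parameters would differ on another platform.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: pd.csi.storage.gke.io  # assumption: GKE Persistent Disk CSI driver
parameters:
  type: pd-ssd                      # backend specific performance tier
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  storageClassName: fast-ssd        # pairs this claim with the fast tier
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
```

A second StorageClass, say standard-hdd, would let less demanding services request a cheaper tier in exactly the same way.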
Filestore is not correct because it names a specific storage product and not the Kubernetes mechanism for selecting storage tiers. You could use Filestore as a backend but it does not answer how to instruct Kubernetes to pair workloads with different storage classes.
Develop a custom scheduler that is aware of storage characteristics is not the right choice because scheduling pods to nodes is separate from volume provisioning and binding. Kubernetes already provides StorageClasses and the volume controller for allocation so a custom scheduler is unnecessary and overly complex for this requirement.
Label nodes and use nodeSelector in Pod specifications to target nodes with the proper storage is not ideal because it couples workloads to particular nodes and it does not leverage dynamic provisioning. Node labels can work for node local disks but they do not provide a portable or scalable way to request different managed storage tiers across environments.
When you need to match workloads to storage tiers think of StorageClasses and PersistentVolumeClaims first. They are the Kubernetes native way to express storage requirements and enable dynamic provisioning.
Which practice is not recommended when managing container images for a cloud native application running on a Kubernetes cluster?
-
✓ C. Deploy production images using the “latest” tag
The correct answer is Deploy production images using the “latest” tag.
Using the “latest” tag for production images is not recommended because it makes the deployed artifact ambiguous and non repeatable. If the tag is moved or overwritten you cannot be sure which exact image is running and that makes debugging and auditing much harder.
Production workflows benefit from immutable references and clear versioning. Pinning to image digests or explicit version tags ensures reproducible deployments and safe rollbacks, and avoids surprises that come from automatically changing content when the same tag is reused.
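A quick contrast of the three styles of image reference in a container spec. The registry path and digest below are placeholders.

```yaml
containers:
  - name: api
    # Mutable and ambiguous. Avoid in production:
    # image: registry.example.com/payments/api:latest

    # Explicit version tag. Traceable and supports rollback:
    # image: registry.example.com/payments/api:1.4.2

    # Immutable digest pin. The exact bytes never change (placeholder digest):
    image: registry.example.com/payments/api@sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```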
Use an image registry that provides automated vulnerability scanning is not correct because automated scanning is a recommended practice. Scanning helps detect known vulnerabilities before images are promoted to production.
Store images in a private registry with access controls is not correct because using a private registry and controlling access reduces the attack surface and prevents unauthorized image pushes and pulls.
Tag images using semantic versioning for each release is not correct because semantic versioning provides clear intent and traceability for releases and it supports predictable deployment and rollbacks.
When answering, remember that production images should be immutable and explicitly versioned. Watch for options that recommend using the latest tag and treat them as risky for production.
Which Prometheus component coordinates alert handling and delivers notifications to administrators and external channels such as email and Slack and webhook endpoints?
-
✓ C. Alertmanager
The correct option is Alertmanager.
The Alertmanager receives alerts from Prometheus servers and it coordinates grouping, silencing, routing and delivery of those alerts. It sends notifications to administrators and external channels such as email, Slack and webhook endpoints according to configured routes and receivers.
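A hedged sketch of an Alertmanager configuration. The receiver names, channel, URL and matcher values are placeholders, and the Slack receiver assumes a webhook is configured globally.

```yaml
route:
  group_by: ['alertname', 'cluster']  # batch related alerts into one notification
  receiver: ops-slack                 # default receiver
  routes:
    - matchers:
        - severity="critical"        # escalate critical alerts separately
      receiver: ops-pager
receivers:
  - name: ops-slack
    slack_configs:
      - channel: '#alerts'           # assumes slack_api_url set in the global section
  - name: ops-pager
    webhook_configs:
      - url: 'https://pager.example.com/hook'  # placeholder webhook endpoint
```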
Cloud Monitoring is a managed cloud provider monitoring service and not the Prometheus component responsible for routing or delivering alerts to notification channels.
Pushgateway is used for pushing metrics from short lived jobs into Prometheus and it does not handle alert processing or notification delivery.
Exporters expose application or system metrics for Prometheus to scrape and they do not perform alert handling or send notifications.
When a question mentions routing, silencing, grouping or sending notifications think of Alertmanager and not of exporters or the Pushgateway. Look for words like routing, receivers and silences to spot the right component.
Why would a team create namespaces in a Kubernetes cluster and what primary role do those namespaces perform?
-
✓ C. Divide pods services and other resources into distinct virtual clusters inside a single physical cluster
Divide pods services and other resources into distinct virtual clusters inside a single physical cluster is correct.
namespaces partition the Kubernetes API space so that pods services and other objects can live in separate logical groups inside the same physical cluster. This creates isolated virtual clusters that prevent name collisions and provide a clear scope for resources.
namespaces are the mechanism on which per namespace resource quotas and network policies are applied, but their primary role is the logical division of the cluster into virtual clusters rather than being those policies themselves.
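A minimal sketch of the scoping behavior, using a made up team namespace. The same object name can exist independently in each namespace.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments        # illustrative per team namespace
---
apiVersion: v1
kind: Pod
metadata:
  name: api
  namespace: team-payments   # an identically named Pod can also live in team-billing
spec:
  containers:
    - name: api
      image: example/api:1.0 # placeholder image
```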
Segment resources to apply per team resource quotas and network policies is misleading because it describes useful capabilities that run inside namespaces but it frames those capabilities as the primary purpose. The core purpose is creating distinct virtual clusters and quotas and policies are applied within that scope.
Keep individual nodes isolated so they do not share underlying resources is incorrect because namespaces are a logical construct and do not isolate or partition nodes. Node level isolation and resource boundaries are handled by node configuration scheduling taints and similar features.
Enforce cluster wide authentication and authorization at the top level is wrong because authentication and authorization are handled by the API server and systems like RBAC. Namespaces help scope permissions but they do not implement cluster wide auth mechanisms themselves.
Focus on the primary function described in the question and avoid options that name secondary capabilities. Remember that namespaces create logical virtual clusters and that quotas and network policies are applied within those namespaces.
Which practice most strongly supports continuous availability and fault tolerance for the Kubernetes control plane in a production cluster?
-
✓ B. Running redundant control plane components on several separate nodes
The correct answer is Running redundant control plane components on several separate nodes.
Running redundant control plane components on several separate nodes is the strongest practice for continuous availability because it removes single points of failure and allows the control plane to tolerate one or more node failures while continuing to serve the cluster. Distributing API server instances and etcd members across multiple machines preserves quorum and enables leader election and failover so the cluster can remain functional during maintenance or hardware outages.
Reducing the number of control plane nodes to simplify management is incorrect because fewer nodes increase the risk of a single point of failure and make it harder to maintain quorum for etcd. Simplifying management by cutting nodes reduces availability and fault tolerance.
Isolating control plane nodes at the network level is incorrect because network isolation can improve security but it does not by itself provide redundancy or quorum. Isolation can even make recovery harder if it increases the chance of network partitions that interrupt control plane communication.
Google Kubernetes Engine is incorrect for this question because it names a managed product rather than a practice. GKE can provide a highly available control plane as a service in many cases but the question asks for the practice that most strongly supports continuous availability in a production cluster.
When a question asks about availability pick answers that emphasize redundancy and distribution across nodes and failure domains rather than options that reduce components or only change network placement.
| Jira, Scrum & AI Certification |
|---|
| Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.
You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today. |
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel, which trains devs in Java, Spring, AI and ML, has well over 30,000 subscribers.
