Free GH-200 GitHub Actions Exam Questions and Answers
GH-200 GitHub Actions Certification Exam Simulator
Over the past few months I have been helping professionals who were displaced by the AI revolution move into new and exciting roles in tech. A key part of that transition is strengthening the resume, and the certification I want my clients to prioritize is the GH-200 GitHub Actions certification.
Whether you are a Scrum Master, Business Analyst, DevOps engineer, or senior software developer, the GH-200 GitHub Actions exam is my first recommendation.
Modern teams live and breathe automation. If you cannot design reliable workflows, manage secrets safely, and ship with repeatable pipelines, your impact will be limited.
GitHub Actions & Copilot exam simulators
The GitHub Actions exam validates real proficiency in workflow syntax, runners, permissions, caching, artifacts, environments, reusable workflows, and secure CI and CD practices.
Through my Udemy courses on Git, GitHub, and GitHub Actions, and through the free practice question banks at certificationexams.pro, I have seen which topics challenge learners the most. Based on thousands of student interactions and performance data, these are 20 of the toughest GH-200 GitHub Actions certification exam questions in the practice pool.
GitHub Certification Udemy Courses
You can find more practice questions in my GitHub Actions Udemy course. Each question is thoroughly answered at the end of the set, so take your time, think like a release engineer, and check your reasoning when you are done.
If you are preparing for the GH-200 GitHub Actions exam or exploring other certifications from AWS, GCP, or Azure, you will find hundreds of additional practice exams and explanations at certificationexams.pro.
All exam questions come from my GitHub Actions Udemy course and certificationexams.pro
These are not GitHub exam dumps or braindumps. Every question is original and designed to teach both the exam's coverage and the strategy for answering scenario-based questions. Each answer includes a tip and guidance to reinforce the concept.
Now let us dive into the 20 toughest GH-200 GitHub Actions certification exam questions. Good luck, and remember, strong careers in modern software begin with mastering automation.
| Git, GitHub & GitHub Copilot Certification Made Easy |
|---|
| Want to get certified on the most popular AI, ML & DevOps technologies of the day? These five resources will help you get GitHub certified in a hurry. Get certified in the latest AI, ML and DevOps technologies. Advance your career today. |
At Clearwater Systems you publish a custom JavaScript action that defines an input named build target and the calling workflow uses the default value. You want to read the value from the runner environment inside the action. How does GitHub form the environment variable name from that input?
❏ A. It keeps the name as entered and replaces spaces with hyphens
❏ B. It converts the name to lowercase and replaces spaces with underscores
❏ C. It prefixes the name with INPUT_, converts letters to uppercase, and replaces spaces with underscores
❏ D. It prefixes the name with GITHUB_ and leaves the original casing
At example.com the DevOps team plans a GitHub Actions workflow that uses a strategy matrix across several operating systems and runtime versions in a single run. They need to know the hard limit imposed by the platform on how many jobs a matrix expansion can create for one workflow run. What is the maximum number of jobs the matrix can generate in a single run?
❏ A. 512 jobs
❏ B. 256 jobs
❏ C. No limit on jobs
❏ D. 192 jobs
The compliance office at HarborWave Insurance wants to standardize how teams write GitHub Actions so that pipelines meet security and regulatory requirements across 28 repositories. Which approach will most effectively establish consistent and enforceable practices?
❏ A. Migrate pipelines to Cloud Build to gain centralized policy governance
❏ B. Configure organization environment variables to impose policy restrictions on every repository
❏ C. Publish organization-wide reusable workflows and starter templates that include approved security controls and checks
❏ D. Require code reviews with CODEOWNERS for any changes to workflow files in every repository
Blue Harbor Media uses GitHub Actions for CI where a single job defines a Postgres database service and a Redis cache service for integration tests. When does GitHub create these service containers and when are they removed?
❏ A. They are provisioned when the workflow starts and they are removed only after the workflow run finishes
❏ B. They start only when a step directly references the service and they stop when that step ends
❏ C. They are created at the beginning of each job and they are deleted once that job completes
❏ D. They are created one time for the runner session and they are reused across multiple jobs on the same runner
At RidgeTrail Analytics your GitHub workflow uses a Docker container action and a step intermittently fails after 90 seconds with a nonzero exit code. You need to review detailed execution logs produced by that container action to troubleshoot the issue. Where should you access those logs?
❏ A. Use git log to review repository commit history for the workflow
❏ B. View the build logs in Cloud Build in the Google Cloud console
❏ C. Open the workflow run in GitHub Actions and expand the container action step to view the step logs
❏ D. Run docker logs on the runner to read the container output
Your engineering group at Aurora Ledger is building GitHub Actions pipelines to deploy internal services to Google Cloud, and a custom action handles gcloud authentication and policy checks. Security policy forbids publishing this action, yet multiple private repositories in the same organization need to use it. What is the best way to enable reuse while keeping the action private?
❏ A. Convert the action to a container image and push it to Artifact Registry, then reference the image directly from workflows
❏ B. Grant workflows in selected private repositories permission to use an action stored in a separate private repository within the organization
❏ C. Host the action in a public repository and enforce branch protection and required reviews to control changes
❏ D. Copy the action code into each repository and include it directly in the workflow definitions
A developer at mcnz.com is configuring a GitHub Actions workflow and wants a variable created in one Bash step to be accessible to the steps that run later in the same job. What should they do?
❏ A. Manually pass the variable as an argument to every later step
❏ B. Run export VAR=value within the step
❏ C. Use echo to append VAR=value to the GITHUB_ENV file
❏ D. Write VAR=value to GITHUB_OUTPUT and treat it as a step output
A repository for a small fintech project on example.com uses GitHub Actions for continuous integration and one step needs to publish a build identifier so that a later step can read it. Which command should the step run to emit an output value that downstream steps can consume?
❏ A. export
❏ B. echo
❏ C. set
❏ D. printf
At Nova Edge Robotics you built a Docker container action in a GitHub repository. You committed files named dockerfile, action.yml, script.sh, and README in the repository root. The workflow fails during the image build step when the action runs. What is the most likely cause?
❏ A. The workflow must use Cloud Build to build the container image
❏ B. The action.yml file is ignored by container actions which causes the run to fail
❏ C. Docker expects a file named Dockerfile with an uppercase D and it fails when the file is named dockerfile
❏ D. The script.sh file must be referenced in the README for it to be executed
At NimbusWare Ltd you registered a new self-hosted runner at the organization level about 30 minutes ago and your team relies on the “ci-linux” runner group but the runner is not listed there. What is the most likely reason your team cannot use this runner?
❏ A. The runner is still completing setup tasks and has not yet become available
❏ B. Your team lacks permission to view the runner group where you expect the runner
❏ C. The runner entered the organization’s Default group and must be reassigned to the “ci-linux” group
❏ D. Network connectivity problems are stopping the runner from showing as online in the group
Samira is preparing reusable starter workflows for BlueOrbit Analytics so that teams across the organization can bootstrap new GitHub Actions pipelines consistently. Where should she place the workflow files and their metadata so that the templates are available to repositories in the organization?
❏ A. inside a directory named .github/workflow-templates
❏ B. inside a directory named workflow-templates within the current repository
❏ C. inside a directory named workflow-templates within the organization’s .github repository
❏ D. inside the .github/workflows directory of the current repository
RiverPoint Labs manages 24 GitHub Actions workflows, and the team plans to override default environment variables by writing values to the GITHUB_ENV file so that the changes apply to subsequent steps. Which default environment variables are not allowed to be changed using GITHUB_ENV?
❏ A. CI
❏ B. NODE_OPTIONS
❏ C. Any variable prefixed with GITHUB_ or RUNNER_
❏ D. All default environment variables can be overridden
At Norlake Robotics your team wants to surface an error annotation in a workflow run without writing custom code in an action or script. What mechanism can you use to instruct the GitHub Actions runner to generate the same error annotation?
❏ A. actions/toolkit
❏ B. Environment variables with the RUNNER_* prefix
❏ C. GitHub Actions workflow commands
❏ D. The set-env command
The engineering team at example.com keeps a repository private on GitHub and wants to show a workflow status badge on an external status page. Why can the badge URL not be viewed by users outside GitHub?
❏ A. It only works with GitHub Enterprise Server installations
❏ B. The use of self-hosted runners in the workflow disables public badge visibility
❏ C. GitHub blocks external access to private workflow badges to prevent unauthorized embedding or linking
❏ D. Badges are visible exclusively to repository collaborators and not to any external viewer
Priya plans to retrieve the log archive for a specific GitHub Actions workflow run in a public repository using the API. Which details must she provide to precisely request the correct logs?
❏ A. owner, repo, and job_id
❏ B. repo, authentication token, and run_id
❏ C. repository owner name, repository name, and the run_id of the workflow run
❏ D. owner, repo, and authentication token
Cedarbyte Studios manages GitHub Actions for 28 repositories under one organization and needs a shared cloud token that many workflows can use so that a single rotation will update all pipelines without editing each repository. What is the primary benefit of using organization-level secrets for this requirement?
❏ A. They restrict secret visibility to repository administrators only
❏ B. They let you reference one secret across multiple repositories so you avoid duplicating the same value
❏ C. They are intended to hold public configuration values that are not sensitive
❏ D. They can be created only for personal accounts and are not available to organizations
Ridgeway Media adopted GitHub Enterprise Cloud and has roughly 9,000 developers working across more than 150 repositories. You are drafting an organization-wide standard so teams can design and share GitHub Actions workflows in a predictable way. To promote clarity, discoverability, and long-term order, which elements should be captured in the documentation? (Choose 3)
❏ A. Plaintext inventories of organization secrets and tokens
❏ B. A catalog of repositories that host shared workflow templates and reusable actions
❏ C. Consistent file and folder naming rules for workflows and related artifacts
❏ D. A lifecycle and ownership plan that covers versioning and ongoing maintenance of workflows
Your team at Northwind Labs needs a workflow step that must run inside Alpine Linux 3.18 with bespoke CLI tools and pinned system libraries, and you want the action to consistently use that exact operating system and toolchain on any runner. Which type of GitHub Action should you implement to ensure this environment is always identical?
❏ A. Self-hosted runner
❏ B. JavaScript action
❏ C. Docker container action
❏ D. Composite action
BlueTrail Outfitters runs a GitHub Actions workflow that builds and deploys a container to Cloud Run in separate staging and production Google Cloud projects. The team wants to avoid hardcoding values like the API base URL for api.example.com and the logging level that vary by environment and they also do not plan to store secrets in these values. What is a typical reason to define custom environment variables in this setup?
❏ A. Control application logic and branching at runtime
❏ B. Configure Cloud Run CPU and memory limits
❏ C. Keep reusable non-sensitive configuration outside the code
❏ D. Set absolute file paths for data input and output
In a GitHub repository, when you open the Actions tab and look at the list of workflow runs, which details appear in that list without drilling into an individual run? (Choose 3)
❏ A. The full workflow YAML definition for each run
❏ B. The status of each workflow run
❏ C. The branch associated with each workflow run
❏ D. The duration of each workflow run
Falcon Outfitters needs a GitHub Actions workflow that reacts to the pull_request event only when the activity is opened or reopened. In the workflow file syntax, which keyword defines the allowed activity types for an event and how would you configure it so that only those activities can trigger the workflow?
❏ A. Use the on keyword with a conditional expression that checks github.event.action
❏ B. Use the events keyword with a regular expression that matches event names
❏ C. Use the types keyword and list the specific activity names you want to allow
❏ D. Use the workflow keyword with a list of event names to include
Your team at Norwood Labs maintains a custom GitHub Action and you want to list it on GitHub Marketplace so other projects can install it. Which sequence of steps should you take to publish the action from its repository?
❏ A. Request manual review of the metadata file by GitHub, wait for approval, then tag a version and the Marketplace listing will appear automatically
❏ B. Build the action with Cloud Build, push artifacts to Artifact Registry, and link the project to a Marketplace listing from Google Cloud console
❏ C. Place the action metadata file at the repository root, draft a release, choose “Publish this Action to the GitHub Marketplace”, select appropriate categories, set a version tag, and publish the release
❏ D. Merge the action metadata file into the default branch, draft a release, select “Publish this Action to the GitHub Marketplace”, then choose categories without assigning a version tag
At Northwind Robotics you are deploying self-hosted runners for GitHub Actions on a private subnet in Google Cloud. To let the runners register and obtain job assignments from “GitHub” while keeping all inbound access blocked, which network capability must be in place?
❏ A. Open inbound firewall rules from “GitHub” to the runner hosts
❏ B. Enable Cloud NAT for internet egress from the private subnet
❏ C. Allow outbound HTTPS long polling from the runners to the required “GitHub” endpoints
❏ D. Force all egress through an HTTP proxy before reaching “GitHub”
An engineer at Alpine Fintech is creating GitHub Actions workflows that deploy to Google Cloud and is considering passing an API secret as a command line argument between jobs. Why should this approach be avoided?
❏ A. Command line tools never capture audit data so sensitive values cannot be recorded
❏ B. Shells and utilities automatically hide secret values that are provided as arguments
❏ C. Command line arguments might be visible to other users and might also appear in logs or audit trails which could reveal the secret
❏ D. It is considered best practice to pass secrets as command line parameters during builds
In a GitHub Actions workflow for a repository owned by example.com, the workflow level environment sets NAME to ‘Release Runner’. The test job sets JAVA_VERSION to ’17’ and a step called Print info defines MESSAGE as ‘This step runs in the Test job’. That step runs the command echo “Hi $NAME. $MESSAGE. Java $JAVA_VERSION” when a push to the main branch triggers the workflow. What exact string does that step print to the log?
❏ A. Cloud Build
❏ B. Hi Release Runner. $MESSAGE. Java 17
❏ C. Hi $NAME. $MESSAGE. Java $JAVA_VERSION
❏ D. Hi Release Runner. This step runs in the Test job. Java 17
BrightPixel Labs is consolidating its GitHub Actions pipeline for a service published to example.com. The team wants to coordinate which tasks can run at the same time and which must wait for others using explicit dependencies. In the workflow YAML, what does the jobs section define to enable this control?
❏ A. Cloud Build
❏ B. It sets the human readable name of the workflow
❏ C. It declares the workflow triggers
❏ D. It defines one or more jobs with their step lists and allows jobs to run in parallel or wait on dependencies
BrightCircuit plans to list a reusable GitHub Action on GitHub Marketplace, and the release checklist instructs the team to keep the repository limited to the action metadata, the implementation code, and only the files the action actually needs. What is the primary outcome of following this requirement when they publish the action?
❏ A. To reduce the repository footprint so developers can clone it more quickly
❏ B. To satisfy GitHub Marketplace requirements so the action can be published immediately without manual review
❏ C. To improve the repository’s placement in GitHub search results
❏ D. To integrate publishing with Cloud Build in Google Cloud projects
Your team at HarborPeak Labs is setting up a CI pipeline in GitHub Actions for a Python service and you are deciding when to use the “run” field in steps alongside reusable actions. What is a practical benefit of adding shell commands directly within job steps?
❏ A. Shell commands let you trigger a workflow manually from the GitHub Actions interface
❏ B. Shell commands allow workflows to deploy directly to Google Cloud without any additional tools
❏ C. Shell commands enable running tailored scripts and commands inside a job which provides flexibility
❏ D. Shell commands eliminate the need for GitHub Secrets when handling sensitive configuration
NimbusRetail operates several self-hosted GitHub Actions runners on its own servers and on a virtual machine in Google Cloud, and the team signs in to the GitHub web interface to review each runner’s current state; which status values might appear for a self-hosted runner on that page? (Choose 3)
❏ A. offline
❏ B. overloaded
❏ C. active
❏ D. idle
You maintain a widely used GitHub Actions workflow for the engineering group at example.com and you have seen releases drift when tags are moved or deleted. What is the most reliable way to pin action versions so builds remain consistent and supply chain risks stay low?
❏ A. Use branch names for versioning to avoid tag churn
❏ B. Pin each action reference to a full commit SHA so the target cannot change
❏ C. Binary Authorization
❏ D. Pin to an abbreviated commit SHA to keep the reference concise
CedarPeak Analytics wants to free space in GitHub Actions by removing older workflow artifacts from a private repository, and the team is planning a cleanup of items from past runs. Before proceeding with the deletions, what key factor should they keep in mind to prevent unintended data loss?
❏ A. You can open a support ticket to restore a deleted artifact within 48 hours
❏ B. You do not need write access to the repository to delete artifacts
❏ C. After deletion an artifact cannot be brought back
❏ D. Removing artifacts does not change your GitHub Actions storage usage
When creating a custom GitHub Action, which single filename should you use for the metadata file that defines the inputs, the outputs, and how the action runs?
❏ A. workflow.yaml
❏ B. action.yml
❏ C. requirements.yaml
❏ D. action.json
A development team at BlueRiver Labs uses GitHub Actions to run scripts in a monorepo and wants all run steps to use the same relative path without repeating the path in every step. What is the primary purpose of configuring a default working directory in the workflow?
❏ A. Limit script access to only certain folders on the runner
❏ B. Choose the directory where command output files are stored after scripts run
❏ C. Set the repository path where run steps execute so relative paths resolve correctly
❏ D. Improve job isolation to harden security for each script execution
At Aurora Analytics your platform team manages GitHub Actions pipelines for twelve repositories. Each workflow repeats the same tasks to prepare environments, run unit and integration tests, and deploy services. Keeping these duplicated steps in every workflow is tedious and difficult to maintain. Which GitHub Actions capability should you use to package the shared steps so they can be reused by many workflows with minimal configuration?
❏ A. Use a reusable workflow called with workflow_call to share a standard pipeline across repositories
❏ B. Create a Docker container action to isolate workflow commands inside a dedicated image
❏ C. Create a composite action that groups common steps into a single reusable action
❏ D. Develop a JavaScript action to implement custom workflow logic in Node.js
An engineer at mcnz.com is reviewing a GitHub Actions workflow and wants to understand how default environment variables behave during a run. Which statement best describes their availability across the workflow?
❏ A. Default environment variables can be referenced only through the env context defined in the workflow file
❏ B. Default environment variables are created only when a step writes to GITHUB_ENV during the run
❏ C. Default environment variables are visible only to steps that explicitly request them
❏ D. Default environment variables are predefined by GitHub and every step in the workflow can access them
At Clearwater Systems you publish a custom JavaScript action that defines an input named build target and the calling workflow uses the default value. You want to read the value from the runner environment inside the action. How does GitHub form the environment variable name from that input?
✓ C. It prefixes the name with INPUT_, converts letters to uppercase, and replaces spaces with underscores
The correct option is It prefixes the name with INPUT_, converts letters to uppercase, and replaces spaces with underscores.
GitHub exposes action inputs to the runtime as environment variables by constructing a name that begins with INPUT_ then converts the input name to all uppercase and replaces spaces with underscores. For an input named build target the resulting environment variable is INPUT_BUILD_TARGET which you can read from the runner environment in a JavaScript action.
It keeps the name as entered and replaces spaces with hyphens is incorrect because GitHub does not preserve the original casing and it does not use hyphens in environment variable names. Hyphens are not valid in typical shell variable names and GitHub replaces spaces with underscores.
It converts the name to lowercase and replaces spaces with underscores is incorrect because GitHub converts the name to uppercase and also adds the INPUT_ prefix.
It prefixes the name with GITHUB_ and leaves the original casing is incorrect because the GITHUB_ prefix is used for default workflow variables such as GITHUB_SHA and is not used for action inputs. Inputs are exposed with the INPUT_ prefix and with uppercase letters and underscores.
When options differ only by small details verify the exact transformation rules. Inputs use the INPUT_ prefix with uppercase letters and spaces become underscores. Watch for distractors that mention lowercase or hyphens.
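As a sketch of the naming rule, the action metadata below keeps the scenario's space-containing input name for illustration (the action name, default value, and entry point are hypothetical), and the comment shows the environment variable GitHub would derive from it:

```yaml
# action.yml (sketch) for a JavaScript action
name: build-helper
inputs:
  build target:            # exposed to the action as INPUT_BUILD_TARGET
    description: Target to build
    default: release
runs:
  using: node20
  main: index.js           # can read process.env.INPUT_BUILD_TARGET
```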
At example.com the DevOps team plans a GitHub Actions workflow that uses a strategy matrix across several operating systems and runtime versions in a single run. They need to know the hard limit imposed by the platform on how many jobs a matrix expansion can create for one workflow run. What is the maximum number of jobs the matrix can generate in a single run?
✓ B. 256 jobs
The correct option is 256 jobs.
GitHub Actions sets a documented hard cap on matrix expansion per workflow run. A matrix can’t generate more than this cap, and if the calculated combinations would exceed it then the run fails rather than creating additional jobs. This is the platform behavior for matrix strategies across operating systems and runtime versions.
512 jobs is incorrect because the matrix limit is lower and the platform will not expand a matrix to that many jobs in a single run.
No limit on jobs is incorrect because GitHub Actions enforces strict limits on matrix expansion in each workflow run.
192 jobs is incorrect because it underestimates the allowed matrix size and does not match the documented maximum.
When a question asks for a hard platform limit, think of the vendor documentation and look for exact numbers. Memorize common GitHub Actions limits such as the matrix job cap and verify with official docs when in doubt.
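A matrix expands multiplicatively, so a small sketch like the following (axis names and values are illustrative) makes it easy to check the combination count against the 256-job cap before adding dimensions:

```yaml
strategy:
  matrix:
    os: [ubuntu-latest, windows-latest, macos-latest]
    node-version: [18, 20, 22]
# 3 operating systems x 3 runtime versions = 9 jobs
# the full expansion for one workflow run may not exceed 256 jobs
```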
The compliance office at HarborWave Insurance wants to standardize how teams write GitHub Actions so that pipelines meet security and regulatory requirements across 28 repositories. Which approach will most effectively establish consistent and enforceable practices?
✓ C. Publish organization-wide reusable workflows and starter templates that include approved security controls and checks
The correct option is Publish organization-wide reusable workflows and starter templates that include approved security controls and checks.
This approach centralizes the definition of secure pipeline logic so every repository can invoke the same vetted jobs and steps. Reusable workflows let you update one place and have changes propagate across all adopters which reduces drift and keeps controls consistent. Starter templates seed new repositories with the approved checks so teams begin compliant and stay aligned as they build.
It also makes enforcement practical because you can require teams to call the centrally maintained workflow and you can pair this with organization policies and repository rules to limit unapproved actions and to require pinned action versions. This reduces risk while preserving developer productivity.
Migrate pipelines to Cloud Build to gain centralized policy governance is not the most effective choice because it replaces GitHub Actions rather than standardizing it. This would introduce migration effort and tool fragmentation and it would not address how to keep existing workflow files consistent inside GitHub.
Configure organization environment variables to impose policy restrictions on every repository is incorrect because variables carry data and do not enforce behavior or security controls. They cannot mandate steps or checks which is what the compliance office needs.
Require code reviews with CODEOWNERS for any changes to workflow files in every repository improves oversight but it only gates changes to individual repositories and it does not provide a shared implementation of required controls. Reviews can still approve divergent logic which fails to ensure consistency across 28 repositories.
When many repositories are involved look for options that centralize definitions and make adoption repeatable and enforceable. Prefer mechanisms that teams can invoke from every repo rather than settings that only store values or reviews that depend on human judgment.
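As an illustration of the centralized pattern (the organization, repository, workflow file, and tag names below are hypothetical), each consuming repository can call one vetted reusable workflow instead of copying pipeline logic:

```yaml
# .github/workflows/ci.yml in a consuming repository (sketch)
name: CI
on: [push]
jobs:
  secure-pipeline:
    # centrally maintained reusable workflow with the approved checks
    uses: harborwave/shared-workflows/.github/workflows/secure-ci.yml@v3
    secrets: inherit   # pass organization and repository secrets through
```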
Blue Harbor Media uses GitHub Actions for CI where a single job defines a Postgres database service and a Redis cache service for integration tests. When does GitHub create these service containers and when are they removed?
✓ C. They are created at the beginning of each job and they are deleted once that job completes
The correct option is They are created at the beginning of each job and they are deleted once that job completes. This aligns with how GitHub Actions scopes service containers to the lifetime of the job so they are available to every step in that job and are then cleaned up when the job ends.
Services defined under a job are started before any steps in that job run and they persist for the duration of the job. GitHub then stops and removes them as part of job teardown. This ensures clean isolation between jobs and consistent environments for integration tests, including matrix jobs where each job gets its own fresh set of services.
They are provisioned when the workflow starts and they are removed only after the workflow run finishes is incorrect because services are not workflow scoped. They do not span multiple jobs and they do not live for the entire run.
They start only when a step directly references the service and they stop when that step ends is incorrect because services are not started lazily per step. They are brought up at job start and remain available to all steps until the job completes.
They are created one time for the runner session and they are reused across multiple jobs on the same runner is incorrect because GitHub-hosted runners provide a fresh environment for each job, and even with self-hosted runners services defined in a job are still created and cleaned up per job rather than reused across jobs.
Map the lifetime to the scope. If services are defined under a job then they are job scoped and start before steps run and end when the job finishes. If wording suggests workflow scoped or reuse across jobs then it is likely a distractor.
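A minimal sketch of job-scoped services (the image tags, password, and test command are illustrative): both containers start before the first step runs and are removed when the job finishes.

```yaml
jobs:
  integration-tests:
    runs-on: ubuntu-latest
    services:                  # created at job start, deleted at job teardown
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: example
        ports:
          - 5432:5432
      redis:
        image: redis:7
        ports:
          - 6379:6379
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/run-integration-tests.sh   # hypothetical test script
```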
At RidgeTrail Analytics your GitHub workflow uses a Docker container action and a step intermittently fails after 90 seconds with a nonzero exit code. You need to review detailed execution logs produced by that container action to troubleshoot the issue. Where should you access those logs?
✓ C. Open the workflow run in GitHub Actions and expand the container action step to view the step logs
The correct option is Open the workflow run in GitHub Actions and expand the container action step to view the step logs.
GitHub Actions captures the standard output and error streams from each step of a workflow, including Docker container actions, and presents them in the workflow run logs. Opening the run and expanding the job and then the specific step lets you see the container action’s detailed output, timestamps, and the nonzero exit code that explains the failure. You can also download the logs and, if needed, enable additional verbosity with the ACTIONS_STEP_DEBUG and ACTIONS_RUNNER_DEBUG secrets for deeper troubleshooting.
The option Use git log to review repository commit history for the workflow is incorrect because git log shows commit history rather than runtime execution logs from GitHub Actions steps or container actions.
The option View the build logs in Cloud Build in the Google Cloud console is incorrect because the container action runs on a GitHub Actions runner and its logs are stored with the GitHub workflow run rather than in Google Cloud Build.
The option Run docker logs on the runner to read the container output is incorrect because you do not have shell access to GitHub-hosted runners, and even with a self-hosted runner the supported and centralized place to view step output is the workflow step logs in the Actions run.
When a question asks where to find execution output, look for the workflow run logs and then the specific step within the job. If you need more detail, remember you can enable debug logging for Actions to surface additional diagnostics.
Your engineering group at Aurora Ledger is building GitHub Actions pipelines to deploy internal services to Google Cloud, and a custom action handles gcloud authentication and policy checks. Security policy forbids publishing this action, yet multiple private repositories in the same organization need to use it. What is the best way to enable reuse while keeping the action private?
-
✓ B. Grant workflows in selected private repositories permission to use an action stored in a separate private repository within the organization
The correct option is Grant workflows in selected private repositories permission to use an action stored in a separate private repository within the organization. This allows teams to reuse the action across multiple private repositories while keeping the code private and under centralized control.
This approach uses GitHub's built in support for private actions that can be shared to specific repositories in the same organization. You store the action in one private repository and then explicitly allow selected repositories to reference it with the normal uses syntax. Governance remains centralized because updates and reviews happen in a single place and access is restricted to only those repositories you approve.
Convert the action to a container image and push it to Artifact Registry, then reference the image directly from workflows is not the best solution because direct docker image references in the uses field are designed for publicly accessible images and do not provide an easy or supported way to authenticate to a private Google Artifact Registry. It also removes the benefits of action metadata and versioning that GitHub provides and it complicates security and maintenance.
Host the action in a public repository and enforce branch protection and required reviews to control changes violates the stated security policy that forbids publishing the action. Making the repository public would expose the action code to everyone which conflicts with the requirement to keep it private.
Copy the action code into each repository and include it directly in the workflow definitions creates duplication and maintenance risk because every fix must be propagated to many places and repositories can drift out of sync. It is harder to govern and audit changes compared to a single shared private source.
When you see a question about reusing a private GitHub Action across repositories, look for options that mention granting explicit access from a single private repository to selected repositories. Avoid answers that make the action public or that duplicate code.
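Once access has been granted from the private actions repository's Actions settings, consuming workflows reference the shared action with the ordinary uses syntax. A minimal sketch, where the organization name my-org, the repository shared-actions, and the action path tools/gcloud-auth are all hypothetical:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Private action shared from another repository in the same organization
      # (my-org/shared-actions and the v1 tag are illustrative names)
      - uses: my-org/shared-actions/tools/gcloud-auth@v1
        with:
          project: staging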
A developer at mcnz.com is configuring a GitHub Actions workflow and wants a variable created in one Bash step to be accessible to the steps that run later in the same job. What should they do?
-
✓ C. Use echo to append VAR=value to the GITHUB_ENV file
The correct option is Use echo to append VAR=value to the GITHUB_ENV file.
Writing the variable assignment to the special environment file makes that variable available to all subsequent steps in the same job. GitHub Actions reads this file after the step finishes and exports those values for later steps. This is the supported way to persist environment variables across steps within a job.
Manually pass the variable as an argument to every later step is not required and it is fragile. It does not create a true environment variable that is automatically available and it forces you to thread values through each step one by one.
Run export VAR=value within the step only affects the current shell for that step. Each step runs in a new process so the exported value will not be present in later steps.
Write VAR=value to GITHUB_OUTPUT and treat it as a step output is meant for defining step outputs. Those values must be consumed through the steps context and they do not become environment variables for later steps by default.
When a question asks about variables across steps think environment files. Use GITHUB_ENV for environment variables and use GITHUB_OUTPUT for step outputs.
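The pattern above can be sketched in two steps. The first step appends the assignment to the file that GITHUB_ENV points to, and the variable is then available as an ordinary environment variable in every later step of the same job (the variable name BUILD_MODE is illustrative):

```yaml
steps:
  - name: Set a variable for later steps
    run: echo "BUILD_MODE=release" >> "$GITHUB_ENV"

  - name: Read it in a subsequent step
    run: echo "Building in $BUILD_MODE mode"
```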
A repository for a small fintech project on example.com uses GitHub Actions for continuous integration and one step needs to publish a build identifier so that a later step can read it. Which command should the step run to emit an output value that downstream steps can consume?
-
✓ B. echo
The correct option is echo.
In GitHub Actions you publish a step output by appending a line in the form name equals value to the file referenced by the GITHUB_OUTPUT environment variable. You achieve this with echo by running echo "build_id=12345" >> "$GITHUB_OUTPUT". Downstream steps can then read the value through the outputs of the step.
export only updates environment variables in the current shell and it does not create a step output that other steps can read.
set changes shell options or positional parameters and it does not write to the special file that GitHub Actions uses for outputs. Do not confuse it with the old workflow command that used to set outputs because that approach is deprecated and no longer recommended.
printf can print text but it does not by itself create a step output in GitHub Actions. The documented and expected method is to use echo to append the name and value to the GITHUB_OUTPUT file.
When a question mentions sharing values between steps think of writing name equals value to the GITHUB_OUTPUT file. Favor the documented command and watch for distractors that only set environment variables.
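As a sketch, the producing step needs an id so that later steps can address its output through the steps context (the id meta and the build_id value are illustrative):

```yaml
steps:
  - name: Publish a build identifier
    id: meta
    run: echo "build_id=12345" >> "$GITHUB_OUTPUT"

  - name: Consume the output in a later step
    run: echo "Deploying build ${{ steps.meta.outputs.build_id }}"
```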
At Nova Edge Robotics you built a Docker container action in a GitHub repository. You committed files named dockerfile, action.yml, script.sh, and README in the repository root. The workflow fails during the image build step when the action runs. What is the most likely cause?
-
✓ C. Docker expects a file named Dockerfile with an uppercase D and it fails when the file is named dockerfile
The correct option is Docker expects a file named Dockerfile with an uppercase D and it fails when the file is named dockerfile.
This is the most likely cause because Docker looks for a default build file named Dockerfile and the Linux filesystem on GitHub hosted runners is case sensitive. When the repository contains a file named dockerfile with a lowercase d the build step does not find Dockerfile and the container action fails during image build. You could use a different filename only if you explicitly direct Docker to it, which the standard container action setup does not do.
The workflow must use Cloud Build to build the container image is incorrect because GitHub Actions builds container images with Docker on the runner and it does not require an external build service.
The action.yml file is ignored by container actions which causes the run to fail is incorrect because container actions require the metadata file and GitHub reads it to determine inputs and how to execute the action.
The script.sh file must be referenced in the README for it to be executed is incorrect because documentation files do not control execution. The script is run by the container entry point or by commands defined in the build and metadata.
When troubleshooting container action builds verify the exact case of the Dockerfile name and confirm that the metadata and build context point to the expected file.
At NimbusWare Ltd you registered a new self-hosted runner at the organization level about 30 minutes ago and your team relies on the “ci-linux” runner group but the runner is not listed there. What is the most likely reason your team cannot use this runner?
-
✓ C. The runner entered the organization’s Default group and must be reassigned to the “ci-linux” group
The correct option is The runner entered the organization’s Default group and must be reassigned to the “ci-linux” group.
When you register a new self-hosted runner at the organization level it is assigned to the organization’s default runner group unless you explicitly place it in a different group during setup or move it afterward. If your team relies on a custom group then the runner will not appear there until you change its group membership in the organization runner settings.
The runner is still completing setup tasks and has not yet become available is not the most likely cause because registration is typically fast and the runner would still be listed under its initial group even if it were offline.
Your team lacks permission to view the runner group where you expect the runner is unlikely in this scenario because lack of permission would prevent viewing the group itself rather than hiding only this newly added runner. The more common cause is that the runner is in the wrong group.
Network connectivity problems are stopping the runner from showing as online in the group does not explain why it is missing from the group listing. Connectivity issues affect online status but do not prevent the runner from appearing in its assigned group.
When a new organization runner is missing from a custom group first check whether it was placed in the default group and move it rather than assuming setup delays or connectivity issues.
Samira is preparing reusable starter workflows for BlueOrbit Analytics so that teams across the organization can bootstrap new GitHub Actions pipelines consistently. Where should she place the workflow files and their metadata so that the templates are available to repositories in the organization?
-
✓ C. inside a directory named workflow-templates within the organization’s .github repository
The correct option is inside a directory named workflow-templates within the organization’s .github repository. This is the required location for organization wide starter workflows so that repositories in the organization can discover and use them from the Actions new workflow experience.
GitHub reads starter workflow YAML files and their metadata from that location. When you place your templates together with the required properties file there, they are automatically offered to all repositories in the organization. This enables consistent bootstrapping and easier adoption.
inside a directory named .github/workflow-templates is not recognized for starter workflows. GitHub does not read templates from a nested path like this so the templates would not surface to repositories.
inside a directory named workflow-templates within the current repository limits visibility to a single repository. Organization wide templates must be stored at the organization level so this option would not make them available across the organization.
inside the .github/workflows directory of the current repository holds runnable workflows for that repository rather than shareable templates. Files here execute in that repository and are not presented as starter templates to other repositories.
When a question mentions organization wide starter workflows, map it to the special .github repository at the organization level and the workflow-templates folder.
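A sketch of how such a starter workflow is laid out, assuming a hypothetical template named ci-template. The YAML file lives in the workflow-templates directory of the organization's .github repository, alongside a matching properties file that supplies the name and description shown in the new workflow chooser:

```yaml
# Stored at: workflow-templates/ci-template.yml in the org's .github repository
# (a matching workflow-templates/ci-template.properties.json provides metadata)
name: Org CI template
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "Replace with your build commands"
```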
RiverPoint Labs manages 24 GitHub Actions workflows and the team plans to override default environment variables by writing values to the GITHUB_ENV file so that the changes apply to subsequent steps. Which default environment variables are not allowed to be changed using GITHUB_ENV?
-
✓ C. Any variable prefixed with GITHUB_ or RUNNER_
The correct option is Any variable prefixed with GITHUB_ or RUNNER_.
Variables with these prefixes are reserved by GitHub and the runner. They are system defined values and are protected so attempts to change them using the GITHUB_ENV file will not take effect. This ensures the workflow execution environment remains consistent and secure.
The option CI is not covered by the reserved prefix rule. Although it is set by default, it can be changed within a workflow or by writing to the environment file for later steps in the same job, so it is not the option that cannot be changed.
The option NODE_OPTIONS does not use a reserved prefix and can be set when needed, so it is not the option that is blocked from being changed with the environment file.
The statement All default environment variables can be overridden is incorrect because some defaults are reserved and GitHub prevents overriding those, particularly the ones that use the reserved prefixes.
When a question involves the GITHUB_ENV file, look for prefix rules. Variables beginning with GITHUB_ or RUNNER_ are reserved and cannot be changed with GITHUB_ENV, which helps you quickly eliminate distractors.
At Norlake Robotics your team wants to surface an error annotation in a workflow run without writing custom code in an action or script. What mechanism can you use to instruct the GitHub Actions runner to generate the same error annotation?
-
✓ C. GitHub Actions workflow commands
The correct option is GitHub Actions workflow commands because you can emit the error command from a simple run step and the runner will create an error annotation without any custom code.
These commands are parsed by the runner when they are written to standard output. They support creating error annotations and can include file paths and line numbers so you can surface precise and actionable feedback directly in the workflow run and in the Checks interface.
actions/toolkit is a library used when writing a JavaScript action. Using it would require custom code which the question specifically rules out.
Environment variables with the RUNNER_ prefix provide information about the runner environment. They do not instruct the runner to create annotations and therefore cannot generate an error annotation by themselves.
The set-env command was deprecated and later disabled for security reasons. It was used to set environment variables and not to create annotations which makes it unsuitable and less likely to appear as a correct choice on newer exams.
When a prompt says no custom code think of workflow commands that you can echo in a run step to create annotations such as errors or warnings.
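As a sketch, a plain run step can emit the error workflow command and the runner turns it into an annotation. The file path and line number shown here are illustrative:

```yaml
steps:
  - name: Emit an error annotation without custom code
    # The ::error workflow command is parsed from stdout by the runner
    run: echo "::error file=src/app.py,line=42::Build identifier missing"
```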
The engineering team at example.com keeps a repository private on GitHub and wants to show a workflow status badge on an external status page. Why can the badge URL not be viewed by users outside GitHub?
-
✓ C. GitHub blocks external access to private workflow badges to prevent unauthorized embedding or linking
The correct option is GitHub blocks external access to private workflow badges to prevent unauthorized embedding or linking.
Private repository workflow badges require authentication and repository access. GitHub only serves the badge image to viewers who are signed in and authorized, which prevents the badge from loading on an external status page for users outside GitHub who lack access.
It only works with GitHub Enterprise Server installations is incorrect because workflow badges work on both GitHub Enterprise Server and GitHub.com. The limitation comes from private repository access rules rather than the hosting product.
The use of self hosted runners in the workflow disables public badge visibility is incorrect because runner type does not affect badge visibility. Badge access is controlled by repository privacy and authentication, not whether the job uses self hosted or GitHub hosted runners.
Badges are visible exclusively to repository collaborators and not to any external viewer is incorrect because badges for public repositories are visible to everyone. For private repositories the badge is visible to authenticated users with access, so the statement is overly absolute and does not reflect the actual permission checks.
When you see a question about badge visibility, look for hints about repository privacy and authentication. If an option blames infrastructure like runners or a specific edition, it is usually a distractor.
Priya plans to retrieve the log archive for a specific GitHub Actions workflow run in a public repository using the API. Which details must she provide to precisely request the correct logs?
-
✓ C. repository owner name, repository name, and the run_id of the workflow run
The correct option is repository owner name, repository name, and the run_id of the workflow run.
This is correct because the GitHub REST endpoint for downloading a workflow run log archive is addressed by the path parameters owner, repository, and the run ID. These three values uniquely identify which run you want and they map directly to the placeholders in the endpoint path for workflow run logs. An authentication token can be required for private repositories or to raise rate limits, yet it does not identify the specific run logs you are requesting.
owner, repo, and job_id is incorrect because a job ID targets the job logs endpoint rather than the workflow run logs endpoint. To download the run level log archive you must use the run ID, not the job ID.
repo, authentication token, and run_id is incorrect because it omits the owner. The owner is a required path parameter that is needed to locate the repository unambiguously, and a token is not an identifier for the resource.
owner, repo, and authentication token is incorrect because it omits the run ID, which is essential to select the exact workflow run whose logs you want. A token controls access but does not specify which run to fetch.
When you see API questions, match the action to the exact endpoint and list the required path parameters. Distinguish between identifiers like run_id or job_id and items like tokens that control access rather than identify a resource.
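A hedged sketch of the request, run as a workflow step so the built in token and run context fill in the three path parameters. Outside a workflow, the owner, repo, and run_id would be supplied literally, and the -L flag matters because the endpoint responds with a redirect to the log archive:

```yaml
steps:
  - name: Download this run's log archive via the REST API
    run: |
      curl -sL \
        -H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
        "https://api.github.com/repos/${{ github.repository }}/actions/runs/${{ github.run_id }}/logs" \
        -o logs.zip
```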
Cedarbyte Studios manages GitHub Actions for 28 repositories under one organization and needs a shared cloud token that many workflows can use so that a single rotation will update all pipelines without editing each repository. What is the primary benefit of using organization-level secrets for this requirement?
-
✓ B. They let you reference one secret across multiple repositories so you avoid duplicating the same value
The correct option is They let you reference one secret across multiple repositories so you avoid duplicating the same value.
Defining the token once at the organization scope allows many repositories to reference the same secret name in their workflows. You rotate the value in one place and every pipeline that uses it automatically picks up the change. This centralization reduces duplication, prevents drift across repositories, and simplifies auditing and access management.
They restrict secret visibility to repository administrators only is incorrect because access is not limited to administrators. Workflows can consume the secret based on repository and organization policies, and owners can scope availability to selected repositories rather than only to admins.
They are intended to hold public configuration values that are not sensitive is incorrect because secrets are for sensitive values. Non sensitive settings should use variables or committed configuration instead.
They can be created only for personal accounts and are not available to organizations is incorrect because organizations can create and manage secrets that are usable by one or many repositories.
When a question emphasizes many repositories and a single rotation think about centralizing the value at the organization scope and look for wording that indicates reuse of one secret across repositories.
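In each repository, the workflow simply references the shared secret by name. A minimal sketch, where ORG_CLOUD_TOKEN is a hypothetical organization level secret and deploy.sh a hypothetical script:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Authenticate with the shared cloud token
        run: ./deploy.sh
        env:
          # Defined once at the organization scope; rotated in one place
          CLOUD_TOKEN: ${{ secrets.ORG_CLOUD_TOKEN }}
```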
Ridgeway Media adopted GitHub Enterprise Cloud and has roughly 9,000 developers working across more than 150 repositories. You are drafting an organization wide standard so teams can design and share GitHub Actions workflows in a predictable way. To promote clarity, discoverability, and long term order, which elements should be captured in the documentation? (Choose 3)
-
✓ B. A catalog of repositories that host shared workflow templates and reusable actions
-
✓ C. Consistent file and folder naming rules for workflows and related artifacts
-
✓ D. A lifecycle and ownership plan that covers versioning and ongoing maintenance of workflows
The correct options are A catalog of repositories that host shared workflow templates and reusable actions, Consistent file and folder naming rules for workflows and related artifacts, and A lifecycle and ownership plan that covers versioning and ongoing maintenance of workflows.
Documenting where shared templates and actions live establishes a single place to find reusable building blocks. This improves discovery for thousands of developers, enables clear contribution pathways, and supports governance by making ownership and permissions explicit.
Standardizing names and directory structures for workflow files and related artifacts reduces friction when reading, reviewing, and reusing automation. Placing workflows in predictable locations such as the .github directory or the .github/workflows folder and agreeing on consistent naming patterns makes navigation and search reliable across many repositories.
Defining ownership and lifecycle practices ensures reliability over time. Clear maintainers, versioning and release policies, deprecation and support timelines, and review cadences help teams safely consume updates. Pinning to tags or commit SHAs with documented change logs allows controlled adoption and rollback when needed.
Plaintext inventories of organization secrets and tokens are not appropriate because secrets must never be exposed in documentation. GitHub provides encrypted secrets and recommended patterns for rotation and least privilege, therefore the standard should describe how to manage secrets securely rather than listing them.
When evaluating choices, favor items that improve discoverability, consistency, and maintainability at scale, and reject anything that weakens security such as exposing secrets.
Your team at Northwind Labs needs a workflow step that must run inside Alpine Linux 3.18 with bespoke CLI tools and pinned system libraries, and you want the action to consistently use that exact operating system and toolchain on any runner. Which type of GitHub Action should you implement to ensure this environment is always identical?
-
✓ C. Docker container action
The correct option is Docker container action because it packages the action and its dependencies inside a container so it will always run in the same Alpine Linux 3.18 environment with the pinned system libraries and bespoke tools on any runner.
A Docker container action lets you define a Dockerfile that starts from alpine 3.18 and installs your custom CLI tools and pins library versions. GitHub builds or pulls that image and executes the action inside it which isolates execution from the host runner and guarantees the same operating system and system libraries every time.
Self-hosted runner is a runner choice rather than an action type and it ties execution to specific machines that you manage. It does not package the action’s environment so it cannot ensure the same Alpine 3.18 image and pinned libraries across any runner.
A JavaScript action runs on the host runner’s operating system using Node which means it inherits whatever system libraries and distribution the runner provides. It cannot enforce Alpine 3.18 or pinned system libraries by itself.
A Composite action only orchestrates other steps and runs in the surrounding job environment. By itself it cannot define or guarantee a specific operating system or toolchain.
Map the requirement to the action type. When the question emphasizes an identical OS and system libraries on any runner, choose a Docker container action. Use a composite action to reuse shell steps and a JavaScript action when you only need Node without OS guarantees.
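A sketch of the metadata for such an action. The image line points Docker at a build file in the repository, and that Dockerfile would begin FROM alpine:3.18 and install the pinned tools (the action name and description are illustrative):

```yaml
# action.yml at the repository root
name: 'alpine-toolchain'
description: 'Runs pinned CLI tools inside Alpine Linux 3.18'
runs:
  using: 'docker'
  image: 'Dockerfile'  # the Dockerfile starts with: FROM alpine:3.18
```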
BlueTrail Outfitters runs a GitHub Actions workflow that builds and deploys a container to Cloud Run in separate staging and production Google Cloud projects. The team wants to avoid hardcoding values like the API base URL for api.example.com and the logging level that vary by environment and they also do not plan to store secrets in these values. What is a typical reason to define custom environment variables in this setup?
-
✓ C. Keep reusable non sensitive configuration outside the code
The correct option is Keep reusable non sensitive configuration outside the code.
Defining environment variables lets the team supply values like the API base URL and the logging level per environment without changing application code. This approach centralizes configuration for staging and production, avoids duplication, and keeps the codebase clean while still allowing different values in different Google Cloud projects.
Because these values are not secrets, storing them as environment variables in the workflow or on the Cloud Run service is appropriate. The workflow can set or pass variables for each environment and Cloud Run can inject them into the container at runtime, which results in predictable and repeatable deployments.
Control application logic and branching at runtime is not the best match for this scenario since the goal is to externalize environment specific configuration rather than to drive feature flags or conditional flow. GitHub Actions already provides dedicated conditional expressions for workflow branching and the application should remain focused on reading configuration rather than deciding control flow based on it.
Configure Cloud Run CPU and memory limits is incorrect because those limits are part of the Cloud Run service configuration and are set with deployment flags or service settings rather than with application environment variables.
Set absolute file paths for data input and output is incorrect because containers on Cloud Run should not rely on absolute host paths and the filesystem is ephemeral. If file locations must be configured, it is better to reference external services such as Cloud Storage or use relative paths within the container, and this is unrelated to the need to keep environment specific configuration out of the code.
When a scenario mentions values that differ by environment and are non secret, choose environment variables. If an option talks about infrastructure limits or filesystem specifics, think service configuration or external storage instead.
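As a sketch, the non secret values can live in an env block that differs per environment job, so the application code stays unchanged between staging and production (the staging hostname and deploy script are illustrative):

```yaml
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    env:
      # Non sensitive, environment specific configuration kept out of the code
      API_BASE_URL: https://staging.api.example.com
      LOG_LEVEL: debug
    steps:
      - run: ./deploy.sh
```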
In a GitHub repository, when you open the Actions tab and look at the list of workflow runs, which details appear in that list without drilling into an individual run? (Choose 3)
-
✓ B. The status of each workflow run
-
✓ C. The branch associated with each workflow run
-
✓ D. The duration of each workflow run
The correct options are The status of each workflow run, The branch associated with each workflow run, and The duration of each workflow run.
In the Actions tab list of runs you can immediately see the status for each run through its icon and label. The list also shows the branch that triggered the run and it includes the duration so you can compare how long recent runs took at a glance.
The full workflow YAML definition for each run does not appear in the list view. That level of detail requires opening the workflow file in the repository or drilling into a specific run.
When a prompt asks what is visible in a list view, visualize the columns you see in the UI and focus on summary fields like status, branch, and duration. If an option sounds like deep configuration such as a full YAML file, it usually requires opening the run or workflow details.
Falcon Outfitters needs a GitHub Actions workflow that reacts to the pull_request event only when the activity is opened or reopened. In the workflow file syntax, which keyword defines the allowed activity types for an event and how would you configure it so that only those activities can trigger the workflow?
-
✓ C. Use the types keyword and list the specific activity names you want to allow
The correct option is Use the types keyword and list the specific activity names you want to allow.
GitHub Actions lets you narrow an event to specific activity types using the types key under that event. To meet the requirement you configure the workflow so that it listens to the pull_request event and then you list only opened and reopened under the types key. Set the on key to pull_request and under pull_request add the types list with opened and reopened so the workflow runs only when a pull request is opened or reopened and it ignores other activities such as synchronize or edited.
Use the on keyword with a conditional expression that checks github.event.action is incorrect because an if condition controls jobs or steps after the workflow has already been triggered. It does not define which activity types are allowed to trigger the workflow in the first place, which is what the types key does.
Use the events keyword with a regular expression that matches event names is incorrect because there is no events key in the workflow syntax and regular expressions are not used to filter activity types. You filter activities with the types list under the specific event.
Use the workflow keyword with a list of event names to include is incorrect because there is no workflow key for event subscriptions. Triggers are defined with the on key and activity filtering is done with the types key.
When a question asks to limit an event to certain activities look for the types list nested under the specific event in the on section and include only the needed items such as opened and reopened.
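The trigger described above looks like this in the workflow file. Other pull request activities such as synchronize or edited will not start the workflow:

```yaml
on:
  pull_request:
    types: [opened, reopened]
```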
Your team at Norwood Labs maintains a custom GitHub Action and you want to list it on GitHub Marketplace so other projects can install it. Which sequence of steps should you take to publish the action from its repository?
-
✓ C. Place the action metadata file at the repository root, draft a release, choose “Publish this Action to the GitHub Marketplace”, select appropriate categories, set a version tag, and publish the release
The correct option is Place the action metadata file at the repository root, draft a release, choose “Publish this Action to the GitHub Marketplace”, select appropriate categories, set a version tag, and publish the release.
To list an action on GitHub Marketplace you must have a valid action.yml or action.yaml in the repository root. You then create a release and select the publish to Marketplace option and choose categories that match your action. A semantic version tag such as v1 or v1.2.3 is required and the release will publish the listing. The repository must be public for the listing to appear in Marketplace.
Request manual review of the metadata file by GitHub, wait for approval, then tag a version and the Marketplace listing will appear automatically is incorrect because publication happens through the release flow where you opt in to Marketplace and there is no separate manual review step for actions. Tagging alone does not create a listing.
Build the action with Cloud Build, push artifacts to Artifact Registry, and link the project to a Marketplace listing from Google Cloud console is incorrect because Marketplace listings are managed on GitHub. Even when an action uses a container image the listing is created by releasing on GitHub rather than through an external cloud console.
Merge the action metadata file into the default branch, draft a release, select “Publish this Action to the GitHub Marketplace”, then choose categories without assigning a version tag is incorrect because a version tag is required to publish. Without a tag the release cannot be listed and GitHub recommends semantic versioning for stability and discoverability.
When a question mentions Marketplace for actions look for release steps that include a version tag and the Publish this Action to the GitHub Marketplace checkbox. Be cautious of steps that mention external cloud consoles or a manual review since publication is handled within GitHub releases.
At Northwind Robotics you are deploying self-hosted runners for GitHub Actions on a private subnet in Google Cloud. To let the runners register and obtain job assignments from “GitHub” while keeping all inbound access blocked, which network capability must be in place?
- ✓ C. Allow outbound HTTPS long polling from the runners to the required “GitHub” endpoints
The correct option is Allow outbound HTTPS long polling from the runners to the required “GitHub” endpoints.
Self-hosted runners initiate and maintain HTTPS connections to GitHub in order to register, receive job assignments, and report results. They use long polling over port 443 and do not require any inbound connections. This design allows you to keep all inbound access blocked on the private subnet while still enabling the runners to function normally, as long as they can reach the necessary endpoints on the internet.
Open inbound firewall rules from “GitHub” to the runner hosts is incorrect because the service does not initiate inbound connections to your runners. Opening inbound access is unnecessary and it conflicts with the requirement to keep inbound blocked.
Enable Cloud NAT for internet egress from the private subnet is incorrect because Cloud NAT can provide a path to the internet but it is only one way to achieve egress. The essential requirement is that runners can reach GitHub over HTTPS. NAT can help but it is not the specific capability being asked for and it is not strictly required if another egress method exists.
Force all egress through an HTTP proxy before reaching “GitHub” is incorrect because a proxy is optional. Runners can connect directly when outbound HTTPS is allowed and a proxy is only needed in environments that mandate one.
Identify the direction of traffic that the service uses. If the agent polls outbound for work then focus on allowing that egress rather than opening inbound or adding products that are optional.
An engineer at Alpine Fintech is creating GitHub Actions workflows that deploy to Google Cloud and is considering passing an API secret as a command line argument between jobs. Why should this approach be avoided?
- ✓ C. Command line arguments might be visible to other users and might also appear in logs or audit trails which could reveal the secret
The correct option is Command line arguments might be visible to other users and might also appear in logs or audit trails which could reveal the secret.
This is correct because operating systems commonly expose command line arguments through process inspection and these values can be read by other users on the same machine. Continuous integration systems also tend to capture commands and their parameters in job logs and audit trails, which can inadvertently disclose sensitive values. In GitHub Actions, workflow logs and debugging output can include command invocations, and masking only hides known secret values in logs and does not protect what is visible in the process list. A safer pattern is to use the platform secret store, pass secrets via environment variables or files with restricted permissions, or prefer short lived identity tokens rather than static secrets.
Command line tools never capture audit data so sensitive values cannot be recorded is incorrect because many tools and platforms do record command executions in logs and audits. GitHub Actions captures workflow commands and output, and operating systems and shells can retain histories or logs, so sensitive data may be recorded.
Shells and utilities automatically hide secret values that are provided as arguments is incorrect because shells do not automatically conceal arguments and many utilities echo their inputs. Arguments are also visible in the process list, so relying on automatic hiding is unsafe.
It is considered best practice to pass secrets as command line parameters during builds is incorrect because industry guidance recommends avoiding secrets in command line arguments. Use managed secrets, environment variables, or files with appropriate permissions, and prefer federated credentials when possible.
When evaluating options about secrets, look for clues about visibility and logging. If an option claims something is always safe or never recorded, treat it with skepticism and prefer answers that minimize exposure and use managed secrets.
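A minimal sketch of the safer pattern, assuming a repository secret named API_KEY and a hypothetical deploy script that reads it from the environment:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # Avoid: ./deploy.sh --api-key "$API_KEY" passed as an argument,
      # which can surface in process lists and captured command logs.
      - name: Deploy with secret in environment
        env:
          API_KEY: ${{ secrets.API_KEY }}   # injected as an env var, masked in logs
        run: ./deploy.sh                     # the script reads API_KEY from the environment
```

Environment variables are not visible in the process list the way arguments are, and GitHub masks known secret values in workflow logs.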
In a GitHub Actions workflow for a repository owned by example.com, the workflow level environment sets NAME to ‘Release Runner’. The test job sets JAVA_VERSION to ’17’ and a step called Print info defines MESSAGE as ‘This step runs in the Test job’. That step runs the command echo “Hi $NAME. $MESSAGE. Java $JAVA_VERSION” when a push to the main branch triggers the workflow. What exact string does that step print to the log?
- ✓ D. Hi Release Runner. This step runs in the Test job. Java 17
The correct option is Hi Release Runner. This step runs in the Test job. Java 17.
This output is correct because the workflow level environment sets NAME to Release Runner so the shell expands $NAME to Release Runner. The job level environment sets JAVA_VERSION to 17 so $JAVA_VERSION expands to 17. The step named Print info sets MESSAGE to This step runs in the Test job so $MESSAGE expands to that sentence. When the step runs echo with those variables the shell performs normal environment variable expansion and produces the final string shown.
Cloud Build is unrelated to GitHub Actions and does not represent the output of the echo command in this workflow.
Hi Release Runner. $MESSAGE. Java 17 is incorrect because the step defines MESSAGE and the shell expands it to the full sentence rather than leaving the placeholder text.
Hi $NAME. $MESSAGE. Java $JAVA_VERSION is incorrect because all three variables are defined at workflow or job or step scope and the shell expands them so the placeholders would not appear in the log.
Trace variable scope from workflow to job to step and remember that the shell performs variable expansion at runtime. If a value is defined at any level then expect the placeholder to be replaced unless you purposely escape it.
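The scenario above corresponds to a workflow like the following sketch, with the three scopes shown explicitly:

```yaml
name: Release Runner Demo
on:
  push:
    branches: [main]
env:
  NAME: Release Runner              # workflow level
jobs:
  test:
    runs-on: ubuntu-latest
    env:
      JAVA_VERSION: '17'            # job level
    steps:
      - name: Print info
        env:
          MESSAGE: This step runs in the Test job   # step level
        run: echo "Hi $NAME. $MESSAGE. Java $JAVA_VERSION"
        # Prints: Hi Release Runner. This step runs in the Test job. Java 17
```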
BrightPixel Labs is consolidating its GitHub Actions pipeline for a service published to example.com. The team wants to coordinate which tasks can run at the same time and which must wait for others using explicit dependencies. In the workflow YAML, what does the jobs section define to enable this control?
- ✓ D. It defines one or more jobs with their step lists and allows jobs to run in parallel or wait on dependencies
The correct option is It defines one or more jobs with their step lists and allows jobs to run in parallel or wait on dependencies.
This choice is correct because the jobs section declares each job and the steps that run within that job. Jobs run in parallel by default, and you can enforce ordering by specifying dependencies so that some jobs wait for others to finish. This is done with the needs keyword which gives you explicit control over which tasks can run at the same time and which must wait.
Cloud Build is a Google Cloud service and it is not part of GitHub Actions workflow syntax, so it does not control job dependencies in a GitHub workflow.
It sets the human readable name of the workflow is incorrect because the workflow name is set by the name key at the top level, not by the jobs section.
It declares the workflow triggers is incorrect because triggers are defined by the on key, while the jobs section defines what jobs run and how they relate to one another.
Match each description to the workflow key. Triggers usually map to on, the display label maps to name, and coordination of execution order maps to jobs with needs. Remove answers that mention tools from another platform.
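A sketch of how needs coordinates ordering. Here build and test run in parallel by default, and deploy waits for both to succeed:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Build"
  test:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Test"
  deploy:
    needs: [build, test]    # explicit dependency: runs only after both finish
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploy"
```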
BrightCircuit plans to list a reusable GitHub Action on GitHub Marketplace, and the release checklist instructs the team to keep the repository limited to the action metadata, the implementation code, and only the files the action actually needs. What is the primary outcome of following this requirement when they publish the action?
- ✓ B. To satisfy GitHub Marketplace requirements so the action can be published immediately without manual review
The correct option is To satisfy GitHub Marketplace requirements so the action can be published immediately without manual review.
GitHub Marketplace expects an action repository to contain only the action metadata, the implementation code, and the files that are strictly required to run it. Meeting this packaging requirement aligns the project with Marketplace policies and enables automatic publishing without waiting for a human to review the repository. The primary outcome is compliance that unlocks streamlined publishing rather than performance gains or external integrations.
To reduce the repository footprint so developers can clone it more quickly is not the primary outcome. While a smaller repository can clone faster, the checklist requirement exists to satisfy Marketplace rules and allow automated publishing.
To improve the repository’s placement in GitHub search results is not correct because search ranking is not determined by stripping the repository to only action files. The requirement targets Marketplace compliance and publishing flow, not search optimization.
To integrate publishing with Cloud Build in Google Cloud projects is unrelated. Marketplace publishing is a GitHub process and is not tied to Google Cloud Build integrations.
When a requirement references Marketplace listing rules, map it to compliance and publishing behavior rather than performance or unrelated platform integrations. Focus on what enables automatic approval versus what would trigger manual review.
Your team at HarborPeak Labs is setting up a CI pipeline in GitHub Actions for a Python service and you are deciding when to use the “run” field in steps alongside reusable actions. What is a practical benefit of adding shell commands directly within job steps?
- ✓ C. Shell commands enable running tailored scripts and commands inside a job which provides flexibility
The correct option is Shell commands enable running tailored scripts and commands inside a job which provides flexibility.
Using the run field lets you write ad hoc commands that glue together reusable actions and cover gaps that a marketplace action does not address. You can set up the environment, run linters and tests, transform files, or invoke command line tools available on the runner. This approach gives precise control without needing to create a new action for every small task.
The statement Shell commands let you trigger a workflow manually from the GitHub Actions interface is incorrect. Manual runs are enabled by defining the workflow_dispatch event in the workflow file and are not related to adding run steps.
The statement Shell commands allow workflows to deploy directly to Google Cloud without any additional tools is incorrect. Deploying to Google Cloud requires credentials and appropriate tooling such as the gcloud CLI or official Google Cloud actions and shell commands alone do not remove those requirements.
The statement Shell commands eliminate the need for GitHub Secrets when handling sensitive configuration is incorrect. Secrets are still necessary and shell steps should consume sensitive values from secrets or another secure store rather than hard coding them.
Watch for answers that overpromise by replacing features like workflow_dispatch or secrets. For GitHub Actions questions, the run field usually signals flexibility to perform ad hoc setup and glue tasks around reusable actions.
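A sketch of run steps gluing together reusable actions in a Python CI job; the requirements file and the lint and test tools are assumptions about the project:

```yaml
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4           # reusable action
      - uses: actions/setup-python@v5       # reusable action
        with:
          python-version: '3.12'
      - name: Lint and test with ad hoc commands
        run: |
          pip install -r requirements.txt
          flake8 .
          pytest
```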
NimbusRetail operates several self-hosted GitHub Actions runners on its own servers and on a virtual machine in Google Cloud, and the team signs in to the GitHub web interface to review each runner’s current state; which status values might appear for a self-hosted runner on that page? (Choose 3)
- ✓ A. offline
- ✓ C. active
- ✓ D. idle
The correct options are offline, active, and idle.
offline appears when the runner service is not connected to GitHub. This is what you see if the machine is shut down or the runner process is stopped, which means it cannot accept jobs.
active appears when the runner is currently executing a job. The status indicates that the machine is busy running a workflow and is not available for a new job until it finishes.
idle appears when the runner is online and healthy but is not running a job. It is connected and ready to pick up work.
overloaded is not a valid runner status on the GitHub runners page. If jobs are queued or capacity is limited, GitHub still shows the runner as idle when waiting or active when working, and it does not use the label overloaded.
Match the status words to the product UI and prefer the exact labels used in the docs. Distractors often sound plausible but are not the official terms shown in the interface.
You maintain a widely used GitHub Actions workflow for the engineering group at example.com and you have seen releases drift when tags are moved or deleted. What is the most reliable way to pin action versions so builds remain consistent and supply chain risks stay low?
- ✓ B. Pin each action reference to a full commit SHA so the target cannot change
The correct option is Pin each action reference to a full commit SHA so the target cannot change.
This approach uses an immutable identifier for each action so the referenced code cannot be altered after you choose it. It ensures reproducible builds and lowers supply chain risk because any change requires a new commit which produces a different identifier that you must intentionally adopt.
Use branch names for versioning to avoid tag churn is wrong because branches move with new commits and can be force pushed or deleted which means the referenced code can change underneath you and your builds will drift.
Binary Authorization is wrong because it is a Google Cloud deployment control for container images and it does not pin what GitHub Actions executes inside a workflow.
Pin to an abbreviated commit SHA to keep the reference concise is wrong because shortened identifiers can be ambiguous as the repository grows and GitHub Actions expects the full forty character commit identifier for reliable pinning. This does not meet the reliability and security goals.
When a question mentions supply chain risk or reproducible builds in GitHub Actions, prefer immutable identifiers. Choose a full commit SHA over tags or branches.
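A sketch of the difference; the SHA shown is illustrative, and the trailing comment is a common convention for recording which release the SHA corresponds to:

```yaml
steps:
  # Tag reference: mutable, the tag can be moved or deleted after you adopt it
  # - uses: actions/checkout@v4

  # Full commit SHA: immutable, so the referenced code cannot change
  - uses: actions/checkout@1234567890abcdef1234567890abcdef12345678 # v4.2.2
```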
CedarPeak Analytics wants to free space in GitHub Actions by removing older workflow artifacts from a private repository, and the team is planning a cleanup of items from past runs. Before proceeding with the deletions, what key factor should they keep in mind to prevent unintended data loss?
- ✓ C. After deletion an artifact cannot be brought back
The correct option is After deletion an artifact cannot be brought back.
Deleting a GitHub Actions artifact is permanent, so once removed there is no recovery path. You should download or back up anything you might need before you clean up and you can also adjust retention settings to let GitHub expire artifacts automatically in the future.
You can open a support ticket to restore a deleted artifact within 48 hours is incorrect because GitHub does not offer restoration of deleted artifacts and Support cannot recover them after deletion.
You do not need write access to the repository to delete artifacts is incorrect because deleting artifacts requires repository write permission in both the user interface and the API.
Removing artifacts does not change your GitHub Actions storage usage is incorrect because artifacts consume Actions storage and deleting them reduces storage usage and can lower costs.
Scan for platform guarantees and permission cues. For GitHub Actions artifacts, deletions are irreversible, require write access, and reduce storage usage.
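Instead of manual cleanup, retention can be set per artifact at upload time so GitHub expires old artifacts automatically. A sketch, assuming the build output lives in dist/:

```yaml
steps:
  - uses: actions/upload-artifact@v4
    with:
      name: build-output
      path: dist/
      retention-days: 7   # GitHub deletes the artifact automatically after 7 days
```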
When creating a custom GitHub Action, which single filename should you use for the metadata file that defines the inputs, the outputs, and how the action runs?
- ✓ B. action.yml
The correct option is action.yml.
GitHub Actions reads action.yml at the root of the action directory to define the action metadata. This file declares the name and description along with inputs and outputs and the runtime configuration. It tells GitHub whether the action is JavaScript, a Docker container, or a composite action and it provides the entry point and how inputs map to arguments and outputs.
workflow.yaml is used for workflow files in .github/workflows and it defines repository automation rather than metadata for a reusable action.
requirements.yaml is not a GitHub Actions metadata file and it belongs to other ecosystems and it does not define action inputs, outputs, or runtime behavior.
action.json is not valid for action metadata since GitHub requires a YAML metadata file and JSON is not supported for this purpose.
When a question mentions a specific filename, match the scope. The metadata for a custom action uses action.yml, workflows live in .github/workflows, and dependency files belong to the language or container you use.
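A minimal, hypothetical action.yml for a JavaScript action, showing the name, inputs, outputs, and runtime sections:

```yaml
# action.yml at the root of the action repository
name: Greet
description: Prints a greeting
inputs:
  who:
    description: Name to greet
    required: false
    default: world
outputs:
  message:
    description: The greeting that was printed
runs:
  using: "node20"       # JavaScript action runtime
  main: dist/index.js   # entry point
```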
A development team at BlueRiver Labs uses GitHub Actions to run scripts in a monorepo and wants all run steps to use the same relative path without repeating the path in every step. What is the primary purpose of configuring a default working directory in the workflow?
- ✓ C. Set the repository path where run steps execute so relative paths resolve correctly
The correct option is Set the repository path where run steps execute so relative paths resolve correctly. This ensures every run step starts from the same base folder so the team does not need to repeat the path in each step of the workflow.
Configuring a default working directory makes all run steps inherit the same starting directory. In GitHub Actions this is done with the defaults for run which causes relative paths in commands and scripts to resolve from that directory. This simplifies monorepo workflows and reduces repetition and path errors.
Limit script access to only certain folders on the runner is incorrect because a working directory does not restrict filesystem access. It only sets the starting directory and does not act as a sandbox.
Choose the directory where command output files are stored after scripts run is incorrect because output locations are determined by the commands or by artifact upload steps. The working directory only defines where commands begin execution.
Improve job isolation to harden security for each script execution is incorrect because isolation comes from separate jobs, runners, containers, and permissions. The working directory setting does not change isolation.
When you see a need to avoid repeating paths across many run steps in a monorepo, look for the defaults.run.working-directory setting and clues like relative paths resolving correctly.
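A sketch of the setting at the workflow level; the monorepo path services/api is an assumption, and the same defaults block can also be placed inside a single job:

```yaml
defaults:
  run:
    working-directory: services/api   # every run step starts here
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/test.sh        # resolves relative to services/api
```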
At Aurora Analytics your platform team manages GitHub Actions pipelines for twelve repositories. Each workflow repeats the same tasks to prepare environments, run unit and integration tests, and deploy services. Keeping these duplicated steps in every workflow is tedious and difficult to maintain. Which GitHub Actions capability should you use to package the shared steps so they can be reused by many workflows with minimal configuration?
- ✓ C. Create a composite action that groups common steps into a single reusable action
The correct option is Create a composite action that groups common steps into a single reusable action. This packages the repeated setup test and deployment steps into one versioned unit that workflows in many repositories can call with a few inputs which minimizes duplication and maintenance.
This approach keeps the shared logic close to the steps you want to reuse. It runs inside the caller job so teams can drop it into any workflow without restructuring their pipelines. You can define inputs and outputs to keep configuration minimal and you can version the action so updates are controlled and predictable.
Use a reusable workflow called with workflow_call to share a standard pipeline across repositories is better when you want to orchestrate whole workflows across jobs rather than embed a small set of steps inside an existing job. A reusable workflow is invoked as a separate workflow or job which adds structure that is not necessary when you only need to bundle common steps.
Create a Docker container action to isolate workflow commands inside a dedicated image is unnecessary when you simply want to group existing steps. A container action adds the overhead of building and publishing an image and it focuses on packaging tools rather than reducing step duplication across workflows.
Develop a JavaScript action to implement custom workflow logic in Node.js is suited for custom programmatic behavior. In this scenario you already have shell based steps to reuse and a JavaScript action would add code complexity without solving the need to group and reuse those steps.
When the question emphasizes reusing a sequence of steps inside jobs across many repositories think composite action. When it asks about sharing a full pipeline made of multiple jobs think reusable workflow.
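A sketch of a composite action's metadata file; the input name and the setup and test commands are illustrative:

```yaml
# action.yml for a composite action shared across repositories
name: Prepare and Test
description: Common setup and test steps
inputs:
  python-version:
    description: Python version to install
    default: '3.12'
runs:
  using: "composite"
  steps:
    - uses: actions/setup-python@v5
      with:
        python-version: ${{ inputs.python-version }}
    - run: pip install -r requirements.txt
      shell: bash     # shell is required for run steps in composite actions
    - run: pytest
      shell: bash
```

A caller workflow then invokes it with a single step, for example `uses: my-org/prepare-and-test@v1`.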
An engineer at mcnz.com is reviewing a GitHub Actions workflow and wants to understand how default environment variables behave during a run. Which statement best describes their availability across the workflow?
- ✓ D. Default environment variables are predefined by GitHub and every step in the workflow can access them
The correct option is Default environment variables are predefined by GitHub and every step in the workflow can access them.
GitHub provides a standard set of variables for each job so every step can read them without extra configuration. These include items such as GITHUB_SHA and GITHUB_REF and they are available as normal shell environment variables. You can also read their values in expressions through contexts such as the github context, which confirms they are not limited to a special env context.
Default environment variables can be referenced only through the env context defined in the workflow file is incorrect because the defaults are real environment variables that steps can read directly, and they can also be accessed through contexts. The env context is useful for variables you define in the workflow but it is not the only way to read the defaults.
Default environment variables are created only when a step writes to GITHUB_ENV during the run is incorrect because the defaults exist before any step runs. Writing to GITHUB_ENV creates or updates custom variables for subsequent steps in the same job and does not create the defaults.
Default environment variables are visible only to steps that explicitly request them is incorrect because the defaults are automatically available to all steps in a job and do not require an opt in.
Differentiate default variables from those you create yourself. Defaults are available automatically to all steps, while variables written with GITHUB_ENV affect only later steps in the same job.
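A sketch showing that default variables need no env block or opt in; any run step can read them directly:

```yaml
steps:
  - name: Show default variables
    # No env block is needed; GitHub predefines these for every step
    run: |
      echo "Commit: $GITHUB_SHA"
      echo "Ref: $GITHUB_REF"
      echo "Repository: $GITHUB_REPOSITORY"
```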
| Jira, Scrum & AI Certification |
|---|
| Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out. You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today. |
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.
