GitHub Actions#
GitHub Actions is a powerful automation platform that can be used to implement Continuous Integration (CI) and Continuous Deployment (CD) workflows within your GitHub repository.
Note
The following assumes that you have already created a GitHub repository and are working inside it.
Creating Workflows#
A GitHub Actions workflow is a customizable collection of one or more jobs defined in a YAML file. GitHub Actions discovers workflows by looking in a specific directory at the root of the repository. To get started, create this directory with the following command:
```shell
mkdir -p .github/workflows
```
All of our workflow YAML definition files will be placed in this directory. There is no limit to how many workflows we can define, but there is a limit, depending on your GitHub plan, on how many jobs can run simultaneously. A workflow must contain a few basic components: an event that triggers the workflow, and one or more jobs to run when it is triggered.
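As a minimal sketch of those components, a workflow with a single job might look like this (the workflow name, job name, and echoed message are ours, not from the examples below):

```yaml
# .github/workflows/hello.yaml - a hypothetical minimal workflow
name: Hello
# The event that triggers the workflow: any push to the repository
on: push
jobs:
  # A single job with one step, run on a GitHub-hosted Ubuntu runner
  say-hello:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Hello from GitHub Actions"
```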
GitHub Provided Runners#
Build Docker image#
GitHub Actions can be used to build a Docker image whenever new code is pushed to a specific directory, and then push that image to a container registry like Docker Hub.
This example workflow uses a few different GitHub Actions to build a Docker image and push it to a container registry. Each step is explained in detail via inline comments.
Note
The workflow examples include jobs that read secret information stored in the GitHub repository. Here is a link to information on using secrets in GitHub Actions.
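Besides passing secrets to an action's `with:` inputs, a secret can also be exposed to a step as an environment variable. A hedged sketch (the secret name and script are hypothetical):

```yaml
steps:
  - name: Deploy using a secret token
    # Values from the secrets context are masked in workflow logs
    env:
      API_TOKEN: ${{ secrets.API_TOKEN }}
    run: ./deploy.sh
```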
Example .github/workflows/build-push-docker.yaml#
```yaml
# This workflow builds Docker images and pushes them to a Docker Hub repository
# Set the workflow name
name: Build & Push Docker Image
# Define the trigger that starts the action
# For this workflow the trigger is a push that changes anything in the web-app/ path
on:
  push:
    paths:
      - web-app/**
# Define the actions that are going to take place as part of this workflow
jobs:
  # Name the job(s)
  build-push-docker-image:
    # Define where the job should run; in this case it runs on the latest Ubuntu image
    runs-on: ubuntu-latest
    # Set the steps to take in order
    steps:
      # Step 1 is to check out the GitHub repo used to build the Dockerfile
      - name: Check out the repo
        uses: actions/checkout@v3
      # Step 2 is to log in to Docker Hub so the image can be pushed
      - name: Login to Docker Hub
        uses: docker/login-action@v2
        # GitHub secrets are used to provide login information to Docker Hub
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # Pull relevant metadata out of the Docker image used
      - name: Extract metadata for Docker
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ncote/web-app
      # Get the date to apply to the image tag
      - name: Get current date
        id: date
        run: echo "date=$(date +'%Y-%m-%d.%H')" >> $GITHUB_OUTPUT
      # Build and push the Docker image
      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          # Provide the current directory as build context
          context: .
          # Specify where the Dockerfile is located in relation to the repo base path
          file: Dockerfile
          # Enable the push to Docker Hub
          push: true
          # Provide the tags to apply to the image; this example uses the date and time as the image tag
          tags: |
            ncote/web-app:${{ steps.date.outputs.date }}
          # Apply labels as defined in the Docker image metadata
          labels: ${{ steps.meta.outputs.labels }}
```
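The `Get current date` step above communicates its result to later steps by appending a `key=value` line to the file GitHub Actions exposes as `$GITHUB_OUTPUT`. The mechanism can be sketched locally by substituting a temporary file for the real one:

```shell
# Stand in for the per-step output file GitHub Actions provides
GITHUB_OUTPUT=$(mktemp)

# The same command the "Get current date" step runs
echo "date=$(date +'%Y-%m-%d.%H')" >> "$GITHUB_OUTPUT"

# GitHub Actions parses key=value lines from this file and exposes them
# to later steps as ${{ steps.date.outputs.date }}
cat "$GITHUB_OUTPUT"
```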
Update Helm Chart#
The CISL Cloud uses Argo CD to sync an application's Helm chart, hosted in a code repository, whenever changes are made. Once the Helm chart has been initially deployed, a GitHub Actions job can be set up to update the Helm chart with the newest image build and tag for your application. This enables CI/CD: when changes are made to the source code, GitHub Actions builds a new image with those changes and updates the Helm chart, and Argo CD then syncs the change, automatically updating your K8s-hosted application.
Example .github/workflows/web-app-cisl-cicd.yaml#
```yaml
# This workflow builds a Docker image, pushes it to a Docker Hub repository,
# and updates the Helm chart so Argo CD redeploys the application
# Set the workflow name
name: CISL Cloud CICD Workflow
# Define the trigger that starts the action
# For this workflow the trigger is a push that changes anything in the web-app/ path on the repository's main branch
on:
  push:
    paths:
      - web-app/**
    branches:
      - main
# Define the actions that are going to take place as part of this workflow
jobs:
  # Name the job(s)
  web-app-cicd:
    # Define where the job should run.
    # This example runs on a GitHub-hosted system using the latest Ubuntu as the OS
    runs-on: ubuntu-latest
    # Set the steps to take in order
    steps:
      # Step 1 is to check out the GitHub repo used to build the Dockerfile
      - name: Checkout the repo
        uses: actions/checkout@v3
      # Step 2 is to log in to Docker Hub so the image can be pushed
      - name: Login to Docker Hub
        uses: docker/login-action@v2
        # GitHub secrets are used to provide login information to Docker Hub
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # Step 3 gets the current date and time down to the minute
      # This is used as the tag for our Docker images to provide versioning
      # Note: Avoid using latest as your tag because it never changes
      - name: Get current date
        id: date
        run: echo "date=$(date +'%Y-%m-%d.%H.%M')" >> $GITHUB_OUTPUT
      # Step 4 builds and pushes a container image based on the Dockerfile in the base directory
      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          # Provide the current directory as build context
          context: .
          # Specify where the Dockerfile is located in relation to the repo base path
          file: Dockerfile
          # Enable the push to Docker Hub
          push: true
          # Provide the tags to apply to the image
          # This example uses the current date and time, down to the minute, as the image tag
          tags: |
            ncote/web-app-image:${{ steps.date.outputs.date }}
      # Step 5 uses sed to replace the image: line in the Charts/helm-chart/values.yaml file with the new image
      # Note that sed is finicky and should be tested against your Helm chart configuration
      - name: Update Helm values.yaml
        run: |
          sed -i "/web-app-image/ c\ image: ncote/web-app-image:${{ steps.date.outputs.date }}" Charts/helm-chart/values.yaml
      # Step 6 uses sed to replace the appVersion line in the Charts/helm-chart/Chart.yaml file with the new version (current date and time)
      - name: Update Helm Chart.yaml
        run: |
          sed -i "/appVersion:/ c\appVersion: ${{ steps.date.outputs.date }}" Charts/helm-chart/Chart.yaml
      # Step 7 runs a Python script that increments the semantic minor version in Charts/helm-chart/Chart.yaml by 1
      - name: Run python script to update minor version by 1
        run: python scripts/update_ver.py
      # Step 8 pushes the Helm chart changes to GitHub, triggering a sync with Argo CD, and ultimately updating the application
      # Argo CD scans a configured repository every 3 minutes, so changes should appear within 3 minutes
      - name: Push changes to GitHub
        run: |
          git config --global user.email "$GITHUB_ACTOR@users.noreply.github.com"
          git config --global user.name "$GITHUB_ACTOR"
          git commit -a -m "Update Helm chart via GH Actions"
          git push
```
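Since sed is finicky, the replacement in step 5 is worth testing against a local copy of values.yaml before committing the workflow. A sketch using a made-up values.yaml and tag:

```shell
# A sample values.yaml resembling the chart's (contents are hypothetical)
cat > values.yaml <<'EOF'
replicaCount: 1
image: ncote/web-app-image:2024-01-01.00.00
EOF

# The same sed pattern the workflow uses: replace the entire line matching
# /web-app-image/ with a new image: line carrying the new tag
sed -i "/web-app-image/ c\image: ncote/web-app-image:2024-06-01.12.30" values.yaml

cat values.yaml
```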
The example runs scripts/update_ver.py to increment only the semantic minor version. It could be altered to change the major and patch versions as well. Below is the code contained in that script:
```python
with open('Charts/helm-chart/Chart.yaml', 'r') as chart:
    data = chart.readlines()

for num, line in enumerate(data):
    line = line.replace('\n', '')
    # startswith avoids also matching the appVersion: line
    if line.startswith('version:'):
        ver = line.split('.')
        # ver[0] is "version: 0" where 0 is the Major version
        # ver[1] is the Minor version
        # ver[2] is the Patch version
        ver[1] = str(int(ver[1]) + 1)
        data[num] = '.'.join(ver) + '\n'

with open('Charts/helm-chart/Chart.yaml', 'w') as chart:
    chart.write(''.join(data))
```
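The bump logic can be exercised in isolation before wiring it into the workflow. A minimal sketch (the helper name is ours, not part of the repository):

```python
def bump_minor(version_line: str) -> str:
    """Increment the minor component of a 'version: MAJOR.MINOR.PATCH' line."""
    parts = version_line.split('.')
    # parts[0] is "version: 0" where 0 is the major version,
    # parts[1] is the minor version, parts[2] is the patch version
    parts[1] = str(int(parts[1]) + 1)
    return '.'.join(parts)

print(bump_minor('version: 0.1.0'))  # → version: 0.2.0
```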
Self-Hosted Runners#
Note
The self-hosted runner image listed in the repository below currently builds container images with Podman and does not require root access.
GitHub runners for a repository can be self-hosted on the on-premises cloud hardware. This requires a new Kubernetes Deployment object to be created, either via a Helm chart or a Kubernetes manifest. A container image has been created that can be used as a template for new runners; it requires the repository location and an API key to be provided as arguments. The code can be viewed at this link to a runner GitHub repository. All the required details and instructions to implement a new self-hosted runner can be found in that repository's README file.
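As a rough sketch of what such a Deployment manifest could look like (the image path, environment variable names, and secret name are all assumptions; consult the runner repository's README for the real interface):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: github-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: github-runner
  template:
    metadata:
      labels:
        app: github-runner
    spec:
      containers:
        - name: runner
          # Hypothetical image path; use the image from the runner repository
          image: hub.k8s.ucar.edu/my-project/github-runner:latest
          env:
            # The repository to register against and an API key, supplied as
            # arguments per the runner image's README (names are assumptions)
            - name: GH_REPO
              value: "https://github.com/my-org/my-repo"
            - name: GH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: github-runner-token
                  key: token
```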
Pushing to hub.k8s.ucar.edu#
Note
When using the internal container registry, Harbor, to push container images to a project, a robot account should be created and used to authenticate. More information can be found at this link to using robot accounts.
Because the self-hosted runners are located on-premises, they have access to an internal container registry built on Harbor and located at hub.k8s.ucar.edu. In the GitHub Actions workflow, include a step to log in to Harbor with a robot account, using repository secrets for the password, and another step to push the image to Harbor. An example workflow that includes this can be found in the GitHub runner repository linked above.
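Those steps might be sketched as follows (the secret names and the Harbor project path are ours; see the runner repository linked above for the real example):

```yaml
steps:
  - name: Login to Harbor
    uses: docker/login-action@v2
    with:
      # Point the login action at the internal registry instead of Docker Hub
      registry: hub.k8s.ucar.edu
      # Robot account credentials stored as repository secrets (names are assumptions)
      username: ${{ secrets.HARBOR_ROBOT_USER }}
      password: ${{ secrets.HARBOR_ROBOT_TOKEN }}
  - name: Build and push to Harbor
    uses: docker/build-push-action@v4
    with:
      context: .
      push: true
      # Hypothetical project/repository path within Harbor
      tags: hub.k8s.ucar.edu/my-project/web-app:latest
```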