Deploy Helm Charts

Updated 2 months ago by Michael Cretzman

You can deploy Helm charts, written in standard Helm syntax in YAML, from a remote Git repo, an HTTP Helm repository, or a cloud storage service (Google Cloud Storage, AWS S3).

This process is also covered in the Helm CD Quickstart.


Supported Platforms and Technologies

See Supported Platforms and Technologies.

Review: Artifacts and Helm Charts

Harness supports image artifacts with Helm charts in the following ways.

Helm Chart with Hardcoded Artifact

The image artifact is identified in the Helm chart values.yaml file. For example:

...
containers:
  - name: nginx
    image: docker.io/bitnami/nginx:1.21.1-debian-10-r0
...

If the image is hardcoded, do not use the Artifacts section of the Service. Any artifacts added there are ignored.

Helm Chart using Artifact Added to the Stage

You add an image artifact to the Artifacts section of the Service and then reference it in the Helm chart values.yaml file.

Artifacts in the Artifacts section are referenced using the <+artifact.image> expression. For example:

...
image: <+artifact.image>
pullPolicy: IfNotPresent
dockercfg: <+artifact.imagePullSecret>
...

This is the same method when using Artifacts with standard Kubernetes deployments. See Add Container Images as Artifacts for Kubernetes Deployments.
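For reference, here is how a chart's deployment template typically consumes the values keys shown above. This is a minimal sketch, not taken from a specific chart; the template path and key names depend on how your chart is written:

```yaml
# templates/deployment.yaml (illustrative fragment)
spec:
  containers:
    - name: {{ .Chart.Name }}
      # Rendered from the values.yaml above. By the time Helm renders
      # this template, Harness has already resolved <+artifact.image>
      # to the concrete image tag from the Artifacts section.
      image: {{ .Values.image }}
      imagePullPolicy: {{ .Values.pullPolicy }}
```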

Step 1: Add the Helm Chart

Adding a Helm chart is a simple process of connecting Harness to the Git or HTTP Helm repo where your chart is located.

In your CD stage, click Service.

In Service Definition, select Kubernetes.

In Manifests, click Add Manifest.

In Specify Manifest Type, select Helm Chart, and click Continue.

In Specify Helm Chart Store, select the type of repo or cloud storage service (Google Cloud Storage, AWS S3) you're using.

For the steps and settings of each option, see the Connect to an Artifact Repo How-tos.

If you are using Google Cloud Storage or Amazon S3, see Cloud Platform Connectors.

If you haven't set up a Harness Delegate, you can add one as part of the Connector setup. This process is described in Helm CD Quickstart and Install a Kubernetes Delegate.

Once your Helm chart is added, it appears in the Manifests section.
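In the Service Definition YAML, the manifest entry looks roughly like the following. This is a sketch with placeholder identifiers and connector names; the exact `store` fields depend on the chart store type you selected:

```yaml
manifests:
  - manifest:
      identifier: nginx_chart            # placeholder identifier
      type: HelmChart
      spec:
        store:
          type: Http                     # Git, Gcs, or S3 for other stores
          spec:
            connectorRef: bitnami_helm_repo   # placeholder Connector name
        chartName: nginx
        chartVersion: 9.4.1
        helmVersion: V3
```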

Option: Reference the Artifact

If the image artifact is not hardcoded in the Helm chart, add the artifact in Artifacts and use the expression <+artifact.image> in your values.yaml. For example:

...
image: <+artifact.image>
pullPolicy: IfNotPresent
dockercfg: <+artifact.imagePullSecret>
...

This is the same method when using Artifacts with standard Kubernetes deployments. See Add Container Images as Artifacts for Kubernetes Deployments.

Option: Override Chart Values YAML in Service

You can override the values YAML in the Helm chart by adding a values YAML in Manifests.

You add values YAML files in the same way you added your chart. You simply select Values YAML in Specify Manifest Type.

In Manifest Details, you enter the path to each values.yaml file. You can add multiple files.

If you use multiple files, later files take priority over earlier ones. For example, if you have three files and they all contain the same key, the value in the third file overrides the values in the second and first files.
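To sketch the ordering, here are three values files (file names are illustrative) that all set the same key; the file added last wins:

```yaml
# values-1.yaml
replicaCount: 1

# values-2.yaml
replicaCount: 2

# values-3.yaml
replicaCount: 3

# Effective value at deploy time: replicaCount: 3
# (values-3.yaml overrides values-2.yaml, which overrides values-1.yaml)
```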

Your values.yaml files can combine Go templating with Harness built-in variable expressions.

See Example Kubernetes Manifests using Go Templating.

Option: Override Chart Values YAML in Environment

You can override the values YAML file for a stage's Environment by mapping the Environment name to the values file or folder. Next, you use the <+env.name> Harness expression in the values YAML path.

Let's look at an example.

Here is a repo with three values files: dev.yaml, qa.yaml, and prod.yaml. In the File Path for the values file, you use the <+env.name> expression. Next, in the Environment setting, you add three Environments, one for each YAML file name.

When you select an Environment, such as qa, the name of the Environment is used in File Path and resolves to qa.yaml. At runtime, the qa.yaml values file is used, and it overrides the values.yaml file in the chart.

Instead of selecting the Environment in the Infrastructure each time, you can set the Environment as a Runtime Input and then enter dev, qa, or prod at runtime.
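The setup described above can be sketched as follows (the folder and file names are illustrative):

```yaml
# File Path entered in Manifest Details:
#   helm/values/<+env.name>.yaml
#
# With an Environment named "qa", the path resolves at runtime to:
#   helm/values/qa.yaml
# and qa.yaml overrides the chart's default values.yaml.
```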

Option: Override Chart Values YAML at Runtime

You can make the values file path a Runtime Input and enter the name of the values file when you run the Pipeline.

In Manifest Details for the values file, in File Path, select Runtime Input. At runtime, enter the name of the values file to use.

The values file you specify at runtime will override the values.yaml in the chart.
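In the Pipeline YAML, a Runtime Input appears as the <+input> placeholder, which you replace with a concrete path when you run the Pipeline. This is a sketch; the exact placement of the values file paths depends on your manifest type, and the path shown is illustrative:

```yaml
# In the saved Pipeline YAML:
valuesPaths:
  - <+input>

# Supplied when you run the Pipeline:
valuesPaths:
  - helm/values/qa.yaml   # illustrative path
```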

Step 2: Define the Infrastructure and Execution

There is nothing unique about defining the target cluster Infrastructure Definition for a Helm chart deployment. It is the same process as a typical Harness Kubernetes deployment.

See Define Your Kubernetes Target Infrastructure.

Helm charts can be deployed using any of the Execution steps and deployment strategies used in other Kubernetes deployments. See Kubernetes How-tos.

Step 3: Deploy

Each Helm chart deployment is treated as a release. During deployment, when Harness detects that there is a previous release for the chart, it upgrades the chart to the new release.

In your Pipeline, click Run.

The Helm chart deployment runs.

You will see Harness fetch the Helm chart. Here is an example:

Helm repository: Bitnami Helm Repo

Chart name: nginx

Chart version: 9.4.1

Helm version: V3

Repo url: https://charts.bitnami.com/bitnami

Successfully fetched values.yaml

Fetching files from helm chart repo

Helm repository: Bitnami Helm Repo

Chart name: nginx

Helm version: V3

Repo url: https://charts.bitnami.com/bitnami

Successfully fetched following files:

- nginx/.helmignore
- nginx/charts/common/.helmignore
- nginx/charts/common/templates/validations/_postgresql.tpl
- nginx/charts/common/templates/validations/_cassandra.tpl
- nginx/charts/common/templates/validations/_mongodb.tpl
- nginx/charts/common/templates/validations/_mariadb.tpl
- nginx/charts/common/templates/validations/_validations.tpl
- nginx/charts/common/templates/validations/_redis.tpl
- nginx/charts/common/templates/_ingress.tpl
- nginx/charts/common/templates/_names.tpl
- nginx/charts/common/templates/_affinities.tpl
- nginx/charts/common/templates/_storage.tpl
- nginx/charts/common/templates/_utils.tpl
- nginx/charts/common/templates/_errors.tpl
- nginx/charts/common/templates/_capabilities.tpl
- nginx/charts/common/templates/_secrets.tpl
- nginx/charts/common/templates/_warnings.tpl
- nginx/charts/common/templates/_tplvalues.tpl
- nginx/charts/common/templates/_images.tpl
- nginx/charts/common/templates/_labels.tpl
- nginx/charts/common/Chart.yaml
- nginx/charts/common/values.yaml
- nginx/charts/common/README.md
- nginx/Chart.lock
- nginx/templates/svc.yaml
- nginx/templates/health-ingress.yaml
- nginx/templates/ldap-daemon-secrets.yaml
- nginx/templates/tls-secrets.yaml
- nginx/templates/NOTES.txt
- nginx/templates/pdb.yaml
- nginx/templates/ingress.yaml
- nginx/templates/server-block-configmap.yaml
- nginx/templates/serviceaccount.yaml
- nginx/templates/hpa.yaml
- nginx/templates/servicemonitor.yaml

Done.

Next, Harness will initialize and prepare the workloads, apply the Kubernetes manifests, and wait for steady state.

In Wait for Steady State you will see the workloads deployed and the pods scaled up and running (the release name has been shortened for readability):

kubectl --kubeconfig=config get events --namespace=default --output=custom-columns=KIND:involvedObject.kind,NAME:.involvedObject.name,NAMESPACE:.involvedObject.namespace,MESSAGE:.message,REASON:.reason --watch-only

kubectl --kubeconfig=config rollout status Deployment/release-e008...ee-nginx --namespace=default --watch=true

Status : release-e008...ee-nginx Waiting for deployment spec update to be observed...

Event : release-e008...ee-nginx Deployment release-e008...ee-nginx default Scaled up replica set release-e008...ee-nginx-779cd786f6 to 1 ScalingReplicaSet

Status : release-e008...ee-nginx Waiting for deployment spec update to be observed...

Status : release-e008...ee-nginx Waiting for deployment "release-e008...ee-nginx" rollout to finish: 0 out of

Event : release-e008...ee-nginx ReplicaSet release-e008...ee-nginx-779cd786f6 default Created pod: release-e008...ee-nginx-779n765l SuccessfulCreate

Status : release-e008...ee-nginx Waiting for deployment "release-e008...ee-nginx" rollout to finish: 0 of 1 updated replicas are available...

Event : release-e008...ee-nginx Pod release-e008...ee-nginx-779n765l default Successfully assigned default/release-e008...ee-nginx-779n765l to gke-doc-account-default-pool-d910b20f-argz Scheduled

Event : release-e008...ee-nginx Pod release-e008...ee-nginx-779n765l default Pulling image "docker.io/bitnami/nginx:1.21.1-debian-10-r0" Pulling

Event : release-e008...ee-nginx Pod release-e008...ee-nginx-779n765l default Successfully pulled image "docker.io/bitnami/nginx:1.21.1-debian-10-r0" in 3.495150157s Pulled

Event : release-e008...ee-nginx Pod release-e008...ee-nginx-779n765l default Created container nginx Created

Event : release-e008...ee-nginx Pod release-e008...ee-nginx-779n765l default Started container nginx Started

Status : release-e008...ee-nginx deployment "release-e008...ee-nginx" successfully rolled out

Done.

Your deployment is successful.

Versioning and Rollback

Helm chart deployments support versioning and rollback in the same way as standard Kubernetes deployments.

See Kubernetes Rollback.
