Kubernetes Services

A Kubernetes Service represents the app/microservice you are deploying using Harness Pipelines.

Each Stage's Service Definition includes the manifests and artifacts for the Service you are deploying in a specific Stage.

Setting up your Kubernetes Service Definition involves the following steps:

Step 1: Add Manifests for Your Service

In a CD Stage Service, in Manifests, you add the specific manifests and config files your Service Definition requires.

Harness supports a number of Kubernetes manifest types and orchestration methods.

Here are the supported manifest types and how to set them up.

Kubernetes Manifest

Harness supports Kubernetes deployments using Kubernetes manifests.

If this is your first time using Harness for a Kubernetes deployment, see Kubernetes CD Quickstart.

For task-based walkthroughs of different Kubernetes features in Harness, see Kubernetes How-tos.

Add a Kubernetes Manifest

You can hardcode your artifact in your manifests, or add your artifact source to your Service Definition and then reference it in your manifests. See Reference Artifacts in Manifests.

Let's take a quick look at adding Kubernetes manifests to your Stage.

In your CD stage, click Service.

In Service Definition, select Kubernetes.

In Manifests, click Add Manifest.

In Specify Manifest Type, select K8s Manifest, and then click Continue.

In Specify K8s Manifest Store, select the Git provider.

The settings for each Git provider are slightly different, but you simply want to point to the Git account. For example, click GitHub, and then select or create a new GitHub Connector. See Connect to Code Repo.

Click Continue. Manifest Details appears.

In Manifest Identifier, enter an Id for the manifest. It must be unique. It can be used in Harness expressions to reference this manifest's settings.

For example, if the Pipeline is named MyPipeline and Manifest Identifier were manifests, you could reference the Branch setting using this expression:

  • Within the Stage: <+serviceConfig.serviceDefinition.spec.manifests.values.spec.store.spec.branch>.
  • Anywhere in the Pipeline (the Stage name is deploy): <+pipeline.stages.deploy.spec.serviceConfig.serviceDefinition.spec.manifests.values.spec.store.spec.branch>.

If you selected a Connector that uses a Git account instead of a Git repo, enter the name of the repo where your manifests are located in Repository Name.

In Git Fetch Type, select Latest from Branch or Specific Commit ID, and then enter the branch or commit Id for the repo.

For Specific Commit ID, you can also use a Git commit tag.

In File/Folder Path, enter the path to the manifest file or folder in the repo. The Connector you selected already has the repo name, so you simply need to add the path from the root of the repo.

Click Submit. The manifest is added to Manifests.
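
For reference, the resulting manifest entry in the Service YAML looks roughly like the following sketch (the Connector reference, branch, and path are illustrative):

manifests:
  - manifest:
      identifier: manifests
      type: K8sManifest
      spec:
        store:
          type: Github
          spec:
            connectorRef: my_github_connector
            gitFetchType: Branch
            branch: main
            paths:
              - deploy/manifests

This is the structure that the Branch expressions shown above (ending in store.spec.branch) resolve against.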

Values YAML

Harness Kubernetes Services can use values YAML files just as you would with Helm. Harness manifests can use Go templating with your Values YAML files, and you can include Harness variable expressions in the Values YAML files.

You cannot use Harness variable expressions in your Kubernetes object manifest files. You can only use Harness variable expressions in Values YAML files.

Add a Values YAML file

Where is your Values YAML file located?

  • Same folder as manifests: If you want to use a values file that is in the same folder as the Kubernetes manifests you selected in the Manifests File/Folder Path (described above), you can simply enter the folder path in File/Folder Path and Harness will fetch and apply the values file along with the manifests.
  • Separate from manifests: If your values file is located in a different folder, you can add it separately as a Values YAML manifest type, described below.

In your CD stage, click Service.

In Service Definition, select Kubernetes.

In Manifests, click Add Manifest.

In Specify Manifest Type, select Values YAML, and click Continue.

In Specify Values YAML Store, select the Git repo provider you're using and then create or select a Connector to that repo. The different Connectors are covered in Connect to a Git Repo.

If you haven't set up a Harness Delegate, you can add one as part of the Connector setup. This process is described in Kubernetes CD Quickstart, Helm CD Quickstart, and Install a Kubernetes Delegate.

Once you've selected a Connector, click Continue.

In Manifest Details, you tell Harness where the values.yaml is located.

In Manifest Identifier, enter a name that identifies the file, like values.

If you selected a Connector that uses a Git account instead of a Git repo, enter the name of the repo where your manifests are located in Repository Name.

In Git Fetch Type, select Latest from Branch or Specific Commit ID, and then enter the branch or commit Id for the manifest.

For Specific Commit ID, you can also use a Git commit tag.

In File Path, enter the path to the values.yaml file in the repo.

You can enter multiple values file paths by clicking Add File. At runtime, Harness will compile the files into one values file.

If you use multiple files, the last file has the highest priority and the first file the lowest. For example, if you have three files and the second and third files contain the same key:value pair as the first file, the third file's key:value pair overrides the second and first files.

Click Submit.

The values file(s) are added to the Service.
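
To illustrate the priority order described above, assume three hypothetical values files that each set the same key:

# values1.yaml (added first, lowest priority)
replicas: 1

# values2.yaml
replicas: 2

# values3.yaml (added last, highest priority)
replicas: 4

At runtime, Harness compiles these into a single values file in which replicas resolves to 4.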

Values files in both the Manifests and Values YAML

If you have values files in both the K8s Manifest File/Folder Path and in a separate Values YAML manifest, the values in the Values YAML manifest overwrite any matching values from the values file in the K8s Manifest File/Folder Path.

Helm Chart

Harness supports Helm Chart deployments. If this is your first time using Harness for a Helm Chart deployment, see Helm Chart CD Quickstart.

For a detailed walkthrough of deploying Helm Charts in Harness, including limitations and binary support, see Deploy Helm Charts. Here's a video walkthrough.

Add a Helm Chart

In your CD stage, click Service.

In Service Definition, select Kubernetes.

In Manifests, click Add Manifest.

In Specify Manifest Type, select Helm Chart, and click Continue.

In Specify Helm Chart Store, select HTTP Helm Repository, a Git repo provider, or a cloud storage service (Google Cloud Storage, AWS S3) you're using.

For the steps and settings of each option, see the Connect to an Artifact Repo or Connect to a Git Repo How-tos.

If you are using Google Cloud Storage or Amazon S3, see Cloud Platform Connectors.

If you haven't set up a Harness Delegate, you can add one as part of the Connector setup. This process is described in Helm CD Quickstart and Install a Kubernetes Delegate.

Once your Helm chart is added, it appears in the Manifests section.
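
In Manifest Details, you then identify the chart, typically by its name and version. As a rough sketch (the Connector, chart name, and version are illustrative), the resulting entry in the Service YAML might look like this:

manifests:
  - manifest:
      identifier: helm_chart
      type: HelmChart
      spec:
        store:
          type: Http
          spec:
            connectorRef: my_helm_repo_connector
        chartName: nginx
        chartVersion: 1.0.0
        helmVersion: V3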

Kustomize

Harness supports Kustomize deployments.

If this is your first time using Harness for a Kustomize deployment, see the Kustomize Quickstart.

For a detailed walkthrough of deploying Kustomize in Harness, including limitations, see Use Kustomize for Kubernetes Deployments.

Add a Kustomization

In your CD stage, click Service.

In Service Definition, select Kubernetes.

In Manifests, click Add Manifest.

In Specify Manifest Type, click Kustomize, and click Continue.

In Specify Kustomize Store, select the Git provider.

In Manifest Details, enter the following settings, test the connection, and click Submit. We are going to provide connection and path information for a kustomization located at https://github.com/wings-software/harness-docs/blob/main/kustomize/helloWorld/kustomization.yaml.

  • Manifest Identifier: enter kustomize.
  • Git Fetch Type: select Latest from Branch.
  • Branch: enter main.
  • Kustomize Folder Path: kustomize/helloWorld. This is the path from the repo root.

The kustomization is now listed.
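
For reference, the Kustomize Folder Path points to the folder containing the kustomization.yaml. A minimal kustomization.yaml might look like this (the resource file names are illustrative):

resources:
  - deployment.yaml
  - service.yaml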

Kustomize Patches

Add Kustomize Patches

In the Stage's Service, in Manifests, click Add Manifest.

In Specify Manifest Type, select Kustomize Patches, and click Continue.

In Specify Kustomize Patches Store, select your Git provider and Connector. See Connect to a Git Repo.

The Git Connector should point to the Git account or repo where your Kustomize files are located. In Kustomize Patches you will specify the path to the actual patch files.

Click Continue.

In Manifest Details, enter the path to your patch file(s):

  • Manifest Identifier: enter a name that identifies the patch file(s). You don't have to add the actual filename.
  • Git Fetch Type: select whether to use the latest branch or a specific commit Id.
  • Branch/Commit Id: enter the branch or commit Id.
  • File/Folder Path: enter the path to the patch file(s) from the root of the repo. Click Add File to add each patch file. The files you add should be the same files listed in patchesStrategicMerge of the main kustomize file in your Service.

The order in which you add file paths for patches in File/Folder Path is the same order that Harness applies the patches during the kustomization build.

Small patches that do one thing are recommended. For example, create one patch for increasing the deployment replica number and another patch for setting the memory limit.

Click Submit. The patch file(s) are added to Manifests.

When the main kustomization.yaml is deployed, the patch is rendered and its overrides are added to the deployment.yaml that is deployed.

How Harness uses patchesStrategicMerge: If the patchesStrategicMerge label is missing from the kustomization YAML file, but you have added Kustomize Patches to your Harness Service, Harness adds the Kustomize Patches you added in Harness to the patchesStrategicMerge in the kustomization file. If you have hardcoded patches in patchesStrategicMerge but did not add them to Harness as Kustomize Patches, Harness ignores them.
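
As an illustration (the file and object names are hypothetical), a kustomization.yaml that lists patches, together with a small single-purpose patch, might look like this:

# kustomization.yaml
resources:
  - deployment.yaml
patchesStrategicMerge:
  - increase-replicas.yaml

# increase-replicas.yaml: overrides only the replica count of the named Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 3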

OpenShift Template

For an overview of OpenShift support, see Using OpenShift with Harness Kubernetes.

Add an OpenShift Template

In your CD stage, click Service.

In Service Definition, select Kubernetes.

In Manifests, click Add Manifest.

In Specify Manifest Type, select OpenShift Template, and then click Continue.

In Specify OpenShift Template Store, select the Git provider where your template is located.

For example, click GitHub, and then select or create a new GitHub Connector. See Connect to Code Repo.

Click Continue. Manifest Details appears.

In Manifest Identifier, enter an Id for the manifest. It must be unique. It can be used in Harness expressions to reference this template's settings.

For example, if the Pipeline is named MyPipeline and Manifest Identifier were myapp, you could reference the Branch setting using this expression:

<+pipeline.stages.MyPipeline.spec.serviceConfig.serviceDefinition.spec.manifests.myapp.spec.store.spec.branch>

In Git Fetch Type, select Latest from Branch or Specific Commit Id/Git Tag, and then enter the branch or commit Id/tag for the repo.

In Template File Path, enter the path to the template file. The Connector you selected already has the repo name, so you simply need to add the path from the root of the repo to the file.

Click Submit. The template is added to Manifests.

OpenShift Param

For an overview of OpenShift support, see Using OpenShift with Harness Kubernetes.

Add an OpenShift Param File

In your CD stage, click Service.

In Service Definition, select Kubernetes.

In Manifests, click Add Manifest.

In Specify Manifest Type, select OpenShift Param, and then click Continue.

In Specify OpenShift Param Store, select the Git provider where your param file is located.

For example, click GitHub, and then select or create a new GitHub Connector. See Connect to Code Repo.

Click Continue. Manifest Details appears.

In Manifest Identifier, enter an Id for the param file. It must be unique. It can be used in Harness expressions to reference this param file's settings.

In Git Fetch Type, select Latest from Branch or Specific Commit Id/Git Tag, and then enter the branch or commit Id/tag for the repo.

In Paths, enter the path(s) to the param file(s). The Connector you selected already has the repo name, so you simply need to add the path from the root of the repo to the file.

Click Submit. The param file is added to Manifests.
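
For reference, an OpenShift param file is a plain list of NAME=value pairs that override the template's parameter defaults. A minimal sketch (the parameter names are illustrative):

IMAGE=registry.example.com/myapp:1.0.0
REPLICAS=2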

Option: Add the Primary Artifact Source

The Artifacts settings in the Service Definition allow you to select the artifacts for deployment instead of hardcoding them in your manifest and values YAML files.

Artifacts Overview

If a Docker image location is hardcoded in your Kubernetes manifest (for example, image: nginx:1.14.2), then you can simply add the manifest to Harness in Manifests and Kubernetes will pull the image during deployment.

Alternatively, you can add the image location to Harness as an artifact in the Artifacts.

This allows you to reference the image in your manifests and Values files using the Harness expression <+artifact.image>.

...
image: <+artifact.image>
...

You cannot use Harness variable expressions in your Kubernetes object manifest files. You can only use Harness variable expressions in Values YAML files.

When you select the artifact repo for the artifact, like a Docker Hub repo, you specify the artifact and tag/version to use. You can select a specific tag/version, use a Runtime Input so that you are prompted for the tag/version when you run the Pipeline, or use a Harness variable expression to pass in the tag/version at execution.

For example, you can set Tag to a Runtime Input so that you select which image version/tag to deploy when you run the Pipeline.
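
As a rough sketch (the Connector and image names are illustrative), the primary artifact in the Service YAML might look like this, with Tag set to a Runtime Input:

artifacts:
  primary:
    type: DockerRegistry
    spec:
      connectorRef: my_docker_connector
      imagePath: library/nginx
      tag: <+input>

When you run the Pipeline, Harness prompts you for the tag value.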

With a Harness artifact, you can template your manifests, detaching them from a hardcoded location. This makes your manifests reusable and dynamic.

In Artifacts, you add connections to the images in their repos.

In Artifacts, click Add Primary Artifact.

Select the Artifact Repository Type.

Docker

For details on all the Docker Connector settings, see Docker Connector Settings Reference.

Add an Artifact from a Docker Registry

In Artifacts, click Add Primary Artifact.

In Artifact Repository Type, click Docker Registry, and then click Continue.

The Docker Registry settings appear.

Select a Docker Registry Connector or create a new one.

Click Continue.

In Image path, enter the name of the artifact you want to deploy, such as library/nginx.

In Tag, enter or select the Docker image tag for the image.

Click Submit.

The Artifact is added to the Service Definition.

Google Container Registry (GCR)

You connect to GCR using a Harness GCP Connector. For details on all the GCR requirements for the GCP Connector, see Google Cloud Platform (GCP) Connector Settings Reference.

Add an Artifact from GCR

In Artifacts, click Add Primary Artifact.

In Artifact Repository Type, click GCR, and then click Continue.

In GCR Repository, select or create a Google Cloud Platform (GCP) Connector that connects to the GCP account where the GCR registry is located.

Click Continue.

In GCR Registry URL, select the registry where the artifact source is located.

In Image Path, enter the name of the artifact you want to deploy.

Images in repos need to reference a path starting with the project ID that the artifact is in, for example: myproject-id/image-name.

In Tag, enter or select the Docker image tag for the image or select Runtime Input or Expression.

If you use Runtime Input, when you deploy the Pipeline, Harness will pull the list of tags from the repo and prompt you to select one.

Click Submit.

The Artifact is added to the Service Definition.

Amazon Elastic Container Registry (ECR)

You connect to ECR using a Harness AWS Connector. For details on all the ECR requirements for the AWS Connector, see AWS Connector Settings Reference.

Add an Artifact from ECR

In Artifacts, click Add Primary Artifact.

In Artifact Repository Type, click ECR, and then click Continue.

In ECR Repository, select or create an AWS Connector that connects to the AWS account where the ECR registry is located.

Click Continue.

In Artifact Details, select the region where the artifact source is located.

In Image Path, enter the name of the artifact you want to deploy.

In Tag, enter or select the Docker image tag for the image.

If you use Runtime Input, when you deploy the Pipeline, Harness will pull the list of tags from the repo and prompt you to select one.

Click Submit.

The Artifact is added to the Service Definition.

Azure Container Registry (ACR)

You connect to ACR using a Harness Azure Connector. For details on all the Azure requirements for the Azure Connector, see Add a Microsoft Azure Cloud Connector.

Add an Artifact from ACR

In Artifacts, click Add Primary Artifact.

In Artifact Repository Type, click ACR, and then click Continue.

In ACR Repository, select or create an Azure Connector that connects to the Azure account where the ACR registry is located.

  • Azure ACR Permissions: make sure the Service Principal or Managed Identity has the required permissions.

Click Continue.

In Artifact Details, select the Subscription Id where the artifact source is located.

In Registry, select the ACR registry to use.

In Repository, select the repo to use.

In Tag, enter or select the tag for the image.

If you use Runtime Input, when you deploy the Pipeline, Harness will pull the list of tags from the repo and prompt you to select one.

Click Submit.

The Artifact is added to the Service Definition.

Nexus

You connect to Nexus using a Harness Nexus Connector. For details on all the requirements for the Nexus Connector, see Nexus Connector Settings Reference.

Add an Artifact from Nexus

In Artifacts, click Add Primary Artifact.

In Artifact Repository Type, click Nexus, and then click Continue.

In Nexus Repository, select or create a Nexus Connector that connects to the Nexus account where the repo is located. Click Continue.

The Artifact Details settings appear.

Select Repository URL or Repository Port.

  • Repository Port is more commonly used and can be taken from the repo settings. Each repo uses its own port.
  • Repository URL is typically used for a custom infrastructure (for example, when Nexus is hosted behind a reverse proxy).

In Repository, enter the name of the repo.

In Artifact Path, enter the path to the artifact you want.

In Tag, enter or select the Docker image tag for the image.

If you use Runtime Input, when you deploy the Pipeline, Harness will pull the list of tags from the repo and prompt you to select one.

Click Submit.

The Artifact is added to the Service Definition.

Artifactory

You connect to Artifactory (JFrog) using a Harness Artifactory Connector. For details on all the requirements for the Artifactory Connector, see Artifactory Connector Settings Reference.

Add an Artifact from Artifactory

In Artifacts, click Add Primary Artifact.

In Artifact Repository Type, click Artifactory, and then click Continue.

In Artifactory Repository, select or create an Artifactory Connector that connects to the Artifactory account where the repo is located. Click Continue.

The Artifact Details settings appear.

In Repository URL, enter the URL from the docker login command in Artifactory's Set Me Up settings.

In Repository, enter the repo name. So if the full path is docker-remote/library/mongo/3.6.2, you would enter docker-remote.

In Artifact Path, enter the path to the artifact. So if the full path is docker-remote/library/mongo/3.6.2, you would enter library/mongo.

In Tag, enter or select the Docker image tag for the image.

If you use Runtime Input, when you deploy the Pipeline, Harness will pull the list of tags from the repo and prompt you to select one.

Click Submit.

The Artifact is added to the Service Definition.

Reference Artifacts in Manifests

Once you have added an artifact to the Artifacts section of the Service, you need to reference that artifact in the Values YAML file added in Manifests.

You cannot use Harness variable expressions in your Kubernetes object manifest files. You can only use Harness variable expressions in Values YAML files.

Referencing Artifacts in Manifests

In the Values YAML file, reference the image in the Service Definition Artifacts section using the Harness expression <+artifact.image>.

For example, here's a reference in a Values file:

...
name: myapp
replicas: 2

image: <+artifact.image>
...

That <+artifact.image> references the artifact listed as Primary in Artifacts. At deployment runtime, Harness resolves <+artifact.image> to the image from your artifact source.

In your Kubernetes manifests, you simply use a standard Go template reference to the image value from your values file: {{.Values.image}}:

apiVersion: apps/v1
kind: Deployment
...
    spec:
      {{- if .Values.dockercfg}}
      imagePullSecrets:
      - name: {{.Values.name}}-dockercfg
      {{- end}}
      containers:
      - name: {{.Values.name}}
        image: {{.Values.image}}
...

See Example Manifests for more details.

If an artifact expression is in a Values YAML file or Execution step, you will be prompted to select an artifact at runtime. This is true even if the Stage does not deploy an artifact (such as a Custom Stage or a Stage performing a Kustomize deployment). If you want to reference an artifact that isn't the primary deployment artifact without being prompted, you can use an expression with quotes, like docker pull <+artifact<+".metadata.image">>.

Go Templating

Harness supports Go templating for Kubernetes manifests. So you can add one or more Values YAML files containing values for different scenarios, and then use Go templating in the manifest files to reference the values in the Values YAML files.

Built-in Go templating support enables you to use Kubernetes without the need for Helm.

Let's look at a few Kubernetes templating examples.

Basic Values YAML and Manifests for Public Image

Here's the values YAML file:

name: <+stage.name>
replicas: 2

image: <+artifact.image>
# dockercfg: <+artifact.imagePullSecret>

createNamespace: true
namespace: <+infra.namespace>

serviceType: LoadBalancer

servicePort: 80
serviceTargetPort: 80

env:
  config:
    key1: value10
  secrets:
    key2: value2

Here's the manifest containing multiple objects referring to the values in the values YAML file:

{{- if .Values.env.config}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{.Values.name}}
data:
{{.Values.env.config | toYaml | indent 2}}
---
{{- end}}

{{- if .Values.env.secrets}}
apiVersion: v1
kind: Secret
metadata:
  name: {{.Values.name}}
stringData:
{{.Values.env.secrets | toYaml | indent 2}}
---
{{- end}}

{{- if .Values.dockercfg}}
apiVersion: v1
kind: Secret
metadata:
  name: {{.Values.name}}-dockercfg
  annotations:
    harness.io/skip-versioning: true
data:
  .dockercfg: {{.Values.dockercfg}}
type: kubernetes.io/dockercfg
---
{{- end}}

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{.Values.name}}-deployment
spec:
  replicas: {{int .Values.replicas}}
  selector:
    matchLabels:
      app: {{.Values.name}}
  template:
    metadata:
      labels:
        app: {{.Values.name}}
    spec:
      {{- if .Values.dockercfg}}
      imagePullSecrets:
      - name: {{.Values.name}}-dockercfg
      {{- end}}
      containers:
      - name: {{.Values.name}}
        image: {{.Values.image}}
        {{- if or .Values.env.config .Values.env.secrets}}
        envFrom:
        {{- if .Values.env.config}}
        - configMapRef:
            name: {{.Values.name}}
        {{- end}}
        {{- if .Values.env.secrets}}
        - secretRef:
            name: {{.Values.name}}
        {{- end}}
        {{- end}}

Pull an Image from a Private Registry

Typically, if the Docker image you are deploying is in a private registry, Harness has access to that registry using the credentials set up in the Harness Connector.

In some cases, your Kubernetes cluster might not have the permissions needed to access a private Docker registry. For these cases, the Values YAML file in the Service Definition Manifests section must use the dockercfg parameter.

Use dockercfg in Values YAML

If the Docker image is added in the Service Definition Artifacts section, then you reference it like this: dockercfg: <+artifact.imagePullSecret>.

This key will import the credentials from the Docker credentials file in the artifact.

Open the values.yaml file you are using for deployment.

Verify that the dockercfg key exists and uses the <+artifact.imagePullSecret> expression to obtain the credentials:

name: <+stage.variables.name>
replicas: 2

image: <+artifact.image>
dockercfg: <+artifact.imagePullSecret>

createNamespace: true
namespace: <+infra.namespace>
...

Reference dockercfg in Kubernetes Objects

Next, verify that the Deployment and Secret objects reference dockercfg: {{.Values.dockercfg}}.

...
{{- if .Values.dockercfg}}
apiVersion: v1
kind: Secret
metadata:
  name: {{.Values.name}}-dockercfg
  annotations:
    harness.io/skip-versioning: true
data:
  .dockercfg: {{.Values.dockercfg}}
type: kubernetes.io/dockercfg
---
{{- end}}

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{.Values.name}}-deployment
spec:
  replicas: {{int .Values.replicas}}
  selector:
    matchLabels:
      app: {{.Values.name}}
  template:
    metadata:
      labels:
        app: {{.Values.name}}
    spec:
      {{- if .Values.dockercfg}}
      imagePullSecrets:
      - name: {{.Values.name}}-dockercfg
      {{- end}}
      containers:
      - name: {{.Values.name}}
        image: {{.Values.image}}
...

With these requirements met, the cluster imports the credentials from the Docker credentials file in the artifact.

Option: Add Sidecars

You can use Harness to deploy both primary and sidecar Kubernetes workloads. Sidecar containers are common where you have multiple colocated containers that share resources.

See Add a Kubernetes Sidecar Container.

Additional Settings and Options

This topic has covered the Kubernetes Service basics to get you started, but we've only scratched the surface of what you can do in Harness.

Once you're comfortable with the basics, here are some more options for you to review.

Ignore a Manifest File During Deployment

You might have manifest files for resources that you do not want to deploy as part of the main deployment.

Instead, you can tell Harness to ignore these files and then apply them separately using the Harness Apply step. Or you can simply ignore them and deploy them later.

See Ignore a Manifest File During Deployment and Kubernetes Apply Step.
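
The details are in the linked topic, but as a sketch, you typically mark a manifest to be skipped by adding a Harness comment at the top of the file (the Job shown here is hypothetical):

# harness.io/skip-file-for-deploy
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
spec:
  template:
    spec:
      containers:
      - name: migrate
        image: example/db-migration:1.0.0
      restartPolicy: Never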

Harness Pipeline, Stage, Service, and Built-in Variables

You can use Pipeline, Stage, Service, and Built-in variables in your values YAML files and Service settings.

See Built-in Harness Variables Reference or watch this short video.

Propagate and Override Artifacts, Manifests, and Service Variables

See Add and Override Values YAML Files.

Next Steps

Once you've configured your Service, you can move onto the Stage's Infrastructure settings and define the target Kubernetes cluster and namespace for your deployment.

See Define Your Kubernetes Target Infrastructure.
