
This topic contains general troubleshooting information for error messages and other issues that can arise.

If you cannot find a resolution, please contact Harness Support or visit the Harness Community Forum.

Login Issues

The following issues can occur when logging into Harness.

Logged Out Automatically

You are logged out of your Harness Manager session automatically, forcing you to log back in.

If you log out of Harness Manager in one browser tab, Harness might log you out of all tabs.

Typically, the solution is to clear local storage.

Troubleshooting Steps
  1. Log out of the Harness Manager from all Chrome tabs. (Harness only supports the Chrome desktop browser.)
  2. Clear Chrome Local Storage for in chrome://settings/siteData.
  3. Open a new tab and log into the Harness Manager.

You should not be logged out anymore.

  • Chrome Session Storage is used by the Harness Manager. If you close all the tabs running the Harness Manager and then open a new tab running Harness Manager, you will likely need to log in again.
  • A Chrome session times out after 5 minutes, but a session timeout can also happen if the tab running Harness Manager is idle for 24 hours. As long as the tab is not closed, however, Harness Manager keeps polling to check whether the token needs to be refreshed. For example, if you have kept the tab open for 3 days, you might still be logged in, as long as the workstation has not been turned off or entered sleep mode, which would prevent the refresh.

Delegate Issues

Harness Delegates run as a service in your deployment or build farm environment, on a host, a pod, a container, or as a task. They make outbound HTTPS connections over port 443 and use the credentials you provide in Harness Connectors such as Cloud Providers and Artifact Servers to run remote SSH and API calls.

See Delegates Overview.

Most Delegate issues arise from network connectivity, where the Delegate is unable to connect to a cloud provider, artifact server, and so on, because of network issues such as port changes or proxy settings.

Some issues arise from invalid credentials due to expiry or access issues resulting from missing policies or cross project requirements in a cloud vendor.

The simplest way to detect if an issue is caused by Delegate connectivity is to run a cURL command on the Delegate host/pod and see if it works. If it does, the next step is to look at the credentials.
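
For example, a minimal connectivity check from the Delegate host might look like this (the URL is a placeholder; use the cloud provider or artifact server endpoint the Delegate is failing to reach):

curl -sS -o /dev/null -w "%{http_code}\n" https://<PROVIDER_OR_ARTIFACT_SERVER_URL>
# Prints the HTTP status code if the host is reachable; a timeout or a
# "Could not resolve host" error points to a network or DNS issue instead.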

The following sections provide solutions to Delegate issues.

Failed to assign any Delegate to perpetual task

Harness performs many background operations on a regular basis, such as collecting information about your cluster and deployed software. This ensures, among other things, that the number of instances Harness reports is correct.

This error message relates to these background operations. Subsequent scheduled attempts typically clear these messages.

If these errors do not clear, a local or remote networking issue or a similar problem is typically the cause.

Duplicate Output in Deployment Logs

This is a symptom of running duplicate Delegates. We call this the double Delegate problem.

If two Harness Delegates with the same name are running in different clusters, they will show up as one Delegate in the Harness Manager. This will make it seem as though only one Delegate is running.

Do not run Delegates with the same name in different clusters. Replace one of the Delegates and the issue will go away.

You might see errors such as IllegalArgumentException and multiple Initializing and Rendering lines:


Rendering manifest files using go template
Only manifest files with [.yaml] or [.yml] extension will be processed

Rendering manifest files using go template
Only manifest files with [.yaml] or [.yml] extension will be processed

IllegalArgumentException: Custom Resource Definition Optional[] is not found in cluster


Running Multiple Delegates on the Same Host

If deployment entities are getting added and removed in the same deployment, you might have two Delegates running on the same host.

Do not run multiple Delegates on the same host/pod/container. This will result in the Delegates overwriting each other's tasks.

Delegate Setup

Most often, Delegate errors are the result of Delegate setup issues. Ensure you are familiar with how the Delegate and Harness Manager work together. See Delegate Installation Overview.

Another common issue is that the SSH key used by the Delegate to deploy to a target host is incorrect. This can happen if the SSH key in Harness Secrets Management was set up incorrectly, if it is not the correct key for the target host, or if the target host is not set up to allow SSH connections.

The Delegate is monitored locally using its Watcher component. The Watcher component has a watcher.log file that can provide Delegate version information for troubleshooting.

Delegate Connection Failures To Harness Manager

If the Delegate cannot connect to the Harness Manager, try the following:

  1. Use ping on the Delegate host to test whether response times for app.harness.io (or another URL) are reasonable and consistent.
  2. Use traceroute on app.harness.io to check the network route.
  3. Use nslookup to confirm that DNS resolution is working for app.harness.io.
  4. Connect using the IP address for app.harness.io (get the IP address using nslookup), for example, by running cURL against https://<IP address>.
  5. Flush the client's DNS cache
    1. Windows: ipconfig /flushdns
    2. Mac/Linux: sudo killall -HUP mDNSResponder; sudo killall mDNSResponderHelper; sudo dscacheutil -flushcache
  6. Check for local network issues, such as proxy errors or NAT license limits.
  7. For some cloud platforms, like AWS EC2, ensure that security groups allow outbound traffic on HTTPS 443.
  8. Try a different workstation or a smartphone to confirm the connection issue is not local to a single host.
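
For example, a quick pass through the first few checks from the Delegate host might look like this (assuming the Harness SaaS endpoint app.harness.io; substitute your Harness Manager URL if it differs):

ping -c 5 app.harness.io        # response times should be reasonable and consistent
traceroute app.harness.io       # check the network route
nslookup app.harness.io         # confirm DNS resolution
curl -v https://app.harness.io  # confirm an HTTPS connection can be established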

Delegate Successes Followed By Failures

If you have incorrectly used the same Kubernetes Delegate YAML file for multiple Delegates, you will see Delegate successes followed by failures in the Delegate logs. This sequence is the result of one Delegate succeeding in its operation and the same operation failing with the second Delegate.

To avoid any Delegate conflicts, always use a new Kubernetes Delegate YAML download for each Delegate you install, and a unique name.

For Kubernetes Delegates, you can increase the number of replicas run using a single Delegate download YAML file (change the replicas setting in the file), but to run multiple Delegates, use a new Delegate download from Harness for each Delegate.
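
For example, a sketch of the relevant portion of harness-delegate.yaml with the replica count raised (the Delegate name is illustrative; the field names are the standard Kubernetes StatefulSet fields, and the rest of the spec is unchanged from your download):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-delegate
  namespace: harness-delegate
spec:
  replicas: 3   # raised from the default to run three replicas of this one Delegate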

No Delegates Could Reach The Resource

This error means that no Delegate could meet the URL criteria for validation. For more information, see How Does Harness Manager Pick Delegates?.

Google Cloud Platform: Cluster has unschedulable pods

If you do not have enough space available in your Kubernetes cluster, you might receive the following error:

Cluster has unschedulable pods

Depending on the size of your cluster, without Autoscaling enabled or enough free space, your cluster cannot run the Delegate.

Add more space or turn on Autoscaling, wait for the cluster to restart, reconnect to the cluster, and then rerun the command:

$ kubectl apply -f harness-delegate.yaml

For more information, see Autoscaling Deployments from Google.

Deleting a Kubernetes Delegate

If you need to delete a Harness Delegate from your Kubernetes cluster, delete the StatefulSet for the Delegate.

Once created, the StatefulSet ensures that the desired number of pods are running and available at all times. Deleting the pod without deleting the StatefulSet will result in the pod being recreated.

For example, if you have the Delegate pod name mydelegate-vutpmk-0, you can delete the StatefulSet with the following command:

$ kubectl delete statefulset -n harness-delegate mydelegate-vutpmk

Note that the -0 suffix in the pod name is removed for the StatefulSet name.

Self Destruct Sequence Initiated

This rare error can appear in the Delegate logs:

Sending heartbeat...

Delegate 0000 received heartbeat response 0s after sending. 26s since last response.

Self destruct sequence initiated...

The Delegate self-destructs because there are two Delegates with the same name, probably deployed to two different clusters.


Remove one of the Delegates. Typically, one Delegate is in the wrong cluster; remove that one.

Need to Use Long Polling for Delegate Connection to Harness Manager

By default, the Harness Delegate connects to the Harness Manager over a TLS-backed WebSocket connection, sometimes called a Secure WebSocket connection, using the wss:// scheme (RFC 6455).

Some network intermediaries, such as transparent proxy servers and firewalls that are unaware of WebSocket, might drop the WebSocket connection. To avoid this uncommon error, you can instruct the Delegate to use long polling.

To set up the Delegate to use long polling, you use the Delegate YAML file.

For a Kubernetes Delegate, set the POLL_FOR_TASKS setting to true in the harness-delegate.yaml file:

        - name: POLL_FOR_TASKS
          value: "true"

KubernetesClientException: Operation: [list] for kind: [Deployment] with name: [null] in namespace: [default] failed

If you have a proxy set up on the network where the Harness Kubernetes Delegate is running, you need to add the cluster master hostname or IP in the Delegate harness-delegate.yaml NO_PROXY list.

For example, you might see a log error like this:

io.fabric8.kubernetes.client.KubernetesClientException: Operation: [list]  for kind: [Deployment]  with name: [null]  in namespace: [default]  failed.
To resolve this:

  1. Obtain the cluster master hostname or IP (kubectl cluster-info).
  2. Open the harness-delegate.yaml you used to create the Delegate, and add the cluster master hostname or IP to the NO_PROXY setting in the StatefulSet spec:

        - name: NO_PROXY
          value: "<cluster master hostname or IP>"

  3. Apply harness-delegate.yaml again to restart the Kubernetes Delegate (kubectl apply -f harness-delegate.yaml).

Artifact Collection

This section lists common errors you might receive when Harness attempts to collect artifacts.

Stage Hanging on Artifact Collection

If a Delegate has been offline for an extended period of time, you might need to reset the Harness Connector credentials.

Common Errors and Alerts

This section lists common error and alert messages you might receive.

No Delegates Could Reach The Resource

This error means that no Delegate could meet the URL criteria for validation. When a task is ready to be assigned, the Harness Manager first validates its list of Delegates to see which Delegate should be assigned the task. It validates the Delegate using the URL in the task, such as an API call or SSH command. See How Does Harness Manager Pick Delegates?.

Harness SecretStore Is Not Able to Encrypt/Decrypt

Error message:

Secret manager Harness SecretStore of type KMS is not able to encrypt/decrypt. Please check your setup

This error results when Harness Secret Manager (named Harness SecretStore) is not able to encrypt or decrypt keys stored in AWS KMS. The error is usually transitory and is caused by a network connectivity issue or brief service outage.

Check Harness Site Status and AWS Status (search for AWS Key Management Service).

You are not authorized to perform this operation: AmazonEC2: Status code 403

This error occurs when you are testing a Harness AWS Connector and the credentials used for the connection do not include a policy with the DescribeRegions action.

The DescribeRegions action is required for all AWS Connectors. Harness tests the connection using an API call for the DescribeRegions action.

This is described in Add an AWS Connector.

Ensure that one of the IAM roles assigned to the user account used for AWS Connector credentials contains the DescribeRegions action.
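
For reference, a minimal IAM policy statement that grants this action might look like the following sketch; attach it, or an equivalent, to the role or user whose credentials the AWS Connector uses:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeRegions",
      "Resource": "*"
    }
  ]
}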

Naming Conventions

Typically, names for Harness entities can only contain alphanumerics, _ and -.

Some naming conventions in repositories and other artifact sources, or in target infrastructures, cannot be used by Harness. For example, if a Harness Trigger Webhook uses a Push notification from a Git repo branch that contains a dot in its name, the Trigger is unlikely to work.

Character support in Harness Environment and Infrastructure Definition entity names is restricted to alphanumeric characters, underscores, and hyphens. The restriction is due to compatibility issues with Harness backend components, database keys, and the YAML flow in which Harness creates files with entity names on file systems.


Secrets

The following issues can occur when using Harness secrets.

Secrets Values Hidden In Log Output

If a secret's unencrypted value shares some content with the value of another Harness variable, Harness hides the secret's value in any logs, replacing the conflicting value with the secret's name. This affects the log display only; the actual values of the secrets and variables are still substituted correctly.


Service: AWSKMS; Status Code: 403

The Harness Delegate runs in your target deployment environment and needs access to the default Harness AWS KMS for secrets management. If it does not have access, the following error can occur:

Service: AWSKMS; Status Code: 403

Ensure that the Delegate can reach the Harness KMS URL by logging into the Delegate host(s) and entering the following cURL command:
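
For example, a minimal check, assuming the default Harness KMS store is in the us-east-1 region (AWS KMS endpoints follow the kms.<region>.amazonaws.com pattern; verify the region for your account):

curl -v https://kms.us-east-1.amazonaws.com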


Next, ensure that your proxies are not blocking the URL or port 443.

If this does not fix your error, and you are not using the default Harness KMS secret store, the AWS KMS access key provided in Harness for your own KMS store is likely invalid.


Triggers

This section covers error messages you might see when creating, updating, deleting, or executing a Trigger. It includes authorization and permissions steps to resolve the errors.

zsh: no matches found

If you are using macOS Catalina, the default shell is zsh. The zsh shell requires that you escape the ? character in your cURL command or put quotes around the URL.

For example, this will fail because the ? in the webhook URL is unquoted:

curl -X POST -H 'content-type: application/json' --url <WEBHOOK_URL> -d '{"application":"fCLnFhwsTryU-HEdKDVZ1g","parameters":{"Environment":"K8sv2","test":"foo"}}'

This will work because the URL is quoted:

curl -X POST -H 'content-type: application/json' --url "<WEBHOOK_URL>" -d '{"application":"fCLnFhwsTryU-HEdKDVZ1g","parameters":{"Environment":"K8sv2","test":"foo"}}'

Here, <WEBHOOK_URL> stands in for your Trigger webhook URL, which contains a ? query string.

User does not have "Deployment: execute" permission

Error messages of the form User does not have "Deployment: execute" permission indicate that your user group's Role settings do not include Pipeline: Execute.

To resolve this, see Add and Manage Roles.

Continuous Delivery

The following issues can occur when running Pipeline deployments.

Deployment Rate Limits

If you've reached 85% of the limit, you will see:

85% of deployment rate limit reached. Some deployments may not be allowed. Please contact Harness support.

If you've reached 100% of the limit, you might see:

Deployment rate limit reached. Some deployments may not be allowed. Please contact Harness support.

Harness applies an hourly and daily deployment limit to each account to prevent configuration errors or external triggers from initiating too many undesired deployments. If you are notified that you have reached a limit, it is possible that undesired deployments are occurring. Please determine if a Trigger or other mechanism is initiating undesired deployments. If you continue to experience issues, contact Harness Support.

The daily limit is 100 deployments every 24 hours. The hourly limit is 40 deployments and is designed to detect any atypical upsurge of deployments.

Error in Log When There is No Error

When Harness captures command output, it captures both standard output (stdout) and standard error (stderr). Information from stdout receives the prefix INFO, while information from stderr receives the prefix ERROR. This lets you know where the information you see comes from.

Unfortunately, several Linux commands and applications print information to the screen using standard error so that it is not captured when standard output is redirected to a file.

For example, the cURL command shows a download progress indicator on the screen. If you redirect cURL output to a file, the progress indicator is not captured in the file, because cURL writes the progress indicator to standard error. This is a very useful feature for many users, but in Harness it causes the progress indicator to be shown with the ERROR prefix.

You can test this for yourself with the following short example:

curl <URL> >out.txt 2>err.txt

cat out.txt

cat err.txt

As you can see, the err.txt file contains the cURL progress output, which Harness would show with the ERROR prefix.

If Harness did not show standard error, many real errors would not be captured, which would confuse users. Therefore, Harness shows standard error in its logs.

Continuous Integration

The following issues can occur when using Harness CI.

Test suites wrongly parsed

The parsed Test report in the Test tab comes strictly from the JUnit reports provided. It is important to adhere to the standard format to improve test suite parsing.

Refer to the standard JUnit format.
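
For reference, a minimal report in the standard JUnit XML format looks like this (suite, class, and test names are illustrative):

<testsuites>
  <testsuite name="example-suite" tests="2" failures="1" time="0.12">
    <testcase classname="com.example.CalculatorTest" name="testAdd" time="0.05"/>
    <testcase classname="com.example.CalculatorTest" name="testDivide" time="0.07">
      <failure message="expected 2 but was 3">assertion details here</failure>
    </testcase>
  </testsuite>
</testsuites>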


Helm

The following troubleshooting information should help you diagnose common Helm problems.

Unable to get an Update from the Chart Repository

If Harness cannot get an update from a chart repo you have set up for your Helm Service, during deployment you might see the error:

Unable to get an update from the "XYZ" chart repository ... read: connection reset by peer

To fix this, find the Delegate that the Helm update ran on, and then SSH to the Delegate host and run the Helm commands manually. This will confirm if you are having an issue with your Harness setup or a general connectivity issue.
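
For example, a minimal manual check from the Delegate host might look like this, assuming the repo was added under the name XYZ as in the error above:

helm repo update       # re-fetches the index from each configured chart repo
helm search repo XYZ   # Helm 3 syntax; on Helm 2, use: helm search XYZ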


Kubernetes

The following problems can occur when developing and deploying to Kubernetes.

The Deployment is invalid...may not be specified when `value` is not empty

Every Harness deployment creates a new release with an incrementally increasing number. Release history is stored in the Kubernetes cluster in a ConfigMap. This ConfigMap is essential for release tracking, versioning and rollback.

See Kubernetes Releases and Versioning.

If the ConfigMap is edited using kubectl or another tool between deployments, future deployments often fail.

This type of error is experienced in standard Kubernetes deployments when attempting to use kubectl apply on a manifest whose resources have been previously modified using kubectl edit. For example, see the comments in this Kubernetes issue.
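
To check whether the release history ConfigMap has been modified, you can inspect it directly (a sketch; the ConfigMap is named after the release name in your Infrastructure Definition):

kubectl get configmap <release-name> -n <namespace> -o yaml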

NullPointerException: Release Name is Reserved for Internal Harness ConfigMap

The release name you enter in the Infrastructure Definition Release name setting is reserved for the internal Harness ConfigMap used for tracking the deployment.

Do not create a ConfigMap that uses the same name as the release name. Your ConfigMap will override the Harness internal ConfigMap and cause a NullPointerException.

See Define Your Kubernetes Target Infrastructure.

The server doesn't have a resource type "deployments"

When you attempt to connect to the Kubernetes cluster via GCP, the Kubernetes cluster must have Basic authentication enabled or the connection will fail. For more information, see Control plane security from GCP. From GCP:

You can handle cluster authentication in Google Kubernetes Engine by using Cloud IAM as the identity provider. However, legacy username-and-password-based authentication is enabled by default in Google Kubernetes Engine. For enhanced authentication security, you should ensure that you have disabled Basic Authentication by setting an empty username and password for the MasterAuth configuration. In the same configuration, you can also disable the client certificate which ensures that you have one less key to think about when locking down access to your cluster.

  • If Basic authentication is inadequate for your security requirements, use the Kubernetes Cluster Connector.
  • While it can be easier to use the Kubernetes Cluster Connector for Kubernetes cluster deployments, to use a Kubernetes cluster on Google GKE, Harness requires Basic Authentication and/or a Client Certificate to be enabled on the cluster.

This is required because some API classes, such as the MasterAuth class, require HTTP basic authentication or client certificates.
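
To check the cluster's current MasterAuth configuration, you can use the standard gcloud CLI (a sketch; the cluster name and zone are placeholders):

gcloud container clusters describe <cluster-name> --zone <zone> --format="value(masterAuth)"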

Invalid Value LabelSelector

If you are deploying different Harness Pipelines to the same cluster during testing or experimentation, you might encounter a Selector error such as this:

The Deployment "harness-example-deployment" is invalid: spec.selector:
Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"harness-example"},
MatchExpressions:[]v1.LabelSelectorRequirement{}}: field is immutable

This error means that the cluster already contains a Deployment with the same name that uses a different pod selector.

Delete or rename the Deployment. To delete the Deployment, first get a list of the Deployments:

kubectl get all

service/kubernetes ClusterIP <none> 443/TCP 18d

deployment.apps/harness-example-deployment 1 1 1 1 4d

And then delete the Deployment:

kubectl delete deploy/harness-example-deployment svc/kubernetes

deployment.extensions "harness-example-deployment" deleted

service "kubernetes" deleted

Rerun the Harness deployment and the error should not occur.

Cannot Create Property

The following error message can appear if a property, such as the security settings (securityContext) for the pod or container, is located in the wrong place in the specification:

ConstructorException: Cannot create property=spec for JavaBean=class V1StatefulSet

Ensure that your YAML specification is formed correctly. Online YAML validation tools can help.

For steps on how to add a security context for a pod or container, see Configure a Security Context for a Pod or Container from Kubernetes.

Here is an example:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-2
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: sec-ctx-demo-2
    securityContext:
      runAsUser: 2000
      allowPrivilegeEscalation: false


Terraform

The following are resolutions to common configuration problems that can occur when provisioning with Terraform in Harness.

Provisioned Resources Already Exist (Terraform State File Locked)

When a Terraform Apply step fails because of a timeout, subsequent deployments might fail with the following error message:

Error creating [object]. The [object] already exists.

Use a longer timeout for the Terraform Apply step.

When the Terraform Apply times out, Terraform locks the Terraform state file. A Terraform Force Unlock needs to be performed.
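
For example, using the lock ID reported in the failed run's output (a sketch; run it in the Terraform working directory on the Delegate host):

terraform force-unlock <LOCK_ID>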

Terraform handles locking and unlocking of tfstate files automatically. You can disable state locking for most commands with the -lock=false flag, but this is not recommended. See State Locking from Terraform.

After a timeout, resources might have been created without being added to the state file. You must also manually clean up any such resources.

TerraformValidation - Terraform validation result: false

Harness performs the following validation when you use Terraform in a deployment:

  1. Is Terraform installed on the Harness Delegate? Harness installs it automatically, but it might have been removed.
  2. Can the Harness Delegate connect to the Git repo?

If the Harness Delegate does not have Terraform installed, you will see a log entry such as the following:

2020-04-21 19:26:19,134 INFO software.wings.delegatetasks.validation.TerraformValidation - Running terraform validation for task

2020-04-21 19:26:19,157 INFO software.wings.delegatetasks.validation.TerraformValidation - Terraform validation result: false

The message Terraform validation result: false means Terraform is not installed on the Delegate.

Install Terraform on the Delegate to fix this.

Harness Secret Managers

If the Harness Delegate(s) cannot authenticate with a Secret Manager, you might see an error message such as this:

Was not able to login Vault using the AppRole auth method. 
Please check your credentials and try again

For most authentication issues, try to connect to the Harness Secrets Manager from the host running your Harness Delegate(s). This is done simply by using a cURL command and the same login credentials you provided when you set up the Harness Secret Manager.

For example, here is a cURL command for HashiCorp Vault:

curl -X POST -d '{"role_id":"<APPROLE_ID>", "secret_id":"<SECRET_ID>"}' https://<HOST>:<PORT>/v1/auth/approle/login

If the Delegate fails to connect, it is likely because of the credentials or a networking issue.


SAML SSO

The following errors might occur during the setup or use of SAML SSO.

Signed in user is not assigned to a role for the Project (Harness)

A user registered in the Harness project in the Azure portal is unable to access the application and receives this error.

If the email address used in Harness is different from the email address in the Azure app, you will get an error saying that the user is not assigned to a role for the Harness application.

Make sure the email address used in Harness matches the email address in the Azure app.

For more information about SAML SSO configuration with Azure, see Single Sign-On (SSO) with SAML.

Shell Scripts

This section covers common problems experienced when using the Shell Script step.

FileNotFoundException inside shell script execution task

This error happens when you are publishing output variables and your Shell Script step exits its script early.

If you exit from the script early (for example, exit 0), the output values cannot be read.

If you publish output variables in your Shell Script step, structure your script with if...else blocks to ensure it always runs to the end of the script.
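
For example, a sketch of the recommended structure (the file path and variable name are illustrative; the exported variable is the one you list as a script output variable):

# An early exit on the failure path, such as `exit 0`, would prevent Harness
# from reading the published output variables. Branch instead, so the script
# always reaches its end:
if [ -f "/tmp/artifact.tgz" ]; then
  export DEPLOY_STATUS="found"
else
  export DEPLOY_STATUS="missing"
fi
# The script ends normally on both paths, so DEPLOY_STATUS can be published.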
