Virtual Machine On-Prem: Installation Guide

Updated 1 month ago by Michael Cretzman

This topic covers installing Harness Virtual Machine NextGen On-Prem as a Kubernetes cluster embedded on your target VMs.

To install Harness Virtual Machine NextGen On-Prem, first you install Harness Virtual Machine FirstGen On-Prem and then you install NextGen as an application.

Installing Harness On-Prem into an embedded Kubernetes cluster is a simple process: you prepare your VMs and network, then use the Kubernetes installer kURL and the KOTS plugin to complete the installation and deploy Harness.

Once you have set up Harness on a VM, you can add additional worker nodes by simply running a command.

Harness On-Prem uses the open source Kubernetes installer kURL and the KOTS plugin for installation. See Install with kURL from kURL and Installing an Embedded Cluster from KOTS.


NextGen On-Prem Installation Options

How you install NextGen On-Prem follows one of the use cases below:

NextGen On-Prem on Existing FirstGen On-Prem VMs

In this scenario, you have an existing Harness FirstGen On-Prem running and you want to add Harness NextGen to it.

You simply add Harness NextGen On-Prem as a new application in your existing FirstGen On-Prem installation.

  1. Open the FirstGen On-Prem KOTS admin tool.
  2. Install NextGen On-Prem as a new application on existing FirstGen On-Prem.
  3. Upload the NextGen On-Prem license file.
  4. Use the exact same FirstGen On-Prem configuration values for the NextGen On-Prem configuration.

If you are using this option, skip to Install NextGen On-Prem on Existing FirstGen On-Prem.

NextGen On-Prem on New FirstGen On-Prem VMs

In this scenario, you want to install FirstGen On-Prem and NextGen On-Prem on new VMs.

  1. Set up your VMs according to the requirements specified in Virtual Machine On-Prem: Infrastructure Requirements.
  2. Install FirstGen On-Prem.
  3. Install NextGen On-Prem as a new application on existing FirstGen On-Prem.
  4. Upload the NextGen On-Prem license file.
  5. Use the exact same FirstGen On-Prem configuration values for the NextGen On-Prem configuration.

If you are using this option, do the following:

  1. Follow all of the FirstGen On-Prem installation instructions beginning with Step 1: Set up VM Requirements.
  2. Follow the NextGen On-Prem installation instructions in Install NextGen On-Prem on Existing FirstGen On-Prem.

Legacy FirstGen On-Prem not Using KOTS

In this scenario, you have a legacy FirstGen On-Prem installation that is not a KOTS-based installation.

This process will involve migrating your legacy FirstGen On-Prem data to a new KOTS-based FirstGen On-Prem and then installing NextGen On-Prem.

  1. Set up your VMs according to the requirements specified in Virtual Machine On-Prem: Infrastructure Requirements.
  2. Install FirstGen On-Prem.
  3. Migrate data to new FirstGen On-Prem using a script from Harness Support.
  4. Install NextGen On-Prem as a new application on the new FirstGen On-Prem.
  5. Upload the NextGen On-Prem license file.
  6. Use the exact same FirstGen On-Prem configuration values for the NextGen On-Prem configuration.

If you are using this option, do the following:

  1. Follow all of the FirstGen On-Prem installation instructions beginning with Step 1: Set up VM Requirements.
  2. Migrate data to new FirstGen On-Prem using a script from Harness Support.
  3. Follow the NextGen On-Prem installation instructions in Install NextGen On-Prem on Existing FirstGen On-Prem.

Step 1: Set up VM Requirements

Ensure that your VMs meet the requirements specified in Virtual Machine On-Prem: Infrastructure Requirements.

Different cloud platforms use different methods for grouping VMs (GCP instance groups, AWS target groups, etc.). Set up your 3 VMs using the method that works best with your platform's networking processes.

Step 2: Set Up Load Balancer and Networking Requirements

Ensure that your networking meets the requirements specified in Virtual Machine On-Prem: Infrastructure Requirements.

You will need two load balancers, as described in Virtual Machine On-Prem: Infrastructure Requirements: an HTTP load balancer for routing traffic to the VMs, and a TCP load balancer that serves as the in-cluster load balancer.

During installation, you are asked for the IP address of the in-cluster TCP load balancer first.

When you configure the Harness On-Prem application in the KOTS admin console, you are asked for the HTTP load balancer URL.

Option 1: Disconnected Installation

Disconnected Installation involves downloading the Harness On-Prem archive file onto a jump box, and then copying the file to each On-Prem host VM you want to use.

On each VM, you extract and install Harness On-Prem.

On your jump box, run the following command to obtain the On-Prem file:

curl -LO https://kurl.sh/bundle/harness.tar.gz

Copy the file to a Harness On-Prem host and extract it (tar xvf harness.tar.gz).

On the VM, install Harness:

cat install.sh | sudo bash -s airgap ha

This will install the entire On-Prem Kubernetes cluster and all related microservices.

The ha parameter is used to set up high availability. If you are not using high availability, you can omit the parameter.

Provide Load Balancer Settings

First, you are prompted to provide the IP address of the TCP Load Balancer for the cluster HA:

The installer will use network interface 'ens4' (with IP address '10.128.0.25')
Please enter a load balancer address to route external and internal traffic to the API servers.
In the absence of a load balancer address, all traffic will be routed to the first master.
Load balancer address:

This is the TCP load balancer you created in Virtual Machine On-Prem: Infrastructure Requirements.
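Before you enter the address at the prompt, a quick local sanity check that the value is in IP:port form can catch typos. This is illustrative only, not part of the installer:

```shell
# Illustrative sanity check (not part of the installer): verify that the
# value you plan to enter at the "Load balancer address:" prompt is in
# IP:port form before you enter it.
check_lb_address() {
  if printf '%s\n' "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}:[0-9]{1,5}$'; then
    echo "ok: $1"
  else
    echo "invalid: $1 (expected IP:port, for example 10.128.0.50:6443)"
  fi
}

check_lb_address "10.128.0.50:6443"
check_lb_address "10.128.0.50"
```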

For example, here is a GCP TCP load balancer with its frontend forwarding rule using port 6443:

Enter the IP address and port of your TCP load balancer (for example, 10.128.0.50:6443), and press Enter. The installation process continues, beginning like this:

...
Fetching weave-2.5.2.tar.gz
Fetching rook-1.0.4.tar.gz
Fetching contour-1.0.1.tar.gz
Fetching registry-2.7.1.tar.gz
Fetching prometheus-0.33.0.tar.gz
Fetching kotsadm-1.16.0.tar.gz
Fetching velero-1.2.0.tar.gz
Found pod network: 10.32.0.0/22
Found service network: 10.96.0.0/22
...

Review Configuration Settings

Once the installation process is complete, KOTS provides you with several configuration settings and commands. Save these settings and commands.

  • KOTS admin console and password:
Kotsadm: http://00.000.000.000:8800
Login with password (will not be shown again): D1rgBIu21
If you need to reset your password, enter kubectl kots reset-password -n default. You will be prompted for a new password.
  • Prometheus, Grafana, and Alertmanager ports and passwords:
The UIs of Prometheus, Grafana and Alertmanager have been exposed on NodePorts 30900, 30902 and 30903 respectively.
To access Grafana use the generated user:password of admin:RF1KuqreN .
  • kubectl access to your cluster:
To access the cluster with kubectl, reload your shell:
bash -l
  • The command to add worker nodes to the installation:
To add worker nodes to this installation, run the following script on your other nodes:

curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=10.128.0.24:6443 kubeadm-token=xxxxx kubeadm-token-ca-hash=shaxxxxxx kubernetes-version=1.15.3 docker-registry-ip=10.96.3.130

We will use this command later.

  • Add master nodes:
To add MASTER nodes to this installation, run the following script on your other nodes:
curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=34.71.32.244:6443 kubeadm-token=c2yack.q7lt3z6yuevqlmtf kubeadm-token-ca-hash=sha256:9db504ecdee08ff6dfa3b299ce95302fe53dd632a2e9356c55e9272db72d60d1 kubernetes-version=1.15.3 cert-key=f0373e812e0657b4f727e90a7286c5b65539dfe7ee5dc535df0a1bcf74ad5c57 control-plane docker-registry-ip=10.96.2.100
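Because the join commands and passwords are shown only once, it is worth saving the installer output (for example, by piping the install command through tee) and extracting what you need later. A sketch, using a fabricated two-line sample of the output above:

```shell
# Sketch: recover the worker join command from saved installer output.
# The sample log below is fabricated from the output shown in this guide;
# in practice you would have created install-output.log with, for example:
#   cat install.sh | sudo bash -s airgap ha | tee install-output.log
cat > install-output.log <<'EOF'
Kotsadm: http://00.000.000.000:8800
Login with password (will not be shown again): D1rgBIu21
To add worker nodes to this installation, run the following script on your other nodes:
curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=10.128.0.24:6443 kubeadm-token=xxxxx kubeadm-token-ca-hash=shaxxxxxx kubernetes-version=1.15.3 docker-registry-ip=10.96.3.130
EOF

# Extract the join command line for later use on the worker VMs.
grep 'join.sh' install-output.log > join-workers.sh
cat join-workers.sh
```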

Log into the Admin Tool

In a browser, enter the Kotsadm link.

The browser displays a TLS warning.

Click Continue to Setup.

In the warning page, click Advanced, then click Proceed to continue to the admin console.

KOTS uses a self-signed certificate by default, but you can upload your own.

Upload your certificate or click Skip and continue.

Log into the console using the password provided in the installation output.

Upload Your Harness License

Once you are logged into the KOTS admin console, you can upload your Harness license.

Obtain the Harness license file from your Harness Customer Success contact or email support@harness.io.

Drag your license YAML file into the KOTS admin tool:

Next, upload the license file:

Now that the license file is uploaded, you can install Harness.

Go to Step 3: Configure Harness.

Option 2: Connected Installation

Once you have your VMs and networking requirements set up, you can install Harness.

Log into one of your VMs, and then run the following command:

curl -sSL https://k8s.kurl.sh/harness | sudo bash -s ha

This will install the entire On-Prem Kubernetes cluster and all related microservices.

The ha parameter is used to set up high availability. If you are not using high availability, you can omit it.

Provide Load Balancer Settings

First, you are prompted to provide the IP address of the TCP Load Balancer for the cluster HA:

The installer will use network interface 'ens4' (with IP address '10.128.0.25')
Please enter a load balancer address to route external and internal traffic to the API servers.
In the absence of a load balancer address, all traffic will be routed to the first master.
Load balancer address:

This is the TCP load balancer you created in Virtual Machine On-Prem: Infrastructure Requirements.

For example, here is a GCP TCP load balancer with its frontend forwarding rule using port 6443:

Enter the IP address and port of your TCP load balancer (for example, 10.128.0.50:6443), and press Enter. The installation process continues, beginning like this:

...
Fetching weave-2.5.2.tar.gz
Fetching rook-1.0.4.tar.gz
Fetching contour-1.0.1.tar.gz
Fetching registry-2.7.1.tar.gz
Fetching prometheus-0.33.0.tar.gz
Fetching kotsadm-1.16.0.tar.gz
Fetching velero-1.2.0.tar.gz
Found pod network: 10.32.0.0/22
Found service network: 10.96.0.0/22
...

Review Configuration Settings

Once the installation process is complete, KOTS provides you with several configuration settings and commands. Save these settings and commands.

  • KOTS admin console and password:
Kotsadm: http://00.000.000.000:8800
Login with password (will not be shown again): D1rgBIu21
If you need to reset your password, enter kubectl kots reset-password -n default. You will be prompted for a new password.
  • Prometheus, Grafana, and Alertmanager ports and passwords:
The UIs of Prometheus, Grafana and Alertmanager have been exposed on NodePorts 30900, 30902 and 30903 respectively.
To access Grafana use the generated user:password of admin:RF1KuqreN .
  • kubectl access to your cluster:
To access the cluster with kubectl, reload your shell:
bash -l
  • The command to add worker nodes to the installation:
To add worker nodes to this installation, run the following script on your other nodes:

curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=10.128.0.24:6443 kubeadm-token=xxxxx kubeadm-token-ca-hash=shaxxxxxx kubernetes-version=1.15.3 docker-registry-ip=10.96.3.130

We will use this command later.

  • Add master nodes:
To add MASTER nodes to this installation, run the following script on your other nodes:
curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=34.71.32.244:6443 kubeadm-token=c2yack.q7lt3z6yuevqlmtf kubeadm-token-ca-hash=sha256:9db504ecdee08ff6dfa3b299ce95302fe53dd632a2e9356c55e9272db72d60d1 kubernetes-version=1.15.3 cert-key=f0373e812e0657b4f727e90a7286c5b65539dfe7ee5dc535df0a1bcf74ad5c57 control-plane docker-registry-ip=10.96.2.100

Log into the Admin Tool

In a browser, enter the Kotsadm link.

The browser displays a TLS warning.

Click Continue to Setup.

In the warning page, click Advanced, then click Proceed to continue to the admin console.

KOTS uses a self-signed certificate by default, but you can upload your own.

Upload your certificate or click Skip and continue.

Log into the console using the password provided in the installation output.

Upload Your Harness License

Once you are logged into the KOTS admin console, you can upload your Harness license.

Obtain the Harness license file from your Harness Customer Success contact or email support@harness.io.

Drag your license YAML file into the KOTS admin tool:

Next, upload the license file:

Now that the license file is uploaded, you can install Harness.

Download Harness over the Internet

If you are installing Harness over the Internet, click the download Harness from the Internet link.

KOTS begins installing Harness into your cluster.

Next, you will provide KOTS with the Harness configuration information (Load Balancer URL and NodePort).

Step 3: Configure Harness

Now that you have added your license, you can configure the networking for the Harness installation.

Mode

  • Select Demo to run On-Prem in demo mode and experiment with it.
  • Select Production - Single Node to run this on one node. You can convert to Production - High Availability later.
  • Select Production - High Availability to run a production version of On-Prem.

If you use Production - Single Node, you can convert to Production - High Availability later by doing the following:

  1. In the KOTS admin console, go to Cluster Management.
  2. Click Add a node. This will generate scripts for joining additional worker and master nodes.
For Disconnected (Airgap) installations, the bundle must also be downloaded and extracted on the remote node prior to running the join script.

NodePort and Application URL

Virtual Machine On-Prem requires that you provide a NodePort and Application URL.

  1. In Application URL, enter the full URL for the HTTP load balancer you set up for routing external traffic to your VMs.
    Include the scheme and hostname/IP. For example, https://app.example.com.
    Typically, this is the frontend IP address for the load balancer. For example, here is an HTTP load balancer in GCP and how you enter its information into Harness Configuration.
    If you have set up DNS to resolve a domain name to the load balancer IP, enter that domain name in Application URL.
  2. In NodePort, enter the port number you set up for the load balancer backend, such as 80.
  3. When you are done, click Continue.

Option: Advanced Configurations

In the Advanced Configurations section, there are a number of advanced settings you can configure. If this is the first time you are setting up Harness On-Prem, there is no need to fine-tune the installation with these settings.

You can change the settings later in the KOTS admin console's Config tab:

Ingress Service Type

By default, nginx is used for Ingress. If you deploy nginx separately, do the following:

  1. Click Advanced Configurations.
  2. Disable the Install Nginx Ingress Controller option.

gRPC and Load Balancer Settings

In Scheme, if you select HTTPS, the GRPC settings appear.

If your load balancer does support HTTP2 over port 443, enter the following:

  • GRPC Target: enter the load balancer hostname (hostname from the load balancer URL)
  • GRPC Authority: enter manager-grpc-<hostname>. For example: manager-grpc-35.202.197.230.
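The GRPC Authority value is just the hostname portion of your load balancer URL with the manager-grpc- prefix. A sketch, where harness.mycompany.com is a placeholder for your own hostname:

```shell
# Derive the GRPC Target and GRPC Authority values from a load balancer
# URL. harness.mycompany.com is a placeholder; use your own hostname.
LB_URL="https://harness.mycompany.com"

# Strip the scheme to get the bare hostname.
LB_HOST="${LB_URL#*://}"
GRPC_TARGET="$LB_HOST"
GRPC_AUTHORITY="manager-grpc-$LB_HOST"

echo "GRPC Target:    $GRPC_TARGET"
echo "GRPC Authority: $GRPC_AUTHORITY"
```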

If your load balancer does not support HTTP2 over port 443 you have two options:

  • If your load balancer supports multiple ports for SSL then add port 9879 in the application load balancer and target port 9879 or node port 32510 on the Ingress controller.
    • GRPC Target: enter the load balancer hostname
    • GRPC Authority: enter the load balancer hostname
  • If your load balancer does not support multiple ports for SSL then create a new load balancer and target port 9879 or node port 32510 on the Ingress controller:
    • GRPC Target: enter the new load balancer hostname
    • GRPC Authority: enter the new load balancer hostname

Log Service Backend

There are two options for Log Service Backend:

Minio: If you want to use the built-in Minio log service, your load balancer needs to reach the Ingress controller on port 9000. Create a new load balancer and target port 9000 or node port 32507.

Amazon S3 Bucket: Enter the S3 bucket settings to use.

Step 4: Perform Preflight Checks

Preflight checks run automatically and verify that your setup meets the minimum requirements.

You can skip these checks, but we recommend you let them run.

Fix any issues in the preflight steps.

Step 5: Deploy Harness

When the preflight checks are finished, click Deploy and Continue.

Harness is deployed in a few minutes.

It can take up to 30 minutes when installing the demo version on a system with the minimum recommended specs.

In a new browser tab, go to the following URL, replacing <LB_URL> with the URL you entered in the Application URL setting in the KOTS admin console:

<LB_URL>/#/onprem-signup

For example:

http://harness.mycompany.com/#/onprem-signup

The Harness sign up page appears.

Sign up with a new account and then log in. Your new account will be added to the Harness Account Administrators User Group.

See Add and Manage User Groups.

Future Versions

To set up future versions of On-Prem, in the KOTS admin console, in the Version history tab, click Deploy. The new version is displayed in Deployed version.

Step 6: Add Worker Nodes

Now that Harness On-Prem is installed on one VM, you can add worker nodes on your other VMs using the command provided when you installed Harness:

To add worker nodes to this installation, run the following script on your other nodes:
curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=10.128.0.24:6443 kubeadm-token=xxxxx kubeadm-token-ca-hash=shaxxxxxx kubernetes-version=1.15.3 docker-registry-ip=10.96.3.130

Run this on each VM in your group. The installation begins something like this:

...
Docker already exists on this machine so no docker install will be performed
Container already exists on this machine so no container install will be performed
The installer will use network interface 'ens4' (with IP address '10.128.0.44')
Loaded image: replicated/kurl-util:v2020.07.15-0
Loaded image: weaveworks/weave-kube:2.5.2
Loaded image: weaveworks/weave-npc:2.5.2
Loaded image: weaveworks/weaveexec:2.5.2
...

When installation is complete, you will see the worker join the cluster and the preflight checks run:

⚙  Join Kubernetes node
+ kubeadm join --config /opt/replicated/kubeadm.conf --ignore-preflight-errors=all
[preflight] Running pre-flight checks
validated versions: 19.03.4. Latest
validated version: 18.09

The worker is now joined.

Important Next Steps

Important: You cannot invite other users to Harness until a Harness Delegate is installed and a Harness SMTP Collaboration Provider is configured.
  1. Install the Harness Delegate.
  2. Set up an SMTP Collaboration Provider in Harness for email notifications from the Harness Manager.
    Ensure you open the correct port for your SMTP provider, such as Office 365.
  3. Add a Secrets Manager. By default, On-Prem installations use the local Harness MongoDB for the default Harness Secrets Manager. This is not recommended. After On-Prem installation, configure a new Secrets Manager (Vault, AWS, etc.). You will need to open your network for the Secrets Manager connection.

Updating Harness

Do not upgrade Harness past 4 major releases at once. Instead, upgrade each interim release until you reach the latest release. A best practice is to upgrade Harness once a month.

Please follow these steps to update your Harness On-Prem installation.

The steps are very similar to how you installed Harness initially.

For more information, see Updating an Embedded Cluster from KOTS.

Disconnected (Airgap)

The following steps require a private registry, just like the initial installation of Harness.

Upgrade Harness
  1. Download the latest release from Harness.
  2. Run the following command on the VM(s) hosting Harness, replacing the placeholders:
kubectl kots upstream upgrade harness \
--airgap-bundle <path to harness-<version>.airgap> \
--kotsadm-namespace harness-kots \
-n default
Upgrade Embedded Kubernetes Cluster and KOTS
  1. Download the latest version of Harness:
curl -SL -o harnesskurl.tar.gz https://kurl.sh/bundle/harness.tar.gz
  2. Move the tar.gz file to the disconnected VMs.
  3. On each VM, run the following commands to update Harness:
tar xzvf harnesskurl.tar.gz
cat install.sh | sudo bash -s airgap

Connected

The following steps require a secure connection to the Internet, just like the initial installation of Harness.

Upgrade Harness
  1. Run the following command on the VMs hosting Harness:
kubectl kots upstream upgrade harness -n harness
Upgrade Embedded Kubernetes Cluster and KOTS
  1. Run the following command on the VMs hosting Harness:
curl -sSL https://kurl.sh/harness | sudo bash

Monitoring Harness

Harness monitoring is performed using the built-in monitoring tools.

When you installed Harness, you were provided with connection information for the Prometheus, Grafana, and Alertmanager ports and passwords:

The UIs of Prometheus, Grafana and Alertmanager have been exposed on NodePorts 30900, 30902 and 30903 respectively.
To access Grafana use the generated user:password of admin:RF1KuqreN .
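NodePorts are exposed on every node in the cluster, so you can reach each UI at any node's IP address plus the listed port. A sketch, where 10.128.0.25 is a placeholder node IP:

```shell
# Assemble the monitoring UI URLs from a node IP and the NodePorts shown
# in the installer output. 10.128.0.25 is a placeholder; use any node's IP.
NODE_IP="10.128.0.25"

echo "Prometheus:   http://$NODE_IP:30900"
echo "Grafana:      http://$NODE_IP:30902"
echo "Alertmanager: http://$NODE_IP:30903"
```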

For steps on using the monitoring tools, see Prometheus from KOTS.

License Expired

If your license has expired, you will see something like the following:

Contact your Harness Customer Success representative or support@harness.io.

Install NextGen On-Prem on Existing FirstGen On-Prem

This section assumes you have a FirstGen On-Prem installation set up and running following the steps earlier in this guide (beginning with Step 1: Set up VM Requirements).

Now you can add NextGen On-Prem as a new application to your FirstGen On-Prem installation.

  1. Log into your FirstGen On-Prem KOTS admin tool.
  2. Click Config.
  3. Record all of the FirstGen On-Prem settings. You will need to use these exact same settings when setting up NextGen On-Prem.
    If you want to change settings, change them and then record them so you can use them during the NextGen On-Prem installation.
  4. Click Add a new application.
  5. Add the NextGen On-Prem license file you received from Harness Support, and then click Upload license.
  6. Depending on whether your FirstGen On-Prem installation is Disconnected or Connected, follow the installation steps described in Option 1: Disconnected Installation or Option 2: Connected Installation. When you are done, you'll be on the Configure HarnessNG page. This is the standard configuration page you followed when you set up FirstGen On-Prem in Step 3: Configure Harness.
  7. Enter the exact same configuration options as your FirstGen On-Prem installation.
    Please ensure you include your Advanced Configuration settings.
    Ensure you use the exact same Scheme you used in FirstGen On-Prem (HTTP or HTTPS).
    The Load Balancer IP Address setting does not appear because NextGen On-Prem is simply a new application added onto FirstGen On-Prem. NextGen On-Prem will use the exact same Load Balancer IP Address setting by default.
  8. Click Continue at the bottom of the page.
    Harness will perform pre-flight checks.
  9. Click Continue.
    Harness is deployed in a few minutes.
    When Harness NextGen On-Prem is ready, you will see it listed as Ready:
  10. In a new browser tab, go to the following URL, replacing <LB_URL> with the URL you entered in the Application URL setting in the KOTS admin console:

<LB_URL>/#/onprem-signup

For example:

http://harness.mycompany.com/#/onprem-signup

The Harness sign up page appears.

Sign up with a new account and then log in. Your new account will be added to the Harness Account Administrators User Group.

When you log in, you will see Harness NextGen On-Prem.

If you are familiar with Harness, you can skip Learn Harness' Key Concepts.

Try the NextGen Quickstarts.

Notes

Harness On-Prem installations do not currently support the Harness Helm Delegate.

