Define an AWS VM Build Infrastructure


Currently, this feature is behind the Feature Flag CI_VM_INFRASTRUCTURE. Contact Harness Support to enable the feature.

The codebase and tests you add to a Harness CI Stage are built and run using a build infrastructure (build farm) in your environment.

Some CI providers rely on in-house orchestration systems, such as Docker Machine (deprecated since 2019), to run their build farms. With these systems, outages and backlogs can occur in the provider's infrastructure; backlogs often happen because the provider doesn't have enough capacity to process the build requests that accumulate during an outage.

Harness build farms run on your infrastructure using battle-tested platforms for large container workloads (Kubernetes, AWS VMs). This enables you to build software and run tests, repeatedly and automatically, on a scalable platform with no outages.

This topic describes how to set up and use AWS Linux and Windows VMs as build infrastructures for running builds and tests in a CI Stage. Once set up, the AWS VMs used by your Harness Pipelines build your software and run your tests safely and at scale.

For information on using Kubernetes as a build farm, see Define Kubernetes Cluster Build Infrastructure.

Before You Begin

This topic assumes you're familiar with how Harness CI Pipelines, Stages, and Delegates work.

Review: Set Up the Build Infrastructure using Terraform

This topic walks through setting up an AWS build infrastructure using Harness Manager and the AWS console.

You can also use Terraform to set up an AWS build infrastructure.

For steps on using Terraform, see the Harness GitHub repo cie-vm-delegate.
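For example, assuming the repo lives in the Harness GitHub org at github.com/harness/cie-vm-delegate, you can clone it and follow its README:

git clone https://github.com/harness/cie-vm-delegate.git
cd cie-vm-delegate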

Review: Prerequisites

  • AWS EC2 configuration:
    • An EC2 instance for the Delegate and Runner. A t2.medium is enough.
    • The Delegate and Runner must run on the same EC2 instance. The build VMs can be Linux or Windows: any Linux instance will work; for Windows, Windows Server 2019 is the minimum version.
    • Set up an access key and access secret (AWS secret) for configuration of the Runner.
    • Set up VPC firewall rules for the build instances on EC2 (see the example CLI commands after this list).
      • For information on creating a Security Group, see Authorize inbound traffic for your Linux instances in the AWS docs.
      • You also need to allow ingress access to ports 22 and 9079. Once completed, you'll have a Security Group ID, which is needed for the configuration of the Runner.
    • Optional: Enable RDP port 3389 on Windows VMs for RDP access.
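If you prefer the AWS CLI to the console, the following sketch shows one way to create the Security Group and open the required ports. The group name, VPC ID, Security Group ID, and CIDR ranges are placeholders; replace them with your own values.

# Create a Security Group in your VPC (names and IDs are placeholders).
aws ec2 create-security-group \
  --group-name harness-ci-build-sg \
  --description "Harness CI build VMs" \
  --vpc-id vpc-xxxxxxxx

# Allow ingress on ports 22 and 9079, as required by the Runner.
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 9079 --cidr 0.0.0.0/0

# Optional: allow RDP (3389) for Windows build VMs.
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 3389 --cidr 0.0.0.0/0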

To set up your AWS VM infrastructure you need to install:

  • Harness Docker Delegate: a worker process that runs on your AWS instance. This process receives the CI Pipeline steps from the Harness Manager and dispatches the instructions to the Runner.
  • AWS Runner: a process that runs on the same instance as the Delegate. The Runner communicates with AWS and provisions VMs for builds. For each Harness Stage, it creates a new VM and executes the Steps on the VM. Then it cleans up the VM when all Steps in the Stage finish running.

Step 1: Prepare the VM Instance for Harness Docker Delegate and AWS Runner Installation

The Delegate and Runner must run on the same EC2 instance.

The following sections describe how to set up the instance and configure the Runner on it. Later steps cover adding the Harness Delegate to the instance.

Launch Your VM Instance

Log into the EC2 Console and launch the VM instance where the Harness Delegate will be installed.
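If you script your infrastructure, a minimal AWS CLI equivalent might look like this; the AMI, key pair, Security Group, and subnet IDs are placeholders:

# Launch a t2.medium instance to host the Delegate and Runner.
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.medium \
  --key-name <your_key_pair> \
  --security-group-ids sg-xxxxxxxx \
  --subnet-id subnet-xxxxxxxx \
  --count 1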

Configure AWS Runner on the VM Instance

The AWS Runner is packaged as a minimal Docker image distributed on DockerHub. It's responsible for authenticating and running commands on the AWS VM. 

The AWS Runner requires two input files: 

  • .env: contains information on how to connect to the AWS instance.
  • .drone_pool.yml: used to define the VM spec and pool size for the VM instances used to run the Pipeline.

Configure the Runner on the AWS VM 

Create a directory on your VM to store the Runner configuration file.

To create a directory, run the following command:

mkdir /runner
cd /runner

This creates a directory named /runner on your VM and changes into it.

Configure the following fields in the .env file to allow the Runner to access and launch your AWS VMs:

  • DRONE_SETTINGS_AWS_ACCESS_KEY_ID: Your AWS access key ID.
  • DRONE_SETTINGS_AWS_ACCESS_KEY_SECRET: Your AWS access key secret.
  • DRONE_SETTINGS_AWS_REGION: Your AWS region.
  • DRONE_SETTINGS_REUSE_POOL: Whether to reuse existing EC2 instances when the Runner restarts. Example: false
  • DRONE_SETTINGS_LITE_ENGINE_PATH: The release location of the Lite Engine. The Lite Engine is a binary that is injected into the VMs the Runner interacts with; it is responsible for coordinating the execution of the Steps. Example: https://github.com/harness/lite-engine/releases/download/v0.0.1.12
  • DRONE_TRACE: Optional boolean. Enables trace-level logging. Example: true
  • DRONE_DEBUG: Optional boolean. Enables debug-level logging. Example: true
  • DRONE_SETTINGS_KEY_PAIR_NAME: The name of an EC2 key pair to use if you want to connect to your Windows VM via RDP. This is highly recommended for troubleshooting: SSH is installed via a cloud-init script, so if something goes wrong with that script, you can still connect via RDP. For details, see AWS EC2 Key pairs in the AWS docs.
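If you don't already have a key pair, you can create one with the AWS CLI; the key name below is a placeholder:

# Create a key pair and save the private key locally.
aws ec2 create-key-pair \
  --key-name harness-runner-key \
  --query 'KeyMaterial' \
  --output text > harness-runner-key.pem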

Here’s a sample .env file:

DRONE_SETTINGS_AWS_ACCESS_KEY_ID="<access_key>"
DRONE_SETTINGS_AWS_ACCESS_KEY_SECRET="<secret_key>"
DRONE_SETTINGS_AWS_REGION="us-east-2"
DRONE_SETTINGS_REUSE_POOL=false
DRONE_SETTINGS_LITE_ENGINE_PATH=https://github.com/harness/lite-engine/releases/download/v0.0.1.12
DRONE_DEBUG=true
DRONE_TRACE=true
DRONE_SETTINGS_KEY_PAIR_NAME="<name_of_key_pair>"

Configure the Drone Pool on the AWS VM

The .drone_pool.yml file defines the VM spec and pool size for the VM instances used to run the Pipeline.

A pool is a group of hot VM instances that are instantiated and ready to use so Harness Pipelines can immediately acquire VM instances rather than waiting for new instances to be provisioned.

To configure the .drone_pool.yml file:

  • In the /runner folder, create a new .drone_pool.yml file.
  • Set up the file as described in the following list and example. See also Drone Pool.

  • name (String): Unique identifier of the pool. You will reference this pool name in the Harness Manager in later steps, when you set up the CI Stage Infrastructure. Example: name: windows_pool
  • min_pool_size (Integer): The minimum number of cached VMs in ready state available to the Runner. Example: min_pool_size: 1
  • max_pool_size (Integer): The maximum number of cached VMs in ready state available to the Runner. Example: max_pool_size: 3
  • platform: The details of your VM platform. By default, the platform is set to Linux OS and AMD64 architecture. Subfields: os (String), arch (String), variant (String), version (String). Example: os: windows
  • instance: The settings of your AWS instance. Subfields:
    • ami (String): The AMI to use for the build VMs. Example: ami: ami-092f63f22143765a3
    • tags (String): Instance tags. Example: tags: 285
    • type (String): The EC2 instance type. Example: type: t2.micro
    • disk: AWS block device information. Subfields: size (Integer), type (String), iops (String).
    • network: AWS network information. Subfields: vpc (Integer), vpc_security_groups ([ ] String), security_groups ([ ] String), subnet_id (String), private_ip (boolean). Examples: security_groups: [sg-06dcxxxx9811b0], subnet_id: subnet-0ab15xxxx07b53

For more information on these attributes, refer to the AWS doc Create a security group.

Later in this workflow, you'll reference the pool identifier in the Harness Manager to map the pool to a Stage Infrastructure in a CI Pipeline.

Here’s a sample .drone_pool.yml file. The following YAML contains specifications for two pools (Windows and Ubuntu):

name: windows_pool
min_pool_size: 1
max_pool_size: 3

platform:
  os: windows

instance:
  ami: ami-092f63f22143765a3
  type: t2.medium
  network:
    security_groups:
      - sg-06dc8xxx11b0
    subnet_id: subnet-0ab1xxx5407b53

---

name: ubuntu_pool
min_pool_size: 3
max_pool_size: 3

account:
  region: us-east-2

instance:
  ami: ami-0051xxxf42285
  type: t2.micro
  network:
    security_groups:
      - sg-06dc83xxx811b0
    subnet_id: subnet-0ab15xxx7b53

Step 2: Create a Docker Delegate in Harness Manager

The Delegate can be installed at the Harness account, Organization, or Project level.

After you click New Delegate on a Delegates page, or as part of setting up a Connector, the Delegates selection page appears.

Follow the steps in Install the Docker Delegate to complete the Delegate YAML download.

The docker-compose.yaml file is downloaded to your local machine.

Step 3: Configure the Docker Compose File

The Harness Delegate and Runner run on the same VM. The Runner communicates with the Harness Delegate on localhost and port 3000 of your VM. 

In this step, you need to manually append the Runner spec to the Delegate's Docker Compose file. 

Copy the docker-compose.yaml file that you downloaded in the previous step into the /runner folder on the AWS VM. This folder should now contain docker-compose.yaml along with your .env and .drone_pool.yml files.

Append the following Drone Runner spec to the Docker Compose file, under services: and at the same level as harness-ng-delegate. Note that the volume mount is .:/runner, which maps the /runner folder on the VM into the container:

drone-runner-aws:
  restart: unless-stopped
  image: drone/drone-runner-aws:1.0.0-rc.2
  volumes:
    - .:/runner
  entrypoint: ["/bin/drone-runner-aws", "delegate"]
  working_dir: /runner
  ports:
    - "3000:3000"

In the docker-compose.yaml file, add the following field to the harness-ng-delegate service, just under restart: unless-stopped:

network_mode: "host"

Your Docker Compose file now looks something like this:

version: "3.7"
services:
harness-ng-delegate:
restart: unless-stopped
network_mode: "host"
deploy:
resources:
limits:
cpus: "0.5"
memory: 2048M
image: harness/delegate:latest
environment:
- ACCOUNT_ID=XICOBxxxmVbWOx-cQ
- ACCOUNT_SECRET=5058c29exxxea2452bfffeb
- MANAGER_HOST_AND_PORT=https://qa.harness.io
- WATCHER_STORAGE_URL=https://app.harness.io/public/qa/premium/watchers
- WATCHER_CHECK_LOCATION=current.version
- REMOTE_WATCHER_URL_CDN=https://app.harness.io/public/shared/watchers/builds
- DELEGATE_STORAGE_URL=https://app.harness.io
- DELEGATE_CHECK_LOCATION=delegateqa.txt
- USE_CDN=true
- CDN_URL=https://app.harness.io
- DEPLOY_MODE=KUBERNETES
- DELEGATE_NAME=qwerty
- NEXT_GEN=true
- DELEGATE_DESCRIPTION=
- DELEGATE_TYPE=DOCKER
- DELEGATE_TAGS=
- DELEGATE_TASK_LIMIT=50
- DELEGATE_ORG_IDENTIFIER=
- DELEGATE_PROJECT_IDENTIFIER=
- PROXY_MANAGER=true
- VERSION_CHECK_DISABLED=false
- INIT_SCRIPT=echo "Docker delegate init script executed."
drone-runner-aws:
restart: unless-stopped
image: drone/drone-runner-aws:1.0.0-rc.2
volumes:
- .:/runner
entrypoint: ["/bin/drone-runner-aws", "delegate"]
working_dir: /runner
ports:
- "3000:3000"

Step 4: Install Harness Delegate and AWS Drone Runner on the VM Instance

Perform the following steps to install the Delegate and AWS Runner.

Log into your AWS VM using SSH.

Navigate to the Runner directory you created earlier by running the following command:

cd /runner

Confirm that the folder has all three setup files:


[ec2-user@your-vm runner]$ ls -a
.  ..  docker-compose.yaml  .drone_pool.yml  .env

Install the Delegate and Runner by running the following command:

docker-compose -f docker-compose.yaml up -d

Verify that both containers are running by running the following command (this can take a few minutes):

docker ps

Your output will look something like this (the container IDs, commands, timestamps, and names will differ in your environment):
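CONTAINER ID   IMAGE                               COMMAND                  CREATED         STATUS         PORTS                    NAMES
aba8xxxxxxxx   drone/drone-runner-aws:1.0.0-rc.2   "/bin/drone-runner-a…"   2 minutes ago   Up 2 minutes   0.0.0.0:3000->3000/tcp   runner_drone-runner-aws_1
41b0xxxxxxxx   harness/delegate:latest             "…"                      2 minutes ago   Up 2 minutes                            runner_harness-ng-delegate_1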

Verify the logs of both the Harness Delegate and the Drone Runner by running the following command:

docker logs <container_id>

Next, verify that the Delegate registers with Harness and appears on the Delegates list. It can take 2-3 minutes for the Delegate to register.

You will see Connected next to the Delegate listing.

If there is a connectivity error, you will see Not Connected. In that case, make sure the Docker host can connect to https://app.harness.io.
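A quick way to check connectivity is to send an HTTPS request from the VM and confirm you get a response rather than a timeout:

# A timeout or TLS error here points to a network or firewall issue.
curl -sSI https://app.harness.io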

The Delegate and Runner have now been successfully installed, registered, and connected.

For details on the environment variables of the Harness Docker Delegate, see Harness Docker Delegate Environment Variables.

Step 5: Use AWS VM in the Pipeline Build Infrastructure

In the Harness CI Stage, in Infrastructure, select AWS VMs.

In Pool ID, enter the name of the pool that you defined in the .drone_pool.yml file in Step 1 (for example, windows_pool).
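In the Pipeline YAML, the resulting stage infrastructure looks roughly like the following sketch. The exact schema can vary by Harness version, so treat the field names as illustrative rather than definitive:

infrastructure:
  type: VM
  spec:
    type: Pool
    spec:
      identifier: windows_pool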

Your AWS build infrastructure is now set up. You can now run your Build Stages on AWS VMs.
