Upload Artifacts to S3 Step Settings

Updated 2 months ago by Michael Cretzman

This topic provides settings for the Upload Artifacts to S3 step.

Use this step to upload artifacts to AWS S3 or other providers that support the S3 protocol, such as MinIO.

Name

The unique name for this step.


ID

See Entity Identifier Reference.

AWS Connector

The Harness Connector to use when connecting to AWS S3.

The AWS IAM roles and policies associated with the account used in the Harness AWS Connector must be able to push to S3.

See AWS Connector Settings Reference.
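As a rough illustration, a minimal IAM policy granting push access might look like the following. The bucket name and the exact set of actions are assumptions; AWS policies are normally authored as JSON, but the equivalent is shown in YAML form here for readability:

```yaml
# Hypothetical minimal policy for the account used by the AWS Connector.
# Replace my-artifact-bucket with your bucket; your setup may require
# additional permissions depending on bucket configuration.
Version: "2012-10-17"
Statement:
  - Effect: Allow
    Action:
      - s3:PutObject          # upload objects (required to push artifacts)
      - s3:GetBucketLocation  # resolve the bucket's region
    Resource:
      - arn:aws:s3:::my-artifact-bucket
      - arn:aws:s3:::my-artifact-bucket/*
```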


Bucket

The bucket name for the uploaded artifact.

Source Path

Path to the artifact file/folder you want to upload.

You can use regex to upload multiple files.

Harness automatically creates the compressed file.

Endpoint URL

Endpoint URL for S3-compatible providers (not needed for AWS).
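For an S3-compatible store such as MinIO, the step's spec also carries the endpoint. A minimal sketch, assuming hypothetical connector and host names (the exact YAML keys may differ in your Harness version):

```yaml
- step:
    type: S3Upload
    name: Upload to MinIO
    identifier: upload_to_minio
    spec:
      connectorRef: my_minio_connector         # assumed Connector holding MinIO credentials
      endpoint: https://minio.example.com:9000 # Endpoint URL; omit for AWS S3
      region: us-east-1                        # MinIO typically accepts a placeholder region
      bucket: my-artifact-bucket
      sourcePath: /harness/dist
```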


Target

The bucket path where the artifact will be stored.

Do not include the bucket name. It is specified in Bucket.

If no target is provided, the artifact is saved to [bucket]/[key].
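Putting the settings above together, a sketch of the step in pipeline YAML (the names and values are illustrative, and the field keys reflect the S3Upload step type but may vary by Harness version):

```yaml
- step:
    type: S3Upload
    name: Upload Artifacts to S3
    identifier: upload_artifacts_to_s3
    spec:
      connectorRef: my_aws_connector        # AWS Connector with S3 push permissions
      region: us-east-1
      bucket: my-artifact-bucket            # Bucket setting; not repeated in target
      sourcePath: /harness/target/*.jar     # Source Path; pattern matches multiple files
      target: builds/<+pipeline.sequenceId> # Target path inside the bucket
```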

Run as User

Set the value to specify the user ID for all processes in the pod that run in containers. See Set the security context for a pod.

Set container resources

Maximum resource limit values for the resources the container uses at runtime.

Limit Memory

Maximum memory that the container can use.

You can express memory as a plain integer or as a fixed-point number using one of these suffixes: G, M.

You can also use the power-of-two equivalents: Gi, Mi.

Limit CPU

See Resource units in Kubernetes.

Limit the number of cores that the container can use.

Limits for CPU resources are measured in cpu units.

Fractional requests are allowed. The expression 0.1 is equivalent to the expression 100m, which can be read as one hundred millicpu.
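For example, the container resource limits described above might be set like this in the step's YAML (the keys are assumed from the setting names, and the values are illustrative):

```yaml
spec:
  resources:
    limits:
      memory: 500Mi  # fixed-point value with a power-of-two suffix (Mi)
      cpu: "0.5"     # half a core; equivalent to 500m
```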


Timeout

Timeout for the step. Once the timeout is reached, the step fails, and the Pipeline execution continues.
