Upload Artifacts to S3 Step Settings
This topic provides settings for the Upload Artifacts to S3 step.
Use this step to upload artifacts to AWS S3 or to other providers that support the S3 protocol, such as MinIO.
In this topic:
- Name
- AWS Connector
- Bucket
- Source Path
- Endpoint URL
- Target
- Run as User
- Set container resources
- Timeout
- See Also
Name
The unique name for this step.
AWS Connector
The Harness AWS Connector to use when connecting to AWS S3.
The AWS IAM roles and policies associated with the account used in the Harness AWS Connector must allow pushing to S3.
Bucket
The name of the S3 bucket for the uploaded artifact.
Source Path
The path to the artifact file or folder that you want to upload.
You can use a regular expression to upload multiple files.
If you upload a folder, Harness automatically creates the compressed file.
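As a sketch of how a pattern can select multiple files, the Python snippet below applies a regular expression to a list of file names. The pattern and file names are hypothetical, chosen only to illustrate matching several artifacts at once:

```python
import re

# Hypothetical pattern selecting multiple test-report artifacts by name.
pattern = re.compile(r"report-\d+\.xml")

files = ["report-1.xml", "report-2.xml", "notes.txt"]
matched = [f for f in files if pattern.fullmatch(f)]
print(matched)  # ['report-1.xml', 'report-2.xml']
```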
Endpoint URL
The endpoint URL for S3-compatible providers. This setting is not needed for AWS.
Target
The path in the bucket where the artifact is stored.
Do not include the bucket name; it is specified in Bucket.
If no target is provided, the artifact is saved to a default path.
Run as User
Set this value to specify the user ID for all processes in the pod and its containers. See Set the security context for a pod.
Set container resources
Maximum resource limits for the resources used by the container at runtime.
Limit Memory
The maximum memory that the container can use.
You can express memory as a plain integer or as a fixed-point number with one of the suffixes G or M. You can also use the power-of-two equivalents Gi and Mi.
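The difference between the decimal and power-of-two suffixes can be made concrete with a small conversion helper. This is a sketch, not part of the product; the suffix table simply mirrors the description above:

```python
# Decimal suffixes (G, M) versus power-of-two suffixes (Gi, Mi), in bytes.
SUFFIXES = {"M": 10**6, "G": 10**9, "Mi": 2**20, "Gi": 2**30}

def to_bytes(value: str) -> int:
    """Convert a memory quantity such as '1G', '512Mi', or '1000000' to bytes."""
    # Check longer suffixes first so 'Mi' is not mistaken for 'M'.
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if value.endswith(suffix):
            return int(float(value[: -len(suffix)]) * SUFFIXES[suffix])
    return int(value)  # a plain integer means bytes

print(to_bytes("1G"))   # 1000000000
print(to_bytes("1Gi"))  # 1073741824
```

Note that 1Gi is roughly 7% larger than 1G, which matters when setting tight memory limits.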
Limit CPU
Limit the number of cores that the container can use.
Limits for CPU resources are measured in cpu units.
Fractional requests are allowed: the value 0.1 is equivalent to 100m, which can be read as one hundred millicpu.
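The equivalence between fractional CPU values and the millicpu notation can be sketched with a small helper. This is a hypothetical illustration of the arithmetic, not product code:

```python
def to_millicpu(value: str) -> int:
    """Convert a CPU quantity such as '0.1' or '100m' to millicpu."""
    if value.endswith("m"):
        return int(value[:-1])           # already expressed in millicpu
    return round(float(value) * 1000)    # cores to millicpu

print(to_millicpu("0.1"))   # 100
print(to_millicpu("100m"))  # 100
```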
Timeout
Timeout for the step. Once the timeout is reached, the step fails, and the Pipeline execution continues.
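Taken together, the settings above might appear in pipeline YAML roughly as follows. This is a hedged sketch for orientation only: the step type, field names, and values shown are assumptions for illustration, not an authoritative schema.

```yaml
- step:
    type: S3Upload
    name: Upload Artifacts to S3            # Name
    identifier: upload_artifacts_to_s3
    spec:
      connectorRef: my_aws_connector        # AWS Connector
      bucket: my-artifact-bucket            # Bucket
      sourcePath: dist/                     # Source Path
      endpoint: https://minio.example.com   # Endpoint URL (S3-compatible providers only)
      target: builds/latest                 # Target path inside the bucket
    timeout: 10m                            # Timeout
```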