Installation - AWS EKS

Prerequisites

Before you begin, ensure that you have the following prerequisites in place:

  • AWS Elastic Kubernetes Service (EKS) cluster: You should have access to a running AWS EKS cluster. If you don't have one, you can create an EKS cluster through the AWS Management Console or by using the AWS CLI.

  • Helm: Helm is a package manager for Kubernetes that simplifies the deployment and management of applications. You should have Helm installed on the machine you use to manage your AWS EKS cluster. You can install Helm by following the official Helm installation guide.

  • kubectl: kubectl is a command-line tool for managing your Kubernetes cluster. Ensure that kubectl is installed and configured to work with your AWS EKS cluster. You can install kubectl by following the official Kubernetes installation guide.
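A quick way to confirm all three prerequisites from your workstation is sketched below; `<cluster-name>` and `<region>` are placeholders for your own values, and the AWS CLI is assumed to be configured:

```shell
# Point kubectl at your EKS cluster (writes/updates ~/.kube/config).
aws eks update-kubeconfig --name <cluster-name> --region <region>

# Verify the tools and cluster access.
helm version --short      # should print a v3.x version
kubectl version --client  # client version of kubectl
kubectl get nodes         # nodes should report STATUS Ready
```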

Chart installation

Add the repo and continue with your Kubernetes Platform's documentation:

helm repo add cloud-agnost https://cloud-agnost.github.io/charts
helm repo update
info

If you already have NGINX ingress running on your cluster, make sure to disable its deployment with the --set ingress-nginx.enabled=false flag.

Before installation, if you need to change configuration parameters, you can adjust the settings based on the base values.yaml.
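One convenient way to start from the chart's defaults is to dump them to a local file and edit that (a sketch; it assumes the repo was added as shown above):

```shell
# Write the chart's default values to a local file for editing.
helm show values cloud-agnost/base > my-values.yaml
```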

Then run the below command to install Agnost:

# Install the chart on Kubernetes.
# If ingress is not already running on the cluster and you need to install it, set `ingress-nginx.enabled=true`
helm upgrade --install agnost cloud-agnost/base \
  --set ingress-nginx.platform=EKS
tip

The above command installs Agnost into the default namespace (or the namespace of your current context) and uses the operators namespace to install third-party software operators.

You can add --namespace <namespace-name> and --create-namespace options to install Agnost to a different namespace.
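For instance, a sketch of installing into a dedicated namespace (the name agnost is just an example):

```shell
# Install into a dedicated namespace, creating it if it does not exist.
helm upgrade --install agnost cloud-agnost/base \
  --set ingress-nginx.platform=EKS \
  --namespace agnost \
  --create-namespace
```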

Check the pod status and make sure that mongodb, rabbitmq, and redis are running. This takes around 5 minutes, depending on your cluster resources and internet connection:

$> kubectl get pods -n default
NAME                                           READY   STATUS    RESTARTS   AGE
engine-monitor-deployment-6d5569878f-nrg7q     1/1     Running   0          8m8s
engine-realtime-deployment-955f6c77b-2wx52     1/1     Running   0          8m8s
engine-scheduler-deployment-775879f956-fq4sc   1/1     Running   0          8m8s
engine-worker-deployment-76d94cd4c9-9hsjc      1/1     Running   0          8m8s
minio-594ff4f778-hvk4t                         1/1     Running   0          8m8s
mongodb-0                                      2/2     Running   0          7m57s
platform-core-deployment-5f79d59868-9jrbm      1/1     Running   0          8m8s
platform-sync-deployment-7c8bf79df6-h2prc      1/1     Running   0          8m8s
platform-worker-deployment-868cb59558-rv86h    1/1     Running   0          8m8s
rabbitmq-server-0                              1/1     Running   0          7m49s
redis-master-0                                 1/1     Running   0          8m8s
studio-deployment-7fdccfc77f-pxsfj             1/1     Running   0          8m8s
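Instead of polling kubectl get pods manually, you can block until everything is ready with kubectl wait (a sketch, assuming the default namespace):

```shell
# Wait up to 10 minutes for every pod in the namespace to become Ready.
kubectl wait pods --all --for=condition=Ready -n default --timeout=600s
```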

Then you can reach your app via the IP address of your ingress:

# get the IP address of the Ingress --> EXTERNAL-IP field
$> kubectl get svc -n ingress-nginx
NAME                              TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
agnost-ingress-nginx-controller   LoadBalancer   10.245.185.76   192.168.49.2   80:30323/TCP,443:31819/TCP   7m1s

# or to get it via script:
kubectl get svc -n ingress-nginx -o jsonpath='{.items[].status.loadBalancer.ingress[].ip}'
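Note that on EKS a LoadBalancer service is typically exposed as an ELB DNS hostname rather than an IP, so the .ip field above may be empty; in that case, query the .hostname field instead:

```shell
# On EKS the load balancer usually reports a DNS hostname, not an IP.
kubectl get svc -n ingress-nginx -o jsonpath='{.items[].status.loadBalancer.ingress[].hostname}'
```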

Then open your browser and navigate to that address (http://192.168.49.2/studio for the example above) to launch Agnost Studio.

Chart Customization

Here is the Helm documentation on how to customize the installation.

In a nutshell, you have 2 options:

  1. Set the value on the command line, e.g.:

helm upgrade --install agnost cloud-agnost/base \
  --set ingress-nginx.enabled=false

  2. Create a values file with the changes you want to have:

# my-values.yaml
ingress-nginx:
  enabled: false

minio:
  mode: distributed
  replicas: 4

Then, provide it on the command line:

helm upgrade --install agnost cloud-agnost/base -f my-values.yaml
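After an install or upgrade, you can confirm which overrides are actually in effect for the release (a sketch):

```shell
# Show only the user-supplied values for the release.
helm get values agnost
# Add --all to include the chart's defaults as well.
```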

Here are the values you can configure:

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| ingress-nginx.enabled | bool | true | Install ingress-nginx |
| ingress-nginx.controller.service.externalTrafficPolicy | string | "Local" | This needs to be Local for Knative ingress to work properly |
| ingress-nginx.controller.autoscaling.enabled | bool | true | Enable/disable autoscaling for ingress-nginx |
| ingress-nginx.controller.autoscaling.minReplicas | int | 1 | Minimum ingress-nginx replicas when autoscaling is enabled |
| ingress-nginx.controller.autoscaling.maxReplicas | int | 10 | Maximum ingress-nginx replicas when autoscaling is enabled |
| ingress-nginx.controller.autoscaling.targetCPUUtilizationPercentage | int | 80 | Target CPU utilization for ingress-nginx replicas when autoscaling is enabled |
| ingress-nginx.controller.autoscaling.targetMemoryUtilizationPercentage | int | 80 | Target memory utilization for ingress-nginx replicas when autoscaling is enabled |
| ingress-nginx.controller.resources | object | {"requests":{"cpu":"100m","memory":"200Mi"}} | Resources for the ingress-nginx controller |
| ingress-nginx.platform | string | "" | Platform running the ingress; annotations needed for Elastic Kubernetes Service (AWS), Azure Kubernetes Service, and DigitalOcean Kubernetes. Possible values: [AKS, DOKS, EKS] |
| cert-manager.namespace | string | "cert-manager" | Namespace for the cert-manager installation |
| cert-manager.startupapicheck.enabled | bool | false | No need for pre-checks |
| minio.mode | string | "standalone" | Deployment mode: standalone or distributed |
| minio.replicas | int | 1 | Number of replicas: 1 for standalone, 4 for distributed |
| minio.persistence.size | string | "100Gi" | Storage size for MinIO |
| minio.resources.requests.memory | string | "256Mi" | Memory requests for MinIO pods |
| minio.users | list | [] | Username, password, and policy to be assigned to the user. Default policies are [readonly |
| minio.buckets | list | [] | Initial buckets to create |
| mongodbcommunity.storage.dataVolumeSize | string | "20Gi" | Storage size for the data volume |
| mongodbcommunity.storage.logVolumeSize | string | "4Gi" | Storage size for the logs volume |
| redis.master.persistence.size | string | "2Gi" | Storage size for the Redis instance |
| redis.architecture | string | "standalone" | Redis deployment type: standalone or replication |
| engine.monitor.resources | object | {} | Resources for the engine-monitor deployment |
| engine.realtime.hpa | object | {"targetCpuUtilization":90} | Horizontal pod autoscaler configuration for the engine-realtime deployment |
| engine.realtime.resources | object | {"requests":{"cpu":"100m"}} | Resources for the engine-realtime deployment |
| engine.scheduler.resources | object | {} | Resources for the engine-scheduler deployment |
| engine.worker.hpa | object | {"targetCpuUtilization":90} | Horizontal pod autoscaler configuration for the engine-worker deployment |
| engine.worker.resources | object | {"requests":{"cpu":"200m"}} | Resources for the engine-worker deployment |
| platform.core.hpa | object | {"targetCpuUtilization":90} | Horizontal pod autoscaler configuration for the platform-core deployment |
| platform.core.resources | object | {"requests":{"cpu":"200m"}} | Resources for the platform-core deployment |
| platform.sync.hpa | object | {"targetCpuUtilization":90} | Horizontal pod autoscaler configuration for the platform-sync deployment |
| platform.sync.resources | object | {"requests":{"cpu":"100m"}} | Resources for the platform-sync deployment |
| platform.worker.hpa | object | {"targetCpuUtilization":90} | Horizontal pod autoscaler configuration for the platform-worker deployment |
| platform.worker.resources | object | {"requests":{"cpu":"50m"}} | Resources for the platform-worker deployment |
| studio.hpa | object | {"targetCpuUtilization":90} | Horizontal pod autoscaler configuration for the studio deployment |
| studio.resources | object | {"requests":{"cpu":"100m"}} | Resources for the studio deployment |
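As a concrete example, a hypothetical my-values.yaml overriding several of the keys above; the values chosen here are illustrative, not recommendations:

```yaml
# my-values.yaml -- illustrative overrides only
ingress-nginx:
  platform: EKS
  controller:
    autoscaling:
      maxReplicas: 5

redis:
  architecture: replication

minio:
  persistence:
    size: 200Gi
```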