Installation - AWS EKS
Prerequisites
Before you begin, ensure that you have the following prerequisites in place:
AWS Elastic Kubernetes Service Cluster: You should have access to a running AWS Elastic Kubernetes Service (EKS) cluster. If you don't have one, you can create an EKS cluster through the AWS Management Console or by using the AWS CLI.
Helm: Helm is a package manager for Kubernetes that simplifies the deployment and management of applications. You should have Helm installed on the machine you use to manage your AWS EKS cluster. You can install Helm by following the official Helm installation guide.
kubectl: kubectl is a command-line tool for managing your Kubernetes cluster. Ensure that kubectl is installed and configured to work with your EKS cluster. You can install kubectl by following the official Kubernetes installation guide.
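Before moving on, it may help to confirm the tooling is in place. A quick check might look like the following; the region and cluster name are placeholders for your own setup, not values from this guide:

```shell
# Confirm the client tools are installed; print a hint if one is missing.
kubectl version --client || echo "kubectl not found"
helm version || echo "helm not found"
# Point kubectl at your EKS cluster (region and cluster name are placeholders):
aws eks update-kubeconfig --region us-east-1 --name my-eks-cluster || true
CHECKED=yes
```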
Chart installation
Add the repo and continue with your Kubernetes Platform's documentation:
helm repo add cloud-agnost https://cloud-agnost.github.io/charts
helm repo update
If you already have NGINX ingress running on your cluster, make sure to disable its deployment with the
`--set ingress-nginx.enabled=false`
flag.
Before installation, if you need to update the configuration parameters, you can adjust the settings based on the base chart's values.yaml.
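One way to get a local copy of those defaults to edit is `helm show values`; the output filename below is just an example:

```shell
# Add the repo (idempotent if already added) and dump the chart's
# default values into a local file you can edit before installing.
helm repo add cloud-agnost https://cloud-agnost.github.io/charts || true
helm repo update || true
helm show values cloud-agnost/base > my-values.yaml || true
```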
Then run the below command to install Agnost:
# Install the chart on Kubernetes.
# If ingress is not enabled and you need to install it, set `ingress-nginx.enabled=true`
helm upgrade --install agnost cloud-agnost/base \
--set ingress-nginx.platform=EKS
The above command installs Agnost in the default
namespace (or the namespace of your current context) and in the operators
namespace, which holds third-party software operators.
You can add the --namespace <namespace-name>
and --create-namespace
options to install Agnost into a different namespace.
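For example, a sketch of installing into a dedicated namespace; the namespace name `agnost` is an arbitrary choice:

```shell
NAMESPACE=agnost   # arbitrary example namespace name
# Install the chart into its own namespace, creating it if needed.
helm upgrade --install agnost cloud-agnost/base \
  --namespace "$NAMESPACE" --create-namespace \
  --set ingress-nginx.platform=EKS || true
```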
Check the pod status and make sure that mongodb, rabbitmq, and redis are running. This takes around 5 minutes, depending on your cluster resources and internet connection:
$> kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
engine-monitor-deployment-6d5569878f-nrg7q 1/1 Running 0 8m8s
engine-realtime-deployment-955f6c77b-2wx52 1/1 Running 0 8m8s
engine-scheduler-deployment-775879f956-fq4sc 1/1 Running 0 8m8s
engine-worker-deployment-76d94cd4c9-9hsjc 1/1 Running 0 8m8s
minio-594ff4f778-hvk4t 1/1 Running 0 8m8s
mongodb-0 2/2 Running 0 7m57s
platform-core-deployment-5f79d59868-9jrbm 1/1 Running 0 8m8s
platform-sync-deployment-7c8bf79df6-h2prc 1/1 Running 0 8m8s
platform-worker-deployment-868cb59558-rv86h 1/1 Running 0 8m8s
rabbitmq-server-0 1/1 Running 0 7m49s
redis-master-0 1/1 Running 0 8m8s
studio-deployment-7fdccfc77f-pxsfj 1/1 Running 0 8m8s
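Rather than re-running `kubectl get pods` by hand, you can also block until everything reports Ready. A sketch; the 10-minute timeout is an arbitrary choice:

```shell
# Wait until every pod in the namespace is Ready, or time out.
kubectl wait --for=condition=Ready pod --all -n default --timeout=600s \
  && READY=yes || READY=no
echo "all pods ready: ${READY}"
```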
Then you can reach your app via the IP address of your ingress:
# get the IP address of the Ingress --> EXTERNAL-IP field
$> kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
agnost-ingress-nginx-controller LoadBalancer 10.245.185.76 192.168.49.2 80:30323/TCP,443:31819/TCP 7m1s
# or to get it via script:
kubectl get svc -n ingress-nginx -o jsonpath='{.items[].status.loadBalancer.ingress[].ip}'
Then open your browser and navigate to that address (http://192.168.49.2/studio
in the example above) to launch Agnost Studio.
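Note that on EKS the LoadBalancer service is usually exposed through a DNS hostname rather than a bare IP, so the `.ip` field may be empty. A hedged variant that checks both fields:

```shell
# Prefer the hostname field (typical on EKS), fall back to the IP field.
ADDR=$(kubectl get svc -n ingress-nginx \
  -o jsonpath='{.items[].status.loadBalancer.ingress[].hostname}' 2>/dev/null)
[ -n "$ADDR" ] || ADDR=$(kubectl get svc -n ingress-nginx \
  -o jsonpath='{.items[].status.loadBalancer.ingress[].ip}' 2>/dev/null)
echo "Agnost Studio: http://${ADDR}/studio"
```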
Chart Customization
Here is the Helm documentation on how to customize the installation.
In a nutshell, you have two options:
- Set the value on the command line, e.g.:
helm upgrade --install agnost cloud-agnost/base \
--set ingress-nginx.enabled=false
- Create a values file with the changes you want to have:
# my-values.yaml
ingress-nginx:
enabled: false
minio:
mode: distributed
replicas: 4
Then, provide it on the command line:
helm upgrade --install agnost cloud-agnost/base -f my-values.yaml
Here are the values you can configure:
Key | Type | Default | Description |
---|---|---|---|
ingress-nginx.enabled | bool | true | install ingress-nginx |
ingress-nginx.controller.service.externalTrafficPolicy | string | "Local" | This needs to be Local for the Knative ingress to work properly |
ingress-nginx.controller.autoscaling.enabled | bool | true | Enable/Disable autoscaling for ingress-nginx |
ingress-nginx.controller.autoscaling.minReplicas | int | 1 | Minimum ingress-nginx replicas when autoscaling is enabled |
ingress-nginx.controller.autoscaling.maxReplicas | int | 10 | Maximum ingress-nginx replicas when autoscaling is enabled |
ingress-nginx.controller.autoscaling.targetCPUUtilizationPercentage | int | 80 | Target CPU Utilization for ingress-nginx replicas when autoscaling is enabled |
ingress-nginx.controller.autoscaling.targetMemoryUtilizationPercentage | int | 80 | Target Memory Utilization for ingress-nginx replicas when autoscaling is enabled |
ingress-nginx.controller.resources | object | {"requests":{"cpu":"100m","memory":"200Mi"}} | resources for the ingress-nginx controller |
ingress-nginx.platform | string | "" | Platform running the ingress; sets the annotations needed for AWS Elastic Kubernetes Service, Azure Kubernetes Service, and DigitalOcean Kubernetes. Possible values: [ AKS, DOKS, EKS ] |
cert-manager.namespace | string | "cert-manager" | namespace for cert-manager installation |
cert-manager.startupapicheck.enabled | bool | false | disable the startup API check; no pre-install checks are needed |
minio.mode | string | "standalone" | deployment mode: standalone or distributed |
minio.replicas | int | 1 | number of replicas. 1 for standalone, 4 for distributed |
minio.persistence.size | string | "100Gi" | Storage size for MinIO |
minio.resources.requests.memory | string | "256Mi" | Memory requests for MinIO pods |
minio.users | list | [] | Username, password, and policy to be assigned to each user |
minio.buckets | list | [] | Initial buckets to create |
mongodbcommunity.storage.dataVolumeSize | string | "20Gi" | Storage size for data volume |
mongodbcommunity.storage.logVolumeSize | string | "4Gi" | Storage size for logs volume |
redis.master.persistence.size | string | "2Gi" | Storage size for the redis instance |
redis.architecture | string | "standalone" | Redis deployment type: standalone or replication |
engine.monitor.resources | object | {} | resources for the engine-monitor deployment |
engine.realtime.hpa | object | {"targetCpuUtilization":90} | horizontal pod autoscaler configuration for the engine-realtime deployment |
engine.realtime.resources | object | {"requests":{"cpu":"100m"}} | resources for the engine-realtime deployment |
engine.scheduler.resources | object | {} | resources for the engine-scheduler deployment |
engine.worker.hpa | object | {"targetCpuUtilization":90} | horizontal pod autoscaler configuration for the engine-worker deployment |
engine.worker.resources | object | {"requests":{"cpu":"200m"}} | resources for the engine-worker deployment |
platform.core.hpa | object | {"targetCpuUtilization":90} | horizontal pod autoscaler configuration for the platform-core deployment |
platform.core.resources | object | {"requests":{"cpu":"200m"}} | resources for the platform-core deployment |
platform.sync.hpa | object | {"targetCpuUtilization":90} | horizontal pod autoscaler configuration for the platform-sync deployment |
platform.sync.resources | object | {"requests":{"cpu":"100m"}} | resources for the platform-sync deployment |
platform.worker.hpa | object | {"targetCpuUtilization":90} | horizontal pod autoscaler configuration for the platform-worker deployment |
platform.worker.resources | object | {"requests":{"cpu":"50m"}} | resources for the platform-worker deployment |
studio.hpa | object | {"targetCpuUtilization":90} | horizontal pod autoscaler configuration for the studio deployment |
studio.resources | object | {"requests":{"cpu":"100m"}} | resources for the studio deployment |