Setup
Get started by following the steps below.
Prerequisites
- Kubernetes Cluster: You should have a running Kubernetes cluster. You can use any cloud-based or on-premises Kubernetes distribution.
- kubectl: Installed and configured to interact with your Kubernetes cluster.
- Helm: Installed for managing Kubernetes applications.
- Prometheus: You should have Prometheus installed in your cluster.
Installing Prometheus
We will set up a sample Prometheus instance to read metrics from the ingress controller.
```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace \
  --set alertmanager.enabled=false \
  --set grafana.enabled=false \
  --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false
```
- Ingress Controller: You should have an ingress controller installed in your cluster.
Installing Ingress Controller
```shell
# Download the latest Istio release from the official Istio website
curl -L https://istio.io/downloadIstio | sh -

# Move it to your home directory (replace x.xx.x with the downloaded version)
mv istio-x.xx.x ~/.istioctl
export PATH=$HOME/.istioctl/bin:$PATH

# Install Istio with the default profile
istioctl install --set profile=default -y

# Label the namespace where you want to deploy your application to enable Istio sidecar injection
kubectl create namespace <NAMESPACE>
kubectl label namespace <NAMESPACE> istio-injection=enabled

# Create a gateway
kubectl apply -f ./playground/config/gateway.yaml -n <NAMESPACE>
```
- KEDA [Optional]: You can have KEDA installed in your cluster; otherwise, HPA can be used.
Installing KEDA
We will set up a sample KEDA installation to scale the target deployment.
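KEDA can be installed via its official Helm chart. The release name and namespace below are illustrative:

```shell
# Add the official KEDA chart repository and install KEDA
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda \
  --namespace keda \
  --create-namespace
```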
Install
1. Install KubeElasti using Helm
Use Helm to install KubeElasti into your Kubernetes cluster.
Check out values.yaml to see the configuration options available in the Helm values file.
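A typical invocation is sketched below; the release name, namespace, and chart reference are placeholders, so check the project's releases for the actual chart source:

```shell
# Install KubeElasti from its Helm chart (<KUBEELASTI_CHART> is a placeholder for the real chart reference)
helm install elasti <KUBEELASTI_CHART> \
  --namespace elasti \
  --create-namespace \
  --values values.yaml
```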
2. Verify the Installation
Check the status of your Helm release and ensure that the KubeElasti components are running:
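Assuming the release was installed as `elasti` in the `elasti` namespace (adjust to your values):

```shell
# Check the status of the Helm release
helm status elasti --namespace elasti

# Ensure the KubeElasti pods are running
kubectl get pods --namespace elasti
```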
You will see 2 components running:
- Controller/Operator: `elasti-operator-controller-manager-...` switches the traffic, watches resources, scales the target, etc.
- Resolver: `elasti-resolver-...` proxies the requests.
3. Define an ElastiService
To configure a service to handle its traffic via elasti, you'll need to create and apply an ElastiService custom resource.
Here we are creating one for the httpbin service. Create a file named elasti-service.yaml and apply the configuration.
- Replace it with the service you want managed by elasti.
- Replace it with the namespace of the service.
- Replace it with the minimum replicas to bring up when the first request arrives. Minimum: 1
- Replace it with the cooldown period to wait after scaling up before considering scale down. Default: 900 seconds (15 minutes) | Minimum: 1 second | Maximum: 604800 seconds (7 days)
- ApiVersion should be `apps/v1` if you are using Deployments, or `argoproj.io/v1alpha1` if you are using Argo Rollouts.
- Kind should be either `Deployment` or `Rollout` (in case you are using Argo Rollouts).
- Name should exactly match the name of the deployment or rollout.
- Replace it with the trigger type. Currently, KubeElasti supports only one trigger type: `prometheus`.
- Replace it with the trigger query. In this case, it is the number of requests per second.
- Replace it with the trigger server address. In this case, it is the address of the Prometheus server.
- Replace it with the trigger threshold. In this case, it is the number of requests per second.
- Replace it with the uptime filter of your TSDB instance. Default: `container="prometheus"`.
- Replace it with the autoscaler name. In this case, it is the name of the KEDA ScaledObject.
- Replace it with the autoscaler type. In this case, it is `keda`.
Demo ElastiService
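As a sketch, an ElastiService for the httpbin service might look like the following. The API group/version, field names, and values here are assumptions based on the placeholders described above; adjust them to match the CRD shipped with your KubeElasti version:

```yaml
apiVersion: elasti.truefoundry.com/v1alpha1   # assumed API group/version
kind: ElastiService
metadata:
  name: httpbin-elasti
  namespace: httpbin                # namespace of the target service
spec:
  service: httpbin                  # service to be managed by elasti
  minTargetReplicas: 1              # replicas to bring up on the first request
  cooldownPeriod: 300               # seconds to wait before considering scale down
  scaleTargetRef:
    apiVersion: apps/v1             # or argoproj.io/v1alpha1 for Argo Rollouts
    kind: Deployment                # or Rollout
    name: httpbin                   # must exactly match the deployment/rollout name
  triggers:
    - type: prometheus              # the only supported trigger type
      metadata:
        query: sum(rate(istio_requests_total{destination_service="httpbin.httpbin.svc.cluster.local"}[1m]))
        serverAddress: http://kube-prometheus-stack-prometheus.monitoring.svc.cluster.local:9090
        threshold: "0.5"            # requests per second
  autoscaler:
    name: httpbin-scaled-object     # name of the KEDA ScaledObject
    type: keda
```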
4. Apply the KubeElasti service configuration
Apply the configuration to your Kubernetes cluster:
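Assuming the manifest was saved as elasti-service.yaml:

```shell
# Apply the ElastiService in the namespace of your target service
kubectl apply -f elasti-service.yaml -n <NAMESPACE>
```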
The pod will be scaled down to 0 replicas if there is no traffic.
5. Test the setup
You can test the setup by sending requests through the ingress controller's load balancer service.
```shell
# For NGINX
kubectl port-forward svc/nginx-ingress-ingress-nginx-controller -n nginx 8080:80

# For Istio
kubectl port-forward svc/istio-ingressgateway -n istio-system 8080:80
```
Start a watch on the target deployment.
Send a request to the service.
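For example (the namespace and Host header are placeholders for your own service):

```shell
# Watch the target deployment's pods scale up from zero
kubectl get pods -n <NAMESPACE> -w

# In another terminal, send a request through the port-forwarded gateway
curl -v http://localhost:8080/ -H "Host: <YOUR_SERVICE_HOST>"
```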
You should see the pods being created and scaled up to 1 replica, and the curl command should return a response from the target service.
If there is no traffic for `cooldownPeriod` seconds, the target service is scaled back down to 0 replicas.
Uninstall
To uninstall KubeElasti, you will need to remove all the installed ElastiServices first. Then, uninstall the Helm release.
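Assuming the release was installed as `elasti` in the `elasti` namespace, and that `elastiservices` is the resource name registered by the CRD (both are assumptions to adjust for your installation):

```shell
# Delete all ElastiService resources first
kubectl delete elastiservices --all --all-namespaces

# Then remove the Helm release
helm uninstall elasti --namespace elasti
```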