Kubernetes is a platform for managing application containers across multiple hosts. It provides many management features for container-oriented applications, such as auto scaling, rolling deployments, compute resource management, and volume management. Like containers themselves, it is designed to run anywhere, so we are able to run it on bare metal, in our data center, on a public cloud, or even on a hybrid cloud. As part of our services we have developed configurations using this technology so that our clients can deploy our CENIT integration platform in their customized clusters.
Amazon Elastic Kubernetes Service (Amazon EKS) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. There are two ways to create and interact with a Kubernetes cluster in Amazon EKS. The first is eksctl, a simple command line utility for creating and managing Kubernetes clusters on Amazon EKS, and the second is Amazon EKS in the AWS Management Console.
To install eksctl, you first need to install the AWS Command Line Interface (AWS CLI). You can install the AWS CLI and its dependencies on most Linux distributions by using pip, a package manager for Python.
pip3 install awscli --upgrade
Both eksctl and the AWS CLI require that you have AWS credentials configured in your environment. The aws configure command is the fastest way to set up your AWS CLI installation for general use.
aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json
To install eksctl, download and extract the latest release of eksctl with the following command:
curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
Move the extracted binary to /usr/local/bin:
sudo mv /tmp/eksctl /usr/local/bin
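Once the binary is in place you can verify the installation; this is just a quick sanity check, nothing specific to our setup:
eksctl version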
Kubernetes uses a command line utility called kubectl for communicating with the cluster API server. The kubectl binary is available in many operating system package managers, and this option is often much easier than a manual download and install process. You can follow the instructions for your specific operating system or package manager in the Kubernetes documentation to install it.
NOTE: You must use a kubectl version that is within one minor version difference of your Amazon EKS cluster control plane. For example, a 1.11 kubectl client should work with Kubernetes 1.10, 1.11, and 1.12 clusters.
Download the Amazon EKS-vended kubectl binary for your cluster's Kubernetes version from Amazon S3:
curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.9/2019-06-21/bin/linux/amd64/kubectl
Apply execute permissions to the binary:
chmod +x ./kubectl
Copy the binary to a folder in your PATH. If you have already installed a version of kubectl, then we recommend copying the new binary to $HOME/bin/kubectl and ensuring that $HOME/bin comes first in your $PATH.
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH
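With the binary installed, check the client version against the version-skew note above (a plain kubectl command, nothing specific to our configuration):
kubectl version --short --client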
Now we can create our Amazon EKS cluster and a worker node group with the eksctl command line utility or using the AWS Management Console.
NOTE: The choice of how the cluster should be built is left to you.
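If you go the eksctl route, a cluster with a worker node group can be created with a single command. The cluster name and region below match the examples used in this guide, while the node type and node count are placeholder assumptions you should adjust to your own needs:
eksctl create cluster --name ProdIO --region us-west-2 --nodegroup-name standard-workers --node-type m5.large --nodes 3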
To deploy all Cenit configurations, just run the following command:
kubectl apply -f ./
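After applying the manifests you can watch the rollout to confirm that everything came up; the deployment name below matches the one defined later in cenit-deployment.yml:
kubectl get pods
kubectl rollout status deployment/cenit-deployment-backend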
We have two clusters, DevIO for development and ProdIO for production, and in both cases we have different config files. Below we explain each config (object) file:
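Since the manifests differ between the two clusters, kubectl should be pointed at the intended cluster before running the apply above. A typical way to do this with EKS (cluster names as above, region as configured earlier) is:
aws eks update-kubeconfig --name DevIO --region us-west-2
kubectl config get-contexts
kubectl config use-context <context-name-from-previous-output>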
To store all local data generated by 3rd party services in our cluster we configured the Amazon EBS Container Storage Interface (CSI) driver. This driver provides a CSI interface that allows Amazon EKS clusters to manage the lifecycle of Amazon EBS volumes for persistent volumes. It is the only driver that allows Kubernetes to take snapshots of the data stored in the volumes. To see how to deploy it you can follow this link: CSI
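The PVC below refers to a storage class named ebs-sc. A minimal sketch of such a StorageClass backed by the EBS CSI driver could look like this (the volume type and binding mode are assumptions, adjust them to your needs):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp2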
mongo-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 100Gi
PersistentVolumeClaims (PVCs) are the configuration objects we use to claim storage for our data, in this case for all MongoDB data (we follow the same philosophy for RabbitMQ and Redis).
NOTE: For development purposes we test all our configurations against an instance of a containerized MongoDB service, but we also offer a version of our product that uses an external database provider such as Atlas or other services.
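The claim is then mounted by the pod that runs the database. The exact MongoDB deployment is not shown in this guide, but a minimal sketch of how such a pod would consume mongo-pvc (the container image, labels and mount path here are assumptions) looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      db: mongo
  template:
    metadata:
      labels:
        db: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:4.0
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-storage
              mountPath: /data/db
      volumes:
        - name: mongo-storage
          persistentVolumeClaim:
            claimName: mongo-pvc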
rabbitmq-clusterip-service.yml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    amqp: rabbitmq
  ports:
    - name: consumers-port
      port: 5672
      targetPort: 5672
    - name: gui-port
      port: 15672
      targetPort: 15672
Services are Kubernetes objects that provide a reliable way to access applications running on pods; they are what makes pods consistently accessible. Services connect pods together, or provide access from outside the cluster. The ClusterIP service type is the default and only provides access on a cluster-internal IP. We use this service type for internal traffic only.
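Inside the cluster the service is reachable through its DNS name, which is how the Cenit containers are wired to it: the RABBITMQ_BIGWIG_TX_URL variable in the deployment below points at rabbitmq-cluster-ip-service, and Redis is reached the same way through redis-cluster-ip-service. A quick way to check what sits behind a service is:
kubectl get svc rabbitmq-cluster-ip-service
kubectl describe svc rabbitmq-cluster-ip-service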
A Deployment is a Kubernetes resource that can run a set of identical pods (one or more), monitor the state of each pod and update it as necessary, among other things. Our main configuration file is cenit-deployment.yml. Here we have configured the deployment of our two main services (the Cenit backend and frontend), the ingress configuration and the connection with the platform containers. The most important settings are the image label and the environment variables. The image label allows us to change the Cenit image in use; we have two versions available (cenitio:latest and cenitio:frontend), the first one being the containerized version prepared only to process and execute all tasks, and the second one intended only for interacting with the Cenit UI. Finally, the environment variables influence the operation and configuration of each container.
cenit-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cenit-deployment-backend
spec:
  replicas: 2
  selector:
    matchLabels:
      platform: cenit-backend
  template:
    metadata:
      labels:
        platform: cenit-backend
    spec:
      volumes:
        - name: host-docker-socket
          hostPath:
            path: /var/run/docker.sock
        - name: host-volume
          hostPath:
            path: /
      imagePullSecrets:
        - name: regcred
      containers:
        - name: cenit-backend
          image: cenitgroup/cenitio:latest
          imagePullPolicy: Always
          resources:
            limits:
              memory: "4Gi"
              cpu: "1000m"
          volumeMounts:
            - name: host-docker-socket
              mountPath: /var/run/docker.sock
            - name: host-volume
              mountPath: /host
              readOnly: true
          securityContext:
            capabilities:
              add:
                - SYS_PTRACE
          command: ["bundle", "exec", "unicorn", "-c", "config/unicorn.rb"]
          env:
            - name: NRIA_LICENSE_KEY
              value:
            - name: SKIP_DB_INITIALIZATION
              value: "false"
            - name: SKIP_RABBIT_MQ
              value: "false"
            - name: SCHEDULER_LOOKUP_INTERVAL
              value:
            - name: UNICORN_WORKERS
              value:
            - name: MAXIMUM_UNICORN_CONSUMERS
              value:
            - name: LOOKUP_SCHEDULER_OFF
              value:
            - name: LOAD_DELAYED_MESSAGES_ON_START
              value:
            - name: REDIS_HOST
              value: redis-cluster-ip-service
            - name: AWS_S3_REGION
              value:
            - name: AWS_S3_BUCKET_PREFIX
              value:
            - name: AWS_ACCESS_KEY_ID
              value:
            - name: AWS_SECRET_ACCESS_KEY
              value:
            - name: DEFAULT_FILE_STORE
              value:
            - name: HOMEPAGE
              value:
            - name: DOCS_URL
              value:
            - name: TENANT_CREATION_DISABLED
              value:
            - name: GMAIL_USERNAME
              value:
            - name: GMAIL_PASSWORD
              value:
            - name: MAIL_DOMAIN
              value:
            - name: HOST
              value:
            - name: RABBITMQ_BIGWIG_TX_URL
              value: "amqp://<USER>:<PASSWORD>@rabbitmq-cluster-ip-service/<VHOST>"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cenit-deployment-frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      platform: cenit-frontend
  template:
    metadata:
      labels:
        platform: cenit-frontend
    spec:
      volumes:
        - name: host-docker-socket
          hostPath:
            path: /var/run/docker.sock
        - name: host-volume
          hostPath:
            path: /
      imagePullSecrets:
        - name: regcred
      containers:
        - name: cenit-frontend
          image: cenitgroup/cenitio:frontend
          imagePullPolicy: Always
          resources:
            limits:
              memory: "4Gi"
              cpu: "1000m"
          volumeMounts:
            - name: host-docker-socket
              mountPath: /var/run/docker.sock
            - name: host-volume
              mountPath: /host
              readOnly: true
          securityContext:
            capabilities:
              add:
                - SYS_PTRACE
          command: ["bundle", "exec", "unicorn", "-c", "config/unicorn.rb"]
          env:
            - name: NRIA_LICENSE_KEY
              value:
            - name: ENABLE_RERECAPTCHA
              value:
            - name: MAXIMUM_UNICORN_CONSUMERS
              value:
            - name: BASE_MULTIPLIER_ACTIVE_TASKS
              value:
            - name: SCHEDULER_LOOKUP_INTERVAL
              value:
            - name: UNICORN_WORKERS
              value:
            - name: LOOKUP_SCHEDULER_OFF
              value:
            - name: DELAY_TASKS
              value:
            - name: REDIS_HOST
              value: redis-cluster-ip-service
            - name: AWS_S3_REGION
              value:
            - name: AWS_S3_BUCKET_PREFIX
              value:
            - name: AWS_ACCESS_KEY_ID
              value:
            - name: AWS_SECRET_ACCESS_KEY
              value:
            - name: DEFAULT_FILE_STORE
              value:
            - name: HOMEPAGE
              value:
            - name: DOCS_URL
              value:
            - name: TENANT_CREATION_DISABLED
              value:
            - name: GMAIL_USERNAME
              value:
            - name: GMAIL_PASSWORD
              value:
            - name: MAIL_DOMAIN
              value:
            - name: HOST
              value:
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: cenit-cluster-ip-service
spec:
  selector:
    platform: cenit-frontend
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cenit-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-body-size: 800m
spec:
  rules:
    - host: <HOST>
      http:
        paths:
          - path: /
            backend:
              serviceName: cenit-cluster-ip-service
              servicePort: 8080
NOTE: If you change the container image you need to change the associated environment variables too, because each container has its own environment variables.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. Ingress may provide load balancing, SSL termination and name-based virtual hosting. The NGINX Ingress controller we use is built around the Kubernetes Ingress resource, using a ConfigMap to store the NGINX configuration. The goal of this Ingress controller is the assembly of a configuration file (nginx.conf), so the main implication of this design is the need to reload NGINX after any change in the configuration file.
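Once the controller is running and the manifests are applied, the resulting routing can be inspected with standard kubectl commands (nothing here is specific to our setup beyond the resource name):
kubectl get ingress cenit-ingress
kubectl describe ingress cenit-ingress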
The current base image used to build our service is based on newrelic/infrastructure:latest-centos, which lets us constantly send performance information and other parameters of interest about our cluster to the New Relic service. Below is the Dockerfile used to build the backend container of the Cenit application:
# Build stage: compile Ruby 2.5.5 with jemalloc via ruby-build
FROM drecom/centos-base:7 AS rubybuild
RUN git clone git://github.com/rbenv/ruby-build.git /usr/local/plugins/ruby-build \
&& /usr/local/plugins/ruby-build/install.sh
RUN yum -y install jemalloc-devel
RUN RUBY_CONFIGURE_OPTS=--with-jemalloc ruby-build 2.5.5 /usr/local/
#------------------------------------------------------------------------
# Runtime stage: New Relic infrastructure agent image (CentOS based)
#------------------------------------------------------------------------
FROM newrelic/infrastructure:latest-centos
ENV PATH /usr/local/bin:$PATH
RUN yum -y install \
epel-release
RUN yum update -y; \
curl -sL https://rpm.nodesource.com/setup_10.x | bash -; \
yum -y install \
jemalloc-devel \
make \
gcc-c++ \
gcc \
git \
openssl-devel \
zlib-devel \
mysql-devel \
redis \
sqlite-devel \
bzip2 \
nodejs \
ImageMagick \
ImageMagick-devel
COPY --from=rubybuild /usr/local /usr/local
RUN gem update --system
RUN git clone --single-branch --branch master https://github.com/cenit-io/cenit.git /var/www/cenit
WORKDIR /var/www/cenit
RUN mkdir -p /var/www/shared/log
RUN mkdir -p /var/www/shared/pids
RUN mkdir -p /var/www/shared/sockets
ENV RAILS_ENV='production'
ENV RACK_ENV='production'
ENV UNICORN_CENIT_SERVER=true
RUN gem install bundler:1.17.3
RUN bundle install --jobs 20 --retry 5 --without development test
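A sketch of how this image would be built and pushed follows; the repository name matches the one referenced in the deployment, while the exact tag and registry workflow are up to you:
docker build -t cenitgroup/cenitio:latest .
docker push cenitgroup/cenitio:latest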