This repository contains tools and Helm charts to help deploy the Elastic Stack on Kubernetes in Azure Container Service (ACS). You can now try this solution template in the following regions: East US, South Central US and West Europe.
- Deploy a Kubernetes cluster on Azure.
- Deploy a virtual machine that serves as the controller node to manage and configure the Kubernetes cluster on Azure.
- Register the controller node's FQDN as the entry point to the Kubernetes dashboard.
- Authentication supported for the Kubernetes dashboard:
  - Username / Password
  - Azure Active Directory OAuth 2.0
- Deploy an Azure Container Registry if no public registry is provided.
- Build Docker images for the Elastic Stack and push them to the Azure Container Registry. If a public registry that stores Docker images for the Elastic Stack is provided, this step is skipped.
- Install the Elastic Stack, defined as Helm charts, on Kubernetes.
- An Azure subscription. If you do not have an Azure subscription, you can sign up for an Azure Free Trial Subscription.
- Log in to your Azure portal.
- Follow the tutorial Create Azure Service Principal using Azure portal to create an Azure Service Principal and assign it the `Contributor` role on your subscription. The subscription is the one where you will deploy the Elastic Stack. You can also create it with the Azure CLI, as sketched below.

  Note: the `Application ID`, `Password` and `Tenant ID` will be used in later stages of the deployment.
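
  If you prefer the command line, an equivalent service principal can also be created with the Azure CLI. This is a minimal sketch, assuming the Azure CLI is installed and logged in; the service principal name is an arbitrary placeholder and `<subscription-id>` must be replaced with your own subscription ID.

  ```sh
  # Sketch: create a service principal with Contributor access on the target subscription.
  # In the JSON output, appId, password and tenant correspond to the
  # Application ID, Password and Tenant ID used later in the deployment.
  az ad sp create-for-rbac --name elastic-stack-sp \
    --role Contributor \
    --scopes /subscriptions/<subscription-id>
  ```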
- Go to Azure Marketplace, find the `Elastic Stack on Kubernetes` solution template and click `Create`.
- In the `Basics` panel, `Controller Username` and `Controller Password` need to be valid Ubuntu credentials and will also be used to access Kibana. The password must be at least 12 characters long and contain at least one lower-case letter, one upper-case letter, one digit and one special character. `Resource Group` should be a new or empty resource group in which to create your Kubernetes cluster.

  Note: because Azure Container Service - Kubernetes (AKS) is in preview, it is not yet available in all regions globally. Deployments in the following regions have been verified: East US, South Central US and West Europe. More regions will be supported as AKS enters general availability. Not all VM sizes are supported in all regions; you can check product availability in Azure products available by region.
- In the `Common Settings` panel, provide the following:
  - `Dns prefix` - the DNS name prefix of your Kubernetes controller. The `dns prefix` and the region location together form your Kubernetes dashboard host name, so the `dns prefix` and `location` pair must be globally unique.
  - `Registry url` - the URL of a public registry that hosts the `elasticsearch`, `kibana` and `logstash` Docker images. If this field is empty, the solution automatically creates an Azure Container Registry instance.

  In the following fields, enter your Azure Event Hub connection information. If you want Logstash to get logs from a log shipper instead of Azure Event Hub, keep `Event hub namespace`/`key name`/`key value` as `undefined`. The event hub namespace, key name, key value and event hubs together form the event hub connection string: `Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key-value>;EntityPath=<eventhub-name>`. The key should be granted `listen` access.
  - `Event hub namespace` - e.g. "myeventhub".
  - `Event hub key name` - under the event hub's `SETTINGS`, find `Shared access policies`, e.g. "RootManageSharedAccessKey".
  - `Event hub key value` - the SAS policy key value (a CLI lookup example follows this list of fields).
  - `List of event hubs` - under the event hub's `ENTITIES`, find `Event Hubs` and list the event hubs from which you want to pull events, e.g. "insights-logs-networksecuritygroupevent,insights-logs-networksecuritygrouprulecounter". The listed event hubs must already exist and must be comma separated. If you are pulling events out of event hubs with different partition counts, you are advised to deploy multiple instances of the solution.
  - `Event hub partition count` - the partition count of the event hubs (all listed event hubs must have the same partition count).
  - `Thread wait interval(s)` - the Logstash Event Hub plugin thread wait interval in seconds.
  - `Data node storage account sku` - the storage account SKU used by the Elasticsearch data nodes.
  - `Authentication Mode` - the authentication mode for accessing the Kubernetes dashboard:
    - `Basic Authentication` mode uses `Controller Username` and `Controller Password`.
    - `Azure Active Directory` mode uses an Azure AD service principal for authentication. Provide the information of the service principal you created earlier:
      - `Azure AD client ID` - Application ID
      - `Azure AD client secret` - your generated key
      - `Azure AD tenant` - Tenant ID
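
  If you prefer the Azure CLI to the portal for looking up the SAS key, a command along these lines reads it. This is a sketch: the resource group and namespace names are placeholders, and the policy name is assumed to be `RootManageSharedAccessKey`.

  ```sh
  # Sketch: read the primary key of a shared access policy on an Event Hubs namespace.
  # The printed value is what goes into the "Event hub key value" field.
  az eventhubs namespace authorization-rule keys list \
    --resource-group <resource-group> \
    --namespace-name <event-hub-namespace> \
    --name RootManageSharedAccessKey \
    --query primaryKey --output tsv
  ```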
- In the `Kubernetes Cluster Settings` panel, provide the following:
  - `Agent Count` - the number of agent nodes in the Kubernetes cluster.
  - `Agent Node Size` - the VM size of the agent nodes (see the VM size lookup below).
  - `Master Count` - the number of master nodes in the Kubernetes cluster.
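
  Since not every VM size is available in every region, it can help to check what your target region offers before choosing `Agent Node Size`. A minimal sketch using the Azure CLI (the region is an example):

  ```sh
  # Sketch: list the VM sizes offered in a given region, e.g. East US.
  az vm list-sizes --location eastus --output table
  ```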
- In the `Security Settings` panel, provide the following. You can generate the SSH public key/private key pair using js-keygen (a local key-generation example follows this step).
  - `SSH public key` - the SSH public key used by the controller node to talk to the Kubernetes cluster.
  - `Base64 encoded SSH private key` - the base64-encoded SSH private key.

  The `Service principal client ID` and `Service principal client secret` are used to create and manage the Kubernetes cluster; they can be the client ID and secret you obtained when creating the service principal earlier. Ensure the service principal used here has Contributor access to your subscription and is in the same AAD tenant as your subscription.
  - `Service principal client ID` - Application ID
  - `Service principal client secret` - your generated key
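
  If you would rather generate the key pair locally instead of with js-keygen, a sketch along these lines produces both values; the file name is arbitrary.

  ```sh
  # Sketch: generate an RSA key pair and derive the two fields the panel asks for.
  ssh-keygen -t rsa -b 2048 -f kube_ssh_key -N ""
  # "SSH public key" field:
  cat kube_ssh_key.pub
  # "Base64 encoded SSH private key" field (single line, no wrapping);
  # on macOS use: base64 < kube_ssh_key | tr -d '\n'
  base64 -w 0 kube_ssh_key
  ```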
- Click `OK` in the `Summary` panel and create the solution.
  The deployment may take around half an hour. You can continue with the next step while it is in progress.
- If you chose the `Azure Active Directory` authentication mode for your Kubernetes dashboard in the `Common Settings` panel, you need to set the redirect information on the Azure Service Principal you created earlier:
  - Go to your Azure Service Principal: click `Azure Active Directory` -> `App registrations`, search for your Service Principal name and click it.
  - Spell out your Kubernetes dashboard host name and note it as `<host-name>`. The format is `http://<dns-prefix>control.<resource-location>.cloudapp.azure.com`, where `dns-prefix` is the DNS name prefix you specified during deployment and `resource-location` is the region where you deployed your Elastic Stack. Deployments in the following regions have been verified: East US, South Central US and West Europe.
  - Set the sign-on URL: on the `Settings` page, click `Properties`, set the `Home page URL` to the `<host-name>` you spelled out, and click `Save`.
  - Set the redirect URL: on the `Settings` page, click `Reply URLs`, remove the existing URL, add the URL `<host-name>/callback`, and click `Save`.
  - Grant your Service Principal permissions: on the `Settings` page, click `Required permissions` -> `Windows Azure Active Directory`, then tick `Read all users' basic profiles` and `Sign in and read user profile`. Click `Save` in the `Enable Access` pane, then `Grant Permissions` in the `Required permissions` pane, and click `Yes` to confirm the action.
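
  To double-check the registration from the command line, you can inspect the app with the Azure CLI; the application ID placeholder is the one you noted earlier, and `<host-name>/callback` should appear among the reply/redirect URLs in the output.

  ```sh
  # Sketch: show the app registration and confirm the reply URL was saved.
  az ad app show --id <application-id>
  ```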
- After the deployment succeeds, you can find the Kubernetes dashboard and the Kibana/Elasticsearch/Logstash endpoints.
- You can access your Kubernetes dashboard at `http://<dns-prefix>control.<resource-location>.cloudapp.azure.com/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#!/overview?namespace=elk-cluster-ns`. The namespace is `elk-cluster-ns`.
- Find the Kibana/Logstash endpoints under `Discovery and Load Balancing` -> `Services` on your Kubernetes dashboard. The Kibana dashboard's credentials are the same controller credentials you specified in the `Basics` panel.
- To manage the Kubernetes cluster, you can use `kubectl` on `controllervm`; see the example below. The SSH credentials are the same as those specified in the `Basics` panel.
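
  For example, assuming the controller credentials and DNS prefix you specified during deployment:

  ```sh
  # Sketch: SSH to the controller VM, then inspect the cluster and the ELK services.
  ssh <controller-username>@<dns-prefix>control.<resource-location>.cloudapp.azure.com
  kubectl get nodes
  kubectl get pods --namespace elk-cluster-ns
  kubectl get svc --namespace elk-cluster-ns   # external IPs of the kibana/logstash services
  ```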
The solution supports two ways to ship logs to the Elastic Stack:
- Ingest logs from event hub(s) via the Logstash input plugin for data from Azure Event Hubs. You need to define the index pattern `wad` in Kibana (see Index Patterns). To learn more, see the Logstash input plugin for data from Event Hubs.
- Log shippers, e.g. Filebeat.
- For resource deployment failures, you can find more information in the Azure portal.
- For solution template failures, you can extract logs by SSHing to `controllervm`; see the example below. The deployment log is at `/tmp/output.log`.
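
  For example, using the same SSH credentials as above:

  ```sh
  # Sketch: read the tail of the solution deployment log on the controller VM.
  ssh <controller-username>@<dns-prefix>control.<resource-location>.cloudapp.azure.com \
    "tail -n 200 /tmp/output.log"
  ```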
- Access kubernetes using web UI (dashboard)
- Manage Kubernetes using kubectl
- Scale agent nodes in a Container Service cluster
- Communication between Kubernetes master and node
- Ship log to logstash using log shipper filebeat
- Azure Event Hubs
- Stream Azure Diagnostic Logs to an Event Hubs Namespace
This project is under the MIT license.
`config/openidc.lua` is derived from https://github.com/pingidentity/lua-resty-openidc with some modifications to satisfy requirements, and this file (`config/openidc.lua`) is under the Apache 2.0 license.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.