## Ansible variables used during provisioning
This file lists the ansible variables that can be passed at the time of commissioning a node or at a global level, as described in README.md. The ansible variables can also be passed at the time of setting up a node for discovery, as described in baremetal.md. The variables specified at the global level are merged with the variables specified for a node-level operation, with the latter taking precedence over the former in case of an overlap/conflict.

Setting a variable at the global level when it has the same value across all nodes in a cluster can substantially reduce the number of variables that need to be specified at every node-level operation, and is the recommended way to set variables when possible.
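For example, assuming every node in a cluster uses the same control interface, that interface can be set once at the global level (the variable names are drawn from the sections below; the values are illustrative):

```json
{"env": {}, "control_interface": "eth1", "service_vip": "192.168.2.252"}
```

A node that deviates from the cluster-wide layout can then override just that variable at commission time:

```json
{"control_interface": "eth2"}
```

With both in effect, the node is configured with `control_interface` set to `eth2`, while `env` and `service_vip` are taken from the global level.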
The rest of this document is split into two sections: mandatory variables and commonly used variables. Mandatory variables must be set before a node can be configured. Commonly used variables affect the default ansible behavior, such as deploying a specific scheduler stack or a specific networking mode.

Commonly used variables are further organized into service-specific sub-sections: serf-based discovery, the scheduler stack, contiv networking and contiv storage.

There are several other variables available to provide a good level of programmability in the ansible plays; the reader is encouraged to look at the plays under vendor/ansible.
### Mandatory variables

- `env` is used to set the environment variables that need to be available to the ansible tasks. A common use case of this variable is to set the HTTP proxy information.
  - `env` is specified as a JSON dictionary:

    ```json
    {"env": {"var1": "val1", "http_proxy": "http://my.proxy.url", "https_proxy": "http://my.proxy.url"}}
    ```

  - It should be set to an empty dictionary if no environment variables need to be set:

    ```json
    {"env": {}}
    ```

- `control_interface` identifies the netdevice on the node that will carry the traffic generated by infrastructure applications like etcd, ceph and so on.
  - `control_interface` is specified as a JSON string:

    ```json
    {"control_interface": "eth1"}
    ```

- `netplugin_if` identifies the netdevice on the node that will carry the data traffic generated by the containers networked using the contiv data plane.
  - `netplugin_if` is specified as a JSON string:

    ```json
    {"netplugin_if": "eth2"}
    ```

- `service_vip` identifies an available static IP address that can be used as a virtual IP to provide reachability for contiv services.
  - `service_vip` is specified as a JSON string:

    ```json
    {"service_vip": "192.168.2.252"}
    ```
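Putting the mandatory variables together, a minimal configuration for commissioning a node might look like the following sketch (the interface names and the virtual IP are illustrative):

```json
{
  "env": {},
  "control_interface": "eth1",
  "netplugin_if": "eth2",
  "service_vip": "192.168.2.252"
}
```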
### Optional/commonly used variables
#### Serf-based discovery

- `serf_cluster_name` identifies the name of the cluster that serf uses to discover other peer nodes. You may use this if there are multiple clusters in the same subnet as `control_interface` and you would like serf to only discover the nodes in a specific cluster.
  - `serf_cluster_name` is specified as a JSON string:

    ```json
    {"serf_cluster_name": "cluster-prod-eng"}
    ```
#### Scheduler stack

- `scheduler_provider` identifies the scheduler stack to use. Three stacks are supported: `native-swarm`, `ucp-swarm` and `kubernetes`. The first brings up a swarm cluster using the stock swarm image from Docker Hub. The second brings up a UCP cluster, which bundles swarm in it. The third brings up a kubernetes cluster using the hyperkube container image.
  - `scheduler_provider` is specified as a JSON string:

    ```json
    {"scheduler_provider": "ucp-swarm"}
    ```

- `ucp_bootstrap_node_name` identifies the name (as seen in the `clusterctl nodes get` command) of the node to bootstrap UCP with. This is the first node that is commissioned in the cluster. This variable is mandatory when `scheduler_provider` is set to `ucp-swarm`.
  - `ucp_bootstrap_node_name` is specified as a JSON string:

    ```json
    {"ucp_bootstrap_node_name": "cluster-node1-0"}
    ```

- `ucp_license_file` identifies the path to the UCP license file on the host where ansible is run. This can be used to pass the UCP license at the time of configuring the UCP cluster.
  - `ucp_license_file` is specified as a JSON string:

    ```json
    {"ucp_license_file": "/path/to/ucp/licence"}
    ```
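For a UCP-based cluster, the scheduler variables above are typically passed together; a sketch (the node name and license path are illustrative):

```json
{
  "scheduler_provider": "ucp-swarm",
  "ucp_bootstrap_node_name": "cluster-node1-0",
  "ucp_license_file": "/path/to/ucp/licence"
}
```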
#### Contiv Networking

- `contiv_network_mode` identifies the mode of operation for netplugin. Netplugin supports two modes: `aci` and `standalone`. The first is used to bring up netplugin in a Cisco APIC managed fabric deployment, while the second can be used when deploying netplugin with standalone Layer2/Layer3 switches.
  - `contiv_network_mode` is specified as a JSON string:

    ```json
    {"contiv_network_mode": "aci"}
    ```
The following variables are relevant when `contiv_network_mode` is set to `aci`:
- `apic_url` specifies the URL for APIC. This is a mandatory variable in aci mode.
  - `apic_url` is specified as a JSON string:

    ```json
    {"apic_url": "https://<apic-server-url>:443"}
    ```

- `apic_username` specifies the username for APIC. This is a mandatory variable in aci mode.
  - `apic_username` is specified as a JSON string:

    ```json
    {"apic_username": "my-user"}
    ```

- `apic_password` specifies the password for APIC. This is a mandatory variable in aci mode.
  - `apic_password` is specified as a JSON string:

    ```json
    {"apic_password": "my-password"}
    ```

- `apic_leaf_nodes` specifies the full paths of the leaf nodes managed by APIC. This is a mandatory variable in aci mode.
  - `apic_leaf_nodes` is specified as a JSON string:

    ```json
    {"apic_leaf_nodes": "topology/pod-1/node-101,topology/pod-1/node-102"}
    ```
- `apic_phys_domain` specifies the name of the physical domain created in APIC.
  - `apic_phys_domain` is specified as a JSON string:

    ```json
    {"apic_phys_domain": "allVlans"}
    ```

- `apic_epg_bridge_domain` can optionally be used to provide a pre-created bridge domain. The bridge domain should have already been created under the tenant `common`.
  - `apic_epg_bridge_domain` is specified as a JSON string:

    ```json
    {"apic_epg_bridge_domain": "my-bd"}
    ```

- `apic_contracts_unrestricted_mode` can optionally be used to allow unrestricted communication between EPGs.
  - `apic_contracts_unrestricted_mode` is specified as a JSON string:

    ```json
    {"apic_contracts_unrestricted_mode": "yes"}
    ```
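Taken together, an aci-mode networking configuration might look like the following sketch (all values are illustrative, and the optional variables can be omitted):

```json
{
  "contiv_network_mode": "aci",
  "apic_url": "https://<apic-server-url>:443",
  "apic_username": "my-user",
  "apic_password": "my-password",
  "apic_leaf_nodes": "topology/pod-1/node-101,topology/pod-1/node-102",
  "apic_phys_domain": "allVlans"
}
```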
The following variables are relevant when `contiv_network_mode` is set to `standalone`:
- `fwd_mode` specifies whether netplugin shall bridge or route the packets. Netplugin supports two forwarding modes: `bridge` and `routing`.
  - `fwd_mode` is specified as a JSON string:

    ```json
    {"fwd_mode": "routing"}
    ```
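Correspondingly, a standalone-mode configuration combines the network mode with the forwarding behavior; a sketch:

```json
{
  "contiv_network_mode": "standalone",
  "fwd_mode": "routing"
}
```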
The following variables are relevant when `scheduler_provider` is set to `kubernetes`:
- `netplugin_mode` specifies whether netplugin operates in docker's libnetwork plugin mode or in the CNI mode of kubernetes. Netplugin supports two modes: `docker` (the default) and `kubernetes`.
  - `netplugin_mode` is specified as a JSON string:

    ```json
    {"netplugin_mode": "kubernetes"}
    ```

  Note: when `scheduler_provider` is set to `kubernetes`, `netplugin_mode` must be set to `kubernetes`; otherwise it can be left unset.
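Following that rule, a kubernetes deployment would pair the two variables; a sketch:

```json
{
  "scheduler_provider": "kubernetes",
  "netplugin_mode": "kubernetes"
}
```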
#### Contiv Storage

TBD