# Development notes

## What information is required as input to the cluster/nodes?
Groups:
- `login`
- `compute`
- `control`
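As a rough sketch (hostnames and environment layout are hypothetical), a minimal YAML inventory defining these groups might look like:

```yaml
# environments/mycluster/inventory/hosts.yml - hypothetical example
all:
  children:
    control:
      hosts:
        mycluster-control-0:
    login:
      hosts:
        mycluster-login-0:
    compute:
      hosts:
        mycluster-compute-0:
        mycluster-compute-1:
```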
Group/host vars (a worked example follows this list):
- Cluster name. Var `openhpc_cluster_name`. REQUIRED in environment inventory.
- Slurmctld address. Var `openhpc_slurm_control_host`. Default in common:all:openhpc = `{{ groups['control'] | first }}`. NB: maybe should use `.internal_address`?
- Partition definitions. Var `openhpc_slurm_partitions`. Default in common:all:openhpc is a single 'compute' partition. NB: requires group `"{{ openhpc_cluster_name }}_compute"` in the environment inventory. Could check groups during validation??
- If using `nfs`: var `nfs_server`. Default in common:all:nfs is `nfs_server_default` -> `"{{ hostvars[groups['control'] | first].internal_address }}"`.
All the "service endpoints" in common:all:defaults:
elasticsearch_address: "{{ hostvars[groups['opendistro'].0].api_address }}" prometheus_address: "{{ hostvars[groups['prometheus'].0].api_address }}" openondemand_address: "{{ hostvars[groups['openondemand'].0].api_address if groups['openondemand'] | count > 0 else '' }}"
- All the secrets in environment:all:secrets - see the secret role's defaults:
  - grafana, elasticsearch, mysql (x2) passwords (all potentially depending on group placement)
  - munge key (for all openhpc nodes)
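As a concrete illustration of the above (a sketch only: the file path and values are hypothetical, while the variable names and jinja expressions follow the defaults described above), the per-cluster inputs might be set like:

```yaml
# environments/mycluster/inventory/group_vars/all/overrides.yml - hypothetical path
openhpc_cluster_name: mycluster  # REQUIRED

# Default shown for reference; possibly better as .internal_address, per the note above:
openhpc_slurm_control_host: "{{ groups['control'] | first }}"

# Default is a single 'compute' partition; this requires a "mycluster_compute"
# group to exist in the environment inventory:
openhpc_slurm_partitions:
  - name: compute

# Only needed if using nfs; the default resolves to the control node's internal address:
nfs_server: "{{ hostvars[groups['control'] | first].internal_address }}"
```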
## Which roles can we ONLY run the install tasks from, to build a cluster-independent(*)/no-config image?

(*) It's not really cluster-independent, as which features are turned on where may vary.
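One way to make this split explicit (a sketch only, not necessarily how the existing roles are factored) is to separate each role's install and runtime tasks and tag them, so an image build runs only the cluster-independent parts:

```yaml
# roles/example/tasks/main.yml - hypothetical role layout
- import_tasks: install.yml  # package installs only, no cluster-specific config
  tags: install
- import_tasks: runtime.yml  # templates config from inventory/group vars
  tags: runtime
```

An image build could then run e.g. `ansible-playbook site.yml --tags install` against a builder host, leaving all runtime-tagged configuration to run at first deployment against the real inventory.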