Cookbook for managing Ceph nodes and clients
- Ceph Nautilus Release
- CentOS 7
- AlmaLinux 8
This cookbook uses kitchen-terraform to test deploying the various parts of this cookbook across multiple nodes, similar to a production deployment.
- Chef Workstation
- Terraform
- kitchen-terraform
- OpenStack cluster
Ensure you have the following in your .bashrc
(or similar):
export TF_VAR_ssh_key_name="$OS_SSH_KEYPAIR"
- Chef-zero node acting as a Chef Server
- Three-node ceph cluster
- Each ceph node runs mon, mgr, mds, and osd services
- One cephfs client node
- The cephfs client node will mount cephfs from the ceph cluster
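The cookbook performs the cephfs mount on the client for you, but if you want to mount it by hand for debugging, a kernel mount looks roughly like this. The mount point, client name, and secret file path below are placeholders for illustration, not values this cookbook actually uses:

```shell
# Hypothetical manual cephfs kernel mount from the client node.
# node1, the client name, and the secretfile path are all assumptions;
# check ceph auth and your local config for the real values.
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph node1:6789:/ /mnt/cephfs \
  -o name=admin,secretfile=/etc/ceph/admin.secret
```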
First, generate some keys for chef-zero, then run the following suite.
# Only need to run this once
$ chef exec rake create_key
$ kitchen test multi-node
Be patient as this will take a while to converge all of the nodes (approximately 15 minutes).
Unfortunately, kitchen-terraform doesn't support kitchen console, so you will need to log into the nodes manually. To see their IP addresses, run terraform output, which will print all of the IPs.
# You can run the following commands to login to each node
$ ssh centos@$(terraform output -raw node1)
$ ssh centos@$(terraform output -raw node2)
$ ssh centos@$(terraform output -raw node3)
$ ssh centos@$(terraform output -raw cephfs_client)
# Or you can look at the IPs for all of the nodes at once
$ terraform output
ceph_nodes = [
10.1.100.3,
10.1.100.66,
10.1.100.45
]
cephfs_client = 10.1.100.8
chef_zero = 10.1.100.43
node1 = 10.1.100.3
node2 = 10.1.100.66
node3 = 10.1.100.45
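Since the node names are plain terraform outputs, you can also script against them. For example, a quick (hypothetical) loop to check that every ceph node is reachable:

```shell
# Loop over the three ceph nodes and print each one's hostname over ssh.
# Assumes the cluster is deployed and your ssh key is loaded.
for n in node1 node2 node3; do
  ssh centos@"$(terraform output -raw "$n")" hostname
done
```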
Once you're logged into one of the nodes, you should be able to run the following commands:
$ ceph -s
cluster:
id: 7964405e-3e4a-4aee-b5a8-bf5e4f816c5d
health: HEALTH_OK
services:
mon: 3 daemons, quorum node1,node3,node2
mgr: node1(active), standbys: node2, node3
mds: cephfs-1/1/1 up {0=node2=up:active}, 2 up:standby
osd: 9 osds: 9 up, 9 in
data:
pools: 2 pools, 256 pgs
objects: 24 objects, 32.6KiB
usage: 9.04GiB used, 81.0GiB / 90GiB avail
pgs: 256 active+clean
$ ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.08817 root default
-3 0.02939 host node1
0 hdd 0.00980 osd.0 up 1.00000 1.00000
1 hdd 0.00980 osd.1 up 1.00000 1.00000
2 hdd 0.00980 osd.2 up 1.00000 1.00000
-5 0.02939 host node2
3 hdd 0.00980 osd.3 up 1.00000 1.00000
4 hdd 0.00980 osd.4 up 1.00000 1.00000
5 hdd 0.00980 osd.5 up 1.00000 1.00000
-7 0.02939 host node3
6 hdd 0.00980 osd.6 up 1.00000 1.00000
7 hdd 0.00980 osd.7 up 1.00000 1.00000
8 hdd 0.00980 osd.8 up 1.00000 1.00000
All of these nodes are configured using a Chef Server, which is a container running chef-zero. You can interact with the chef-zero server by doing the following:
$ CHEF_SERVER="$(terraform output -raw chef_zero)" knife node list -c test/chef-config/knife.rb
cephfs_client
node1
node2
node3
$ CHEF_SERVER="$(terraform output -raw chef_zero)" knife node edit node1 -c test/chef-config/knife.rb
In addition, on any node that has been deployed, you can re-run chef-client like you normally would on a production system. This should allow you to do development on your multi-node environment as needed. Just make sure you include the knife config; otherwise you will be interacting with our production chef server!
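For example, a re-converge might look like the following sketch. The knife status subcommand is just one way of checking in with chef-zero; any knife command works as long as you pass the test config:

```shell
# On a deployed node, re-run the Chef client as you would in production:
sudo chef-client

# From your workstation, always point knife at the chef-zero container,
# never at the default (production) server:
CHEF_SERVER="$(terraform output -raw chef_zero)" \
  knife status -c test/chef-config/knife.rb
```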
You do not need to use kitchen-terraform directly if you're just doing development; it's primarily useful for testing the multi-node cluster using inspec. You can deploy the cluster using terraform directly by doing the following:
# Sanity check
$ terraform plan
# Deploy the cluster
$ terraform apply
# Destroy the cluster
$ terraform destroy
# To remove all the nodes and start again, run the following test-kitchen command.
$ kitchen destroy multi-node
# To refresh all the cookbooks, use the following command.
$ CHEF_SERVER="$(terraform output -raw chef_zero)" chef exec rake knife_upload
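Putting it together, a typical development loop against a running cluster might look like this sketch (node1 is chosen arbitrarily; any deployed node works):

```shell
# 1. Push your local cookbook changes up to the chef-zero container:
CHEF_SERVER="$(terraform output -raw chef_zero)" chef exec rake knife_upload

# 2. Re-converge one node to pick up the changes:
ssh centos@"$(terraform output -raw node1)" sudo chef-client
```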
- Fork the repository on Github
- Create a named feature branch (like username/add_component_x)
- Write tests for your change
- Write your change
- Run the tests, ensuring they all pass
- Submit a Pull Request using Github
- Author:: Oregon State University [email protected]
Copyright:: 2017-2019 Oregon State University
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.