We can use Vault's SSH secrets engine to generate signed certificates to access your machines via SSH.
This module simply sets up the roles and CAs for you. You will still need to write the appropriate policies for your users to generate the SSH certificates.
You must have Terraformed the core module first. In addition, you must have at least initialised and unsealed the Vault servers.
Refer to the documentation on the Terraform Vault provider for details on how you can provide a Vault token for this Terraform operation. In general, you might want to do this with a Root token.
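For example, a minimal sketch of providing the token through environment variables (the address below is a placeholder, and this assumes you have already obtained a suitable token, e.g. via `vault login`):

```bash
# The Terraform Vault provider reads VAULT_ADDR and VAULT_TOKEN from the
# environment when they are not set explicitly in the provider block.
export VAULT_ADDR="https://vault.example.com:8200"  # placeholder address
export VAULT_TOKEN="$(cat ~/.vault-token)"          # e.g. a root token saved by `vault login`

terraform init
terraform apply
```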
After you have applied this module, a key will be set in Consul's KV store. The default `user_data` scripts of the Core's servers and clients will check for the presence of this key in Consul to configure themselves accordingly.
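You can inspect the keys that were written using the Consul CLI (a sketch, assuming the default `terraform/` prefix described later in this document):

```bash
# List the keys written by this module for all server types
consul kv get -recurse "terraform/vault-ssh/"
```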
You can update the Consul and Nomad servers as you usually would. Remember to do this one at a time, especially for Consul: if more servers go down than Consul's Raft consensus can tolerate, the Consul cluster will become unavailable and new servers will not be able to configure themselves.
However, for Vault, you must take care to ensure the following while you are updating them:
- At least one Vault instance must be unsealed. Otherwise the new Vault servers cannot get the certificate.
- You must make sure to do this one instance at a time.
- Make sure you unseal new instances as they are launched (a quick check is sketched after this list).
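For example, before replacing the next instance you can verify the seal status and unseal the newly launched server (a sketch, assuming you can reach each instance's Vault API directly):

```bash
# Check whether an instance is unsealed ("Sealed  false" in the output)
vault status

# Unseal a newly launched instance (repeat with enough key shares)
vault operator unseal
```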
There is no way to restrict the addresses that a signed key can access via SSH. To allow more granular control over the types of servers a user can SSH into, this module mounts four SSH secrets engines, one for each type of server provisioned by the core module:
- Consul Server
- Vault Server
- Nomad Server
- Nomad Client
You can use the mount paths for each secret engine to control access.
For each mount point, the role `default` is created.
This module does not create the policies that allow users to access the SSH secrets engine. Thus, by default, no user except for root token holders will be able to access the key signing facility.
For example, to allow a user to access Nomad clients mounted at `ssh_nomad_client` with the `default` role, the following policy would work:
path "ssh_nomad_client/sign/default" {
capabilities = ["create", "update"]
}
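To put the policy into effect, you could register it with Vault and attach it to a token or to your auth method's roles. A sketch (the policy and file names here are arbitrary):

```bash
# Save the policy above as nomad-client-ssh.hcl, then register it with Vault
vault policy write nomad-client-ssh nomad-client-ssh.hcl

# Issue a token carrying the policy (or attach it to your auth method's roles)
vault token create -policy=nomad-client-ssh
```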
The `vault ssh` command is a helper that automates this process.
In general, you will need to do the following:
- Sign your public key, via Vault's API, with the CA key for the type of servers you want to access.
- SSH into the machine using a combination of your private key and the signed public key.
For example, assuming the default mount point for Nomad Clients, the default SSH private key at `~/.ssh/id_rsa`, and the public key at `~/.ssh/id_rsa.pub`, we can do the following:
```bash
vault ssh \
    -mode ca \
    -mount-point "ssh_nomad_client" \
    -role default \
    user@<nomad-client-address>
```
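Alternatively, the two steps can be performed by hand. A sketch, assuming the same mount, role, and key paths as above:

```bash
# 1. Have Vault sign the public key and save the certificate next to the key pair
vault write -field=signed_key ssh_nomad_client/sign/default \
    public_key=@"$HOME/.ssh/id_rsa.pub" > "$HOME/.ssh/id_rsa-cert.pub"

# 2. SSH in with the private key; OpenSSH picks up ~/.ssh/id_rsa-cert.pub automatically
ssh -i "$HOME/.ssh/id_rsa" user@<nomad-client-address>
```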
If you have a new "server type" or a different category of servers to control access to, you can make use of the automated bootstrap and configuration that this repository provides. You can always configure `sshd` manually if you elect not to do so.
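If you do configure `sshd` yourself, the essential step is to make the host trust the mount's CA public key. A minimal sketch, assuming the `ssh_nomad_client` mount and that `VAULT_ADDR` points at your Vault servers:

```bash
# Fetch the signing CA's public key for the mount (this endpoint is unauthenticated)
curl -o /etc/ssh/trusted-user-ca-keys.pem "$VAULT_ADDR/v1/ssh_nomad_client/public_key"

# Trust certificates signed by that CA, then restart sshd
echo "TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem" | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart sshd   # the service may be named "ssh" on some distributions
```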
For example, you might want to add a separate cluster of Nomad clients and have their SSH access control be done separately.
The following pre-requisites must be met when you want to make use of the automation:
- You should install the bootstrap script using the Ansible role, which is included by default when using the default Packer images for the Core AMIs.
- Your AMI must have Consul installed and configured to run the Consul agent; both installing and running the Consul agent can be done with the same modules used for the Core AMIs.
- You need to mount a new instance of the Vault SSH secrets engine.
- You need to create the appropriate keys in Consul KV store so that the bootstrap script will have the necessary information to bootstrap.
- You will need to run the bootstrap script on the instance at least once after the Consul agent is configured and running. By default, the script is installed to `/opt/vault-ssh` by the Ansible role. You can then run `/opt/vault-ssh --type ${server_type}`. Use the `--help` flag for more information.
- You will need to write the appropriate policies for your users to access the new secrets engine and its role.
For more information and examples, refer to the Packer templates and `user_data` scripts for the various types of servers in the core module.
This module has a sub-module that can facilitate this process.
The default bootstrap script looks under the path `${prefix}vault-ssh/${server_type}`. The default prefix is `terraform/`.

First, it looks to see if `${prefix}vault-ssh/${server_type}/enabled` is set to `yes`.

Next, it looks for the path where the SSH secrets engine is mounted at the key `${prefix}vault-ssh/${server_type}/path`.
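For example, for the Nomad client bootstrap you could confirm these keys with the Consul CLI (a sketch, assuming the default prefix and that the server type is named `nomad_client`):

```bash
consul kv get "terraform/vault-ssh/nomad_client/enabled"  # expect "yes"
consul kv get "terraform/vault-ssh/nomad_client/path"     # e.g. "ssh_nomad_client"
```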
The example below shows how you can configure the SSH secrets engine and the values needed in Consul:
module "additional_nomad_clients" {
source = "./ssh-engine"
enabled = "yes"
path = "additional_nomad_clients"
description = "Additional Nomad Client"
ssh_user = "..."
ttl = "..."
max_ttl = "..."
role_name = "additional_nomad_clients"
}
resource "consul_key_prefix" "nomad_client" {
depends_on = [module.additional_nomad_clients]
path_prefix = "${var.consul_key_prefix}vault-ssh/additional_nomad_clients/"
subkeys = {
enabled = "yes"
path = "additional_nomad_clients"
}
}
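After applying the above, each new instance can bootstrap itself. A sketch of what you (or your `user_data` script) would run on an instance, assuming `var.consul_key_prefix` is the default `terraform/` and the bootstrap script was installed as described earlier:

```bash
# Confirm the keys written by the consul_key_prefix resource
consul kv get -recurse "terraform/vault-ssh/additional_nomad_clients/"

# Bootstrap the instance for the new server type
/opt/vault-ssh --type additional_nomad_clients
```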
Refer to `INOUT.md` for the list of this module's input and output variables.