1. Installation

Requirements

Sastre requires Python 3.9 or newer. This can be verified by pasting the following into a terminal window:

% python3 -c "import sys;assert sys.version_info>(3,9)" && echo "ALL GOOD"

If 'ALL GOOD' is printed, the Python requirement is met. If not, download and install the latest 3.x version from Python.org (https://www.python.org/downloads/).
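
To check which Python version is currently installed, the interpreter can also be asked directly (the reported version varies by system):

% python3 --version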

The recommended way to install Sastre is via pip. For development purposes, Sastre can be installed from the GitHub repository. Both methods are described in this section.

PIP install in a virtual environment (recommended)

Create a directory to store the virtual environment and runtime files:

% mkdir sastre
% cd sastre

Create virtual environment:

% python3 -m venv venv

Activate virtual environment:

% source venv/bin/activate
(venv) %
  • Note that the prompt is updated with the virtual environment name (venv), indicating that the virtual environment is active.

Upgrade initial virtual environment packages:

(venv) % pip install --upgrade pip setuptools

Install Sastre:

(venv) % pip install --upgrade cisco-sdwan

Verify that Sastre can run:

(venv) % sdwan --version

Notes:

  • The virtual environment is deactivated by typing 'deactivate' at the command prompt.
  • Before running Sastre again in a later session, make sure to re-activate the virtual environment (source venv/bin/activate), as shown in the example below.
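
For example, a later session would look like this (prompts are illustrative):

% cd sastre
% source venv/bin/activate
(venv) % sdwan --version
(venv) % deactivate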

PIP install

With this option, you will likely need to run the pip commands with sudo.

Install Sastre:

% python3 -m pip install --upgrade cisco-sdwan
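
If the install fails due to insufficient permissions, the same command can be run with sudo (or with pip's --user flag to install for the current user only):

% sudo python3 -m pip install --upgrade cisco-sdwan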

Verify that Sastre can run:

% sdwan --version

GitHub install

This option is used to create a development environment (to contribute to Sastre), or to build a Sastre container (as described in the next section).

Clone from the GitHub repository:

% git clone https://github.com/CiscoDevNet/sastre

Move to the clone directory:

% cd sastre

Create virtual environment:

% python3 -m venv venv

Activate virtual environment:

% source venv/bin/activate
(venv) %
  • Note that the prompt is updated with the virtual environment name (venv), indicating that the virtual environment is active.

Upgrade initial virtual environment packages:

(venv) % pip install --upgrade pip setuptools

Install required Python packages:

(venv) % pip install -r requirements.txt

Verify that Sastre can run:

(venv) % python3 sdwan.py --version
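
Tasks can then be run from the clone directory by invoking sdwan.py in place of the installed sdwan command. For example, a backup against a lab vManage (the address and username below are placeholders):

(venv) % python3 sdwan.py -a 198.51.100.10 -u admin --verbose backup all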

Container install

First, proceed with the GitHub install outlined above.

Ensure you are within the directory cloned from GitHub:

% cd sastre

Then proceed as follows to build the docker container:

% docker build -t sastre .                                                                            
Sending build context to Docker daemon    220MB
Step 1/12 : ARG http_proxy
Step 2/12 : ARG https_proxy
Step 3/12 : ARG no_proxy
Step 4/12 : FROM python:3.9-alpine
 ---> 77a605933afb
<snip>

Create host directory to be mounted into the container:

% mkdir sastre-volume

Start the docker container:

% docker run -it --rm --hostname sastre \
 --mount type=bind,source="$(pwd)"/sastre-volume,target=/shared-data \
 sastre:latest

usage: sdwan [-h] [-a <vmanage-ip>] [-u <user>] [-p <password>] [--tenant <tenant>] [--pid <pid>] [--port <port>] [--timeout <timeout>] [--verbose] [--version] <task> ...

Sastre - Automation Tools for Cisco SD-WAN Powered by Viptela

positional arguments:
  <task>                task to be performed (backup, restore, delete, migrate, attach, detach, certificate, list, show-template, show, report)
  <arguments>           task parameters, if any

optional arguments:
  -h, --help            show this help message and exit
  -a <vmanage-ip>, --address <vmanage-ip>
                        vManage IP address, can also be defined via VMANAGE_IP environment variable. If neither is provided user is prompted for the address.
  -u <user>, --user <user>
                        username, can also be defined via VMANAGE_USER environment variable. If neither is provided user is prompted for username.
  -p <password>, --password <password>
                        password, can also be defined via VMANAGE_PASSWORD environment variable. If neither is provided user is prompted for password.
  --tenant <tenant>     tenant name, when using provider accounts in multi-tenant deployments.
  --pid <pid>           CX project id, can also be defined via CX_PID environment variable. This is collected for AIDE reporting purposes only.
  --port <port>         vManage port number, can also be defined via VMANAGE_PORT environment variable (default: 443)
  --timeout <timeout>   REST API timeout (default: 300)
  --verbose             increase output verbosity
  --version             show program's version number and exit
sastre:/shared-data#

sastre:/shared-data# sdwan --version
Sastre-Pro Version 1.20.1. Catalog: 84 configuration items, 33 operational items.

sastre:/shared-data#
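
Sastre tasks can be run directly from this shell. For example, vManage details can be exported as environment variables before running a backup (the address and username below are placeholders; the password is prompted for if not set):

sastre:/shared-data# export VMANAGE_IP=198.51.100.10 VMANAGE_USER=admin
sastre:/shared-data# sdwan --verbose backup all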

Notes:

  • When set, host proxy environment variables (http_proxy, https_proxy and no_proxy) are used during the build and execution of the container.
  • The container has a /shared-data volume.
  • Sastre data/ and logs/ directories are created under /shared-data.
  • A sample dcloud-lab.sh is copied to /shared-data/rc if no /shared-data/rc directory is present.
  • Directory structure:
    • /shared-data/data - Used as the vManage backup data repository
    • /shared-data/logs - Where the logs are saved
    • /shared-data/rc - Used to store 'rc' files defining environment variables used by Sastre: VMANAGE_IP, VMANAGE_USER, etc. (a minimal example is shown after these notes)
  • The suggested docker run command above bind-mounts the /shared-data volume, i.e. it is mapped to a host system directory. This facilitates transferring data to/from the container (e.g. vManage backups). The host directory is relative to the location where the docker run command is executed.
  • The docker run command spins up the container and opens an interactive session to it using the ash shell. sdwan commands (e.g. sdwan backup all) can be executed at this point. Typing 'exit' leaves the ash shell, stops the container and removes it. Everything under data, rc and logs is persisted to the corresponding host system directories.
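
A minimal 'rc' file for /shared-data/rc could look like the sketch below. The variable names come from the sdwan help output above; the file name and values are hypothetical and must be adjusted for the target environment:

# /shared-data/rc/my-lab.sh (hypothetical example; source it in the container shell before running sdwan)
export VMANAGE_IP=198.51.100.10
export VMANAGE_USER=admin
export VMANAGE_PASSWORD=admin
export VMANAGE_PORT=443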