Add support for Rocky8, Rocky9, Ubuntu build (#61)
* Add dependency install scripts and include docs

- Support CentOS, Rocky8, Rocky9
- Install all deps automatically in docker env
- Update README

* Use python3 explicitly

* Don't install dependencies in build.sh

These deps are now installed via scripts/dependencies/install.sh

* Add Ubuntu dependency script

* Only build RPMs if OS is centos or rocky

* Don't build DSSD libucore

* Add missing libboost-filesystem-dev package

* Add additional includes needed for gcc 10+

- need to explicitly import string
- need boost/cstdint.hpp for uint64_t

* Add Ubuntu 22.04 as supported OS

* Update dependencies and enhance docker build

- use env var to determine docker optimizations
- set env var in dockerfile
- fix missing && before rm dependencies dir
- set vault yum repos for centos (EOL)
- remove unneeded redhat-lsb dep from cent

* Install CentOS deps after sed yum vault

* Streamline rhel deps

- merge centos and rocky
- use symlink from centos to rocky

* Download sonar-scanner from env var

* Determine sonar scanner dir name from zip

- Sonar is not consistent so I can't derive the dir name any other way

* Correct var name in comments

* Fix host compile issues with GCC 10+

* Improve README

- minor formatting improvements
- add submodule instructions
- add build script instructions

* Minor formatting fix for consistency

* Add docker build instructions to README

* Add workdir to dockerfiles

- update docs to streamline use of workdir for the sake of clarity
velomatt authored Jul 30, 2024
1 parent 09a28e4 commit 46910a1
Showing 21 changed files with 437 additions and 54 deletions.
98 changes: 81 additions & 17 deletions README.md
@@ -2,41 +2,105 @@

dss-sdk is a sub-component of the larger project, [DSS](https://github.com/OpenMPDK/DSS).

## Major Components
## Major components

- The target component is located under the [target](target) directory.
- The host component is located under the [host](host) directory.

## Build Scripts
The build scripts and README.md for them are located in the [scripts](scripts) directory.

## Dependencies

### Operating System
CentOS 7.8
### Supported operating systems

dss-sdk target and host can be built using one of the following:

- CentOS 7.8
- Rockylinux 8
- Rockylinux 9
- Ubuntu 22.04

### Install build dependencies

```bash
sudo ./scripts/dependencies/install.sh
```

### Build dss-sdk

Prior to building dss-sdk, ensure that you have checked out submodules:

```bash
git clone --recurse-submodules https://github.com/OpenMPDK/dss-sdk
```

Alternatively:

```bash
git clone https://github.com/OpenMPDK/dss-sdk
git submodule update --init --recursive
```
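After cloning, a quick way to spot submodules that were not initialized is `git submodule status`, which prefixes uninitialized entries with `-`. A small sketch of that check (the helper name and sample paths below are illustrative, not part of the repo):

```shell
# Scan `git submodule status` output for uninitialized submodules,
# which git marks with a leading "-" before the commit hash.
check_submodules() {
  local status_output="$1"
  if printf '%s\n' "$status_output" | grep -q '^-'; then
    echo "uninitialized"
  else
    echo "ok"
  fi
}
```

Run as `check_submodules "$(git submodule status)"` inside the clone; an `uninitialized` result means `git submodule update --init --recursive` is still needed.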

#### Build dss-sdk target

```bash
./scripts/build_target.sh
```

#### Build dss-sdk host

Note: the dss-sdk target must be built prior to building the host.

### Packages
```bash
sudo yum install epel-release
sudo yum groupinstall "Development Tools"
sudo yum install bc boost-devel check cmake cmake3 CUnit-devel dejagnu dpkg elfutils-libelf-devel expect glibc-devel jemalloc-devel Judy-devel libaio-devel libcurl-devel libuuid-devel meson ncurses-devel numactl-devel openssl-devel pulseaudio-libs-devel python3 python3-devel python3-pip rdma-core-devel redhat-lsb ruby-devel snappy-devel tbb-devel wget zlib-devel
./scripts/build_host.sh kdd-samsung-remote
```

### Python Package
#### Build dss-sdk all (target + host)

```bash
python3 -m pip install pybind11
./scripts/build_all.sh kdd-samsung-remote
```

### Ruby packages
### Build dss-sdk with docker

Required: [Install Docker Engine](https://docs.docker.com/engine/install/) in your development environment.

dss-sdk can be built with any of the below docker base images:

- centos:centos7.8.2003
- rockylinux:8-minimal
- rockylinux:9-minimal
- ubuntu:22.04

#### Build with base image

Example build using Ubuntu 22 base image:

```bash
gem install dotenv:2.7.6 cabin:0.9.0 arr-pm:0.0.11 ffi:1.12.2 rchardet:1.8.0 git:1.7.0 rexml:3.2.4 backports:3.21.0 clamp:1.0.1 mustache:0.99.8 stud:0.0.23 insist:1.0.0 pleaserun:0.0.32 fpm:1.13.1
docker run -w /dss-sdk -i -t --rm -v "$(pwd)":/dss-sdk ubuntu:22.04 /bin/bash
./scripts/dependencies/install.sh
./scripts/build_all.sh kdd-samsung-remote
```
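The dependency script enables Docker-specific pip optimizations only when the `DOCKER` environment variable is set (see `scripts/dependencies/os/common.sh`), so it can be worth passing `-e DOCKER=1` to `docker run`. A minimal sketch of that guard in isolation:

```shell
# Build the pip argument list the way common.sh does:
# --no-cache-dir is appended only when $DOCKER is non-empty.
pip_args() {
  local args=("-r" "requirements.txt")
  if [[ $DOCKER ]]; then
    args+=("--no-cache-dir")
  fi
  printf '%s\n' "${args[@]}"
}
```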

Note: GCC version 5.1 is required to build dss-sdk. The build script for GCC can be found in the DSS repo, in the scripts directory (https://github.com/OpenMPDK/DSS/blob/master/scripts/build_gcc.sh).
#### Create a dss-sdk build image from dockerfile

Alternatively, you can build [one of the dockerfiles in scripts/docker](scripts/docker) to create an image with the build dependencies:

```bash
docker build -t dss-sdk:ubuntu22-master -f scripts/docker/ubuntu22.DOCKERFILE .
```

#### Build dss-sdk from dockerfile image

To build with the `dss-sdk:ubuntu22-master` image you have built, [as described above](#create-a-dss-sdk-build-image-from-dockerfile):

```bash
docker run -i -t --rm -v "$(pwd)":/dss-sdk dss-sdk:ubuntu22-master ./scripts/build_all.sh kdd-samsung-remote
```

## Contributing

We welcome any contributions to dss-sdk. Please fork the repository and submit a pull request with your proposed changes. Before submitting a pull request, please make sure that your changes pass the existing unit tests and that new unit tests have been added to cover any new functionality.

# DSS-SDK Pytest Framework
## DSS-SDK Pytest Framework

Unit tests for testing DataMover utility and module code. Leverages pytest-cov to generate a code coverage report.

@@ -52,14 +116,14 @@ pytest-gitcov
Also refer to the requirements.txt file if you would like to install these packages with pip.

## Run Pytest

You must be in the dss-sdk directory

Structure:
`python3 -m pytest <path to test folder or file> -v -rA --cov=<path to root folder of dss-sdk> --cov-report term --color=yes --disable-warnings`

Here are some examples run from the dss-sdk directory:

Run all tests by specifying the test folder
`python3 -m pytest tests -v -rA --cov=. --cov-report term --color=yes --disable-warnings`

13 changes: 8 additions & 5 deletions buildspec/sonar-scanner.yml
@@ -18,16 +18,19 @@ phases:
- aws s3 cp --recursive "$DSSS3URI/cache/dss-sdk/$GITHUB_RUN_NUMBER/unit/df_out/reports/" df_out/reports/ --only-show-errors
# replace the old CODEBUILD_SRC_DIR with the current one in bw-output
- sed -i -r "s|/codebuild/output/src[^/]+/src/github.com/OpenMPDK/dss-sdk|$CODEBUILD_SRC_DIR|g" bw-output/compile_commands.json
# Download the latest sonar-scanner
# Download sonar-scanner from SONAR_SCANNER_URL - GitHub-defined org var
- rm -rf /sonar-scanner*
- wget --no-verbose --content-disposition -E -c "https://search.maven.org/remote_content?g=org.sonarsource.scanner.cli&a=sonar-scanner-cli&v=LATEST&c=linux&e=zip"
- unzip -q sonar-scanner-cli-*.zip -d /
- rm -f sonar-scanner-cli*.zip
- export SONAR_SCANNER_FILENAME="${SONAR_SCANNER_URL##*/}"
- wget --no-verbose --content-disposition -E -c "$SONAR_SCANNER_URL"
# Get sonar-scanner root dir filename from printed contents of zip file
- export SONAR_SCANNER_DIR=$(unzip -l "$SONAR_SCANNER_FILENAME" | awk '{print $4}' | grep '/' | cut -d '/' -f 1 | sort | uniq -c | sort -nr | head -n 1 | awk '{print $2}')
- unzip -q "$SONAR_SCANNER_FILENAME" -d /
- rm -f "$SONAR_SCANNER_FILENAME"
build:
commands:
# Run sonar-scanner and ingest coverage report(s)
- |
/sonar-scanner-*-linux/bin/sonar-scanner \
/$SONAR_SCANNER_DIR/bin/sonar-scanner \
-Dsonar.branch.name="$([[ "$GITHUB_REF_NAME" != *"/merge" ]] && echo "$GITHUB_REF_NAME")" \
-Dsonar.host.url=https://sonarcloud.io \
-Dsonar.pullrequest.github.summary_comment=true \
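The `SONAR_SCANNER_DIR` derivation above can be exercised against a canned `unzip -l`-style listing. A sketch of the same pipeline as a standalone helper (the archive name and listing format below are illustrative):

```shell
# Derive the most common top-level directory from `unzip -l` output,
# mirroring the SONAR_SCANNER_DIR pipeline in the buildspec: the path
# is field 4, and the most frequent first path component wins.
top_level_dir() {
  awk '{print $4}' | grep '/' | cut -d '/' -f 1 \
    | sort | uniq -c | sort -nr | head -n 1 | awk '{print $2}'
}
```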
12 changes: 0 additions & 12 deletions host/build.sh
@@ -117,18 +117,6 @@ else
exit 1
fi

# Install required package dependencies
for PACKAGE in boost-devel libcurl-devel numactl-devel tbb-devel
do
if [ "$(yum list installed | cut -f1 -d' ' | grep --extended ^${PACKAGE} -c)" -eq 1 ]
then
echo "${PACKAGE} already installed"
else
echo "sudo yum install ${PACKAGE}"
yum install ${PACKAGE}
fi
done

# Build dss-sdk host
rm -rf "$OUTDIR"
mkdir "$OUTDIR"
15 changes: 11 additions & 4 deletions host/src/include_private/nkv_framework.h
@@ -244,7 +244,8 @@
std::atomic<uint64_t> pending_io_size;
std::atomic<uint64_t> pending_io_value;
std::condition_variable cv_path;
nkv_lruCache<std::string, nkv_value_wrapper> *cnt_cache;
//nkv_lruCache<std::string, nkv_value_wrapper> *cnt_cache;
std::vector<nkv_lruCache<std::string, nkv_value_wrapper> *> cnt_cache;
pthread_rwlock_t lru_rw_lock;
std::mutex lru_lock;
std::atomic<uint64_t> nkv_num_dc_keys;
@@ -281,11 +282,17 @@
pthread_rwlock_init(&data_rw_lock_list[iter], NULL);
}

listing_keys = new std::unordered_map<std::size_t, std::set<std::string> > [nkv_listing_cache_num_shards];
listing_keys = new std::unordered_map<std::size_t, std::set<std::string> > (nkv_listing_cache_num_shards);
if (nkv_in_memory_exec) {
data_cache = new std::unordered_map<std::string, nkv_value_wrapper*> [nkv_listing_cache_num_shards];
data_cache = new std::unordered_map<std::string, nkv_value_wrapper*> (nkv_listing_cache_num_shards);
}
cnt_cache.reserve(nkv_read_cache_shard_size);
for (auto i=0; i<nkv_read_cache_shard_size; i++) {
// Allocate one LRU cache per shard; reserve (not resize) so that
// push_back fills the vector without leaving null slots ahead of it
nkv_lruCache<std::string, nkv_value_wrapper> *cache_obj
= new nkv_lruCache<std::string, nkv_value_wrapper>(nkv_read_cache_size);
cnt_cache.push_back(cache_obj);
}
cnt_cache = new nkv_lruCache<std::string, nkv_value_wrapper> [nkv_read_cache_shard_size](nkv_read_cache_size);
nkv_num_dc_keys = 0;

// ustats
18 changes: 11 additions & 7 deletions host/src/nkv_framework.cpp
@@ -821,7 +821,7 @@ nkv_result NKVTargetPath::do_store_io_to_path(const nkv_key* n_key, const nkv_st
memcpy(c_buffer, n_value->value, n_value->length);
nkv_value_wrapper nkvvalue (c_buffer, n_value->length, n_value->length);
std::lock_guard<std::mutex> lck (lru_lock);
cnt_cache[shard_id].put(key_str, std::move(nkvvalue));
cnt_cache[shard_id]->put(key_str, std::move(nkvvalue));
}
}
}
@@ -969,7 +969,7 @@ nkv_result NKVTargetPath::do_retrieve_io_from_path(const nkv_key* n_key, const n
std::size_t key_prefix = std::hash<std::string>{}(key_str);
shard_id = key_prefix % nkv_read_cache_shard_size;
std::lock_guard<std::mutex> lck (lru_lock);
const nkv_value_wrapper& nkvvalue = cnt_cache[shard_id].get(key_str, cache_hit);
const nkv_value_wrapper& nkvvalue = cnt_cache[shard_id]->get(key_str, cache_hit);
//nkv_value_wrapper nkvvalue = cnt_cache[shard_id].get(key_str, cache_hit);
//nkv_value_wrapper* nkvvalue = cnt_cache[shard_id].get(key_str, cache_hit);
if (cache_hit) {
@@ -1058,7 +1058,7 @@ nkv_result NKVTargetPath::do_retrieve_io_from_path(const nkv_key* n_key, const n
//nkv_value_wrapper* nkvvalue = new nkv_value_wrapper(NULL, 0, 0);
smg_info(logger, "Cache put non-existence, key = %s, dev_path = %s, ip = %s", key_str.c_str(), dev_path.c_str(), path_ip.c_str());
std::lock_guard<std::mutex> lck (lru_lock);
cnt_cache[shard_id].put(key_str, std::move(nkvvalue));
cnt_cache[shard_id]->put(key_str, std::move(nkvvalue));
//cnt_cache[shard_id].put(key_str, nkvvalue);
}
}
@@ -1115,7 +1115,7 @@ nkv_result NKVTargetPath::do_retrieve_io_from_path(const nkv_key* n_key, const n
nkv_value_wrapper nkvvalue (c_buffer, n_value->actual_length, n_value->actual_length);
smg_info(logger, "Cache put after get, key = %s, dev_path = %s, ip = %s", key_str.c_str(), dev_path.c_str(), path_ip.c_str());
std::lock_guard<std::mutex> lck (lru_lock);
cnt_cache[shard_id].put(key_str, std::move(nkvvalue));
cnt_cache[shard_id]->put(key_str, std::move(nkvvalue));
//cnt_cache[shard_id].put(key_str, nkvvalue);
} else {
smg_warn(logger, "data key = %s, length = %u, dev_path = %s, ip = %s", key_str.c_str(), n_value->actual_length, dev_path.c_str(), path_ip.c_str());
@@ -1458,7 +1458,7 @@ nkv_result NKVTargetPath::do_delete_io_from_path (const nkv_key* n_key, nkv_post
int32_t shard_id = key_prefix % nkv_read_cache_shard_size;
bool cache_hit = false;
std::lock_guard<std::mutex> lck (lru_lock);
cnt_cache[shard_id].del(key_str, cache_hit);
cnt_cache[shard_id]->del(key_str, cache_hit);
}
}
}
@@ -2218,10 +2218,14 @@ NKVTargetPath::~NKVTargetPath() {
if (nkv_in_memory_exec) {
delete[] data_cache;
}
delete[] cnt_cache;
//delete[] cnt_cache;
for (size_t i=0; i<cnt_cache.size(); i++) {
delete cnt_cache[i];
}
cnt_cache.clear();
smg_info(logger, "Cleanup successful for path = %s", dev_path.c_str());


if( device_stat) {
nkv_ustat_delete(device_stat);
}
2 changes: 1 addition & 1 deletion host/src/remote/auto_discovery.cpp
@@ -211,7 +211,7 @@ bool nvme_disconnect(std::string subsystem_nqn)
*/
void read_file(const std::string& file_path, int32_t start_line, int32_t line_to_read, std::vector<std::string>& lines)
{
ifstream fh (file_path);
std::ifstream fh (file_path);
int32_t index = 1;
try{
if (fh.is_open()) {
28 changes: 28 additions & 0 deletions scripts/dependencies/install.sh
@@ -0,0 +1,28 @@
#! /usr/bin/env bash
# shellcheck source=/dev/null
set -e

# Path variables
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )

if [[ -e /etc/os-release ]]; then
source /etc/os-release
else
ID=unknown
fi

# Default paths in case they are not exported automatically
export PATH=$PATH:/usr/local/bin:/usr/local/sbin

for id in $ID $ID_LIKE; do
if [[ -e $SCRIPT_DIR/os/$id.sh ]]; then
echo "os: $id"
source "$SCRIPT_DIR/os/$id.sh"
source "$SCRIPT_DIR/os/common.sh"
exit 0
fi
done

printf "Unsupported distribution detected: %s\n" "$ID" >&2
echo "Aborting!"
exit 1
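The dispatch loop in `install.sh` above tries `$ID` first and then each entry of `$ID_LIKE`, sourcing the first matching OS script. A standalone sketch of that lookup order (the helper name and IDs below are illustrative):

```shell
# Pick the first OS script that exists, trying $ID before each
# entry of $ID_LIKE, as scripts/dependencies/install.sh does.
pick_os_script() {
  local dir="$1" id="$2" id_like="$3"
  for candidate in $id $id_like; do
    if [[ -e $dir/$candidate.sh ]]; then
      echo "$candidate"
      return 0
    fi
  done
  return 1
}
```

On Rocky Linux, for example, `/etc/os-release` typically sets `ID=rocky` and `ID_LIKE="rhel centos fedora"`, so a `rocky.sh` script wins when present and `rhel.sh` serves as the fallback.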
1 change: 1 addition & 0 deletions scripts/dependencies/os/centos.sh
44 changes: 44 additions & 0 deletions scripts/dependencies/os/common.sh
@@ -0,0 +1,44 @@
#! /usr/bin/env bash
set -e

# Path variables
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
REQUIREMENTS=$(realpath "$SCRIPT_DIR/../python/requirements.txt")

# Upgrade python3 pip to latest
python3 -m pip install pip --upgrade

# Install python modules from requirements.txt
PIP_ARGS=()
PIP_ARGS+=("-r")
PIP_ARGS+=("$REQUIREMENTS")

# Optimizations for Docker build
if [[ $DOCKER ]]
then
PIP_ARGS+=("--no-cache-dir")
fi

# Install python modules from requirements.txt via pip
INSTALL_STRING="python3 -m pip install ${PIP_ARGS[*]}"
echo "executing command: $INSTALL_STRING"
eval "$INSTALL_STRING"

# Set git config if not already set
for CONFIG in name email
do
if git config --list | grep "user.$CONFIG"
then
echo "git user.$CONFIG is configured."
else
echo "WARNING: git user.$CONFIG is not configured. Setting a temporary user.$CONFIG."
echo "You should set a proper git user.$CONFIG with the command: git config --global user.$CONFIG <<your-details>>"
git config --global user.$CONFIG "[email protected]"
fi
done

# Set git safe.directory globally if docker
if [[ $DOCKER ]]
then
git config --global --add safe.directory '*'
fi
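The git-identity guard in `common.sh` above keys off `git config --list` output. A standalone sketch of that check against a canned config dump (the helper name is illustrative):

```shell
# Report whether a `git config --list` dump contains user.<key>,
# mirroring the guard in scripts/dependencies/os/common.sh.
has_git_user() {
  local listing="$1" key="$2"
  if printf '%s\n' "$listing" | grep -q "user\.$key"; then
    echo "set"
  else
    echo "unset"
  fi
}
```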