This repo contains a test automation suite with a variety of tests. In this readme, you'll learn about the types of tests and how to run them.
- About the Tests
- Running the Tests Locally
- Running the Tests in Evergreen
- Using a Pre-Release Version of a Dependent Library
- Manually Testing the Driver
- Writing Tests
- Testing with Special Environments
All of our test automation is powered by the Mocha test framework.
Some of the tests require a particular topology (e.g., standalone server, replica set, or sharded cluster). These tests check the topology of the MongoDB server that is being used. If the topology does not match, the tests will be skipped.
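The topology gating described above can be sketched as a small predicate (a hypothetical illustration; the names here are not the driver's actual test-runner API):

```javascript
// A test declares which topologies it supports; the runner skips it when
// the connected deployment does not match. Illustrative names only.
function shouldRunTest(requiredTopologies, currentTopology) {
  // A test with no topology requirement runs everywhere.
  if (!requiredTopologies || requiredTopologies.length === 0) return true;
  return requiredTopologies.includes(currentTopology);
}

// Against a standalone server ('single'), a replica-set-only test is skipped.
console.log(shouldRunTest(['replicaset'], 'single')); // false → skipped
console.log(shouldRunTest(undefined, 'single')); // true → runs
```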
Below is a summary of the types of test automation in this repo.
| Type of Test | Test Location | About the Tests | How to Run Tests |
| --- | --- | --- | --- |
| Unit | `/test/unit` | The unit tests test individual pieces of code, typically functions. These tests do not interact with a real database, so mocks are used instead. The unit test directory mirrors the `/src` directory structure with test file names matching the source file names of the code they test. | `npm run check:unit` |
| Integration | `/test/integration` | The integration tests test that a given feature or piece of a feature is working as expected. These tests do not use mocks; instead, they interact with a real database. The integration test directory follows the `test/spec` directory structure representing the different functional areas of the driver. Note: The `.gitkeep` files are intentionally left to ensure that this directory structure is preserved even as the actual test files are moved around. | `npm run check:test` |
| Benchmark | `/test/benchmarks` | The benchmark tests report how long a designated set of tests take to run. They are used to measure performance. | `npm run check:bench` |
| Specialized Environment | `/test/manual` | The specialized environment tests are functional tests that require specialized environment setups in Evergreen. Note: "manual" in the directory path does not refer to tests that should be run manually. These tests are automated. These tests have a special Evergreen configuration and run in isolation from the other tests. | There is no single script for running all of the specialized environment tests. Instead, you can run the appropriate script based on the specialized environment you want to use: `npm run check:atlas` to test Atlas; `npm run check:adl` to test Atlas Data Lake; `npm run check:ocsp` to test OCSP; `npm run check:kerberos` to test Kerberos; `npm run check:tls` to test TLS; `npm run check:ldap` to test LDAP authorization |
| TypeScript Definition | `/test/types` | The TypeScript definition tests verify the type definitions are correct. | `npm run check:tsd` |
| GitHub Actions | `/test/action` | Tests that run as GitHub Actions, such as dependency checking. | Currently only `npm run check:dependencies`, but this could be expanded in the future. |
| Code Examples | `/test/integration/node-specific/examples` | Code examples that are paired with tests showing they are working examples. | Currently `npm run check:lambda` to test the AWS Lambda example with default auth and `npm run check:lambda:aws` to test the AWS Lambda example with AWS auth. |
All of the MongoDB drivers follow the same specifications (specs). Each spec has tests associated with it. Some of the tests are prose (written, descriptive) tests, which must be implemented on a case by case basis by the developers on the driver teams. Other tests are written in a standardized form as YAML and converted to JSON, which can be read by the specialized spec test runners that are implemented in each driver.
The input test specifications are stored in `test/spec`.

The actual implementations of the spec tests can be unit tests or integration tests depending on the requirements, and they can be found in the corresponding test directory according to their type. Regardless of whether they are located in the `/unit` or `/integration` test directory, test files named `spec_name.spec.test` contain spec test implementations that use a standardized runner, and `spec_name.prose.test` files contain prose test implementations.
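The file-naming convention above can be sketched as a small helper (hypothetical, for illustration only):

```javascript
// Classify a spec test file by the naming convention described above:
// *.spec.test files use the standardized spec runner, while
// *.prose.test files hold hand-written prose test implementations.
function classifySpecTestFile(filename) {
  const base = filename.replace(/\.(js|ts)$/, '');
  if (base.endsWith('.spec.test')) return 'standardized-runner';
  if (base.endsWith('.prose.test')) return 'prose';
  return 'other';
}

console.log(classifySpecTestFile('crud.spec.test.ts')); // 'standardized-runner'
console.log(classifySpecTestFile('transactions.prose.test.ts')); // 'prose'
```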
The easiest way to get started running the tests locally is to start a standalone server and run all of the tests.
Start a mongod standalone with our cluster_setup.sh script: `./test/tools/cluster_setup.sh server`.

Then run the tests: `npm test`.

Note: the command above will run a subset of the tests that work with the standalone server topology since the tests are being run against a standalone server.

The output will show how many tests passed, failed, and are pending. Tests that we have indicated should be skipped using `.skip()` will appear as pending in the test results. See Mocha's documentation for more information.
In the following subsections, we'll dig into the details of running the tests.
By default, the integration tests run with auth enabled, and the cluster_setup.sh script defaults to starting servers with auth enabled. Tests can be run locally without auth by setting the environment variable `AUTH` to the value `noauth`. This is a two-step process: start a server without auth enabled, then run the tests without auth enabled.
AUTH='noauth' ./test/tools/cluster_setup.sh <server>
AUTH='noauth' npm run check:test
As we mentioned earlier, the tests check the topology of the MongoDB server being used and run the tests associated with that topology. Tests that don't have a matching topology will be skipped.
In the steps above, we started a standalone server: `./test/tools/cluster_setup.sh server`.

You can use the same cluster_setup.sh script to start a replica set or sharded cluster by passing the appropriate option: `./test/tools/cluster_setup.sh replica_set` or `./test/tools/cluster_setup.sh sharded_cluster`. If you are running more than a standalone server, make sure your `ulimit` settings are in accordance with MongoDB's recommendations. Changing the settings on the latest versions of macOS can be tricky. See this article for tips. (You likely don't need to do the complicated maxproc steps.)

The cluster_setup.sh script automatically stores the files associated with the MongoDB server in the `data` directory at the top level of this repository. You can delete this directory if you want to ensure you're running a clean configuration. If you delete the directory, the associated database server will be stopped, and you will need to run cluster_setup.sh again.
You can prefix `npm test` with a `MONGODB_URI` environment variable to point the tests at a specific deployment. For example, for a standalone server, you might use `MONGODB_URI=mongodb://localhost:27017 npm test`. For a replica set, you might use `MONGODB_URI=mongodb://localhost:31000,localhost:31001,localhost:31002/?replicaSet=rs npm test`.
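As an illustration of how these connection strings are composed, here is a hypothetical helper (not part of the driver's API):

```javascript
// Compose a mongodb:// URI from a list of hosts and optional query params.
// Illustrative only; in practice you write the URI by hand as shown above.
function buildMongoUri(hosts, options = {}) {
  const query = Object.entries(options)
    .map(([key, value]) => `${key}=${value}`)
    .join('&');
  return `mongodb://${hosts.join(',')}/${query ? `?${query}` : ''}`;
}

console.log(buildMongoUri(['localhost:27017']));
// mongodb://localhost:27017/
console.log(
  buildMongoUri(['localhost:31000', 'localhost:31001', 'localhost:31002'], {
    replicaSet: 'rs'
  })
);
// mongodb://localhost:31000,localhost:31001,localhost:31002/?replicaSet=rs
```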
The easiest way to run a single test is by appending `.only()` to the test context you want to run. For example, you could update a test function to be `it.only('cool test', function() {})`. Then run the test using `npm run check:test` for a functional or integration test or `npm run check:unit` for a unit test. See Mocha's documentation for more detailed information on `.only()`.

Another way to run a single test is to use Mocha's `grep` flag. For functional or integration tests, run `npm run check:test -- -g 'test name'`. For unit tests, run `npm run check:unit -- -g 'test name'`. See the Mocha documentation for information on the `grep` flag.
Evergreen is the continuous integration (CI) system we use. Evergreen builds are automatically run whenever a pull request is created or when commits are pushed to particular branches (e.g., main, 4.0, and 3.6).
Each Evergreen build runs the test suite against a variety of build variants that include a combination of topologies, special environments, and operating systems. By default, commits in pull requests only run a subset of the build variants in order to save time and resources. To configure a build, update `.evergreen/config.yml.in` and then generate a new Evergreen config via `node .evergreen/generate_evergreen_tasks.js`.
Occasionally, you will want to manually kick off an Evergreen build in order to debug a test failure or to run tests against uncommitted changes.
You can use the Evergreen UI to choose to rerun a task (an entire set of test automation for a given topology and environment). Evergreen does not allow you to rerun an individual test.
You can also choose to run a build against code on your local machine that you have not yet committed by running a pre-commit patch build.
Begin by setting up the Evergreen CLI.
- Download and install the Evergreen CLI according to the instructions in the Evergreen Documentation.
- Be sure to create `evergreen.yml` as described in the documentation.
- Add the Evergreen binary to your path.
Once you have the Evergreen CLI set up, you are ready to run a build. Keep in mind that if you want to run only a few tests, you can append `.only()` as described in the section above on running individual tests.
- In a terminal, navigate to your node driver directory: `cd node-mongodb-native`
- Use the Evergreen `patch` command. `-y` skips the confirmation dialog, `-u` includes uncommitted changes, `-p [project name]` specifies the Evergreen project, and `--browse` opens the patch URL in your browser: `evergreen patch -y -u -p mongo-node-driver-next --browse`
- In your browser, select the build variants and tasks to run.
You may want to test the driver with a pre-release version of a dependent library (e.g., bson). Follow the steps below to do so.
- Open `package.json`.
- Identify the line that specifies the dependency.
- Replace the version number with the commit hash of the dependent library. For example, you could use a particular commit for the js-bson project on GitHub: `"bson": "mongodb/js-bson#e29156f7438fa77c1672fd70789d7ade9ca65061"`
- Run `npm install` to install the dependency.
Now you can run the automated tests, run manual tests, or kick off an Evergreen build from your local repository.
You may want to manually test changes you have made to the driver. The steps below will walk you through how to create a new Node project that uses your local copy of the Node driver. You can modify the steps to work with existing Node projects.
- Navigate to a new directory and create a new Node project by running `npm init` in a terminal and working through the interactive prompts. A new file named `package.json` will be created for you.
- In `package.json`, create a new dependency for `mongodb` that points to your local copy of the driver. For example: `"dependencies": { "mongodb": "/path-to-your-copy-of-the-driver-repo/node-mongodb-native" }`
- Run `npm install` to install the dependency.
- Create a new file that uses the driver to test your changes. See the MongoDB Node.js Quick Start Repo for example scripts you can use.

Note: When making driver changes, you will need to run `npm run build:ts` with each change in order for it to take effect.
TODO: flesh this section out more
We use mocha to construct our test suites and chai to assert expectations.
Some special notes on how mocha works with our testing setup:
- `before` hooks will run even if a test is skipped by the environment it runs on. So, for example, if your `before` hook does logic that can only run on a certain server version, you can't depend on your test block metadata to filter for that.
- `after` hooks cannot be used to clean up clients because the session leak checker currently runs in an `afterEach` hook, which would be executed before any `after` hook has a chance to run.
Not all tests are able to run in all environments and some are unable to run at all due to known bugs.
When marking a test to be skipped, be sure to include a `skipReason` so that it can be added to the test run printout.
// skipping an individual test
it.skip('should not run', () => { /* test */ }).skipReason = 'TODO: NODE-1234';
// skipping a set of tests via beforeEach
beforeEach(function () {
  if (/* some condition */) {
    this.currentTest.skipReason = 'requires <run condition> to run';
    this.skip();
  }
});
In order to test some features, you will need to generate and set a specialized group of environment variables. The subsections below will walk you through how to generate and set the environment variables for these features.
We recommend using a different terminal for each specialized environment to avoid the environment variables from one specialized environment impacting the test runs for another specialized environment.
Before you begin any of the subsections below, clone the drivers-evergreen-tools repo.
We recommend creating an environment variable named `DRIVERS_TOOLS` that stores the path to your local copy of the drivers-evergreen-tools repo: `export DRIVERS_TOOLS="/path/to/your/copy/of/drivers-evergreen-tools"`.
The following steps will walk you through how to create and test a MongoDB Serverless instance.
- Create the following environment variables using a command like `export PROJECT="node-driver"`. Note: MongoDB employees can pull these values from the Evergreen project's configuration.

  | Variable Name | Description |
  | --- | --- |
  | `PROJECT` | The name of the Evergreen project where the tests will be run (e.g., `mongo-node-driver-next`) |
  | `SERVERLESS_DRIVERS_GROUP` | The Atlas organization where you will be creating the serverless instance |
  | `SERVERLESS_API_PUBLIC_KEY` | The Atlas API Public Key for the organization where you will be creating a serverless instance |
  | `SERVERLESS_API_PRIVATE_KEY` | The Atlas API Private Key for the organization where you will be creating a serverless instance |
  | `SERVERLESS_ATLAS_USER` | The SCRAM username for the Atlas user who has permission to create a serverless instance |
  | `SERVERLESS_ATLAS_PASSWORD` | The SCRAM password for the Atlas user who has permission to create a serverless instance |

  Remember that some of these are sensitive credentials, so keep them safe and only put them in your environment when you need them.

- Run the create-instance script: `$DRIVERS_TOOLS/.evergreen/serverless/create-instance.sh`

  The script will take a few minutes to run. When it is finished, a new file named `serverless-expansion.yml` will be created in the current working directory. The file will contain information about an Evergreen expansion:

      MONGODB_URI: xxx
      MONGODB_SRV_URI: xxx
      SERVERLESS_INSTANCE_NAME: xxx
      SSL: xxx
      AUTH: xxx
      TOPOLOGY: xxx
      SERVERLESS: xxx
      SERVERLESS_URI: xxx
- Generate a sourceable environment file from `serverless-expansion.yml` by running the following command: `cat serverless-expansion.yml | sed 's/: /=/g' > serverless.env`

  A new file named `serverless.env` is automatically created.

- Update the following variables in `serverless.env` so that they are equivalent to what our Evergreen builds do:
  - Change `MONGODB_URI` to have the same value as `SERVERLESS_URI`.
  - Add `SINGLE_MONGOS_LB_URI` and set it to the value of `SERVERLESS_URI`.
  - Add `MULTI_MONGOS_LB_URI` and set it to the value of `SERVERLESS_URI`.

- Source the environment variables using a command like `source serverless.env`.

- Export each of the environment variables that were created in `serverless.env`. For example: `export SINGLE_MONGOS_LB_URI`.

- Comment out the line in `.evergreen/run-serverless-tests.sh` that sources `install-dependencies.sh`.

- Run the `.evergreen/run-serverless-tests.sh` script directly to test serverless instances from your local machine.

Hint: If the test script fails with an error along the lines of `Uncaught TypeError: Cannot read properties of undefined (reading 'processId')`, ensure you do not have the `FAKE_MONGODB_SERVICE_ID` environment variable set.
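For clarity, the `sed` one-liner used above to turn an Evergreen expansion file into a sourceable env file performs the following transformation (a sketch, not the actual tooling):

```javascript
// Convert 'KEY: value' lines from an expansion YAML into 'KEY=value'
// lines suitable for `source`, mirroring sed 's/: /=/g'.
function expansionToEnv(yamlText) {
  return yamlText
    .split('\n')
    .filter(line => line.trim().length > 0)
    .map(line => line.replace(/: /g, '='))
    .join('\n');
}

const expansion = 'MONGODB_URI: xxx\nSERVERLESS: xxx';
console.log(expansionToEnv(expansion));
// MONGODB_URI=xxx
// SERVERLESS=xxx
```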
The following steps will walk you through how to start and test a load balancer.
- Start a sharded cluster with two mongos, so you have a URI similar to `MONGODB_URI=mongodb://host1,host2/`. The server must be version 5.2.0 or higher.

  Create the config server:

      mongod --configsvr --replSet test --dbpath config1 --bind_ip localhost --port 27217

  Initiate the config server in the shell:

      mongosh "mongodb://localhost:27217" --eval 'rs.initiate({ _id: "test", configsvr: true, members: [{ _id: 0, host: "localhost:27217" }] })'

  Create shard replica sets:

      mongod --shardsvr --replSet testing --dbpath repl1 --bind_ip localhost --port 27218 --setParameter enableTestCommands=true
      mongod --shardsvr --replSet testing --dbpath repl2 --bind_ip localhost --port 27219 --setParameter enableTestCommands=true
      mongod --shardsvr --replSet testing --dbpath repl3 --bind_ip localhost --port 27220 --setParameter enableTestCommands=true

  Initiate the replica set in the shell:

      mongosh "mongodb://localhost:27218" --eval 'rs.initiate({ _id: "testing", members: [{ _id: 0, host: "localhost:27218" }, { _id: 1, host: "localhost:27219" }, { _id: 2, host: "localhost:27220" }] })'

  Create two mongoses running on ports 27017 and 27018:

      mongos --configdb test/localhost:27217 --bind_ip localhost --setParameter enableTestCommands=1 --setParameter featureFlagLoadBalancer=true --setParameter loadBalancerPort=27050
      mongos --configdb test/localhost:27217 --port 27018 --bind_ip localhost --setParameter enableTestCommands=1 --setParameter featureFlagLoadBalancer=true --setParameter loadBalancerPort=27051

  Initiate the cluster on a mongos in the shell:

      mongosh "mongodb://localhost:27017" --eval 'sh.addShard("testing/localhost:27218,localhost:27219,localhost:27220")'
      mongosh "mongodb://localhost:27017" --eval 'sh.enableSharding("test")'
- Create an environment variable named `MONGODB_URI` that stores the URI of the sharded cluster you just created. For example: `export MONGODB_URI="mongodb://host1,host2/"`

- Install the HAProxy load balancer. On macOS, you can install HAProxy with `brew install haproxy`.

- Start the load balancer by using the run-load-balancer script provided in drivers-evergreen-tools: `$DRIVERS_TOOLS/.evergreen/run-load-balancer.sh start`

  A new file named `lb-expansion.yml` will be automatically created. The contents of the file will be similar in structure to the code below.

      SINGLE_MONGOS_LB_URI: 'mongodb://127.0.0.1:8000/?loadBalanced=true'
      MULTI_MONGOS_LB_URI: 'mongodb://127.0.0.1:8001/?loadBalanced=true'

- Generate a sourceable environment file from `lb-expansion.yml` by running the following command: `cat lb-expansion.yml | sed 's/: /=/g' > lb.env`

  A new file named `lb.env` is automatically created.

- Source the environment variables using a command like `source lb.env`.

- Export each of the environment variables that were created in `lb.env`. For example: `export SINGLE_MONGOS_LB_URI`.

- Set the `LOAD_BALANCED` environment variable to 'true': `export LOAD_BALANCED='true'`

- Disable auth for the tests: `export AUTH='noauth'`

- Run the test suite as you normally would: `npm run check:test`

  Verify that the output from Mocha includes `[ topology type: load-balanced ]`. This indicates the tests successfully accessed the specialized environment variables for load balancer testing.

- When you are done testing, shut down the HAProxy load balancer: `$DRIVERS_TOOLS/.evergreen/run-load-balancer.sh stop`
The following steps will walk you through how to run the tests for CSFLE.
- Install MongoDB Client Encryption if you haven't already: `npm install mongodb-client-encryption`
- Create the following environment variables using a command like `export AWS_REGION="us-east-1"`. Note: MongoDB employees can pull these values from the Evergreen project's configuration.

  | Variable Name | Description |
  | --- | --- |
  | `AWS_ACCESS_KEY_ID` | The AWS access key ID used to generate KMS messages |
  | `AWS_SECRET_ACCESS_KEY` | The AWS secret access key used to generate KMS messages |
  | `AWS_REGION` | The AWS region where the KMS resides (e.g., `us-east-1`) |
  | `AWS_CMK_ID` | The Customer Master Key for the KMS |
  | `CSFLE_KMS_PROVIDERS` | The raw EJSON description of the KMS providers. An example of the format is provided below. |
  | `KMIP_TLS_CA_FILE` | /path/to/mongodb-labs/drivers-evergreen-tools/.evergreen/x509gen/ca.pem |
  | `KMIP_TLS_CERT_FILE` | /path/to/mongodb-labs/drivers-evergreen-tools/.evergreen/x509gen/client.pem |

  The value of the `CSFLE_KMS_PROVIDERS` variable will have the following format:

      interface CSFLE_kms_providers {
        aws: {
          accessKeyId: string;
          secretAccessKey: string;
        };
        azure: {
          tenantId: string;
          clientId: string;
          clientSecret: string;
        };
        gcp: {
          email: string;
          privateKey: string;
        };
        local: {
          // EJSON handles converting this; it's actually the canonical -> { $binary: { base64: string; subType: string } }
          // **NOTE**: The dollar sign has to be escaped when using this as an ENV variable
          key: Binary;
        }
      }
- Start the KMIP servers: `DRIVERS_TOOLS="/path/to/mongodb-labs/drivers-evergreen-tools" .evergreen/run-kms-servers.sh`
- Ensure default ~/.aws/config is present:
[default]
aws_access_key_id=AWS_ACCESS_KEY_ID
aws_secret_access_key=AWS_SECRET_ACCESS_KEY
- Set temporary AWS credentials
pip3 install boto3
PYTHON="python3" source /path/to/mongodb-labs/drivers-evergreen-tools/.evergreen/csfle/set-temp-creds.sh
Alternatively, fish users can substitute the following script for set-temp-creds.sh:
function set_aws_creds
set PYTHON_SCRIPT "\
import boto3
client = boto3.client('sts')
credentials = client.get_session_token()['Credentials']
print (credentials['AccessKeyId'] + ' ' + credentials['SecretAccessKey'] + ' ' + credentials['SessionToken'])"
echo $PYTHON_SCRIPT | python3 -
end
set CREDS (set_aws_creds)
set CSFLE_AWS_TEMP_ACCESS_KEY_ID (echo $CREDS | awk '{print $1}')
set CSFLE_AWS_TEMP_SECRET_ACCESS_KEY (echo $CREDS | awk '{print $2}')
set CSFLE_AWS_TEMP_SESSION_TOKEN (echo $CREDS | awk '{print $3}')
set -e CREDS
- Run the functional tests: `npm run check:test`
The output of the tests will include sections like "Client Side Encryption Corpus," "Client Side Encryption Functional," "Client Side Encryption Prose Tests," and "Client Side Encryption."
To run the functional tests using the crypt shared library instead of mongocryptd, download the appropriate version of the crypt shared library for the enterprise server version here and then set its location in the `CRYPT_SHARED_LIB_PATH` environment variable.
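The `CSFLE_KMS_PROVIDERS` structure described earlier in this section can be sketched as follows. All credential values below are fake placeholders; substitute real ones from your Evergreen configuration.

```javascript
// Placeholder CSFLE_KMS_PROVIDERS value matching the interface above.
// Every credential here is fake and for illustration only.
const kmsProviders = {
  aws: { accessKeyId: 'FAKE_ACCESS_KEY', secretAccessKey: 'FAKE_SECRET' },
  azure: { tenantId: 'fake-tenant', clientId: 'fake-client', clientSecret: 'fake-secret' },
  gcp: { email: 'fake@example.com', privateKey: 'fake-key' },
  local: {
    // EJSON representation of a Binary; remember the dollar sign must be
    // escaped when this is set as an environment variable.
    key: { $binary: { base64: 'TE9DQUxLRVk=', subType: '00' } }
  }
};

// The environment variable holds this object serialized as (E)JSON:
console.log(JSON.stringify(kmsProviders));
```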
These steps require mongosh to be available locally. Clone it from GitHub.

mongosh uses a lerna monorepo. As a result, mongosh contains multiple references to the `mongodb` package in its `package.json` files.
Set up mongosh by following the steps in the mongosh readme.
mongosh contains a script that does this. To use the script, create an environment variable `REPLACE_PACKAGE` that contains a string in the form `mongodb:<path to your local instance of the driver>`. The package replacement script will replace all occurrences of `mongodb` with the local path of your driver.

An alternative, which can be useful for testing a release, is to first run `npm pack` on the driver. This generates a tarball containing all the code that would be uploaded to npm if it were released. Then set the environment variable `REPLACE_PACKAGE` to the full path of that file.

Once the environment variable is set, run the package replacement script in mongosh with `npm run replace:package`.
mongosh's readme documents how to run its tests. Most likely, it isn't necessary to run all of mongosh's tests. The mongosh readme also documents how to run tests for a particular scope. The scopes are listed in the `generate_mongosh_tasks.js` Evergreen generation script.

For example, to run tests for the `service-provider-server` package, run the following command in mongosh: `lerna run test --scope @mongosh/service-provider-server`
- Install virtualenv: `pip install virtualenv`
- Source the activate-kmstlsvenv.sh script in drivers-evergreen-tools: `.evergreen/csfle/activate-kmstlsvenv.sh`
  - This will install all the dependencies needed to run a Python KMS KMIP simulated server.
- In 4 separate terminals, launch the following:
  - `./kmstlsvenv/bin/python3 -u kms_kmip_server.py` (by default it always runs on port 5698)
  - `./kmstlsvenv/bin/python3 -u kms_http_server.py --ca_file ../x509gen/ca.pem --cert_file ../x509gen/expired.pem --port 8000`
  - `./kmstlsvenv/bin/python3 -u kms_http_server.py --ca_file ../x509gen/ca.pem --cert_file ../x509gen/wrong-host.pem --port 8001`
  - `./kmstlsvenv/bin/python3 -u kms_http_server.py --ca_file ../x509gen/ca.pem --cert_file ../x509gen/server.pem --port 8002 --require_client_cert`
- Set the following environment variables:
  - `export KMIP_TLS_CA_FILE="${DRIVERS_TOOLS}/.evergreen/x509gen/ca.pem"`
  - `export KMIP_TLS_CERT_FILE="${DRIVERS_TOOLS}/.evergreen/x509gen/client.pem"`
- Install the FLE lib: `npm i --no-save mongodb-client-encryption`
- Launch a MongoDB server.
- Run the full suite with `npm run check:test`, or more specifically: `npx mocha --config test/mocha_mongodb.json test/integration/client-side-encryption/`
- Kerberos
- AWS Authentication
- OCSP
- TLS
- Atlas Data Lake
- LDAP
- Snappy (maybe in general, how to test optional dependencies)
- Atlas connectivity