.
├── stacks/ # stack for each serverless configuration/template and its associated files
├── libs/ # shared libraries
├── tools/
├── README.md
├── jest.config.js
├── jest.preset.js
├── nx.json
├── package.json
├── serverless.base.ts # base configuration for serverless
├── tsconfig.base.json
├── workspace.json
├── .editorconfig
├── .eslintrc.json
├── .gitignore
├── .husky # git hooks
├── .nvmrc
├── .prettierignore
├── .prettierrc
Nodejs

Protip: use nvm.

⚠️ Version: lts/fermium (v14.17.x). If you're using nvm, run `nvm use` to ensure you're using the same Node version locally as in your Lambda's runtime.
📦 Package Manager

Yarn (or) NPM (pre-installed with Nodejs)
💅 Code format plugins

In your preferred code editor, install plugins for the above list of tools.
Install project dependencies

Using Yarn:

```shell
yarn
```
Generate a new stack

```shell
nx workspace-generator serverless <STACK_NAME>
```

Set the basePath of the custom domain manager for each new stack in its serverless.ts file.

- Stack name shouldn't include special characters or whitespace
- Run with the -d or --dry-run flag for a dry run
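For reference, the basePath setting lives in each stack's serverless.ts. Below is a minimal sketch of the relevant fragment, assuming the serverless-domain-manager plugin's customDomain config; the stack name, domain name, and the import path of the base config are hypothetical, not taken from this repo:

```typescript
import type { AWS } from "@serverless/typescript";
// Hypothetical path to the shared base configuration mentioned in the tree above.
import { baseServerlessConfig } from "../../serverless.base";

const serverlessConfig: AWS = {
  ...baseServerlessConfig,
  service: "my-stack", // hypothetical stack name
  custom: {
    ...baseServerlessConfig.custom,
    customDomain: {
      domainName: "api.example.com", // hypothetical domain
      basePath: "my-stack",          // must be unique per stack
      stage: "${self:provider.stage}",
    },
  },
};

module.exports = serverlessConfig;
```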
Generate a new library

```shell
nx g @nrwl/node:lib --skipBabelrc --tags lib <LIBRARY_NAME>
```

- Library name shouldn't include special characters or whitespace
- Run with the -d or --dry-run flag for a dry run
Package stack

- To package a single stack

  ```shell
  nx run <STACK_NAME>:build --stage=<STAGE_NAME>
  ```

- To package stacks affected by a change

  ```shell
  nx affected:build --stage=<STAGE_NAME>
  ```

- To package all stacks

  ```shell
  nx run-many --target=build --all --stage=<STAGE_NAME>
  ```
Deploy stack to cloud

- To deploy a single stack

  ```shell
  nx run <STACK_NAME>:deploy --stage=<STAGE_NAME>
  ```

- To deploy stacks affected by a change

  ```shell
  nx affected:deploy --stage=<STAGE_NAME>
  ```

- To deploy all stacks

  ```shell
  nx run-many --target=deploy --all --stage=<STAGE_NAME>
  ```
Remove stack

- To remove a single stack

  ```shell
  nx run <STACK_NAME>:remove --stage=<STAGE_NAME>
  ```

- To remove stacks affected by a change

  ```shell
  nx affected:remove --stage=<STAGE_NAME>
  ```

- To remove all stacks

  ```shell
  nx run-many --target=remove --all --stage=<STAGE_NAME>
  ```
Run tests

- To run tests in a single stack

  ```shell
  nx run <STACK_NAME>:test --stage=<STAGE_NAME>
  ```

- To run tests affected by a change

  ```shell
  nx affected:test --stage=<STAGE_NAME>
  ```

- To run tests in all stacks

  ```shell
  nx run-many --target=test --all --stage=<STAGE_NAME>
  ```
Run offline / locally

To run offline, configure the serverless-offline plugin as documented here and run the below command:

```shell
nx run <STACK_NAME>:serve --stage=<STAGE_NAME>
```
Understand your workspace

```shell
nx dep-graph
```
- Visit the Serverless Documentation to learn more about the Serverless framework
- Visit the Nx Documentation to learn more about the Nx dev toolkit
- Why Nx, not Lerna? Read here from a co-founder of Nx
Nx Cloud pairs with Nx to let you build and test code up to 10 times faster.
Visit Nx Cloud to learn more and enable it.
Currently not active.
This repository was created to enable bi-directional sync in Mex!
It is a way to keep information in sync across multiple services/tools: artifacts generated by one service/tool can be consumed by another, and vice versa, while preserving context.
For example, consider a process in which users report bugs on Slack and an APM is responsible for converting the Slack reports into Jira tickets. The Jira ticket is then assigned to a developer, who is responsible for resolving it, and on resolution the APM updates the Slack report. In our ideal world, this process is a bi-directional sync in which a Slack channel, Jira, and GitHub stay in sync with each other: on every update in the Slack channel the Jira ticket is updated, and on every update in the Jira ticket the GitHub issue is updated. The flow of information is not limited to a single direction; on resolution of the ticket, the Jira ticket is updated and the Slack channel is updated, keeping everyone in the loop at all times.
We don't want to be like Zapier, where information flows in a single direction; we want information to flow in both directions by default. This even includes the context of the information. For example, while sharing a Google Doc on Slack, the comments on the Google Doc and the threaded replies on the message are synced, so the user gets the entire context irrespective of the platform.
There are seven main microservices at work here, which correspond to seven different lambdas in the AWS Lambda service. Each lambda's output is queued before it hits another. The microservices are:

- Gatekeeper
- ServiceHandler
- EventFilter
- FlowService
- RuleEngine
- TemplateEngine
- AuthService
The EventFilter, FlowService, RuleEngine, and TemplateEngine form the Integration Logic Layer.
The job of the gatekeeper is to ensure that the integration logic is only invoked when the event is valid. It performs basic security checks and verifies the origin of requests. If the request is valid, the request goes to the serviceHandler.
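As an illustration, a gatekeeper origin check might resemble the HMAC signature scheme used by services like Slack. This is a sketch, not Mex's actual implementation: the `isValidOrigin` helper, the `v0:` header layout, and the 5-minute freshness window are all assumptions.

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Recompute an HMAC over the timestamp + raw body and compare it to the
// signature header the calling service sent (Slack-style signed requests).
function isValidOrigin(
  signingSecret: string,
  timestamp: string,  // unix seconds, as sent in the request header
  rawBody: string,
  signatureHeader: string
): boolean {
  // Reject stale requests (possible replays) older than 5 minutes.
  const ageSeconds = Math.abs(Date.now() / 1000 - Number(timestamp));
  if (ageSeconds > 60 * 5) return false;

  const expected =
    "v0=" +
    createHmac("sha256", signingSecret)
      .update(`v0:${timestamp}:${rawBody}`)
      .digest("hex");

  // Constant-time comparison to avoid leaking signature bytes via timing.
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  return a.length === b.length && timingSafeEqual(a, b);
}
```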
The serviceHandler is the core of the integration logic. It largely has two functions:

- Convert the incoming event into the desired MexFormat and then invoke the integration logic.
- Convert the output of the integration logic from the MexFormat to the service format and then invoke it using the credentials it receives from the authservice.
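To make the first of those two roles concrete, here is a sketch of what the inbound conversion could look like. The `MexEvent` fields and the `toMexFormat` helper are purely illustrative assumptions; the real MexFormat schema is not shown in this document, and the example input is Slack-shaped.

```typescript
// Illustrative only: these field names are assumptions, not the real MexFormat.
interface MexEvent {
  source: string;                    // originating service, e.g. "slack"
  type: string;                      // normalized event type
  workspaceId: string;               // tenant the event belongs to
  payload: Record<string, unknown>;  // original service payload, kept for context
}

// Role 1: normalize an incoming service event into MexFormat before
// invoking the integration logic.
function toMexFormat(service: string, raw: Record<string, any>): MexEvent {
  return {
    source: service,
    type: String(raw.type ?? "unknown"),
    workspaceId: String(raw.team_id ?? ""),
    payload: raw,
  };
}
```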
The eventFilter filters the incoming events based on:

- Supported event types (service based)
- Whether the event was emitted by Mex itself
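A sketch of such a filter, assuming a per-service allow-list and a marker field for Mex-emitted events; both the allow-list contents and the `emittedByMex` flag are hypothetical:

```typescript
// Hypothetical allow-list of supported event types per service.
const SUPPORTED_EVENTS: Record<string, string[]> = {
  slack: ["message", "reaction_added"],
  jira: ["issue_updated"],
};

interface FilterableEvent {
  service: string;
  type: string;
  emittedByMex?: boolean; // hypothetical marker for events Mex itself produced
}

function shouldProcess(event: FilterableEvent): boolean {
  // Drop events emitted by Mex itself, so a sync doesn't echo its own writes.
  if (event.emittedByMex) return false;
  // Drop event types this service integration does not support.
  return (SUPPORTED_EVENTS[event.service] ?? []).includes(event.type);
}
```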
This microservice is actually a miniservice and performs multiple roles. It exposes the APIs for CRUD of flows and also provides the config for executing flows (config includes templates, transformations, auth level, etc.). It is also responsible for retrieving the flows related to the event from the database.
The ruleEngine checks whether the event follows the rules specified in the flow config. Basic JSON-based rules are currently supported.
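For illustration, a minimal JSON-rule check could look like the sketch below. The rule shape and the operators are assumptions; the document only says basic JSON-based rules are supported.

```typescript
// Hypothetical JSON rule format: a dot path into the event, an operator,
// and the value to compare against.
interface Rule {
  field: string;                  // e.g. "payload.channel"
  op: "eq" | "neq" | "contains";
  value: unknown;
}

// Resolve a dot path like "payload.channel" against a nested object.
function get(obj: unknown, path: string): unknown {
  return path.split(".").reduce<any>((o, k) => (o == null ? undefined : o[k]), obj);
}

// An event passes only if every rule in the flow config holds.
function passesRules(event: object, rules: Rule[]): boolean {
  return rules.every((r) => {
    const actual = get(event, r.field);
    switch (r.op) {
      case "eq":
        return actual === r.value;
      case "neq":
        return actual !== r.value;
      case "contains":
        return Array.isArray(actual) && actual.includes(r.value);
    }
  });
}
```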
The transformEngine, as the name suggests, transforms the event into the desired output format based on the flow config (using the transformation logic from the action-request-helper library).
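A sketch of a path-mapping transformation in that spirit; the mapping format here is an assumption, since the real logic lives in the action-request-helper library, which is not shown in this document:

```typescript
// Hypothetical mapping format: destination field -> dot path into the source event.
type Mapping = Record<string, string>;

// Resolve a dot path like "payload.user.id" against a nested object.
function getPath(obj: unknown, path: string): unknown {
  return path.split(".").reduce<any>((o, k) => (o == null ? undefined : o[k]), obj);
}

// Build the output object a flow config asks for from an incoming event.
function applyTransformation(event: object, mapping: Mapping): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [dest, srcPath] of Object.entries(mapping)) {
    out[dest] = getPath(event, srcPath);
  }
  return out;
}
```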
Like the FlowService, the AuthService is a miniservice and performs multiple roles. It provides the APIs for CRUD of workspace auth and takes care of all the authentication utilities (maintaining access and refresh tokens, revoking permissions on bot removal, etc.). This is the last step before the event is sent to the serviceHandler.
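One small piece of those utilities, sketched: deciding when an access token must be refreshed before the serviceHandler uses it. The `WorkspaceAuth` record shape and the 60-second safety skew are assumptions, not the actual AuthService schema.

```typescript
// Hypothetical shape of a stored workspace auth record.
interface WorkspaceAuth {
  workspaceId: string;
  accessToken: string;
  refreshToken: string;
  expiresAt: number; // epoch milliseconds
}

// Refresh slightly before actual expiry so in-flight requests don't fail
// with a token that expires mid-call.
function needsRefresh(
  auth: WorkspaceAuth,
  now: number = Date.now(),
  skewMs = 60_000
): boolean {
  return now >= auth.expiresAt - skewMs;
}
```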
There is a queue for each microservice. The queue is used to ensure that the microservices are invoked in the correct order.
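In Serverless Framework terms, that per-microservice queue wiring could be declared roughly as below. This is a sketch only: the function name, handler path, and queue resource name are hypothetical, not taken from this repo.

```typescript
import type { AWS } from "@serverless/typescript";

// Each lambda consumes from its own SQS queue; the upstream lambda enqueues
// its output there rather than invoking the next service directly.
const functions: AWS["functions"] = {
  ruleEngine: {
    handler: "src/ruleEngine.handler", // hypothetical handler path
    events: [
      {
        sqs: {
          // Hypothetical queue resource defined elsewhere in the stack.
          arn: { "Fn::GetAtt": ["RuleEngineQueue", "Arn"] },
          batchSize: 1, // process one message per invocation
        },
      },
    ],
  },
};

module.exports = functions;
```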
While talking about flows, we will be using certain terminology. The terms are:
- Template
- Template Unit
- Flow
- Flow Unit
- Execution
- Execution Unit
- AuthType
- AuthScope
- Rules
- Transformations
- TLI (Top Level Identifier)