We track project progress using a Waffle Board and identify major goals in `roadmap.md`; go check them out! If you're new to the project, take a look at our first-timer issues, which are cross-listed on up-for-grabs.net.
When we say service, we mean a microservice: a server running in concert with other services. All of the services we define end up running as a cluster of networked containers. Each of our services has a few common characteristics:
The best place to start reading about any service is to open its `server.go` file. All Go programs (that aren't packages) define `func main()` as their starting point, and for us that's always defined in `server.go`. Often you'll see a function called `NewServerRoutes` that defines any and all outward-facing HTTP endpoints. These endpoints define what the service can do, and working backward from them is a great way to understand a service.
All services support at least some degree of configuration via a `config` struct defined in a file called `config.go`. Services use the config package to extract the values of this configuration from environment variables, mapping `config.FieldName` in the service to an all-caps-snake-case `FIELD_NAME` environment variable. More info can be found in the config package.
Logrus is our logger of choice. We follow Dave Cheney's advice when it comes to logging, using only `log.Info` and `log.Debug` calls.
We support the idea of reproducible builds, and as such we vendor (that is, write copies of) all of our dependencies into version control. This means it should be possible to rewind git history to any point in time, run `go build`, and get a functioning binary (presuming the build process worked at that point in time). godep is what we're currently using, but we're keeping a close eye on dep as a potential replacement.
Services are always shipped as Docker images. The last step of a passing CI test is to build & push a new Docker image to Docker Hub with the `latest` tag.
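For illustration, the tail end of a CI configuration might look something like this. The image name is hypothetical, the `after_success` key assumes a Travis-style CI file, and the credential variable names are placeholders; check each repo's actual CI config for specifics:

```yaml
# final step of a passing CI run: build & push the image with the latest tag
after_success:
  - docker build -t datatogether/example-service:latest .
  - docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
  - docker push datatogether/example-service:latest
```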
To perform the work of handling a request, all servers register a "handler" function that delegates the majority of its work to a "request" function that actually does the thing in question. This pattern exists for a few reasons. First & foremost, each request defines whatever parameters it accepts as a struct, giving clear documentation of the ways to modify the request. Secondly, request functions satisfy the form specified by the `rpc` package. By isolating business logic in request functions, we provide strong guarantees that behaviour will be the same for HTTP handlers & RPC requests, while also providing a clear abstraction for testing.
Services need to talk to each other; they do so via a package called `rpc` that allows us to call Go functions on another service without having to worry about the details of network transport.
When storing information, we often go to great lengths to accept the ipfs-datastore interface as a point of storage. This is intended as future-proofing for an eventual transition to distributed versions of these same services, and it allows interchangeability of the underlying datastore.
We use Kubernetes in production to orchestrate containers. Contact b5 if you'd like to chat about production details.
The best way to get data together running locally is to clone the context repo and use `docker-compose` to spin up all the necessary backend services. In a single terminal, `docker-compose up` will download all the necessary data together images, hook them together via networking, and spin up a dev version of the webapp to interact with the platform. Many other services come with `docker-compose.yml` files that outline the minimum number of other services needed to make a sensible working version of the host service.
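As a sketch of the shape of such a file (the service names, images, ports, and environment variables below are hypothetical; see each repo's actual `docker-compose.yml` for the real wiring):

```yaml
version: '2'
services:
  webapp:
    image: datatogether/webapp:latest
    ports:
      - "3000:3000"
    depends_on:
      - api
  api:
    image: datatogether/api:latest
    environment:
      # all-caps-snake-case variables map onto the service's config struct
      - PORT=8080
    depends_on:
      - postgres
  postgres:
    image: postgres:9.6
```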
Each repository should carry with it its own roadmap, defined by milestones. Check each repo's `README.md` for details.