Why does Tekton Pipelines have a folder called `tekton`? Cuz we think it would be cool if the `tekton` folder were the place to look for CI/CD logic in most repos!

We dogfood our project by using Tekton Pipelines to build, test and release Tekton Pipelines! This directory contains the `Tasks` and `Pipelines` that we use.
Official releases are performed from the `dogfooding` cluster in the `tekton-releases` GCP project. This cluster already has the correct version of Tekton installed.
To make a new release:

- (Optionally) Apply the latest versions of the Tasks + Pipelines
- (If you haven't already) Install `tkn`
- Run the Pipeline
- Create the new tag and release in GitHub (see one way of doing that here).
- Add an entry to the README at `HEAD` for docs and examples for the new release (README.md#read-the-docs).
- Update the catalog repo test infrastructure to use the new release by updating the `RELEASE_YAML` link in `e2e-tests.sh`.
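That last step usually amounts to changing one URL in the catalog's `e2e-tests.sh`. A sketch of what the updated line might look like, assuming the variable name and release-bucket URL pattern match the ones used elsewhere in this doc (verify against the actual script):

```shell
# Hypothetical: the RELEASE_YAML line in the catalog repo's e2e-tests.sh,
# pointed at the newly published release yaml in the release bucket.
TEKTON_VERSION=vX.Y.Z  # replace with the new version
RELEASE_YAML="https://storage.googleapis.com/tekton-releases/previous/${TEKTON_VERSION}/release.yaml"
echo "${RELEASE_YAML}"
```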
To use `tkn` to run the `publish-tekton-pipelines` `Task` and create a release:

- Pick the revision you want to release and update the `resources.yaml` file to add a `PipelineResource` for it, e.g.:

  ```yaml
  apiVersion: tekton.dev/v1alpha1
  kind: PipelineResource
  metadata:
    name: tekton-pipelines-vX-Y-Z
  spec:
    type: git
    params:
      - name: url
        value: https://github.com/tektoncd/pipeline
      - name: revision
        value: revision-for-vX.Y.Z-invalid-tags-boouuhhh # REPLACE with the commit you'd like to build from (not a tag, since that's not created yet)
  ```
- To run against your own infrastructure (if you are running in the production cluster the default account should already have these creds, this is just a bonus - plus `release-right-meow` might already exist in the cluster!), also setup the required credentials for the `release-right-meow` service account, either:
  - For the GCP service account `release-right-meow@tekton-releases.iam.gserviceaccount.com`, which has the proper authorization to release the images and yamls in our `tekton-releases` GCP project
  - For your own GCP service account if running against your own infrastructure
- Connect to the production cluster:

  ```shell
  gcloud container clusters get-credentials dogfooding --zone us-central1-a --project tekton-releases
  ```
- Run the `release-pipeline` (assuming you are using the production cluster and all the Tasks and Pipelines already exist):

  ```shell
  # Create the resources - i.e. set the revision that you want to build from
  kubectl apply -f tekton/resources.yaml

  # Change the environment variable to the version you would like to use.
  export TEKTON_VERSION=vX.Y.Z
  export TEKTON_RELEASE_GIT_RESOURCE=tekton-pipelines-git-vX-Y-Z
  export TEKTON_BUCKET_RESOURCE=pipeline-tekton-bucket
  export IMAGE_REGISTRY=gcr.io/tekton-releases

  # Double-check the git revision that is going to be used for the release:
  kubectl get pipelineresource/$TEKTON_RELEASE_GIT_RESOURCE -o=jsonpath="{'Target Revision: '}{.spec.params[?(@.name == 'revision')].value}{'\n'}"

  # Execute the release pipeline.
  # By default this will tag the release as Pipelines' latest. If you would like to prevent
  # this from happening add --param=releaseAsLatest="false"
  tkn pipeline start \
    --param=versionTag=${TEKTON_VERSION} \
    --param=imageRegistry=${IMAGE_REGISTRY} \
    --serviceaccount=release-right-meow \
    --resource=source-repo=${TEKTON_RELEASE_GIT_RESOURCE} \
    --resource=bucket=${TEKTON_BUCKET_RESOURCE} \
    --resource=builtBaseImage=base-image \
    --resource=builtEntrypointImage=entrypoint-image \
    --resource=builtKubeconfigWriterImage=kubeconfigwriter-image \
    --resource=builtCredsInitImage=creds-init-image \
    --resource=builtGitInitImage=git-init-image \
    --resource=builtControllerImage=controller-image \
    --resource=builtWebhookImage=webhook-image \
    --resource=builtDigestExporterImage=digest-exporter-image \
    --resource=builtPullRequestInitImage=pull-request-init-image \
    --resource=builtGcsFetcherImage=gcs-fetcher-image \
    --resource=notification=post-release-trigger \
    pipeline-release
  ```
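Once the run has started, you can follow its progress with `tkn` as well (this is a convenience sketch, not part of the release procedure; flags shown are standard `tkn` options):

```shell
# List recent runs of the release pipeline, then stream the logs
# of the most recent PipelineRun until it completes.
tkn pipelinerun list
tkn pipelinerun logs --last -f
```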
TODO(#569): Normally we'd use the image `PipelineResources` to control which image registry the images are pushed to. However, since we have so many images, all going to the same registry, we are cheating and using a parameter for the image registry instead.
Sometimes we'll find bugs that we want to backport fixes for into previous releases, or discover things that were missing from a release that are required by upstream consumers of a project. In that case we'll make a patch release. To make one:

- Create a milestone to track issues and pull requests to include in the release, e.g. v0.12.1.
- The issues should, when possible, first be fixed and merged into master. As they are fixed, add the issues to the milestone and tag them with `needs-cherry-pick`.
- Create a branch for the release named `release-<version number>x`, e.g. `release-v0.13.0x`, and push it to the repo https://github.com/tektoncd/pipeline (you may need help from an OWNER with permission to push).
- Use `git cherry-pick` to cherry pick the fixes from master into the release branch you have created (use `-x` to include the original commit information).
- Check that you have cherry picked all issues in the milestone, and look for any pull requests you may have missed with `needs-cherry-pick`.
- Remove `needs-cherry-pick` from all issues that have been cherry picked.
- Create an official release for the patch, with the patch version incremented.
- Close the milestone.
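The `-x` flag mentioned above records where each cherry-picked commit came from in its message. A self-contained demonstration in a throwaway repository (branch and file names are made up for illustration, not the real release flow):

```shell
set -e
# Build a scratch repo: cut a "release branch", then land a fix on the default branch
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email ci@example.com && git config user.name ci
echo base > f.txt && git add f.txt && git commit -qm "base"
git branch release-v0.13.0x            # release branch cut at "base"
echo fix >> f.txt && git commit -qam "fix: important bug"
FIX_SHA=$(git rev-parse HEAD)

# Cherry-pick the fix onto the release branch; -x appends the origin commit SHA
git checkout -q release-v0.13.0x
git cherry-pick -x "$FIX_SHA" > /dev/null
git log -1 --format=%B                 # message ends with "(cherry picked from commit ...)"
```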
The nightly release pipeline is triggered nightly by Tekton.
This Pipeline uses:
To start from scratch and use these Pipelines and Tasks:
- Install Tekton
- Setup the Tasks and Pipelines
- Create the required service account + secrets
- Setup post-processing
```shell
# If this is your first time installing Tekton in the cluster you might need to give yourself permission to do so
kubectl create clusterrolebinding cluster-admin-binding-someusername \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value core/account)

# Example, Tekton v0.9.1
export TEKTON_VERSION=0.9.1
kubectl apply --filename https://storage.googleapis.com/tekton-releases/previous/v${TEKTON_VERSION}/release.yaml
```
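Before proceeding you can wait for the Tekton control plane to come up (`tekton-pipelines` is the namespace used by the release yaml; the `kubectl wait` flags are standard):

```shell
# Wait for the Tekton controller and webhook deployments to become available,
# then list the pods to confirm everything is running.
kubectl -n tekton-pipelines wait --for=condition=available deployment --all --timeout=120s
kubectl -n tekton-pipelines get pods
```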
Add all the `Tasks` to the cluster, including the `golang` Tasks from the `tektoncd/catalog`, and the release Tasks from `tektoncd/plumbing`.

Use a version of the `tektoncd/catalog` tasks that is compatible with the version of Tekton being released, usually `master`.
Install the Tasks from plumbing too:

```shell
# Apply the Tasks we are using from the catalog
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/golang/build.yaml
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/golang/tests.yaml
kubectl apply -f https://raw.githubusercontent.com/tektoncd/plumbing/master/tekton/resources/release/
```
Apply the tasks from the `pipeline` repo:

```shell
# Apply the Tasks and Pipelines we use from this repo
kubectl apply -f tekton/publish.yaml
kubectl apply -f tekton/release-pipeline.yaml
kubectl apply -f tekton/release-pipeline-nightly.yaml

# Apply the resources - note that when manually releasing you'll re-apply these
kubectl apply -f tekton/resources.yaml
```
The `Tasks` and `Pipelines` from this repo are:

- `publish.yaml` - This `Task` uses `kaniko` to build and publish base images, and uses `ko` to build all of the container images we release and generate the `release.yaml`.
- `release-pipeline.yaml` - This `Pipeline` uses the `golang` `Task`s from the `tektoncd/catalog` and `publish.yaml`'s `Task`.
In order to release, these Pipelines use the `release-right-meow` service account, which uses `release-secret` and has Storage Admin access to `tekton-releases` and `tekton-releases-nightly`.
After creating these service accounts in GCP, the kubernetes service account and secret were created with:

```shell
KEY_FILE=release.json
GENERIC_SECRET=release-secret
ACCOUNT=release-right-meow
# Connected to the `prow` cluster in the `tekton-releases` GCP project
GCP_ACCOUNT="$ACCOUNT@tekton-releases.iam.gserviceaccount.com"

# 1. Create a private key for the service account
gcloud iam service-accounts keys create $KEY_FILE --iam-account $GCP_ACCOUNT

# 2. Create kubernetes secret, which we will use via a service account and directly mounting
kubectl create secret generic $GENERIC_SECRET --from-file=./$KEY_FILE

# 3. Add the docker secret to the service account
kubectl apply -f tekton/account.yaml
kubectl patch serviceaccount $ACCOUNT \
  -p "{\"secrets\": [{\"name\": \"$GENERIC_SECRET\"}]}"
```
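To confirm the wiring took effect, you can inspect the service account afterwards (a verification sketch using the names defined above; the jsonpath expression is standard `kubectl`):

```shell
# Print the names of the secrets referenced by the release service account;
# the output should include release-secret.
kubectl get serviceaccount release-right-meow -o jsonpath='{.secrets[*].name}'
```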
Post-processing services perform automated tasks after a release. Today the only service available collects the `PipelineRun` logs and uploads them to the release bucket. To use release post-processing services, the `PipelineResource` in `resources.yaml` must be configured with a valid `targetURI` in the cloud event `PipelineResource` named `post-release-trigger`:
```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: post-release-trigger
spec:
  type: cloudEvent
  params:
    - name: targetURI
      value: http://el-pipeline-release-post-processing.default.svc.cluster.local:8080 # This has to be changed to a valid URL
```
The `targetURI` should point to the event listener configured in the cluster. The example above is configured with the correct value for the `dogfooding` cluster, using the event listener `pipeline-release-post-processing`.
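If you need to check that the listener is reachable, a hypothetical smoke test from a pod inside the cluster (the payload here is a placeholder; the real cloud event is sent by the PipelineRun, and the listener may reject a payload that doesn't match its trigger bindings):

```shell
# POST a dummy JSON payload to the in-cluster event listener service
curl -s -X POST -H "Content-Type: application/json" \
  -d '{"ping": "release-post-processing"}' \
  http://el-pipeline-release-post-processing.default.svc.cluster.local:8080
```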
Some supporting scripts have been written using Python 3:

- `koparse` - Contains logic for parsing `release.yaml` files created by `ko`
In order to run `ko`, and to be able to use a cluster's default credentials, we need an image which contains:

- `ko`
- `golang` - Required by `ko` to build
- `gcloud` - Required to auth with default namespace credentials

The image which we use for this is built from tekton/ko/Dockerfile.

go-containerregistry#383 is about publishing a `ko` image, which hopefully we'll be able to move to.