---
title: Deploying Step-by-step
---
The Director goes through the following [steps](https://www.youtube.com/watch?v=ay6GjmiJTPM) when the `bosh deploy` command (or a related command such as `bosh start`, `bosh stop`, or `bosh recreate`) runs:
1. Check if there is a deployment with the name specified by the deployment manifest
- if no, create a deployment
- if yes, lock the deployment so that no other operation can modify it
1. Validate the deployment manifest syntactically and semantically
- if invalid, return an error to the user describing the problem(s)
1. Contact all existing VMs associated with this deployment to determine their network configuration and job configurations
- if the Director cannot contact all VMs, return an error to the user. This results in a `Timed out sending get_state` error during the 'Binding existing deployment' stage. At this point the operator is expected to use the `bosh cck` command to determine why certain VMs are not accessible.
1. Determine requested networking changes to the existing and new VMs
- if network changes cannot be resolved (e.g. currently the Director does not support swapping of static IP reservations), return an error to the user
- if network changes require more IPs than the deployment's networks allow, return an error to the user
1. Delete instance groups that are no longer specified by the deployment manifest
- issue unmount_disk Agent call for attached disks
- issue delete_vm CPI call for each VM
- orphan persistent disks
1. Create compilation worker VMs as specified by the `compilation` section (see the compilation block sketch after this list)
- issue create_vm CPI call
1. Determine the release packages' dependency graph and compile each package on the compilation worker VMs
- issue compile_package Agent call for each package
1. Delete all compilation worker VMs
- issue delete_vm CPI call
1. Create empty VMs for new instance groups
1. Create empty VMs for instance groups whose number of instances increased
1. Update each one of the instance groups:
A subset of instances (within an instance group) is selected to be updated first, based on the update options for the instance group or the global update options; this group of instances is called the canaries (see the `update` block sketch after this list).
Even if only one job or package in the instance group changed, the stop-and-start procedure applies to all of the jobs on the instances in that group. A planned future enhancement is to make this procedure more surgical so that it only affects jobs that have changed.
1. check if the instance previously existed
- if no, select a VM and assign it to be this instance
- if yes, check to see what has changed since the last time it was updated
1. download updated jobs and packages onto the VM
- issue prepare Agent call
1. [run drain and stop scripts to safely stop processes on the VM](job-lifecycle.html#stop)
- issue drain Agent call
- issue stop Agent call
1. take a snapshot of the persistent disks associated with the job instance
- issue snapshot_disk CPI call if the Director has snapshotting enabled
1. check if the instance group still uses the same stemcell (see the instance group sketch after this list)
- if no, create a new VM based on the correct stemcell
- issue delete_vm CPI call
- issue create_vm CPI call
- if yes, do nothing
1. check if the instance group's network configuration changed
- if no, do nothing
- if yes, reconfigure the running VM to match the new configuration
- issue delete_vm CPI call
- issue create_vm CPI call
1. update the DNS A record for this instance with the new IP
1. check if the instance group's persistent disk changed
- if no, do nothing
- if yes, create a new persistent disk with the correct size and type and copy data from the old persistent disk
- issue create_disk CPI call for the new disk
- issue attach_disk CPI call on the new disk
- issue mount_disk Agent call on the new disk
- issue migrate_disk Agent call on the new disk
- orphan the old disk
1. configure the VM to have the new set of jobs
- issue apply Agent call
1. [start processes on the VM and wait up to the amount of time specified by `update_watch_time` or `canary_watch_time`](job-lifecycle.html#start)
- issue start Agent call
- issue get_state Agent call until the job state is running or the wait times out
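
For reference, the compilation step above reads the `compilation` block. Below is a minimal sketch of such a block, assuming current BOSH conventions where it usually lives in the cloud config rather than the deployment manifest; the worker count, AZ, `vm_type`, and network name are illustrative placeholders.

```yaml
# Minimal compilation block sketch (typically part of the cloud config).
# All values below are placeholders, not recommendations.
compilation:
  workers: 4                    # how many compilation worker VMs the Director may create
  reuse_compilation_vms: true   # reuse workers across packages instead of one VM per package
  az: z1
  vm_type: default
  network: default
```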
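The canary selection and watch times mentioned in the update step are driven by the deployment-level `update` block (instance groups can override it). A minimal sketch with illustrative values:

```yaml
# Sketch of an update block; values are examples only.
update:
  canaries: 1                      # number of instances updated first (the canaries)
  canary_watch_time: 30000-60000   # ms to wait for canary jobs to report running
  update_watch_time: 30000-60000   # ms to wait for the remaining instances
  max_in_flight: 2                 # non-canary instances updated in parallel
```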
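The per-instance checks (stemcell, network configuration, persistent disk, jobs) compare the instance's current state against its instance group definition in the manifest. A sketch of such a definition, with placeholder names throughout:

```yaml
# Sketch of an instance group; every name below (zookeeper, z1, default, 10GB, the IPs)
# is a placeholder. Changing the stemcell, networks, persistent disk, or jobs here is
# what triggers the corresponding update steps above.
instance_groups:
- name: zookeeper
  instances: 3
  azs: [z1]
  vm_type: default
  stemcell: default               # alias defined in the top-level stemcells block
  persistent_disk_type: 10GB      # changing this triggers the disk migration steps
  networks:
  - name: default
    static_ips: [10.0.0.5, 10.0.0.6, 10.0.0.7]  # static IP reservations cannot currently be swapped
  jobs:
  - name: zookeeper
    release: zookeeper
```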