Auto scaling: Work in Progress #36
base: master
Conversation
for _, w := range workers {
    status, err := w.Status()
    if err != nil {
        log.Warning.Println("error getting worker status:", err)
better error message plox
How much more info would you like in this error message? The blank status? or the worker perhaps?
// Collect status of each comp on each worker
compMap := getCurrentInstanceState(scaler.workers)

log.Info.Println("----------------------------")
???
It looks pretty for testing
@@ -136,7 +142,7 @@ func (mgr *ActionManager) HandleDirtyState() error {
    correctCompID := worker.ComponentID{
        User: activeComp.User,
        Repo: activeComp.Repo,
-       Hash: correctHash,
+       Hash: correctHash.hash,
weird naming here
    return errors.New("Something weird happened.")
}

func (mgr *ActionManager) findWorkerToDeployTo(compID worker.ComponentID) (*worker.V9Worker, error) {
Doesn't this function already exist in this file?
This function ensures that the worker it returns doesn't have the component running on it.
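A minimal sketch of that distinction, as I read the thread: the point of this lookup is to skip any worker that already runs the component. The `Worker` struct and `running` set here are assumptions standing in for `worker.V9Worker` and its real state.

```go
package main

import (
	"errors"
	"fmt"
)

// Worker is a stand-in for worker.V9Worker; the running-component set is an
// assumption made for this sketch.
type Worker struct {
	name    string
	running map[string]bool // component IDs currently deployed on this worker
}

// findWorkerToDeployTo returns the first worker that does NOT already run the
// component — which is what distinguishes it from a plain worker lookup.
func findWorkerToDeployTo(workers []*Worker, compID string) (*Worker, error) {
	for _, w := range workers {
		if !w.running[compID] {
			return w, nil
		}
	}
	return nil, errors.New("no worker available without the component")
}

func main() {
	workers := []*Worker{
		{name: "worker_0", running: map[string]bool{"comp-a": true}},
		{name: "worker_1", running: map[string]bool{}},
	}
	w, err := findWorkerToDeployTo(workers, "comp-a")
	if err != nil {
		panic(err)
	}
	fmt.Println(w.name) // worker_1: the only worker not already running comp-a
}
```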
Much closer! I think the only big problem remaining is the ensureNWorkersAreRunning function.
workerIDs := make([]string, len(scaler.workers))
for i := range scaler.workers {
    name := fmt.Sprintf("worker_%d", i)
    id, err := scaler.driver.FindWorkerID(name)
    if err != nil {
        log.Error.Println("error getting worker id:", err)
        continue
    }
    workerIDs[i] = id
}
We now calculate this "worker name" info in a bunch of places. We should pull it out to a helper for sure
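The suggested helper could be as small as this — a single place that maps a worker index to its canonical name instead of repeating the `fmt.Sprintf` at every call site (`workerName` is a hypothetical name for it):

```go
package main

import "fmt"

// workerName maps a worker index to its canonical name. Centralizing this
// means the naming scheme can change in exactly one place.
func workerName(i int) string {
	return fmt.Sprintf("worker_%d", i)
}

func main() {
	for i := 0; i < 3; i++ {
		fmt.Println(workerName(i)) // worker_0, worker_1, worker_2
	}
}
```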
compMap[cID].averageStats.Hits += componentStats.Hits
compMap[cID].averageStats.Hits /= float64(compMap[cID].instanceCount)
Won't this divide the number every time you increment it?
If the component is already in the map we need to update the average. So this is like a rolling average.
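For reference, the standard incremental-mean update avoids the add-then-divide pattern the review is questioning: it adjusts the mean by the new sample's deviation rather than re-dividing the accumulated value on every increment. A sketch under that assumption (`addSample` is a hypothetical name, not code from this PR):

```go
package main

import "fmt"

// addSample updates a running mean incrementally: after the n-th sample,
// mean is the true average of all n samples seen so far.
func addSample(mean float64, n int, x float64) float64 {
	return mean + (x-mean)/float64(n)
}

func main() {
	samples := []float64{10, 20, 30}
	mean := 0.0
	for i, x := range samples {
		mean = addSample(mean, i+1, x)
	}
	fmt.Println(mean) // 20
}
```

With the same three samples, repeatedly doing `sum += x; sum /= n` does not converge to the true mean, which is the bug being flagged.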
    dirtyStateNotifier: dirtyStateNotifier,
}

//Thread for handling hash changes
Missing space at the start of the comment: `//Thread` should be `// Thread`.
Wait do we want a space or not?
        mgr.NotifyComponentStateChanged()
    }
}()
add comment: should we be batching hash updates in one lock?
I was thinking this as well. The question is: do we want a thread for each channel? One thread would be updated periodically by the autoscaler; the other would be changing the hash as needed.
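One alternative to a thread per channel is a single goroutine that owns the state and drains one channel (or a `select` over several), so updates are serialized without a lock. A hedged sketch of that pattern — `applyUpdates` and its channel names are assumptions, not this PR's code:

```go
package main

import "fmt"

// applyUpdates is a single-goroutine event loop: one goroutine owns the
// component-hash state, and all hash updates funnel through its channel,
// so updates are serialized in order without a mutex. An autoscaler tick
// channel could join via a select in the same loop.
func applyUpdates(updates []string) string {
	hashUpdates := make(chan string)
	done := make(chan string)

	go func() {
		current := ""
		for h := range hashUpdates {
			current = h // apply the hash change; owned by this goroutine only
		}
		done <- current
	}()

	for _, h := range updates {
		hashUpdates <- h
	}
	close(hashUpdates)
	return <-done
}

func main() {
	fmt.Println(applyUpdates([]string{"abc123", "def456"})) // def456
}
```

This also answers the batching question implicitly: since one goroutine applies every update, consecutive hash changes are naturally applied back-to-back without interleaving.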