
Add API to update storage config on an instance #27

Closed · wants to merge 12 commits

Conversation

harsh-px

Currently has openstorage vendored in from libopenstorage/openstorage#1200. Will update to master once that PR is merged.

Signed-off-by: Harsh Desai [email protected]

@harsh-px harsh-px self-assigned this Aug 20, 2019
@codecov-io

codecov-io commented Aug 20, 2019

Codecov Report

Merging #27 into master will increase coverage by 0.48%.
The diff coverage is 100%.

@@            Coverage Diff            @@
##           master     #27      +/-   ##
=========================================
+ Coverage    2.83%   3.31%   +0.48%     
=========================================
  Files          10      11       +1     
  Lines        2470    2534      +64     
=========================================
+ Hits           70      84      +14     
- Misses       2396    2446      +50     
  Partials        4       4
Impacted Files Coverage Δ
vsphere/storagemanager/vsphere.go 100% <ø> (ø)
vsphere/vsphere.go 0% <ø> (ø) ⬆️
azure/storagemanager/azure.go 100% <100%> (ø) ⬆️
gce/gce.go 1.23% <0%> (ø) ⬆️
azure/azure.go 0.72% <0%> (ø) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c61f934...e7ee52c. Read the comment docs.

@adityadani
Contributor

Can you separate out the vendor changes into a separate commit?

@harsh-px
Author

> Can you separate out the vendor changes into a separate commit?

Done

@harsh-px harsh-px force-pushed the update branch 2 times, most recently from 21808e8 to ab41a3c Compare August 22, 2019 05:10
@harsh-px
Author

Added support for updating the storage config on Azure and updated the Azure UTs.

Also made a change so that the expected response matches the user-requested IOPS more closely.


@adityadani adityadani left a comment


Only looked at the storage distribution changes.
Will check the vsphere changes shortly.

Resolved review threads: cloud_storage_management.go, pkg/storagedistribution/storagedistribution.go
IOPS: row.IOPS,
}

if requiredDriveSize > 0 {
Contributor

nit: this is just my preference.

We should split this function into two functions.

  1. getStoragePoolForRequestedDriveSize
  2. getStoragePool
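A minimal sketch of the suggested split, using simplified hypothetical types (`Row` here stands in for the real decision-matrix row; the actual signatures in the PR differ):

```go
package main

import "fmt"

// Row is a simplified stand-in for a decision-matrix row.
type Row struct {
	DriveType string
	MinSize   uint64
	MaxSize   uint64
	IOPS      uint64
}

// getStoragePoolForRequestedDriveSize handles the path where the caller
// explicitly requested a drive size: pick the row whose range contains it.
func getStoragePoolForRequestedDriveSize(rows []Row, requiredDriveSize uint64) *Row {
	for i := range rows {
		if requiredDriveSize >= rows[i].MinSize && requiredDriveSize <= rows[i].MaxSize {
			return &rows[i]
		}
	}
	return nil
}

// getStoragePool handles the default path where no drive size was requested;
// here it simply takes the first (highest-priority) candidate row.
func getStoragePool(rows []Row) *Row {
	if len(rows) == 0 {
		return nil
	}
	return &rows[0]
}

func main() {
	rows := []Row{{DriveType: "gp2", MinSize: 100, MaxSize: 500, IOPS: 300}}
	if r := getStoragePoolForRequestedDriveSize(rows, 200); r != nil {
		fmt.Println(r.DriveType)
	}
}
```

Splitting along these lines keeps each function's precondition (drive size given vs. not) explicit instead of branching on `requiredDriveSize > 0` mid-function.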


@adityadani adityadani left a comment


vsphere changes look good.
I still have some questions on the storage distribution changes.

DesiredCapacity uint64 `json:"new_capacity" yaml:"new_capacity"`
// ResizeOperationType is the operation user wants for the storage resize on the node
ResizeOperationType api.SdkStoragePool_ResizeOperationType
// CurrentDriveCount is the current number of drives in the storage pool
Contributor

nit: this is the current number of drives on the node.

Contributor

Maybe we need two counts: one for the StoragePool we are updating and one for the total number of drives on the node.
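The two counts the reviewer suggests could look like this; the struct and field names below are hypothetical illustrations, not the PR's actual API:

```go
package main

import "fmt"

// StoragePoolUpdateRequest illustrates carrying two separate drive counts:
// one scoped to the pool being updated, one scoped to the whole node.
type StoragePoolUpdateRequest struct {
	// CurrentPoolDriveCount is the number of drives in the storage pool
	// being updated.
	CurrentPoolDriveCount uint64
	// TotalNodeDriveCount is the total number of drives across all pools
	// on the node, needed to enforce the per-instance drive limit.
	TotalNodeDriveCount uint64
}

// canAddDrives checks whether n more drives fit under the instance-wide
// drive limit, which must be evaluated against the node total, not the pool.
func (r StoragePoolUpdateRequest) canAddDrives(n, instanceMaxDrives uint64) bool {
	return r.TotalNodeDriveCount+n <= instanceMaxDrives
}

func main() {
	req := StoragePoolUpdateRequest{CurrentPoolDriveCount: 2, TotalNodeDriveCount: 6}
	fmt.Println(req.canAddDrives(2, 8)) // 6 existing + 2 new fits a limit of 8
}
```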

filteredRows = utils.SortByIOPS(filteredRows)
// Filter out rows which have lower IOPS than required
var index int
for index = 0; index < len(filteredRows); index++ {
Contributor

This for loop is not needed, since requiredIOPS is 0 in this else branch.
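A sketch of the simplification the reviewer is pointing at, with simplified hypothetical types: the IOPS filter loop only matters when `requiredIOPS > 0`, so the else branch can drop it entirely:

```go
package main

import (
	"fmt"
	"sort"
)

type row struct {
	IOPS     uint64
	Priority int
}

// filterRows keeps the IOPS filter only on the path where an IOPS
// requirement actually exists; when requiredIOPS is 0 every row would
// trivially pass, so that branch just sorts by priority.
func filterRows(rows []row, requiredIOPS uint64) []row {
	if requiredIOPS > 0 {
		sort.Slice(rows, func(i, j int) bool { return rows[i].IOPS < rows[j].IOPS })
		// Drop rows with lower IOPS than required.
		var kept []row
		for _, r := range rows {
			if r.IOPS >= requiredIOPS {
				kept = append(kept, r)
			}
		}
		return kept
	}
	// requiredIOPS == 0: no IOPS filtering needed.
	sort.Slice(rows, func(i, j int) bool { return rows[i].Priority < rows[j].Priority })
	return rows
}

func main() {
	rows := []row{{IOPS: 100, Priority: 2}, {IOPS: 300, Priority: 1}}
	fmt.Println(len(filterRows(rows, 200))) // only the 300-IOPS row survives
}
```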

filteredRows = utils.SortByPriority(filteredRows)
}

if len(requiredDriveType) > 0 {
Contributor

remove this duplicate if.

}

// additional check so that we are not overprovisioning
for instancesPerZone > 1 && instStorage.DriveCapacityGiB*instancesPerZone > minCapacityPerZone {
Contributor

Do we need this? Can we just check whether the cluster capacity is less than maxCapacityPerZone? Otherwise the outer for loop already reduces instancesPerZone and recalculates.

Also, you should consider DriveCount as well: instStorage.DriveCapacityGiB * instancesPerZone * instStorage.DriveCount
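The reviewer's point about DriveCount can be sketched as follows; function names are hypothetical, but the arithmetic is the one suggested in the comment:

```go
package main

import "fmt"

// provisionedCapacityGiB computes total capacity provisioned in a zone,
// counting all drives per instance, i.e.
// driveCapacityGiB * driveCount * instancesPerZone.
func provisionedCapacityGiB(driveCapacityGiB, driveCount, instancesPerZone uint64) uint64 {
	return driveCapacityGiB * driveCount * instancesPerZone
}

// withinZoneLimit reports whether the provisioned capacity stays under the
// per-zone ceiling; ignoring driveCount here would undercount by that factor.
func withinZoneLimit(driveCapacityGiB, driveCount, instancesPerZone, maxCapacityPerZone uint64) bool {
	return provisionedCapacityGiB(driveCapacityGiB, driveCount, instancesPerZone) <= maxCapacityPerZone
}

func main() {
	// 2 drives of 100 GiB on each of 3 instances = 600 GiB in the zone.
	fmt.Println(provisionedCapacityGiB(100, 2, 3)) // 600
	fmt.Println(withinZoneLimit(100, 2, 3, 500))   // false: overprovisioned
}
```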

}

} else {
logrus.Debugf("required capacity: %d GiB is lower than row's min size: %d GiB",
Contributor

why not try the next candidate row / reduce the instances per zone and retry the same row?
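The fallback the reviewer asks about could be sketched like this (hypothetical simplified types; the real code would also retry with a reduced instancesPerZone):

```go
package main

import "fmt"

type candidateRow struct {
	MinSize uint64
	MaxSize uint64
}

// pickRow tries each candidate row in order instead of giving up when the
// per-instance capacity falls below the first row's minimum size.
func pickRow(rows []candidateRow, capacityPerInstance uint64) (int, bool) {
	for i, r := range rows {
		if capacityPerInstance >= r.MinSize && capacityPerInstance <= r.MaxSize {
			return i, true
		}
	}
	return -1, false
}

func main() {
	rows := []candidateRow{{MinSize: 500, MaxSize: 1000}, {MinSize: 100, MaxSize: 499}}
	i, ok := pickRow(rows, 200) // first row's minimum is too high; second row matches
	fmt.Println(i, ok)
}
```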

remainingCapacity := int(requiredCapacity)
for remainingCapacity > 0 {
remainingCapacity -= int(requiredDriveSize)
instStorage.DriveCount++
Contributor

Check if instStorage.DriveCount < row.InstanceMaxDrives
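The bound check the reviewer asks for would make the loop look roughly like this (a sketch with hypothetical names, not the PR's actual code):

```go
package main

import (
	"errors"
	"fmt"
)

// drivesForCapacity keeps adding drives until the required capacity is
// covered, but refuses to exceed the row's per-instance drive limit
// (the missing instStorage.DriveCount < row.InstanceMaxDrives check).
func drivesForCapacity(requiredCapacity, driveSize, instanceMaxDrives int64) (int64, error) {
	var driveCount int64
	remaining := requiredCapacity
	for remaining > 0 {
		if driveCount >= instanceMaxDrives {
			return 0, errors.New("required capacity exceeds instance max drives")
		}
		remaining -= driveSize
		driveCount++
	}
	return driveCount, nil
}

func main() {
	n, err := drivesForCapacity(1000, 300, 8)
	fmt.Println(n, err) // 4 drives of 300 GiB cover 1000 GiB
}
```

Without the limit check, an unreachable capacity target would silently produce a drive count the cloud provider can never attach.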

// check in matrix if this drive can be resized to newMinCapacityPerDrive
newMinCapacityPerDrive := request.CurrentDriveSize + newMinDeltaCapacityPerDrive
logrus.Debugf("need to resize drive to atleast: %d GiB", newMinCapacityPerDrive)
instStorage, err := instanceStorageForRow(row, newMinCapacityPerDrive, 0)
Contributor

instanceStorageForRow also increments the DriveCount, but this switch case only wants to resize the disk.
Probably we don't need to check the matrix; instead, find the row for the current instStorage and check whether resizing the drives stays within row.MaxSize.
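The reviewer's alternative, sketched with hypothetical names: keep the drive count fixed and only grow the per-drive size, validating against the current row's maximum:

```go
package main

import (
	"errors"
	"fmt"
)

// resizeDrives grows only the per-drive size (DriveCount stays fixed),
// rejecting the resize if it would exceed the current row's MaxSize,
// instead of re-querying the decision matrix via instanceStorageForRow.
func resizeDrives(currentDriveSize, deltaPerDrive, rowMaxSize uint64) (uint64, error) {
	newSize := currentDriveSize + deltaPerDrive
	if newSize > rowMaxSize {
		return 0, errors.New("resized drive would exceed row max size")
	}
	return newSize, nil
}

func main() {
	size, err := resizeDrives(100, 50, 500)
	fmt.Println(size, err) // grows each drive from 100 to 150 GiB
}
```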

newSizeInKiB := int64(newSizeInGiB) * 1024 * 1024
if editDisk.CapacityInKB >= newSizeInKiB {
return uint64(editDisk.CapacityInKB / (1024 * 1024)), cloudops.NewStorageError(cloudops.ErrDiskGreaterOrEqualToExpandSize,
fmt.Sprintf("disk is already has a size: %d KiB greater than or equal "+
Contributor

nit: fix error message.
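One plausible fix for the message, assuming the intent is to report both sizes in GiB (the original wording "disk is already has a size: %d KiB ..." is ungrammatical, and the returned value is divided down to GiB); the helper name here is hypothetical:

```go
package main

import "fmt"

// expandError builds a grammatical error message with both sizes converted
// from KiB to GiB, matching the units the surrounding code returns.
func expandError(capacityInKB, newSizeInKiB int64) error {
	return fmt.Errorf(
		"disk already has a size of %d GiB, which is greater than or equal to the requested size of %d GiB",
		capacityInKB/(1024*1024), newSizeInKiB/(1024*1024))
}

func main() {
	fmt.Println(expandError(512*1024*1024, 256*1024*1024))
}
```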

Harsh Desai added 11 commits September 10, 2019 14:07
Signed-off-by: Harsh Desai <[email protected]>
Signed-off-by: Harsh Desai <[email protected]>
Signed-off-by: Harsh Desai <[email protected]>
Signed-off-by: Harsh Desai <[email protected]>
- Change update storage API to storage pool
- Simplify logic to calculate update config
- misc renames

Signed-off-by: Harsh Desai <[email protected]>
Signed-off-by: Harsh Desai <[email protected]>
Signed-off-by: Harsh Desai <[email protected]>
Signed-off-by: Harsh Desai <[email protected]>
@harsh-px harsh-px closed this Nov 5, 2019