Hi,

I've used this for demoing to my customer, and I think it would be great to show how the azure-pipelines can be used to deploy to higher environments using the recommended "compile once, promote everywhere" approach off of the main branch.

As I am new to MLOps, I'm not sure of the recommended approach for deploying to higher environments. Should the training be part of the "compile once" continuous integration/build phase? I'm referring to this step:

dstoolkit-mlops-base/azure-pipelines/PIPELINE-1-modeling.yml
Line 76 in 322f451

and these pieces:

dstoolkit-mlops-base/azure-pipelines/PIPELINE-1-modeling.yml
Line 104 in 322f451

At a high level, what I'm trying to understand is how the batch inference and training pipelines should fit into this flow.
It's a very pertinent question and, if I understand it correctly, my answer is "it depends".
For teams starting with MLOps, I'd say that training is part of the inner loop (the DEV environment), and only the model artefact and scoring script are moved to the outer loop (TEST, pre-PROD, etc.).
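
For illustration, here is a minimal multi-stage azure-pipelines sketch of that split — not the toolkit's actual pipeline, and the stage names, `src/train.py`/`src/deploy.py` scripts, and `test`/`prod` environments are all hypothetical. The model is trained once off main, published as a pipeline artifact, and that same artifact is promoted to TEST and then PROD without retraining:

```yaml
# Sketch: "compile once, promote everywhere" for a model artifact.
trigger:
  branches:
    include:
    - main

stages:
- stage: Build
  displayName: Train and register model (inner loop)
  jobs:
  - job: TrainModel
    steps:
    - script: python src/train.py --output-dir $(Build.ArtifactStagingDirectory)
      displayName: Train model
    # Publish the trained model (and scoring script) as the single artifact
    # that every later stage will deploy.
    - publish: $(Build.ArtifactStagingDirectory)
      artifact: model

- stage: Test
  displayName: Deploy to TEST (outer loop)
  dependsOn: Build
  jobs:
  - deployment: DeployTest
    environment: test
    strategy:
      runOnce:
        deploy:
          steps:
          # Deployment jobs download the Build stage's pipeline artifacts
          # automatically into $(Pipeline.Workspace) — no retraining here.
          - script: python src/deploy.py --model-dir $(Pipeline.Workspace)/model --env test
            displayName: Deploy model to TEST

- stage: Prod
  displayName: Deploy to PROD (outer loop)
  dependsOn: Test
  jobs:
  - deployment: DeployProd
    environment: prod
    strategy:
      runOnce:
        deploy:
          steps:
          - script: python src/deploy.py --model-dir $(Pipeline.Workspace)/model --env prod
            displayName: Deploy model to PROD
```

Because every deployment stage consumes the one artifact produced in Build, each environment gets exactly the same bits, which is the point of the "compile once" approach.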
For more advanced teams, you might want to consider pushing auto-retraining functionality to PROD as well, which would enable model retraining based on some event trigger (data drift, model drift, etc.).
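
As a simple stand-in for such a trigger, a scheduled retraining run can be declared directly in the pipeline YAML (a sketch only — a genuine data-drift trigger would more likely come from a monitoring signal, e.g. an Azure ML data drift monitor alert, invoking the pipeline via its REST API):

```yaml
# Sketch: weekly scheduled retraining as a stand-in for a drift-based trigger.
schedules:
- cron: "0 2 * * 1"     # Mondays at 02:00 UTC
  displayName: Weekly model retraining
  branches:
    include:
    - main
  always: true          # run even when main has no new commits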