From 1b9ed5fd368c4e139782184f9960cf93800e60ae Mon Sep 17 00:00:00 2001
From: Jethro Gaglione
Date: Sun, 10 Mar 2024 17:33:53 -0500
Subject: [PATCH] HPO.md link fix

---
 docs/tutorials/HPO.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/tutorials/HPO.md b/docs/tutorials/HPO.md
index 1e836cd..db7f6f3 100644
--- a/docs/tutorials/HPO.md
+++ b/docs/tutorials/HPO.md
@@ -8,7 +8,7 @@ Hyperparameter Optimization with Optuna
 ======================
 In place of grid or random search approaches to HPO, we recommend the use of the Optuna framework for Bayesian hyperparameter sampling and trial pruning (in models where intermediate results are available). Optuna can also integrate with MLflow for convinient logging of optimal parameters.
 
-In this tutorial, we take the model and training approach detailed in the [Single-GPU Training (Custom Mlflow)](({% link pytorch_singlGPU_customMLflow.md %}) tutorial to build our HPO on.
+In this tutorial, we take the model and training approach detailed in the [Single-GPU Training (Custom Mlflow)](https://docs.mltf.vu/tutorials/pytorch_singlGPU_customMLflow.html) tutorial to build our HPO on.
 
 First, we install the Optuna package:
 ```bash