3.3 Debug guide
This guide describes how to set up the debug environment for this project and how to debug multiple components at the same time. Before you begin, you should have installed all tools recommended in the Developer installation guide.
Open Visual Studio and select the BlazorBoilerplate.sln file via "Open a project or solution". Then select the BlazorBoilerplate.Server project in the top bar next to the debug buttons:
Info: If this is the first time the application is started, it might take a while before the web page is displayed, because Blazor is setting up the local ML SQL database.
After a while a browser with the login page will open automatically:
You can log in using the default user:
Name: user
Password: user123
Info: You also need to start the Controller project and a MongoDB server. Without the Controller, the Frontend cannot retrieve any information and several errors will occur.
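If no local MongoDB installation is available, one option is to run it in a Docker container. This is only a sketch; the container name and the default port 27017 are illustrative values, not project requirements:

```powershell
# Start a local MongoDB instance in the background (example values)
docker run -d --name mongodb -p 27017:27017 mongo
```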
Each Backend component can be opened individually in Visual Studio Code using the "Open Folder" option.
Open Visual Studio Code and open the controller folder.
Next, open a new terminal via the menu Terminal -> New Terminal. This opens a new terminal at the bottom of VS Code. Check the path in the prompt to make sure you are inside the controller folder:
Execute the command: python -m venv .venv. This creates a new virtual environment inside the controller folder, which will be used to install dependencies and debug the controller. You should now see a new folder named .venv:
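For reference, the sequence in the terminal looks roughly like this (assuming the repository was cloned into a folder named MetaAutoML; adjust the path to your local setup):

```powershell
cd MetaAutoML\controller   # make sure the prompt is inside the controller folder
python -m venv .venv       # create the virtual environment
dir                        # the new .venv folder should now be listed
```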
VS Code will display a popup asking whether you want to use the newly created venv for debugging; press Yes. The venv is now active, which you can verify by looking at the bottom right of VS Code:
If nothing Python-related is displayed there, first open any Python file in VS Code. If you only see your current Python version without a venv annotation like in the screenshot, click the Python version number and VS Code will prompt you to choose between all available environments:
Now select the one with the venv path.
Finally, open a new terminal. The new terminal will use the newly created venv. If a terminal runs inside the venv, the venv name is displayed as a prefix before the prompt:
Now all required packages can be installed by executing the command: pip install -r .\requirements.txt
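If the new terminal does not pick up the venv automatically, it can also be activated by hand before installing. This is a sketch assuming a Windows terminal opened in the controller folder:

```powershell
.\.venv\Scripts\activate            # activate the venv manually; the prompt should now show (.venv)
pip install -r .\requirements.txt   # install all controller dependencies into the venv
```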
This process might take a while. If you receive an error message containing "check the permissions", reopen VS Code with Administrator privileges and re-run the dependency installation. (Don't forget to check that the terminal is using the venv!)
After switching to the debug menu in VS Code, the controller can be debugged:
Info: Every Backend component has a preconfigured debug profile for VS Code. When you open the debug menu in VS Code, the component name is recognized automatically.
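The profiles are part of the repository, typically in the .vscode/launch.json file of the opened folder, so you normally do not need to create one yourself. Purely as an illustration, a minimal VS Code launch profile for a Python component looks roughly like the sketch below; the name and program path are placeholders, not the project's actual values:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug Controller (example)",
      "type": "python",
      "request": "launch",
      "program": "${workspaceFolder}/Controller.py",
      "console": "integratedTerminal"
    }
  ]
}
```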
Open Visual Studio Code and open the folder of the specific AutoML Adapter you want to set up or debug. The implementation of each AutoML Adapter can be found within the "adapters" folder.
Example: If you want to set up the AutoCVE Adapter, open the folder "/adapters/AutoCVE".
Follow the instructions for setting up a virtual environment from the controller component above, but make sure the terminal is inside the respective AutoML Adapter folder instead of the controller folder.
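For the AutoCVE example, the terminal steps are the same as for the controller, only in a different folder. This sketch assumes the adapter ships its own requirements.txt like the controller does; adjust the path to your clone location:

```powershell
cd MetaAutoML\adapters\AutoCVE      # open the adapter folder instead of the controller folder
python -m venv .venv                # create the adapter's own virtual environment
pip install -r .\requirements.txt   # install the adapter's dependencies into that venv
```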
Within each adapter, the training-path variable inside the development config (config_development.json) must be updated. It must point to the local system path where the MetaAutoML folder is located.
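As a minimal sketch, such an entry could look like the following; the exact key name and surrounding structure depend on the adapter's config_development.json, so only adjust the existing value rather than copying this verbatim:

```json
{
  "training_path": "C:/path/to/MetaAutoML"
}
```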
Follow the instructions for debugging the controller from above.
The entire OMA-ML project can be debugged by running the Frontend in Visual Studio and every Backend component in a separate instance of Visual Studio Code opened in its respective folder. Note that debugging all AutoML Adapters at the same time may require a lot of resources and is therefore only recommended on powerful multicore systems.
If this is not possible, it is also viable to run only one AutoML solution. In that case the Frontend will display the other AutoML Adapters as failed, but the running AutoML will not be affected by this.