Train and improve the LSTM model.
The LSTM model is supposed to give the LLM an accurate estimate of the user's strength for a given workout, so the LLM can create an appropriate set based on the LSTM model's prediction of the user's strength that day.
To do this accurately the model has to be trained on the user's historical workouts, but to predict future strength (and progress) well it needs multiple features and proper handling of 0 values, of which there are a lot.
For now we've been doing linear interpolation between workouts to fill in the 0 values on days the user didn't work out, so the LSTM model (currently 1 layer with 50 neurons) doesn't get confused by runs of 0's followed by sudden spikes to, say, 90 kg. This gives a consistent output and prediction, but it's more than likely quite wrong (although usually within 10-15%).
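As a rough illustration of that zero-filling step, here is a minimal sketch assuming the per-exercise history sits in a pandas DataFrame with one row per day and a weight column (the column name and layout are assumptions, not the project's actual schema):

```python
import numpy as np
import pandas as pd

def fill_rest_days(history: pd.DataFrame, col: str = "weight") -> pd.DataFrame:
    """Replace 0 kg entries (rest days) with linearly interpolated values
    so the LSTM never sees a 0 followed by a spike to e.g. 90 kg."""
    filled = history.copy()
    # Treat 0 as "no workout that day", not as an actual lift of 0 kg.
    filled[col] = filled[col].replace(0, np.nan)
    # Linear interpolation between the surrounding workout days.
    filled[col] = filled[col].interpolate(method="linear", limit_direction="both")
    return filled
```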
The result is similar to just using 2nd-order regression.
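For reference, the current topology (1 LSTM layer, 50 neurons, a single input feature) corresponds to roughly the following Keras sketch; the look-back window, optimizer and loss are assumptions, not necessarily what the project uses:

```python
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 14  # assumed look-back window in days

model = keras.Sequential([
    layers.Input(shape=(WINDOW, 1)),  # one feature: the interpolated weight
    layers.LSTM(50),                  # current topology: 1 layer, 50 neurons
    layers.Dense(1),                  # predicted strength for the next day
])
model.compile(optimizer="adam", loss="mse")
```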
Suggestions:
Increase the dataset by training on all exercises.
Include more features, like multiple exercises, total volume per workout, and days since last workout (see the sketch after this list).
Interpolate with a 2nd- or 3rd-degree function instead of linear to get better results (also sketched after this list).
Get more data from online sources, e.g. by training on two or more people's historical workout data.
Increase or change the network topology.
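Two of the suggestions can be sketched quickly: the higher-degree interpolation is a small change to the linear fill above, and "total volume per workout" / "days since last workout" are simple derived columns. The column names (weight, reps, sets, date) are assumed here for illustration only:

```python
import numpy as np
import pandas as pd

def fill_rest_days_poly(history: pd.DataFrame, col: str = "weight",
                        order: int = 2) -> pd.DataFrame:
    """Same zero-filling as before, but with a 2nd- or 3rd-degree polynomial.
    (pandas' polynomial interpolation requires scipy to be installed.)"""
    filled = history.copy()
    filled[col] = filled[col].replace(0, np.nan)
    filled[col] = filled[col].interpolate(method="polynomial", order=order)
    return filled

def add_features(history: pd.DataFrame) -> pd.DataFrame:
    """Derive the extra features suggested above from the raw (pre-interpolation)
    history, which still has 0 weight on rest days and a datetime 'date' column."""
    out = history.copy()
    out["total_volume"] = out["weight"] * out["reps"] * out["sets"]
    # Last actual workout date, carried forward over rest days.
    last_workout = out["date"].where(out["weight"] > 0).ffill()
    out["days_since_last_workout"] = (out["date"] - last_workout).dt.days
    return out
```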