Merge reference with state #230
Comments
Additionally, the stable-baselines3 example code should be updated as it assumes a very outdated version of GEM.
I am trying to run the stable-baselines3 example code and I am facing the same issue of the observation space being a tuple. Could you kindly provide the edited code snippet? I have been trying to resolve this for many days without success.
@RapheaSid In order to make the current GEM environment compatible with stable-baselines3, you can apply a wrapper that flattens the observation tuple into a single array.
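A minimal sketch of such a wrapper (the class name is arbitrary, and it assumes the GEM observation space is a gym `Tuple` of `Box` sub-spaces; adjust to your GEM version) could look like this:

```python
import gym
import numpy as np


class FlattenObservation(gym.ObservationWrapper):
    """Flattens GEM's (state, reference) observation tuple into one flat array."""

    def __init__(self, env):
        super().__init__(env)
        # Merge the low/high bounds of all sub-spaces into a single Box so that
        # stable-baselines3 sees a plain Box observation space instead of a Tuple.
        low = np.concatenate([space.low.flatten() for space in env.observation_space.spaces])
        high = np.concatenate([space.high.flatten() for space in env.observation_space.spaces])
        self.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float64)

    def observation(self, observation):
        # The observation arrives as a (state, reference) tuple; concatenate it.
        return np.concatenate([np.asarray(part).flatten() for part in observation])
```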
After you have defined your environment, you can easily wrap it into this wrapper:
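For example (the environment id below is only a placeholder; use whatever GEM environment you have defined):

```python
import gym_electric_motor as gem
from stable_baselines3 import PPO

env = gem.make("Finite-SC-PMSM-v0")   # placeholder id, replace with your environment
env = FlattenObservation(env)

# stable-baselines3 now receives a flat Box observation instead of a tuple
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)
```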
I hope that helps. If you have any further questions, feel free to ask. The next GEM version will probably change the state/reference observation tuple to a flat array.
Currently, GEM returns a tuple consisting of (state, reference) for each step and reset. This is incompatible with the standard RL library stable-baselines3 and might be incompatible with other libraries as well. Since, at least for reinforcement learning applications, state and reference always have to be concatenated for the agent anyway, both could be merged into one observation. Additionally, for those interested only in the core simulation, an option could be offered during initialization to disable the reference generation (and thus the reward calculation) to speed up the code.
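For illustration, a rough sketch of what the proposed behaviour could look like (the environment id is a placeholder and the keyword argument is purely hypothetical, not existing GEM API):

```python
import numpy as np
import gym_electric_motor as gem

env = gem.make("Finite-SC-PMSM-v0")   # placeholder id

# Current behaviour (per this issue): reset() returns a (state, reference) tuple.
state, reference = env.reset()

# Proposed behaviour: a single flat observation that already merges both parts.
observation = np.concatenate((np.asarray(state).flatten(), np.asarray(reference).flatten()))

# Proposed initialization switch (hypothetical, not existing GEM API): disable
# reference generation and reward calculation for pure simulation runs.
# env = gem.make("Finite-SC-PMSM-v0", reference_generation=False)
```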