
ddpg pmsm dq current control array dimension problem #251

Closed

berkhizarci opened this issue Sep 13, 2024 · 2 comments
When I tried to run the ddpg_pmsm_dq_current_control example, I got the following error:

ValueError: Error when checking input: expected flatten_input to have shape (1, 7) but got array with shape (1, 2)

I fixed the mismatched module version problems, but I could not resolve this error. What do I need to do to fix it?

bhk11 (Member) commented Sep 16, 2024

Hello berkhizarci,
Unfortunately, the examples in the latest release have not been updated to the new gymnasium interface of the gym-electric-motor package.
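
For context, the difference between the two interfaces boils down to the reset and step signatures; a generic sketch (not gym-electric-motor specific code, function names are only for illustration):

def classic_gym_rollout_step(env, action):
    # classic Gym interface (used by e.g. gym-electric-motor 1.1.0 and expected by keras-rl):
    # reset returns only the observation, step returns four values
    obs = env.reset()
    obs, reward, done, info = env.step(action)
    return obs, reward, done, info


def gymnasium_rollout_step(env, action):
    # gymnasium interface (used by the current gym-electric-motor):
    # reset also returns an info dict, step returns five values
    obs, info = env.reset()
    obs, reward, terminated, truncated, info = env.step(action)
    return obs, reward, terminated or truncated, info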

I tried updating the interface in the wrapper (see the code below), but that ended in another error for me. Maybe this change will help you:


import numpy as np
from gymnasium import Wrapper
from gymnasium.spaces import Box, Tuple


class AppendLastActionWrapper(Wrapper):

    def __init__(self, environment):
        super().__init__(environment)
        # append the action space dimensions to the observation space dimensions
        self.observation_space = Tuple((Box(
            np.concatenate((environment.observation_space[0].low, environment.action_space.low)),
            np.concatenate((environment.observation_space[0].high, environment.action_space.high))
        ), environment.observation_space[1]))

    def step(self, action):
        # the gymnasium interface returns five values instead of the classic four
        (state, ref), rew, term, trun, info = self.env.step(action)

        # extend the output state by the selected action
        state = np.concatenate((state, action))

        # keras-rl expects the classic four-value return, so the terminated and
        # truncated flags are merged into a single done flag
        return (state, ref), rew, term or trun, info

    def reset(self, **kwargs):
        # the gymnasium reset returns the observation together with an info dict
        (state, ref), info = self.env.reset(**kwargs)

        # extend the output state by zeros after reset
        # no action can be appended yet, but the dimension must fit
        state = np.concatenate((state, np.zeros(self.env.action_space.shape)))

        return state, ref
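
A minimal usage sketch of the wrapper; the environment id "Cont-CC-PMSM-v0" is an assumption on my part and may differ from the id used in the example script:

import gym_electric_motor as gem

# assumed environment id, only for illustration
env = AppendLastActionWrapper(gem.make("Cont-CC-PMSM-v0"))

obs = env.reset()              # classic-style reset: observation only
print(env.observation_space)   # the state box now includes the action dimensions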

Another workaround could be to install an older package version, e.g. pip install gym-electric-motor==1.1.0, which still uses the classic Gym interface.

The reason I am keeping my effort on this problem low is that the RL library used, keras-rl, has not been maintained for a long time and therefore does not support the latest gym/gymnasium interface that gym-electric-motor uses. This makes the example hard to maintain. In addition, we are currently working on a new version with working examples, in which this example will be replaced by a Stable-Baselines3 DDPG.
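
A rough sketch of what such a Stable-Baselines3 setup could look like, assuming the environment id "Cont-CC-PMSM-v0" and that the Tuple observation can be flattened with gymnasium's FlattenObservation wrapper; the released example may be structured differently:

import gym_electric_motor as gem
from gymnasium.wrappers import FlattenObservation
from stable_baselines3 import DDPG

# assumed environment id; flatten the (state, reference) Tuple observation
# into a single vector so that SB3's MlpPolicy can consume it
env = FlattenObservation(gem.make("Cont-CC-PMSM-v0"))

# placeholder agent configuration, not the final example settings
model = DDPG("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)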

bhk11 self-assigned this Sep 16, 2024
bhk11 (Member) commented Nov 7, 2024

Examples updated in the nightly with pull request #253.

bhk11 closed this as completed Nov 7, 2024