
OpenAI ES can beat CMA-ES on MNIST task #5

Open
IrmaVeoll opened this issue Oct 8, 2020 · 1 comment
@IrmaVeoll commented Oct 8, 2020

Instead of training the whole model, let's optimize only a small, randomly selected part of it (e.g. 5% of the connections) at each optimization step:

class OpenES:
    ...
    def ask(self):
        ...
        # Add this line: keep each perturbation entry with probability 0.05,
        # so every candidate perturbs only a random ~5% of the parameters.
        self.epsilon *= np.random.choice([0, 1], size=self.epsilon.shape, p=[0.95, 0.05])
        self.solutions = self.mu.reshape(1, self.num_params) + self.epsilon * self.sigma

        return self.solutions

With this simple modification I got 99%/98.5% accuracy on the training/test sets (see the training log in my fork).

I don't have a good explanation for the phenomenon, but it looks like this method strengthens the exploitation component of the algorithm.

Of course, this can't be used as a general approach; it's rather just an interesting (but probably task-, model-, and hyperparameter-specific) observation.
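
For reference, here is a minimal self-contained toy version of the masked ES step, so the effect of the extra line can be tried in isolation. Everything except the masking line is an ordinary OpenAI-ES update on a toy quadratic; the names (toy_fitness, mask_keep_prob) and the hyperparameters are illustrative and not taken from estool:

import numpy as np

def toy_fitness(params):
    # Toy objective to maximize: negative squared distance from the origin.
    return -np.sum(params ** 2)

num_params = 100
popsize = 64
sigma = 0.1
learning_rate = 0.03
mask_keep_prob = 0.05  # perturb only ~5% of the parameters per candidate

rng = np.random.default_rng(0)
mu = rng.standard_normal(num_params)

for step in range(200):
    # Standard Gaussian perturbations for the whole population ...
    epsilon = rng.standard_normal((popsize, num_params))
    # ... masked so each candidate only perturbs a random ~5% subset,
    # mirroring the one-line change above.
    epsilon *= rng.choice([0.0, 1.0], size=epsilon.shape,
                          p=[1.0 - mask_keep_prob, mask_keep_prob])
    solutions = mu + sigma * epsilon

    rewards = np.array([toy_fitness(s) for s in solutions])
    # Vanilla OpenAI-ES gradient estimate with standardized rewards
    # (estool additionally offers rank-based fitness shaping).
    advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    mu += learning_rate / (popsize * sigma) * (epsilon.T @ advantages)

print("final fitness:", toy_fitness(mu))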

@hardmaru (Owner) commented Oct 9, 2020

Thanks for sharing. Interesting results!

Maybe you can write a blog post or short article about it if you find more use cases for this.
