Improve model selection during MRsort learning #17

Open · jacquev6 opened this issue Apr 7, 2023 · 1 comment

jacquev6 (Member) commented on Apr 7, 2023

Currently, when learning an MRsort model, a population of intermediate models is trained in parallel. After each training iteration, the best half of the population is kept and the worst half is reinitialized to random states. Admittedly, these random states are drawn from a clever non-uniform distribution, but discarding the worst models entirely still seems like a waste of information.

We could:

  • duplicate some of the best models (a sketch of such a selection step follows this list)
  • use "genetics" to "breed" models
  • borrow other techniques from genetic programming
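
To make these options concrete, here is a minimal sketch of the selection step. It uses a toy stand-in for a candidate model (per-criterion weights, profile thresholds, an accuracy score); all names are hypothetical and none come from the actual code base.

```python
import random
from dataclasses import dataclass

# Toy stand-in for a candidate MRsort model: per-criterion weights, profile
# thresholds, and the accuracy reached so far. Purely illustrative.
@dataclass
class Candidate:
    weights: list
    profiles: list  # one sorted list of thresholds per criterion
    accuracy: float = 0.0

def random_candidate(n_criteria=4, n_boundaries=2):
    # Stand-in for the current (clever, non-uniform) random initializer.
    return Candidate(
        weights=[random.random() for _ in range(n_criteria)],
        profiles=[sorted(random.random() for _ in range(n_boundaries))
                  for _ in range(n_criteria)],
    )

def current_selection(population):
    """Current scheme: keep the best half, re-randomize the worst half."""
    population = sorted(population, key=lambda c: c.accuracy, reverse=True)
    half = len(population) // 2
    return population[:half] + [random_candidate() for _ in range(len(population) - half)]

def duplication_selection(population, sigma=0.05):
    """First alternative: refill the worst half with slightly perturbed copies
    of the survivors instead of fresh random candidates."""
    population = sorted(population, key=lambda c: c.accuracy, reverse=True)
    half = len(population) // 2
    children = []
    for _ in range(len(population) - half):
        parent = random.choice(population[:half])
        children.append(Candidate(
            weights=[max(0.0, w + random.gauss(0, sigma)) for w in parent.weights],
            profiles=[sorted(min(1.0, max(0.0, t + random.gauss(0, sigma))) for t in prof)
                      for prof in parent.profiles],
        ))
    return population[:half] + children
```

The "breeding" option would replace the perturbed copy with a crossover of two survivors, e.g. weights taken from one parent and profiles from the other.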
jacquev6 added the Domain: MRSort and Status: idea labels on Apr 7, 2023
LaurentCabaret (Collaborator) commented

Sure!
Some ideas in return:

  • Is it possible to trace the life of a model (with a unique label corresponding to the current run)?
    If so, it would be interesting to show whether the re-randomized models bring little or no improvement.
  • A quick modification might be to simply duplicate the surviving half onto the worse half. If the theory is right, this should quickly extinguish diversity and "reduce" the rate of progress (compared to an optimal solution, which does not yet exist).
  • A more complicated modification would be to use the profile-improvement part of the heuristic as a mutation point. That is where we have to make meaningful choices based on a random generator (which seems contradictory to me). Taking advantage of these bifurcation points as diversification tools seems interesting: we could make several draws at these points. But it comes back to the previous idea (limiting the randomness).
  • Randomness will thus still be needed, but perhaps at a level between these two extremes (fully random restarts vs. re-drawing only at a few improvement points); see the sketch after this list.
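
A sketch of that middle ground, reusing the hypothetical `Candidate` and the `random` import from the sketch above: clone a survivor and make fresh draws only at a few profile thresholds, standing in for the bifurcation points of the profile-improvement heuristic.

```python
def rebranch(parent, n_redraws=2):
    """Clone a surviving candidate, then re-draw a handful of profile
    thresholds -- localized randomness instead of a full random restart."""
    child = Candidate(
        weights=list(parent.weights),
        profiles=[list(prof) for prof in parent.profiles],
        accuracy=0.0,
    )
    for _ in range(n_redraws):
        i = random.randrange(len(child.profiles))      # pick a criterion
        j = random.randrange(len(child.profiles[i]))   # pick a threshold on it
        child.profiles[i][j] = random.random()         # fresh draw at this bifurcation point
        child.profiles[i].sort()                       # keep thresholds ordered
    return child
```

Making several such children per surviving parent would implement the "several draws at these points" idea while keeping most of the structure learned so far.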

jacquev6 added this to the Later milestone on Nov 24, 2023