Example agents don't learn #56
The insane fluctuations here are probably from the fitness function in this scenario.
I tried this crate on a simple Doodle Jump game, but it appears to make little to no progress over time. Each generation is a bit slow since for each game I play around 400 frames, but I tried 1000 generations of 100 genomes and got really bad results. I'm new to NEAT and neural networks in general, so I might be doing something wrong, but I followed the basic.rs example somewhat closely.
I think a lot of the reason why this isn't learning is the default topology and/or some issue with mutation. NEAT is supposed to take a long time to evolve; maybe try with a bigger population to see if it improves? Edit: it would also be helpful to trace average fitness and similar statistics. I have a plotters example in the …
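For reference, a minimal sketch of the kind of per-generation statistics being suggested; `generation_stats` and `fitnesses` are hypothetical names, assuming the fitness of each genome has already been collected into a slice (the actual plotting with plotters is omitted):

```rust
/// Hypothetical per-generation statistics; `fitnesses` is assumed to hold
/// one fitness value per genome in the current generation.
fn generation_stats(fitnesses: &[f32]) -> (f32, f32, f32) {
    let best = fitnesses.iter().cloned().fold(f32::MIN, f32::max);
    let worst = fitnesses.iter().cloned().fold(f32::MAX, f32::min);
    let average = fitnesses.iter().sum::<f32>() / fitnesses.len() as f32;
    (best, average, worst)
}

fn main() {
    let fitnesses = vec![1_200.0_f32, 250.0, 0.0, 900.0];
    let (best, average, worst) = generation_stats(&fitnesses);
    println!("best {best:.0}, average {average:.0}, worst {worst:.0}");
}
```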
I'm already doing that with

```rust
let (sorted_genome, sort_duration) = time::timeit(|| sort_genomes(&sim));
// sorted_genome is Vec<(&agent::DNA, f32)>, the f32 being the fitness
println!(
    "Gen {} done, took {}\nResults: {:.0}/{:.0}/{:.0}. sorted in {}.",
    i + 1,
    time::format(stopwatch.read(), 1),
    sorted_genome.first().unwrap().1,
    sorted_genome.get(NB_GENOME_PER_GEN / 2).unwrap().1,
    sorted_genome.last().unwrap().1,
    time::format(sort_duration, 1)
);
```

The results are that the best fitness fluctuates between 900 and 1.5k without noticeable progress, the median fluctuates between 200 and 300, and the worst is always 0. Any score less than 2/4k is really bad, as you can just launch the game, not move, and get 1/2k. I'll still give …
```rust
pub const NB_GAMES: usize = 3;
pub const GAME_TIME_S: usize = 20; // Number of seconds we let the AI play the game before registering their score
pub const GAME_DT: f64 = 0.05; // 0.0166
pub const NB_GENERATIONS: usize = 100;
pub const NB_GENOME_PER_GEN: usize = 1000;

neat::NeuralNetworkTopology::new(0.1, 3, rng)
```
@Bowarc One thing I forgot to mention is that this is an extremely raw NEAT implementation with no genetic optimizers present (they will eventually be implemented; I haven't gotten around to it). It can take millions of generations before any good mutations happen. For better results you can use something like very high mutation rate/passes settings that decay as generations increase, similar to how epsilon decay works in a traditional neural net. The mutation rate and passes fields will almost certainly be changed/deprecated in the future in favor of a better API.
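For illustration, a minimal sketch of what such a decaying schedule could look like; the function name and the decay constants here are assumptions for the example, not part of this crate's API:

```rust
/// Hypothetical exponential decay of the mutation rate across generations,
/// analogous to epsilon decay in a traditional setup: start aggressive,
/// then settle toward a small floor once useful structure has evolved.
fn decayed_mutation_rate(initial: f32, minimum: f32, decay: f32, generation: usize) -> f32 {
    (initial * decay.powi(generation as i32)).max(minimum)
}

fn main() {
    for generation in [0usize, 100, 1_000, 5_000] {
        let rate = decayed_mutation_rate(0.25, 0.01, 0.999, generation);
        println!("generation {generation}: mutation rate {rate:.4}");
    }
}
```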
A high mutation rate plus a large population increases the chances of deadlocks. One of the issues was also my poor understanding of NN inputs; by fixing that I managed to get solid results with

```rust
pub const NB_GAMES: usize = 3;
pub const GAME_TIME_S: usize = 20; // Number of seconds we let the AI play the game before registering their score
pub const GAME_FPS: usize = 20; // 60
pub const GAME_DELTA_TIME: f64 = 1. / GAME_FPS as f64;
pub const NB_GENERATIONS: usize = 10_000;
pub const NB_GENOME_PER_GEN: usize = 5_000;
pub const MUTATION_RATE: f32 = 0.01;
pub const MUTATION_PASSES: usize = 3;
```

In about 5/6k generations the results are good (the AI can somewhat play indefinitely). If it can help anyone: the issue was that I wanted to map x/y values between 0..1, but as it's a scrolling-type game, using screen positions without subtracting game.scroll makes the values spin out of control / go negative.
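A minimal sketch of the kind of normalization being described; the screen-size, player-position, and scroll-offset names are illustrative assumptions, not the actual project code:

```rust
/// Hypothetical input normalization for a vertically scrolling game:
/// positions are taken relative to the current scroll offset so that
/// inputs stay in roughly 0..1 instead of growing without bound.
fn normalized_inputs(player_x: f64, player_y: f64, scroll_y: f64, screen_w: f64, screen_h: f64) -> [f64; 2] {
    let x = (player_x / screen_w).clamp(0.0, 1.0);
    // Subtract the scroll offset before dividing, otherwise y keeps
    // increasing (or goes negative) as the camera moves up.
    let y = ((player_y - scroll_y) / screen_h).clamp(0.0, 1.0);
    [x, y]
}

fn main() {
    // The player is 5_000 units up in world space, but only 150 units
    // above the bottom of the current camera view.
    let inputs = normalized_inputs(200.0, 5_000.0, 4_850.0, 400.0, 600.0);
    println!("{inputs:?}"); // [0.5, 0.25]
}
```

Keeping the inputs in a bounded range like this tends to be much easier for a small evolved network to learn from than unbounded world coordinates.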
Unrelated to this crate, but typically delta time is not supposed to be constant. Normally it's measured each frame and then multiplied with things like movement and animation speeds to keep the game smooth regardless of processing power.
Yes, I know, but if I want to fast-forward, I must not compute dt based on real time. It's named dt because in games it's often called that; in this case it's more like a fixed timestep.
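A minimal sketch of that fixed-timestep idea, using a hypothetical `Game::update` to stand in for the real game loop: the simulation advances by a constant dt per frame, so a 20-second game can be fast-forwarded as quickly as the CPU allows rather than played out in real time.

```rust
// Hypothetical game state with an update step; stands in for the real game.
struct Game { score: f64 }
impl Game {
    fn update(&mut self, dt: f64) {
        // Advance physics, spawning, scoring, etc. by `dt` simulated seconds.
        self.score += dt;
    }
}

fn main() {
    const GAME_TIME_S: f64 = 20.0;
    const GAME_FPS: f64 = 20.0;
    const GAME_DELTA_TIME: f64 = 1.0 / GAME_FPS;

    let mut game = Game { score: 0.0 };
    // Fixed timestep: every frame advances the same simulated duration,
    // so the whole game runs as fast as this loop can iterate.
    let frames = (GAME_TIME_S * GAME_FPS) as usize;
    for _ in 0..frames {
        game.update(GAME_DELTA_TIME);
    }
    println!("simulated {frames} frames, score {:.1}", game.score);
}
```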
Will keep this issue open even though I got rid of the old examples during #80, because it will probably resurface.
Following a comment I made in #50, the examples' fitness graphs look like this:
These do not show any sort of upward trend at all; they just look like random data.
A first step would be to run the examples for a lot more generations to see whether this is just statistical noise from not having enough generations.
If that is not the case, it is likely either an issue with the fitness function or a much larger core issue that needs solving.