Visualising MENACE's learning

In tonight's Royal Institution Christmas lecture, Hannah Fry and Matt Parker demonstrated how machine learning works using MENACE.
The copy of MENACE that appeared in the lecture was built and trained by me. During the training, I logged all the moves made by MENACE and the humans playing against them, and using this data I have created some visualisations of the machine's learning.
First up, here's a visualisation of the likelihood of MENACE choosing different moves as they play games. The thickness of each arrow represents the number of beads in the box corresponding to that move, so thicker arrows represent more likely moves.
The likelihood that MENACE will play each move.
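MENACE picks a move by drawing a bead at random from the matchbox matching the current position, so each move's probability is proportional to its bead count. Here's a minimal sketch of that draw in Python; the box contents below are hypothetical, not MENACE's real starting numbers.

```python
import random

# Hypothetical matchbox contents: bead colour -> count. In the real MENACE
# each colour corresponds to a square on the noughts-and-crosses board.
first_box = {"green": 12, "red": 4, "blue": 2}

def choose_move(box):
    """Pick a move with probability proportional to its bead count,
    as if drawing a single bead at random from the matchbox."""
    moves = list(box)
    weights = [box[m] for m in moves]
    return random.choices(moves, weights=weights)[0]
```

With the counts above, green would be chosen about two thirds of the time, mirroring the thick green arrow in the diagram.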
There are an awful lot of arrows in this diagram, so it's clearer if we just visualise a few boxes. This animation shows how the number of beads in the first box changes over time.
The beads in the first box.
You can see that MENACE learnt that they should always play in the centre first: the box ends up with a large number of green beads and almost none of the other colours. The following animations show the number of beads changing in some other boxes.
MENACE learns that the top left is a good move.
MENACE learns that the middle right is a good move.
MENACE is very likely to draw from this position so learns that almost all the possible moves are good moves.
The numbers of beads in these boxes change less often, as the boxes are not used in every game: they are only used when the game reaches the positions shown on them.
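The bead counts change after each game according to MENACE's reinforcement scheme. The sketch below uses the commonly described rewards (three beads added to each used box after a win, one after a draw, and the played bead confiscated after a loss); treat the exact numbers as an assumption, since this post doesn't specify the incentives used in the lecture.

```python
def update_boxes(moves_played, result):
    """Reinforce MENACE's boxes after a game.

    moves_played: list of (box, colour) pairs recording which bead was
    drawn from which box during the game.
    result: "win", "draw" or "loss".
    """
    delta = {"win": 3, "draw": 1, "loss": -1}[result]
    for box, colour in moves_played:
        # Clamp at zero so a confiscated bead can't go negative.
        box[colour] = max(box.get(colour, 0) + delta, 0)
```

Over many games, moves that tend to lead to wins accumulate beads and become more likely, which is exactly the shift towards green visible in the first-box animation.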
We can visualise MENACE's learning progress by plotting how the number of beads in the first box changes over time.
The number of beads in MENACE's first box.
Alternatively, we could plot how the number of wins, losses and draws changes over time or view this as an animated bar chart.
The number of games MENACE wins, loses and draws.
The number of games MENACE has won, lost and drawn.
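Producing that plot from the game log is a one-liner with a running sum. A minimal sketch, assuming the log is simply a list of results in the order the games were played (the example data below is made up):

```python
from itertools import accumulate

# Hypothetical game log: one result per game, in the order played.
results = ["loss", "draw", "loss", "win", "draw", "win", "win"]

def running_totals(results, outcome):
    """Cumulative count of one outcome after each game."""
    return list(accumulate(1 if r == outcome else 0 for r in results))

wins = running_totals(results, "win")
losses = running_totals(results, "loss")
draws = running_totals(results, "draw")
```

Plotting `wins`, `losses` and `draws` against the game number gives the line chart above; taking the final entry of each gives the bar chart.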
If you have any ideas for other interesting ways to present this data, let me know in the comments below.

Similar posts

Building MENACEs for other games
MENACE at Manchester Science Festival
MENACE in fiction


Comments in green were written by me. Comments in blue were not written by me.
@(anonymous): Have you been refreshing the page? Every time you refresh it resets MENACE to before it has learnt anything.

It takes around 80 games for MENACE to learn against the perfect AI. So it could be you've not left it playing for long enough? (Try turning the speed up to watch MENACE get better.)
I have played around with MENACE a bit and frankly it doesn't seem to be learning. I occasionally play with it and it draws, but against the perfect AI you don't see as many draws; the perfect AI wins a lot more.
@Colin: You can set MENACE playing against MENACE2 (MENACE that plays second) on the interactive MENACE. MENACE2's starting numbers of beads and incentives may need some tweaking to give it a chance though; I've been meaning to look into this in more detail at some point...
Idle pondering (and something you may have covered elsewhere): what's the evolution as MENACE plays against itself? (Assuming MENACE can play both sides.)
© Matthew Scroggs 2012–2020