NumberVision – Machine Learning Application
The next way we used our Neuron class was to classify digits. Using a UI, our program takes an input digit drawn on a 5x7 grid, where each cell holds a shade of grey between black and white, and classifies it as a number between 0 and 9.
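To make that input format concrete, here is a minimal sketch in Python (an assumption; the project's actual language isn't shown) of the digit '1' encoded as 35 greyscale values, with 0.0 for white through 1.0 for black. The name DIGIT_ONE and this particular bitmap are hypothetical:

```python
# A 5x7 digit flattened row by row into 35 greyscale values,
# where 0.0 is white and 1.0 is black (hypothetical encoding).
DIGIT_ONE = [
    0.0, 0.0, 1.0, 0.0, 0.0,
    0.0, 1.0, 1.0, 0.0, 0.0,
    0.0, 0.0, 1.0, 0.0, 0.0,
    0.0, 0.0, 1.0, 0.0, 0.0,
    0.0, 0.0, 1.0, 0.0, 0.0,
    0.0, 0.0, 1.0, 0.0, 0.0,
    0.0, 1.0, 1.0, 1.0, 0.0,
]
assert len(DIGIT_ONE) == 5 * 7  # 35 inputs, one per grid cell
```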
A Smarter Neuron
Graphical depiction of a Neuron receiving more than 2 inputs.
Previously, I had only been using a Neuron with two inputs: an X and Y coordinate. That was enough to plot points on a plane, as I did with the Half Moon project. However, since each digit arrives as a 5x7 grid, I had to alter my Neuron class to take in 35 inputs. Mostly, this involved changing a few numbers.
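As a rough sketch of what that change might look like (the original Neuron class isn't shown, so the class layout, the step activation, and the perceptron-style learning rule here are all assumptions), a Neuron generalized from 2 to N inputs just needs one weight per input:

```python
import random

class Neuron:
    """A single perceptron-style neuron with a yes/no (0/1) output.
    Sketch only: the activation and update rule are assumptions,
    not the original project's code."""

    def __init__(self, num_inputs=35):  # was 2 for the X/Y experiments
        self.weights = [random.uniform(-1, 1) for _ in range(num_inputs)]
        self.bias = random.uniform(-1, 1)

    def activate(self, inputs):
        # Weighted sum of all inputs plus bias, through a step function.
        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return 1 if total > 0 else 0

    def train(self, inputs, target, learning_rate=0.1):
        # Classic perceptron update: nudge each weight by the error.
        error = target - self.activate(inputs)
        for i, x in enumerate(inputs):
            self.weights[i] += learning_rate * error * x
        self.bias += learning_rate * error
```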
Applying the Neuron to NumberVision
The digit '1', with progressively more noise added from left to right.
Good algorithms should be able to generate a result even if the input is a little distorted. I applied the same goal to NumberVision. The figure above shows the digit '1' at increasing levels of added distortion, or noise. With no extra noise, the digit is easily recognizable; by the fifth iteration of noise, even a human would have a tough time identifying the number.
In order for my Neuron to recognize a number even when the input is noisy, I trained it on a noisy set of inputs. Each digit had 30 variations, for a total of 300 training inputs.
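One plausible way to build such a training set (the post doesn't describe the exact noise model, so this per-pixel jitter is an assumption, and the placeholder templates are hypothetical) is to perturb each pixel's shade by a small random amount:

```python
import random

def add_noise(digit, amount=0.2):
    """Return a copy of a 35-value digit with each pixel's shade
    jittered by up to +/- amount and clamped back into [0, 1].
    (Hypothetical noise model; the original isn't shown.)"""
    return [min(1.0, max(0.0, x + random.uniform(-amount, amount)))
            for x in digit]

# Placeholder clean templates; in practice each would be a real
# 5x7 bitmap for that digit.
templates = {d: [0.0] * 35 for d in range(10)}

# 30 noisy variations per digit: 10 digits x 30 = 300 samples.
training_set = [(add_noise(t), d)
                for d, t in templates.items()
                for _ in range(30)]
```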
However, it's impossible to train a single Neuron to recognize any of ten digits, as each Neuron can only produce one of two outputs: yes or no. So, since there are 10 digits, I created 10 Neurons, each trained to recognize a different digit. To classify an input, the program runs it through every Neuron. If they are trained correctly, exactly one Neuron will trigger, and that Neuron's digit is the answer.
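A minimal sketch of that one-vs-all scheme, reusing the hypothetical Neuron class and training_set from the earlier sketches (again an assumption, not the original control flow):

```python
def classify(digit, neurons):
    """Run a 35-value input through ten per-digit Neurons and return
    the digit whose Neuron fires. Returns None if zero or several
    Neurons fire, i.e. the answer is ambiguous."""
    fired = [d for d, n in enumerate(neurons) if n.activate(digit) == 1]
    return fired[0] if len(fired) == 1 else None

# One Neuron per digit: target 1 for its own digit's samples,
# target 0 for every other digit's samples.
neurons = [Neuron(num_inputs=35) for _ in range(10)]
for epoch in range(100):
    for inputs, label in training_set:  # the noisy samples built earlier
        for d, neuron in enumerate(neurons):
            neuron.train(inputs, target=1 if d == label else 0)
```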