Forte Specialization: Machine learning

By Frank Forte

Abstract
This article discusses a potential method for increasing the efficiency with which a neural network processes information: specifically, the use of synaptic plasticity to optimize data flow between neurons in a convolutional neural network.

Introduction
When building a convolutional neural network for machine learning, it is common to hard-code the linkages between neurons: data is passed through each layer of neurons, and backpropagation is used to help each layer in the network learn the weights that should be assigned to each input feature so as to predict a more accurate outcome.1
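To make that conventional setup concrete, here is a minimal sketch in Python (using NumPy; the layer sizes, learning rate, and toy data are arbitrary choices of mine, not taken from any particular implementation) of a network whose linkages are fixed and fully connected, with backpropagation adjusting only the weights on those linkages:

import numpy as np

rng = np.random.default_rng(0)

# Fixed, fully connected linkages: 4 inputs feed a 3-neuron hidden layer,
# which feeds a single output neuron. The connectivity never changes;
# only the weights on the linkages are learned.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=(1, 4))   # one toy input example
y = np.array([[1.0]])         # its target output

for _ in range(100):
    # Forward pass through the hard-coded linkages.
    h = sigmoid(x @ W1)
    y_hat = sigmoid(h @ W2)

    # Backpropagation adjusts the weight on every linkage, but never removes one.
    d_out = (y_hat - y) * y_hat * (1 - y_hat)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * x.T @ d_hid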

Choosing the number of neurons and layers in a neural network could have some effect; however, it might be more effective to allow each neuron to independently determine the priority of each input. More specifically, if more than one input provides the same information, the neuron could favor the input that provides the information first (becoming more sensitive to that input from the previous layer) and give a lower priority to the slower input (using neural fatigue, in a sense). The priority given to each input can determine the strength of the linkage between the neuron and that input. If the strength falls below a certain threshold, the linkage can be severed, and thus less data must be processed by the individual neuron. In fact, this could lead to “specialization”, where certain neurons that are closer to specific stimuli (types of data) become better at “knowing” about that data. This is similar to the way parts of our brains can be trained to interpret audio or visual information.
This concept may give a turbo boost to parallel processing. Each process can be fine-tuned to get excited by specific “frequencies” of data.
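One rough way to picture the linkage bookkeeping described above is the following Python sketch. The names (Linkage, SpecializingNeuron, PRUNE_THRESHOLD) and the specific numbers are illustrative assumptions of mine, not a working implementation of the full idea:

from dataclasses import dataclass

PRUNE_THRESHOLD = 0.1  # assumed cutoff below which a linkage is severed

@dataclass
class Linkage:
    source_id: int    # the upstream neuron this linkage comes from
    strength: float   # the priority the receiving neuron assigns to this input

class SpecializingNeuron:
    def __init__(self, linkages):
        self.linkages = list(linkages)

    def reinforce(self, source_id, delta):
        # Raise or lower the strength of one linkage, e.g. after comparing how
        # fast and how unique its latest signal was.
        for link in self.linkages:
            if link.source_id == source_id:
                link.strength = max(0.0, link.strength + delta)

    def prune(self):
        # Sever any linkage whose strength has fallen below the threshold,
        # so the neuron has less data to process and specializes in the rest.
        self.linkages = [l for l in self.linkages if l.strength >= PRUNE_THRESHOLD]

# Example: a neuron with three inputs keeps penalizing a redundant, slower
# input until that linkage is dropped and only two inputs remain.
neuron = SpecializingNeuron([Linkage(0, 0.5), Linkage(1, 0.5), Linkage(2, 0.5)])
for _ in range(5):
    neuron.reinforce(2, -0.1)
neuron.prune()
print([l.source_id for l in neuron.linkages])   # -> [0, 1]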

Methods
Each neuron would keep track of each of its inputs. When an event occurs (for example, data from an image is sent for processing), the neuron would record the value arriving on each input along with a timestamp. It could then compare inputs and assign each input a priority based on how unique its information is and how quickly the information was received.
Should an input’s priority fall below a specified threshold, the neuron could send a “slow down” or “terminate” signal to the input, telling it to send data at a lower frequency or to stop sending data altogether.
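As a sketch of this bookkeeping, the following Python illustrates one way a neuron might stamp its inputs, update priorities based on uniqueness and arrival time, and emit slow-down or terminate signals. The record structure, thresholds, and priority increments are assumptions of mine for illustration only:

from dataclasses import dataclass

SLOW_DOWN, TERMINATE = "slow_down", "terminate"
SLOW_THRESHOLD = 0.3   # assumed: below this, ask the input to send data less often
KILL_THRESHOLD = 0.1   # assumed: below this, ask the input to stop sending data

@dataclass
class InputRecord:
    value: float = 0.0
    timestamp: float = 0.0
    priority: float = 0.5

def process_event(inputs, event):
    # inputs: {source_id: InputRecord}; event: {source_id: (value, timestamp)}.
    # Returns {source_id: signal} for inputs that should slow down or terminate.
    for source_id, (value, ts) in event.items():
        rec = inputs[source_id]
        rec.value, rec.timestamp = value, ts

    earliest = min(rec.timestamp for rec in inputs.values())
    signals = {}
    for source_id, rec in inputs.items():
        # An input is redundant if another input delivered (nearly) the same value.
        redundant = any(other is not rec and abs(other.value - rec.value) < 1e-6
                        for other in inputs.values())
        late = rec.timestamp > earliest
        if not redundant:
            delta = 0.05    # unique information: reward
        elif not late:
            delta = 0.02    # shared information, but delivered first
        else:
            delta = -0.10   # shared information, delivered late: fatigue
        rec.priority = min(1.0, max(0.0, rec.priority + delta))

        if rec.priority < KILL_THRESHOLD:
            signals[source_id] = TERMINATE
        elif rec.priority < SLOW_THRESHOLD:
            signals[source_id] = SLOW_DOWN
    return signals

# Example: both inputs always carry the same value, but input 1 arrives later;
# after repeated events its priority decays until it is told to terminate.
inputs = {0: InputRecord(), 1: InputRecord()}
for t in range(10):
    signals = process_event(inputs, {0: (0.7, t + 0.0), 1: (0.7, t + 0.5)})
print(inputs[1].priority, signals)   # -> 0.0 {1: 'terminate'}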

If an input neuron no longer has any remaining linkage to the layer it feeds (no downstream neuron is listening to it), it can effectively die, and send a terminate signal back to all of its own inputs.
If, on the other hand, the priority of many inputs is high, the neuron might consider making new linkages with the next layer of neurons, since it is receiving a lot of high-priority signals.
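These two structural rules (cascading death when nothing downstream is listening, and growth when many inputs carry high priority) might look roughly like the Python below. The class, thresholds, and growth rule are hypothetical choices for illustration, not a finished design:

GROW_THRESHOLD = 0.8    # assumed: priority above which an input counts as "high"
MIN_HIGH_INPUTS = 3     # assumed: how many high-priority inputs justify new growth

class Neuron:
    def __init__(self, name):
        self.name = name
        self.inputs = {}        # {upstream Neuron: priority assigned to it}
        self.consumers = set()  # downstream Neurons still listening to this one

    def connect(self, upstream, priority=0.5):
        self.inputs[upstream] = priority
        upstream.consumers.add(self)

    def terminate_input(self, upstream):
        # Sever one incoming linkage; the upstream neuron may die as a result.
        self.inputs.pop(upstream, None)
        upstream.consumers.discard(self)
        if not upstream.consumers:   # nothing downstream is listening to it anymore
            upstream.die()

    def die(self):
        # Cascade a terminate signal back to all of this neuron's own inputs.
        for upstream in list(self.inputs):
            self.terminate_input(upstream)

    def maybe_grow(self, next_layer):
        # If many inputs carry high priority, link into more of the next layer.
        high = sum(1 for p in self.inputs.values() if p >= GROW_THRESHOLD)
        if high >= MIN_HIGH_INPUTS:
            for candidate in next_layer:
                if self not in candidate.inputs:
                    candidate.connect(self)

# Example: b feeds c; when c severs that last linkage, b dies and cascades a
# terminate signal back to its own input a.
a, b, c = Neuron("a"), Neuron("b"), Neuron("c")
b.connect(a)
c.connect(b)
c.terminate_input(b)
print(b.inputs, a.consumers)   # -> {} set()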

Keep in mind that the priority is independent of the weights of features that are being learned. The point of “Forte Specialization” is not to determine which features are important, but rather, which inputs are important.
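A small sketch of that distinction, with names of my own choosing: each linkage carries both a learned weight, used in the usual weighted sum, and a separate priority, used only to decide whether the linkage (the input) is kept at all:

from dataclasses import dataclass

@dataclass
class Link:
    weight: float     # learned by backpropagation: how important the feature is
    priority: float   # maintained by the receiving neuron: how important the input is

def weighted_sum(links, values):
    # The usual activation calculation uses the learned weights only.
    return sum(link.weight * v for link, v in zip(links, values))

def keep(link, threshold=0.1):
    # Pruning decisions use the priority only, regardless of the learned weight.
    return link.priority >= threshold

# A linkage can carry a large learned weight yet still be severed because its
# input is slow or redundant, and vice versa.
print(keep(Link(weight=2.3, priority=0.05)))   # -> False
print(keep(Link(weight=0.1, priority=0.90)))   # -> True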

Results and Discussion
If and when I get around to it, I intend to implement the above logic in code and run some tests and benchmarks to see what we might learn. If you are interested in collaborating with me, please get in touch.

Works Cited

Aside from the concepts related to neural plasticity and input priority, credit goes to Geoffrey Hinton of the University of Toronto and Andrew Ng of Stanford University, who both published excellent machine learning courses online describing neural networks and their implementations.

Copyright 2017 Frank Forte. All rights reserved. Unauthorized reproduction of this work is prohibited.


