I have always been extremely curious about how consciousness works in the human brain: how do people lose memory and overall function? Considering how little we know about the brain's wiring, with its roughly 86 to 100 billion neurons and the trillions of connections between them, finding patterns in it can sound like a Herculean task. This, however, is exactly where Artificial Intelligence steps in.
But before getting AI involved, we need to dive deeper into neurons. Our nervous system detects what is going on inside our bodies as well as in our surroundings and, based on those signals, decides how we should act. At the same time, the network memorizes the process so it can be recalled later. For example, when we are about to fall, our heartbeat rises slightly and, in the blink of an eye, we act involuntarily to catch ourselves.
The entire process relies on a sophisticated network of cells called neurons.
Our brains are far too complex for us to comprehend the reasoning behind each action. In his book “The Psychopath Inside”, James Fallon explains the brain in terms of a 3-by-3 Rubik’s cube (still difficult to understand and visualise without prior knowledge). This, in my view, is where AI jumps in, and it can be employed in a number of ways. Using AI, we can build new tools and applications that connect observed patterns to theoretical principles, helping us understand the complex patterns of the sophisticated machine that is our brain. Imagine an interface to it!
At a practical level, AI can help us visualise different patterns and look for correlations with their underlying causes. For example, certain overlapping events in the brain cause people to lose memory for a short duration; this can be studied using Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) to recognise cell extensions, cell components, and synapses, and to distinguish them from each other.
CNNs and RNNs are different types of neural networks. RNNs are usually used with sequential data because they have memory: an RNN considers not just the current input but also previous inputs, using both to predict what comes next. That is why they are often used in temporal analysis, such as stock market prediction. CNNs are neural networks specialised for processing data shaped like a 2D matrix, such as images, and often find use in image detection and classification.
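To make the distinction concrete, here is a minimal sketch in plain NumPy (all shapes and weights are illustrative, not from any real model): an RNN step whose hidden state carries memory of past inputs, and a 2D convolution that slides a small filter over an image the way a CNN layer does.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Minimal RNN step: the hidden state carries "memory" of past inputs ---
def rnn_step(x, h, W_xh, W_hh):
    """New state depends on the current input AND the previous state."""
    return np.tanh(x @ W_xh + h @ W_hh)

seq = rng.standard_normal((5, 3))   # 5 time steps, 3 features each
W_xh = rng.standard_normal((3, 4))  # input-to-hidden weights
W_hh = rng.standard_normal((4, 4))  # hidden-to-hidden ("memory") weights
h = np.zeros(4)
for x in seq:
    h = rnn_step(x, h, W_xh, W_hh)  # h now summarizes the whole sequence

# --- Minimal 2D convolution: slide a small filter over an image ---
def conv2d(image, kernel):
    kh, kw = kernel.shape
    out = np.empty((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = rng.standard_normal((8, 8))             # a tiny grayscale "image"
kernel = np.array([[1., -1.], [1., -1.]])       # crude vertical-edge detector
feature_map = conv2d(image, kernel)             # shape (7, 7)
```

The same filter weights are reused at every image position (weight sharing), which is what makes CNNs efficient on grid-shaped data; the RNN instead reuses its weights across time steps.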
During my studies at KTH Royal Institute of Technology, Stockholm, we implemented a Bayesian Confidence Propagation Neural Network (BCPNN) and sequence learning in a non-spiking attractor neural network.
To give you better intuition about BCPNNs: they are inspired by Hebbian learning and Bayesian inference. Activation in the network represents confidence, i.e. the probability that an input feature or category is present; the synaptic weights are based on estimated correlations, and the spread of activation corresponds to calculating posterior probabilities. The model was originally proposed by Anders Lansner and Örjan Ekeberg at KTH Royal Institute of Technology.
We used this to analyse the behaviour of non-orthogonal sequences and to implement sequential overlap in the BCPNN model. Although the project was done under non-spiking, relatively abstract conditions, the research is valuable for understanding the network's ability to learn and recall. In our research we also showed that the proposed network model enables sequence encoding and can reproduce temporal aspects of the input, offering internal control of the recall dynamics via gain modulation.
There’s so much about the brain we are yet to learn, and I am excited to learn and share more with you guys!
Ha en bra dag! (Have a nice day! – in Swedish)