Can be complicated sometimes, eh? Let’s make it super simple for you!

Structure of neural networks

– Input neurons represent the information we are trying to classify.

– Each number in the input neurons is multiplied by a weight at each synapse.

– At each neuron in the next layer, we add up the outputs of all synapses arriving at that neuron along with a bias, then apply an activation function to the weighted sum. With a sigmoid activation, the result lies between 0 and 1.

– Each node in the hidden layer (the yellow nodes in the diagram) is a weighted sum of the input node values (the blue nodes). The output is a weighted sum of the hidden nodes.

– The output of that function is treated as the input for the next layer of synapses.

– Continue until you reach the output.
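The steps above can be sketched in a few lines of Python. The layer sizes, weights, and biases here are made up purely for illustration; the point is the pattern, which is weighted sum plus bias, then a sigmoid that squashes the result into the 0-to-1 range mentioned above.

```python
import math

def sigmoid(z):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # For each neuron: weighted sum of inputs, plus a bias, then activation
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Toy network: 2 inputs -> 2 hidden neurons -> 1 output (arbitrary weights)
inputs = [0.5, 0.8]
hidden = layer(inputs, weights=[[0.1, 0.4], [-0.3, 0.2]], biases=[0.0, 0.1])
output = layer(hidden, weights=[[0.6, -0.5]], biases=[0.2])
print(output)  # a single value between 0 and 1
```

Notice that the hidden layer's output simply becomes the next layer's input, exactly as described in the bullets above.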

Some good YouTube videos to understand this:

(1) https://lnkd.in/dCeAQBm

(2) https://lnkd.in/dckaczW

The videos also contain good examples that will help you understand the concept better.

Feed forward neural networks!

– These were the first type of neural network invented, and they are usually simpler than other networks.

– Information flows in one direction only, and the connections between the units do not form a cycle.

– Use cases: mostly supervised learning, where the data to be learned is neither sequential nor time-dependent.

Single-Layer Perceptrons

– Simplest type of feedforward neural network.

– They have no hidden units.

– The output units are computed directly as the sum of the products of the weights and the corresponding input units, plus a bias.

– Single-layer perceptrons are linear classifiers, so they can only learn linearly separable patterns.
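As a concrete illustration, here is a minimal sketch of the classic perceptron learning rule on the AND function, which is linearly separable. The learning rate and epoch count are arbitrary choices for this toy example:

```python
# AND is linearly separable: y = 1 only when both inputs are 1
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]

w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    # Weighted sum plus bias, followed by a step activation
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 if z > 0 else 0

for _ in range(10):                      # a few passes over the data
    for x, t in zip(inputs, targets):
        err = t - predict(x)             # 0 when correct, +/-1 when wrong
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b    += lr * err

print([predict(x) for x in inputs])  # [0, 0, 0, 1]
```

Because the data is linearly separable, the perceptron convergence theorem guarantees this loop eventually classifies every example correctly; on a non-separable pattern like XOR, it never would.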

Multi-Layer Perceptron (MLP)

– An MLP is composed of multiple layers of perceptrons.

– An MLP consists of an input layer, some number (possibly zero) of hidden layers, and an output layer.

– Unlike single-layer perceptrons, MLPs are capable of learning functions that are not linearly separable.

– Use cases: One of the primary machine learning techniques for both regression and classification in supervised learning.
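To see why the hidden layer matters, here is a tiny hand-wired MLP that computes XOR, a function no single-layer perceptron can represent. The weights are chosen by hand for illustration rather than learned: one hidden unit acts as OR, the other as AND, and the output combines them.

```python
def step(z):
    return 1 if z > 0 else 0

def xor_mlp(x1, x2):
    # Hidden layer: one unit fires for OR, the other for AND
    h_or  = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # Output layer: fires for "OR but not AND", i.e. exactly one input is 1
    return step(h_or - h_and - 0.5)

print([xor_mlp(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [0, 1, 1, 0]
```

The hidden units carve the input space into regions that the output unit can then separate linearly, which is exactly the extra power the hidden layer buys.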

YouTube suggestion: https://lnkd.in/dEnSkBV

Let’s meet in the next blog for parameter initialisation and gradient descent!

Hope you learnt something new! Please ask any questions you have; I would love to answer them.
