Weights And Biases In Neural Networks Simplified! #33
Used a real-world analogy to explain them!
Hello Everyone,
Welcome to the 33rd edition of my newsletter ML & AI Cupcakes!
Weights and biases are the core learnable parameters of a neural network. "Learnable" means their values are not set by hand; they are calculated through the training process.
Sounds complicated?
Don’t worry!
Let’s understand them with the help of a real-world analogy!
Imagine you are learning to prepare a dish. Your goal is to create a perfect dish (the prediction) from the several ingredients (the inputs) available: salt, pepper, oil, vegetables, chillies, onion, garlic, and so on. Each ingredient is like an input to your dish. It is similar to a neural network taking features (size, number of rooms, locality, etc.) as input.
Let’s see how this analogy relates to the concept of weights and biases in neural networks!
Weights
You don’t mix all the ingredients blindly. You carefully decide how much of each ingredient should go into a particular dish.
For example, if you are making a spicy dish, you will add a generous amount of chillies. Otherwise, you’ll keep the quantity moderate.
Weights serve exactly the same purpose in neural networks. They decide how much strength or importance should be given to each input feature, based on the problem you are trying to solve.
Mathematically, the equation looks like this:
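weighted sum = (w₁ × x₁) + (w₂ × x₂) + … + (wₙ × xₙ)
Here x₁, x₂, …, xₙ are the input features and w₁, w₂, …, wₙ are the weights attached to them.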
Each input feature is first multiplied by its corresponding weight. The weighted inputs are then summed, and that sum feeds into the final decision.
The values of these weights are iteratively adjusted during the training process to figure out what works best. It is similar to tasting our recipe multiple times and adjusting the ingredients until we get the taste we want.
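To make this concrete, here is a tiny Python sketch. The features and weight values are made up purely for illustration (imagine a house-price model):

# Illustrative only: made-up features and weights for a house-price example
features = [1200, 3, 8]       # size (sq. ft.), number of rooms, locality score
weights = [0.5, 0.3, 0.2]     # importance assigned to each feature

# Each input is multiplied by its weight, then everything is summed
weighted_sum = sum(w * x for w, x in zip(weights, features))
print(weighted_sum)           # 602.5

Notice how a bigger weight lets a feature pull the result more strongly, just like adding more of an ingredient changes the dish more.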
Bias
Now, every chef has a signature touch they apply to everything, independent of the recipe they are cooking. It simply adds a personalized taste to the final dish. For example, adding a bit of lemon juice at the end gives the dish a fresh, tangy flavour.
Bias plays the same role in neural networks. It allows the model to make an adjustment that is independent of the input features: it shifts the weighted sum a little before it is passed to the activation function.
Mathematically, it looks like this:
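output = (w₁ × x₁) + (w₂ × x₂) + … + (wₙ × xₙ) + b
Here b is the bias, added on top of the weighted sum.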
Even if all the input features are zero, the model can still produce some output. It is like the lemon juice: even if no other ingredient goes in, there is still some flavour to serve.
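Extending the earlier sketch with a made-up bias value shows both effects: the shift, and the non-zero output when every input is zero.

# Illustrative only: same made-up features and weights as before
features = [1200, 3, 8]
weights = [0.5, 0.3, 0.2]
bias = 5.0                    # made-up value: the chef's lemon juice

weighted_sum = sum(w * x for w, x in zip(weights, features))
print(weighted_sum + bias)    # 607.5 — the bias shifts 602.5 up by 5.0

# Even when every input is zero, the bias still produces an output
zero_inputs = [0, 0, 0]
print(sum(w * x for w, x in zip(weights, zero_inputs)) + bias)  # 5.0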
How are weights and biases updated?
Just as we keep tasting the dish and adjusting the ingredients until we get the taste we want, a neural network keeps adjusting its weights and biases.
The values of the weights and biases are adjusted iteratively during the training process. Backpropagation works out how much each weight and bias contributed to the error, and the optimizer (typically gradient descent) then nudges them accordingly.
The objective is to find the values of the weights and biases that minimize the error between the actual and the predicted results.
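Here is a minimal, self-contained Python sketch of that "taste and adjust" loop for a single weight and bias, using plain gradient descent on a squared error. The data point, starting values, and learning rate are all made up for illustration:

# Minimal sketch: fit y = w*x + b to one made-up data point
x, y_true = 2.0, 10.0         # made-up training example
w, b = 0.0, 0.0               # parameters start at arbitrary values
lr = 0.1                      # learning rate: how big each adjustment is

for step in range(50):
    y_pred = w * x + b        # forward pass: make a prediction
    error = y_pred - y_true   # how far off are we?
    # Gradients of the squared error with respect to w and b
    grad_w = 2 * error * x
    grad_b = 2 * error
    # "Taste and adjust": nudge w and b to reduce the error
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))   # 4.0 2.0, and 4.0*2 + 2.0 = 10 = y_true

Real networks do exactly this across millions of weights at once, but the idea is the same: predict, measure the error, and nudge every parameter in the direction that shrinks it.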
Below is a summary table to tie it all together:
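Parameter | What it does | Recipe analogy
Weights | Decide how much importance each input feature gets | Quantity of each ingredient
Bias | Shifts the weighted sum, independent of the inputs | The chef's signature lemon juice
Both | Learned iteratively during training via backpropagation | Tasting and adjusting until the dish is right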
Hope it helps!
Please refer to these links for a basic understanding of neural networks:
https://kavitagupta.substack.com/p/understanding-the-layers-of-neural
https://kavitagupta.substack.com/p/the-most-fundamental-unit-of-a-neural
https://kavitagupta.substack.com/p/epochs-and-iterations-in-neural-networks
https://kavitagupta.substack.com/p/core-elements-of-a-neural-network
Writing each newsletter takes a lot of research, time, and effort. I just want to make sure it reaches as many people as possible and helps them grow in their AI/ML journey.
It would be great if you could share this newsletter with your network. Also, please share your feedback and suggestions in the comments section; that will help me keep going. Even a “like” on my posts tells me they are helpful to you.
See you soon!
-Kavita