Articles About Machine Learning

The Chain Rule of Calculus for Univariate and Multivariate Functions

The chain rule allows us to find the derivative of composite functions. It is applied extensively by the backpropagation algorithm in order to train feedforward neural networks. By applying the chain rule in an efficient manner while following a specific order of operations, the backpropagation algorithm calculates the error gradient of the loss function with respect to each weight of the network. In this tutorial, you will discover the chain rule of calculus for univariate and multivariate functions. After completing […]
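
As a quick illustration of the rule this tutorial covers (a minimal sketch of my own, not code from the article): for the composite function f(g(x)) with f(u) = sin(u) and g(x) = x^2, the chain rule gives d/dx f(g(x)) = cos(x^2) * 2x, which a finite-difference check confirms.

import math

def g(x):
    return x ** 2          # inner function

def f(u):
    return math.sin(u)     # outer function

def chain_rule_derivative(x):
    # d/dx f(g(x)) = f'(g(x)) * g'(x) = cos(x^2) * 2x
    return math.cos(g(x)) * 2 * x

# Compare against a central finite difference at x = 1.5
x, h = 1.5, 1e-6
numerical = (f(g(x + h)) - f(g(x - h))) / (2 * h)
print(chain_rule_derivative(x), numerical)  # the two values agree closely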

Read more

The Chain Rule of Calculus – Even More Functions

The chain rule is an important derivative rule that allows us to work with composite functions. It is essential in understanding the workings of the backpropagation algorithm, which applies the chain rule extensively in order to calculate the error gradient of the loss function with respect to each weight of a neural network. We will be building on our earlier introduction to the chain rule by tackling more challenging functions. In this tutorial, you will discover how to apply the […]
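
As a rough sketch of the kind of composite function this follow-up tackles (my own example assuming sympy is available, not the article's code): differentiate z = u^2 * v with u = sin(x) and v = e^x by the multivariate chain rule, and let sympy confirm the hand-derived result.

from sympy import symbols, sin, exp, cos, diff, simplify

x = symbols('x')
u = sin(x)           # inner function u(x)
v = exp(x)           # inner function v(x)
z = u**2 * v         # composite z(u(x), v(x))

# Hand application of the multivariate chain rule:
# dz/dx = (dz/du) * (du/dx) + (dz/dv) * (dv/dx)
hand = 2*u*v*cos(x) + u**2*exp(x)

# sympy's direct differentiation should agree
print(simplify(diff(z, x) - hand))   # prints 0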

Read more

A Gentle Introduction To Approximation

When it comes to machine learning tasks such as classification or regression, approximation techniques play a key role in learning from the data. Many machine learning methods approximate a function or a mapping between the inputs and outputs via a learning algorithm. In this tutorial, you will discover what approximation is and its importance in machine learning and pattern recognition. After completing this tutorial, you will know: what approximation is, and the importance of approximation in machine learning. Let’s get started.
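
For a flavour of what approximating a mapping from data can look like in practice (a minimal sketch under my own assumptions, not the article's example), here a cubic polynomial is fitted to noisy samples of an unknown function:

import numpy as np

# Noisy samples from an unknown mapping (here, secretly a sine curve)
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x) + rng.normal(scale=0.1, size=x.shape)

# Approximate the mapping with a cubic polynomial fitted to the data
coeffs = np.polyfit(x, y, deg=3)
y_hat = np.polyval(coeffs, x)

print("max absolute error:", np.max(np.abs(y_hat - np.sin(x))))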

Read more

A Gentle Introduction to Taylor Series

Taylor series expansion is an awesome concept, not only in the world of mathematics, but also in optimization theory, function approximation and machine learning. It is widely applied in numerical computations when estimates of a function’s values at different points are required. In this tutorial, you will discover Taylor series and how to approximate the values of a function around different points using its Taylor series expansion. After completing this tutorial, you will know: Taylor […]
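
As a small worked example in the same spirit (my own sketch, not the article's code): truncating the Taylor series of e^x about 0 and comparing it with math.exp shows the approximation improving as more terms are added.

import math

def taylor_exp(x, n_terms):
    # Truncated Taylor series of e^x about 0: sum of x^k / k! for k = 0 .. n_terms - 1
    return sum(x**k / math.factorial(k) for k in range(n_terms))

for n in (2, 4, 8):
    print(n, taylor_exp(1.0, n), math.exp(1.0))  # approaches e as n grows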

Read more

Calculus in Action: Neural Networks

An artificial neural network is a computational model that approximates a mapping between inputs and outputs. It is inspired by the structure of the human brain, in that it is similarly composed of a network of interconnected neurons that propagate information upon receiving sets of stimuli from neighbouring neurons. Training a neural network is a process that employs the backpropagation and gradient descent algorithms in tandem. As we will see, both of these algorithms make extensive use of calculus. […]
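
To make the backpropagation-plus-gradient-descent loop concrete (a minimal single-neuron sketch of my own, not the network built in the article), the snippet below trains one sigmoid unit with hand-derived gradients on a tiny OR-like dataset:

import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 1.])

w, b, lr = np.zeros(2), 0.0, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass
    p = sigmoid(X @ w + b)
    # Backward pass: gradient of the mean cross-entropy loss
    # (the chain rule collapses d(loss)/dz to p - y for sigmoid + cross-entropy)
    grad_z = (p - y) / len(y)
    w -= lr * (X.T @ grad_z)
    b -= lr * grad_z.sum()

print(np.round(sigmoid(X @ w + b), 2))  # probabilities move toward [0, 1, 1, 1]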

Read more

A Gentle Introduction To Sigmoid Function

Whether you implement a neural network yourself or use a built-in library for neural network learning, it is of paramount importance to understand the significance of the sigmoid function. The sigmoid function is the key to understanding how a neural network learns complex problems. This function also served as a basis for discovering other functions that lead to efficient solutions for supervised learning in deep learning architectures. In this tutorial, you will discover the sigmoid function […]
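
A minimal sketch of the function itself (my own code, not the tutorial's): the sigmoid and its convenient derivative σ'(z) = σ(z)(1 − σ(z)), which is part of what makes it so handy in gradient-based learning.

import numpy as np

def sigmoid(z):
    # Squashes any real number into the interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_derivative(z):
    # Convenient property: sigma'(z) = sigma(z) * (1 - sigma(z))
    s = sigmoid(z)
    return s * (1.0 - s)

z = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
print(sigmoid(z))             # values squashed into (0, 1)
print(sigmoid_derivative(z))  # largest at z = 0, vanishing in the tails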

Read more

Lagrange Multiplier Approach with Inequality Constraints

In a previous post, we introduced the method of Lagrange multipliers to find local minima or local maxima of a function with equality constraints. The same method can be applied to problems with inequality constraints as well. In this tutorial, you will discover the method of Lagrange multipliers applied to find the local minimum or maximum of a function when inequality constraints are present, optionally together with equality constraints. After completing this tutorial, you will know: How to find points […]
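
The tutorial derives the solution by hand via the Lagrangian and KKT conditions; as a numerical cross-check (a sketch of my own using scipy.optimize.minimize, not the article's code), the same kind of problem can be solved directly, here minimizing x^2 + y^2 subject to x + y >= 1:

from scipy.optimize import minimize

# Minimise f(x, y) = x^2 + y^2 subject to x + y >= 1.
# The constraint is active at the optimum, which lies at (0.5, 0.5).
objective = lambda v: v[0]**2 + v[1]**2
constraint = {'type': 'ineq', 'fun': lambda v: v[0] + v[1] - 1}  # 'ineq' means fun(v) >= 0

result = minimize(objective, x0=[2.0, 2.0], constraints=[constraint])
print(result.x)  # approximately [0.5, 0.5]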

Read more

A Gentle Introduction to Particle Swarm Optimization

Particle swarm optimization (PSO) is a simple bio-inspired algorithm for searching for an optimal solution in the solution space. It differs from other optimization algorithms in that it requires only the objective function and does not depend on the gradient or any differential form of the objective. It also has very few hyperparameters. In this tutorial, you will learn the rationale of PSO and its algorithm with an […]
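
A compact sketch of the idea (my own toy implementation with arbitrarily chosen hyperparameters w, c1 and c2, not the tutorial's code): each particle blends its own best position with the swarm's best to update its velocity, and only objective-function evaluations are needed.

import numpy as np

def pso(objective, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    # Minimal particle swarm optimisation for a function of d variables
    rng = np.random.default_rng(0)
    low, high = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    d = low.size
    pos = rng.uniform(low, high, size=(n_particles, d))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random((n_particles, d)), rng.random((n_particles, d))
        # Velocity update: inertia + pull toward personal best + pull toward swarm best
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, low, high)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example: minimise the sphere function on [-5, 5]^2; the optimum is at the origin
best_x, best_f = pso(lambda x: float(np.sum(x**2)), bounds=([-5, -5], [5, 5]))
print(best_x, best_f)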

Read more

Training-validation-test split and cross-validation done right

One crucial step in machine learning is the choice of model. A suitable model with suitable hyperparameters is the key to a good prediction result. When we are faced with a choice between models, how should the decision be made? This is why we have cross-validation. In scikit-learn, there is a family of functions that help us do this. But quite often, we see cross-validation used improperly, or its results not interpreted correctly. In […]
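
A minimal sketch of the workflow the post argues for (my own example on synthetic data with two arbitrary candidate models, not the post's code): hold out a test set first, use cross-validation only on the training portion to compare models, then touch the test set once at the end.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=500, random_state=42)

# Hold out a test set first; cross-validation happens only on the training portion
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=42),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X_train, y_train, cv=5)
    print(name, scores.mean())

# Refit the chosen model on all training data and score the untouched test set once
best = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print("test accuracy:", best.score(X_test, y_test))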

Read more

How to Learn Python for Machine Learning

Python has become the de facto lingua franca for machine learning. It is not a difficult language to learn, but if you are not particularly familiar with it, there are some tips that can help you learn faster or better. In this post, you will discover the right way to learn a programming language and how to get help. After reading this post, you will know: The right mentality to learn Python for use in machine learning […]

Read more