Using Normalization Layers to Improve Deep Learning Models

You’ve probably been told to standardize or normalize the inputs to your model to improve performance. But what is normalization, and how can we implement it easily in our deep learning models? Normalizing our inputs aims to put all features on the same scale, which we’ll explore more in this article. Also, since the output of each layer in a neural network serves as the input to the next layer, […]
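
As a rough sketch of the idea (not code from the post itself), the snippet below standardizes raw features with a Keras Normalization layer and adds BatchNormalization between Dense layers; the toy data and layer sizes are arbitrary, and TensorFlow 2.x is assumed.

```python
import numpy as np
import tensorflow as tf

# Toy features deliberately placed on different scales
X = np.random.rand(100, 4).astype("float32") * 50.0

norm = tf.keras.layers.Normalization()  # learns per-feature mean and variance
norm.adapt(X)                           # compute the statistics from the data

model = tf.keras.Sequential([
    norm,                                  # standardizes the raw inputs
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.BatchNormalization(),  # normalizes activations between layers
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```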

Read more

Overview of Some Deep Learning Libraries

Machine learning is a broad topic. Deep learning, in particular, is a way of using neural networks for machine learning. The neural network is a concept older than machine learning itself, dating back to the 1950s. Unsurprisingly, many libraries have been created for it. The following aims to give an overview of some of the well-known libraries for neural networks and deep learning. After finishing this tutorial, you will learn: some of the deep learning or neural network libraries; the […]

Read more

Using Autograd in TensorFlow to Solve a Regression Problem

We usually use TensorFlow to build a neural network. However, TensorFlow is not limited to this. Behind the scenes, TensorFlow is a tensor library with automatic differentiation capability, so you can easily use it to solve a numerical optimization problem with gradient descent. In this post, you will learn how TensorFlow’s automatic differentiation engine, autograd, works. After finishing this tutorial, you will learn: what autograd is in TensorFlow; how to make use of autograd and an optimizer to solve an […]
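
To illustrate the kind of workflow the post describes, here is a minimal sketch (assuming TensorFlow 2.x) that fits a straight line to toy data using tf.GradientTape and an optimizer; the data, learning rate, and iteration count are arbitrary choices for the example.

```python
import numpy as np
import tensorflow as tf

# Toy data: y = 3x + 2 plus a little noise
x = np.linspace(-1, 1, 100).astype("float32")
y = 3.0 * x + 2.0 + 0.1 * np.random.randn(100).astype("float32")

w = tf.Variable(0.0)
b = tf.Variable(0.0)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(200):
    with tf.GradientTape() as tape:              # records operations for autodiff
        loss = tf.reduce_mean((w * x + b - y) ** 2)
    grads = tape.gradient(loss, [w, b])          # automatic differentiation
    optimizer.apply_gradients(zip(grads, [w, b]))

print(w.numpy(), b.numpy())  # should approach 3 and 2
```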

Read more

Three Ways to Build Machine Learning Models in Keras

If you’ve looked at Keras models on GitHub, you’ve probably noticed that there are several different ways to create models in Keras. There’s the Sequential model, which lets you define an entire model in a single line, usually with some line breaks for readability. Then there’s the functional interface, which allows for more complicated model architectures, and there’s also model subclassing, which helps with reusability. This article will explore the different ways to create models in Keras, along with […]
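
For a side-by-side flavor of the three approaches, here is a minimal sketch (assuming TensorFlow 2.x) that builds the same small model with the Sequential API, the functional API, and model subclassing; the layer sizes and the MyModel name are just for illustration.

```python
import tensorflow as tf

# 1. Sequential API: a plain stack of layers
seq_model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

# 2. Functional API: explicitly wire inputs to outputs
inputs = tf.keras.Input(shape=(4,))
hidden = tf.keras.layers.Dense(8, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(1)(hidden)
func_model = tf.keras.Model(inputs, outputs)

# 3. Model subclassing: define the forward pass in call()
class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(8, activation="relu")
        self.out = tf.keras.layers.Dense(1)

    def call(self, x):
        return self.out(self.hidden(x))

sub_model = MyModel()
```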

Read more

How to Checkpoint Deep Learning Models in Keras

Deep learning models can take hours, days, or even weeks to train. If the run is stopped unexpectedly, you can lose a lot of work. In this post, you will discover how to checkpoint your deep learning models during training in Python using the Keras library. Kick-start your project with my new book Deep Learning With Python, including step-by-step tutorials and the Python source code files for all examples. Let’s get started. Jun/2016: First published. Update Mar/2017: Updated for Keras […]
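
The usual tool for this in Keras is the ModelCheckpoint callback. Below is a minimal sketch (assuming TensorFlow 2.x) with random toy data; the file name best_model.weights.h5 and the monitored metric are illustrative choices, not the post’s exact example.

```python
import numpy as np
import tensorflow as tf

# Random toy data standing in for a real dataset
X = np.random.rand(200, 8).astype("float32")
y = np.random.randint(0, 2, size=(200,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(12, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

# Save weights only when validation accuracy improves on the best value so far
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.weights.h5",
    monitor="val_accuracy",
    save_best_only=True,
    save_weights_only=True,
)
model.fit(X, y, validation_split=0.2, epochs=10, callbacks=[checkpoint], verbose=0)
```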

Read more

Using Activation Functions in Neural Networks

Activation functions play an integral role in neural networks by introducing nonlinearity. This nonlinearity allows neural networks to develop complex representations and functions based on the inputs that would not be possible with a simple linear regression model. Many different nonlinear activation functions have been proposed throughout the history of neural networks. In this post, you will explore three popular ones: sigmoid, tanh, and ReLU. After reading this article, you will learn: Why nonlinearity is important in a neural network […]
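
As a quick taste of the three functions, here is a small sketch (assuming TensorFlow 2.x) that evaluates sigmoid, tanh, and ReLU on a range of values and shows how an activation is attached to a layer; the numbers are arbitrary.

```python
import tensorflow as tf

x = tf.linspace(-5.0, 5.0, 11)

print(tf.math.sigmoid(x))  # squashes inputs into (0, 1)
print(tf.math.tanh(x))     # squashes inputs into (-1, 1)
print(tf.nn.relu(x))       # zeroes out negative inputs

# The same nonlinearities can be attached to a layer by name
layer = tf.keras.layers.Dense(8, activation="relu")
```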

Read more

A Gentle Introduction to the tensorflow.data API

When you build and train a Keras deep learning model, you can provide the training data in several different ways. Presenting the data as a NumPy array or a TensorFlow tensor is common. Another way is to make a Python generator function and let the training loop read data from it. Yet another way of providing data is to use a tf.data dataset. In this tutorial, you will see how you can use the tf.data dataset for a Keras model. After finishing […]
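
As a flavor of what that looks like, here is a minimal sketch (assuming TensorFlow 2.x) that wraps in-memory NumPy arrays in a tf.data.Dataset and feeds it straight to fit(); the toy data, batch size, and model are arbitrary.

```python
import numpy as np
import tensorflow as tf

X = np.random.rand(100, 4).astype("float32")
y = np.random.rand(100, 1).astype("float32")

# Build a dataset from in-memory arrays, then shuffle and batch it
dataset = tf.data.Dataset.from_tensor_slices((X, y)).shuffle(100).batch(16)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(dataset, epochs=3, verbose=0)  # fit() accepts the dataset directly
```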

Read more

Understanding the Design of a Convolutional Neural Network

Convolutional neural networks have proven successful in computer vision applications. Many network architectures have been proposed, and they are neither magical nor hard to understand. In this tutorial, you will make sense of the operation of convolutional layers and their role in a larger convolutional neural network. After finishing this tutorial, you will learn: how convolutional layers extract features from an image; how different convolutional layers can stack up to build a neural network. Let’s get started. Understanding the design […]
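
To make the stacking idea concrete, here is a small sketch (assuming TensorFlow 2.x) of a typical convolution-pooling stack for 28x28 grayscale images; the layer sizes mirror common textbook examples rather than the tutorial’s exact network.

```python
import tensorflow as tf

# Early convolutional layers pick up simple patterns such as edges;
# deeper layers combine them into larger structures before classification.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()
```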

Read more

Loss Functions in TensorFlow

The loss metric is very important for neural networks. Since every machine learning model is one optimization problem or another, the loss is the objective function to minimize. In neural networks, the optimization is done with gradient descent and backpropagation. But what are loss functions, and how do they affect your neural networks? In this post, you will learn what loss functions are and delve into some commonly used loss functions and how you can apply them to your neural […]
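
For a concrete sense of the API, here is a minimal sketch (assuming TensorFlow 2.x) that evaluates two built-in losses on toy predictions and passes one of them to compile(); the numbers and the tiny model are arbitrary.

```python
import tensorflow as tf

y_true = tf.constant([[0.0], [1.0], [1.0]])
y_pred = tf.constant([[0.1], [0.8], [0.6]])

# Calling a loss object directly returns the mean loss over the batch
mse = tf.keras.losses.MeanSquaredError()
bce = tf.keras.losses.BinaryCrossentropy()
print(mse(y_true, y_pred).numpy())
print(bce(y_true, y_pred).numpy())

# The same loss objects are passed to compile() when training a model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(4,)),
])
model.compile(optimizer="adam", loss=tf.keras.losses.BinaryCrossentropy())
```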

Read more

Image Augmentation with Keras Preprocessing Layers and tf.image

When you work on a machine learning problem related to images, not only do you need to collect some images as training data, but you also need to employ augmentation to create variations of those images. This is especially true for more complex object recognition problems. There are many ways to perform image augmentation. You may use external libraries or write your own functions for it. There are also some modules in TensorFlow and Keras for augmentation. In this post, […]
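
To show the two routes side by side, here is a small sketch (assuming TensorFlow 2.x) that augments a random batch of images with Keras preprocessing layers and with tf.image ops; the batch shape and augmentation parameters are arbitrary.

```python
import tensorflow as tf

images = tf.random.uniform((8, 64, 64, 3))  # a toy batch of images

# Keras preprocessing layers: composable, can live inside a model
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])
augmented = augment(images, training=True)  # random transforms apply only when training=True

# tf.image: functional ops applied to tensors directly
flipped = tf.image.random_flip_left_right(images)
brighter = tf.image.random_brightness(images, max_delta=0.2)
```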

Read more