Gentle Introduction to Statistical Language Modeling and Neural Language Models

Last Updated on August 7, 2019

Language modeling is central to many important natural language processing tasks. Recently, neural-network-based language models have demonstrated better performance than classical methods, both as standalone models and as part of more challenging natural language processing tasks. In this post, you will discover language modeling for natural language processing. After reading this post, you will know: why language modeling is critical to addressing tasks in natural language processing; what a language model is and some examples of […]
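To make the idea concrete, a statistical language model at its simplest assigns probabilities to a word given the words before it. Below is a minimal sketch of a count-based bigram model in plain Python; the function name `train_bigram_model` is hypothetical, not from the post, and real models would add smoothing for unseen pairs.

```python
from collections import Counter, defaultdict

def train_bigram_model(tokens):
    """Estimate P(next word | previous word) by relative frequency of bigrams."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    model = {}
    for prev, nexts in counts.items():
        total = sum(nexts.values())
        model[prev] = {word: c / total for word, c in nexts.items()}
    return model

tokens = "the cat sat on the mat the cat ran".split()
model = train_bigram_model(tokens)
# "the" is followed by "cat" twice and "mat" once, so P(cat | the) = 2/3
print(model["the"]["cat"])
```

Neural language models replace these sparse counts with learned distributed representations, which is the advantage the post describes.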

Read more

How to Develop an Encoder-Decoder Model for Sequence-to-Sequence Prediction in Keras

Last Updated on August 27, 2020

The encoder-decoder model provides a pattern for using recurrent neural networks to address challenging sequence-to-sequence prediction problems, such as machine translation. Encoder-decoder models can be developed in the Keras Python deep learning library, and an example of a neural machine translation system developed with this model has been described on the Keras blog, with sample code distributed with the Keras project. This example can provide the basis for developing encoder-decoder LSTM models for your […]

Read more

How to Develop Word-Based Neural Language Models in Python with Keras

Last Updated on September 3, 2020

Language modeling involves predicting the next word in a sequence given the sequence of words already present. A language model is a key element in many natural language processing models such as machine translation and speech recognition. The choice of how the language model is framed must match how the language model is intended to be used. In this tutorial, you will discover how the framing of a language model affects the skill of […]
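The "framing" the post refers to is how text is cut into training pairs. A minimal sketch of one common framing, a fixed-length context window predicting the next word, is below; the helper name `make_sequences` is an illustration, not from the tutorial, and other framings (one word in, whole line in) trade off context for flexibility.

```python
def make_sequences(tokens, context_len):
    """Frame language modeling as: given `context_len` words, predict the next."""
    pairs = []
    for i in range(context_len, len(tokens)):
        context = tokens[i - context_len:i]   # the preceding words
        target = tokens[i]                    # the word to predict
        pairs.append((context, target))
    return pairs

tokens = "jack and jill went up the hill".split()
for context, target in make_sequences(tokens, 2):
    print(context, "->", target)
```

Each pair would then be integer-encoded and fed to the network, with the target one-hot encoded for a softmax output.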

Read more

How to Develop a Character-Based Neural Language Model in Keras

Last Updated on September 3, 2020

A language model predicts the next word in the sequence based on the specific words that have come before it in the sequence. It is also possible to develop language models at the character level using neural networks. The benefit of character-based language models is their small vocabulary and flexibility in handling any words, punctuation, and other document structure. This comes at the cost of requiring larger models that are slower to train. Nevertheless, […]
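The small-vocabulary benefit is easy to see: a character model only needs one index per distinct character, not per distinct word. A minimal sketch (helper names are illustrative, not from the post):

```python
def build_char_vocab(text):
    """Map each distinct character to an integer index; the vocab stays tiny."""
    return {ch: i for i, ch in enumerate(sorted(set(text)))}

def encode(text, vocab):
    """Turn text into the integer sequence a character model trains on."""
    return [vocab[ch] for ch in text]

text = "hello world"
vocab = build_char_vocab(text)
print(len(vocab))          # only 8 distinct characters, including the space
print(encode(text, vocab))
```

A word-level vocabulary for a real corpus runs to tens of thousands of entries; a character vocabulary rarely exceeds a few hundred, which is why the per-step model is small even though sequences get longer.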

Read more

How to Get Started with Deep Learning for Natural Language Processing

Last Updated on August 14, 2020

Deep Learning for NLP Crash Course. Bring deep learning methods to your text data project in 7 days. We are awash with text, from books, papers, blogs, tweets, news, and increasingly text from spoken utterances. Working with text is hard, as it requires drawing upon knowledge from diverse domains such as linguistics, machine learning, statistical methods, and, these days, deep learning. Deep learning methods are starting to out-compete the classical and statistical methods on […]

Read more

How to Use The Pre-Trained VGG Model to Classify Objects in Photographs

Last Updated on August 19, 2019

Convolutional neural networks are now capable of outperforming humans on some computer vision tasks, such as classifying images. That is, given a photograph of an object, they can answer which of 1,000 specific object classes the photograph shows. A competition-winning model for this task is the VGG model by researchers at Oxford. What is important about this model, besides its capability of classifying objects in photographs, is that the model weights are freely […]

Read more

How to Develop a Word-Level Neural Language Model and Use it to Generate Text

Last Updated on September 3, 2020

A language model can predict the probability of the next word in the sequence, based on the words already observed in the sequence. Neural network models are a preferred method for developing statistical language models because they can use a distributed representation, where different words with similar meanings have similar representations, and because they can use a large context of recently observed words when making predictions. In this tutorial, you will discover how to […]

Read more

How to Automatically Generate Textual Descriptions for Photographs with Deep Learning

Last Updated on August 7, 2019

Captioning an image involves generating a human-readable textual description given an image, such as a photograph. It is an easy problem for a human, but very challenging for a machine, as it involves both understanding the content of an image and translating that understanding into natural language. Recently, deep learning methods have displaced classical methods and are achieving state-of-the-art results for the problem of automatically generating descriptions, called “captions,” for images. […]

Read more

How to Prepare a Photo Caption Dataset for Training a Deep Learning Model

Last Updated on August 7, 2019

Automatic photo captioning is a problem where a model must generate a human-readable textual description given a photograph. It is a challenging problem in artificial intelligence that requires both image understanding from the field of computer vision and language generation from the field of natural language processing. It is now possible to develop your own image caption models using deep learning and freely available datasets of photos and their descriptions. In this […]

Read more

How to Prepare Univariate Time Series Data for Long Short-Term Memory Networks

Last Updated on August 5, 2019

It can be hard to prepare data when you’re just getting started with deep learning. Long Short-Term Memory, or LSTM, recurrent neural networks expect three-dimensional input in the Keras Python deep learning library. If you have a long sequence of thousands of observations in your time series data, you must split your time series into samples and then reshape it for your LSTM model. In this tutorial, you will discover exactly how to prepare […]
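The split-and-reshape step the teaser describes can be sketched with NumPy alone. Keras LSTM layers expect input shaped as [samples, timesteps, features]; the helper name `split_series` and the choice to drop any leftover observations are illustrative assumptions, not from the tutorial.

```python
import numpy as np

def split_series(series, timesteps):
    """Split a long univariate series into fixed-length samples, then
    reshape to the 3D [samples, timesteps, features] layout LSTMs expect."""
    n_samples = len(series) // timesteps
    trimmed = series[: n_samples * timesteps]   # drop the remainder (an assumption)
    return trimmed.reshape(n_samples, timesteps, 1)  # 1 feature per timestep

series = np.arange(25.0)        # e.g. 25 observations of one variable
X = split_series(series, 5)
print(X.shape)  # (5, 5, 1)
```

With this shape, `X` can be passed directly as input to an LSTM layer whose `input_shape` is `(5, 1)`.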

Read more