Dynamic Classifier Selection Ensembles in Python

Dynamic classifier selection is a type of ensemble learning algorithm for classification predictive modeling. The technique involves fitting multiple machine learning models on the training dataset, then selecting the model that is expected to perform best when making a prediction, based on the specific details of the example to be predicted. This can be achieved using a k-nearest neighbor model to locate examples in the training dataset that are closest to the new example to be predicted, evaluating all models […]
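
As a rough sketch of the idea (not necessarily the implementation used in the article), the snippet below selects, for each test example, whichever base model scores best on its k nearest training neighbors. The dataset and the pool of base models are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

# Placeholder data: any tabular classification dataset would do.
X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

# Fit a pool of diverse base classifiers on the training set.
pool = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(), GaussianNB()]
for model in pool:
    model.fit(X_train, y_train)

# A k-NN model over the training data defines the "local region" of each test example.
knn = NearestNeighbors(n_neighbors=10).fit(X_train)

def dcs_predict(x):
    # Find the k training examples closest to x.
    _, idx = knn.kneighbors([x])
    X_local, y_local = X_train[idx[0]], y_train[idx[0]]
    # Select the base model with the best accuracy on that local region...
    best = max(pool, key=lambda m: m.score(X_local, y_local))
    # ...and let it make the prediction for x.
    return best.predict([x])[0]

preds = [dcs_predict(x) for x in X_test]
print("accuracy:", np.mean(np.array(preds) == y_test))
```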

Read more

Machine Translation Weekly 62: The EDITOR

Papers about new models for sequence-to-sequence modeling have always been my favorite genre. This week I will talk about a model called EDITOR, introduced in a pre-print by authors from the University of Maryland that will appear in the TACL journal. The model is based on the Levenshtein Transformer, a partially non-autoregressive model for sequence-to-sequence learning. Autoregressive models generate the output left-to-right (or right-to-left), conditioning each step on the previously generated tokens. On the other […]

Read more

Python: Check if Key Exists in Dictionary

Introduction A dictionary (also known as a ‘map’, ‘hash’ or ‘associative array’) is a built-in Python container that stores elements as key-value pairs. Where other containers use numeric indexing, a dictionary uses keys as its indexes. Keys are often numeric or string values, but any immutable (hashable) object can serve as a key; mutable objects such as lists cannot. In this article, we’ll take a look at how to check if a key exists in a dictionary in Python. In the examples, […]
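
As a quick preview, the idiomatic checks look something like this (the dictionary here is just a made-up example):

```python
# A small stand-in dictionary for illustration.
fruits = {"apple": 3, "banana": 5}

# The idiomatic membership test uses the `in` operator on the dictionary's keys.
if "apple" in fruits:
    print("apple is present")

# dict.get() avoids a KeyError and lets you supply a default when the key is absent.
count = fruits.get("cherry", 0)
print(count)  # 0
```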

Read more

Calculating Pearson Correlation Coefficient in Python with Numpy

Introduction This article is an introduction to the Pearson Correlation Coefficient, its manual calculation and its computation via Python’s numpy module. The Pearson correlation coefficient measures the linear association between variables. Its value can be interpreted like so:

+1: Complete positive correlation
+0.8: Strong positive correlation
+0.6: Moderate positive correlation
0: No correlation whatsoever
-0.6: Moderate negative correlation
-0.8: Strong negative correlation
-1: Complete negative correlation

We’ll illustrate how the correlation coefficient varies […]
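
A minimal sketch of both the manual calculation and the numpy helper, on made-up data (the article's own examples may differ):

```python
import numpy as np

# Two illustrative (made-up) variables with a roughly linear relationship.
x = np.arange(50)
y = 2.5 * x + np.random.normal(0, 10, size=50)

# Manual calculation from the definition: covariance divided by the product
# of the standard deviations.
r_manual = np.cov(x, y)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))

# numpy's built-in helper returns the full correlation matrix; the off-diagonal
# entry is the Pearson coefficient between x and y.
r_numpy = np.corrcoef(x, y)[0, 1]

print(r_manual, r_numpy)  # both close to +1 for this data
```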

Read more

Automatic Standardization of Colloquial Persian

The Iranian Persian language has two varieties: standard and colloquial. Most natural language processing tools for Persian assume that the text is in standard form; this assumption does not hold in many real applications, especially for web content… This paper describes a simple and effective standardization approach based on sequence-to-sequence translation. We design an algorithm for generating artificial parallel colloquial-to-standard data for learning a sequence-to-sequence model. Moreover, we annotate a publicly available evaluation dataset consisting of 1912 sentences from a diverse set […]

Read more

‘Seeing’ on tiny battery-powered microcontrollers with RNNPool

Computer vision has rapidly evolved over the past decade, allowing for such applications as Seeing AI, a camera app that describes aloud a person’s surroundings, helping those who are blind or have low vision; systems that can detect whether a product, such as a computer chip or article of clothing, has been assembled correctly, improving quality control; and services that can convert information from hard-copy documents into a digital format, making it easier to manage personal and business data. All […]

Read more

Globetrotter: Unsupervised Multilingual Translation from Visual Alignment

Multi-language machine translation without parallel corpora is challenging because there is no explicit supervision between languages. Existing unsupervised methods typically rely on topological properties of the language representations… We introduce a framework that instead uses the visual modality to align multiple languages, using images as the bridge between them. We estimate the cross-modal alignment between language and images, and use this estimate to guide the learning of cross-lingual representations. Our language representations are trained jointly in one model with a […]

Read more

Random Forest for Time Series Forecasting

Random Forest is a popular and effective ensemble machine learning algorithm. It is widely used for classification and regression predictive modeling problems with structured (tabular) data sets, e.g. data as it looks in a spreadsheet or database table. Random Forest can also be used for time series forecasting, although it requires that the time series dataset be transformed into a supervised learning problem first. It also requires the use of a specialized technique for evaluating the model called walk-forward validation, […]
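
A minimal sketch of that transformation and of walk-forward validation, assuming a synthetic series and a scikit-learn RandomForestRegressor (the article's exact setup may differ):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder series; in practice this would be your univariate time series.
series = np.sin(np.linspace(0, 20, 200)) + np.random.normal(0, 0.1, 200)

def series_to_supervised(data, n_lags=5):
    # Turn the series into (lag features, next value) pairs.
    X, y = [], []
    for i in range(n_lags, len(data)):
        X.append(data[i - n_lags:i])
        y.append(data[i])
    return np.array(X), np.array(y)

X, y = series_to_supervised(series)

# Walk-forward validation: refit on all available history, predict one step
# ahead, then move the split forward and repeat.
n_test = 20
preds = []
for i in range(len(X) - n_test, len(X)):
    model = RandomForestRegressor(n_estimators=200, random_state=1)
    model.fit(X[:i], y[:i])
    preds.append(model.predict(X[i:i + 1])[0])

mae = np.mean(np.abs(np.array(preds) - y[-n_test:]))
print("MAE:", mae)
```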

Read more

Curve Fitting With Python

Curve fitting is a type of optimization that finds an optimal set of parameters for a defined function that best fits a given set of observations. Unlike supervised learning, curve fitting requires that you define the function that maps examples of inputs to outputs. The mapping function, also called the basis function, can have any form you like, including a straight line (linear regression), a curved line (polynomial regression), and much more. This provides the flexibility and control to define […]
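
As a rough sketch of the workflow, using SciPy's curve_fit on made-up observations (the article's own examples may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

# You define the mapping (basis) function yourself; here, a simple line.
def objective(x, a, b):
    return a * x + b

# Made-up noisy observations for illustration.
x = np.linspace(0, 10, 50)
y = 3.0 * x + 2.0 + np.random.normal(0, 1.0, size=50)

# curve_fit searches for the parameter values that minimize the squared error
# between objective(x, *params) and the observed y.
params, _ = curve_fit(objective, x, y)
a, b = params
print("fitted line: y = %.2f * x + %.2f" % (a, b))
```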

Read more

Stochastic Hill Climbing in Python from Scratch

Stochastic hill climbing is an optimization algorithm. It makes use of randomness as part of the search process. This makes the algorithm appropriate for nonlinear objective functions where other local search algorithms do not operate well. It is also a local search algorithm, meaning that it modifies a single solution and searches the relatively local area of the search space until a local optimum is located. This means that it is appropriate for unimodal optimization problems or for use after […]
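
A minimal from-scratch sketch of the algorithm, minimizing a toy one-dimensional objective (the objective function, bounds, and step size are placeholders, not the article's exact code):

```python
import numpy as np

def objective(x):
    # A simple unimodal objective: minimize x^2.
    return x[0] ** 2.0

def hill_climbing(objective, bounds, n_iterations=1000, step_size=0.1):
    # Start from a random point within the bounds.
    solution = bounds[:, 0] + np.random.rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    solution_eval = objective(solution)
    for _ in range(n_iterations):
        # Take a random step around the current solution (the stochastic part).
        candidate = solution + np.random.randn(len(bounds)) * step_size
        candidate_eval = objective(candidate)
        # Keep the candidate only if it is at least as good as the current solution.
        if candidate_eval <= solution_eval:
            solution, solution_eval = candidate, candidate_eval
    return solution, solution_eval

bounds = np.array([[-5.0, 5.0]])
best, score = hill_climbing(objective, bounds)
print("best: f(%s) = %.5f" % (best, score))
```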

Read more