
Articles related to "ai"


Detecting and modeling outliers with PyOD

  • In statistics and Machine Learning, detecting outliers is a pivotal step, since they might affect the performance of your model.
  • One possible way of handling them, once all the outliers have been detected (we will see how to do so later on), is to simply remove them or, a bit more sophisticatedly, substitute them with the mean or median value of that feature.
  • For example, here we can compute the Euclidean distance between our green observation and its nearest observations (the number of neighbors to consider is a value we have to set before training the model, often denoted K).
  • Now let’s train and make predictions with our KNN model.
  • As anticipated, once our outliers have been properly detected, we have to decide how to deal with them or, in other words, how to incorporate the information they contain.
  • Modeling outliers is far from being easy and it is an open topic in statistics and data science.
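
A minimal sketch of the KNN-based detection described above, using PyOD on synthetic data (the data, the value of K, and the contamination rate are assumptions for illustration):

```python
import numpy as np
from pyod.models.knn import KNN

# Toy 2-D data: 200 inliers plus a handful of injected outliers (illustrative only).
rng = np.random.RandomState(42)
X_train = np.vstack([rng.normal(0, 1, size=(200, 2)),
                     rng.uniform(6, 8, size=(5, 2))])

# KNN detector: a point's outlier score is based on its distance to its K nearest neighbours.
clf = KNN(n_neighbors=5, contamination=0.05)
clf.fit(X_train)

labels = clf.labels_            # 0 = inlier, 1 = outlier on the training data
scores = clf.decision_scores_   # raw outlier scores

# Predict on new observations; the second point should most likely be flagged.
X_new = np.array([[0.1, -0.2], [7.0, 7.5]])
print(clf.predict(X_new))
```

From there, the flagged rows can simply be dropped or replaced with the feature's mean or median, as discussed above.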



Data Preprocessing in Data Mining & Machine Learning

  • While some irrelevant and redundant attributes can be eliminated immediately by using common sense or domain knowledge, selecting the best subset of features frequently requires a systematic approach.
  • The ideal approach to feature selection is to try all possible subsets of features as input to the data mining algorithm of interest, and then take the subset that produces the best results.
  • Feature selection occurs naturally as part of the data mining algorithm.
  • Specifically, during the operation of the data mining algorithm, the algorithm itself decides which attributes to use and which to ignore.
  • These methods use the target data mining algorithm as a black box to find the best subset of attributes, in a way similar to that of the ideal approach described above, but typically without enumerating all possible subsets (a small sketch of this wrapper idea follows below).
  • If you liked this, go visit my other articles on Data Mining and Machine Learning.
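
A small sketch of the wrapper idea referenced above: treat the estimator as a black box and greedily add the feature that most improves the cross-validated score (the dataset and estimator here are placeholders, not the article's):

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)
model = LinearRegression()

selected, remaining = [], list(range(X.shape[1]))
best_score = -np.inf

# Greedy forward selection: at each step add the feature that most improves
# the cross-validated score of the black-box estimator.
while remaining:
    score, best_j = max(
        (cross_val_score(model, X[:, selected + [j]], y, cv=5).mean(), j)
        for j in remaining
    )
    if score <= best_score:
        break
    best_score = score
    selected.append(best_j)
    remaining.remove(best_j)

print("Selected feature indices:", selected, "| CV R^2:", round(best_score, 3))
```

Greedy forward selection avoids enumerating every subset, at the cost of possibly missing the globally best combination.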



Data Science Made Easy: Data Modeling and Prediction using Orange

  • This article is about performing prediction on test data based on the models that we have trained using train data.
  • Most of the time, the data modeling and prediction part is the most interesting as it requires you to think and tweak the underlying parameters to improve the results.
  • Kindly read the first part for setup and installation if you have missed it; there are links at the bottom for navigating the entire Data Science Made Easy series.
  • The first one is via two different File widgets that hold the data for the train set and the test set.
  • As I mentioned in the previous article, the Data Sampler widget has functionality similar to sklearn's train_test_split.
  • Linear Regression widget attempts to find the best fit line based on the data points provided.
  • The parameters are very similar to those of Linear Regression, as you can choose between Ridge and Lasso regularization (a rough scikit-learn analogue is sketched below).
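
For readers more comfortable with scripting, a rough scikit-learn analogue of the File / Data Sampler / Linear Regression widget pipeline (this is an analogy on a placeholder dataset, not Orange's own API):

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)

# Roughly what the Data Sampler widget does: split into train and test sets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Roughly what the Linear Regression widget does; Ridge adds L2 regularization.
for model in (LinearRegression(), Ridge(alpha=1.0)):
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(type(model).__name__, "test MSE:", round(mean_squared_error(y_test, pred), 1))
```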



Predicting Football Match Attendance with Machine Learning Algorithms

  • For readers who aren’t familiar with the typical machine learning workflow, I’ll use an analogy of a school teacher teaching a student a new concept to help you understand the core idea behind each step of the process.
  • Firstly, due to the large variability in AFL stadium capacities, using percentage-filled would lead to more stable predictions and would help with interpreting the model’s key performance metrics.
  • I started with the fundamental dataset (match details and attendance) and successively added in new variables from the other data files (e.g. rainfall, team memberships and stadium capacities).
  • Many machine learning algorithms wouldn’t be able to pick up this nuance simply by taking in raw date and time information, so I decided to create a dedicated variable for each match’s time-slot.
  • The resulting coefficients of my linear regression model are shown below (note that the baseline factors for the time-slot and stadium dummy variables were Saturday night and the MCG, respectively).
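
Creating the time-slot and stadium dummy variables described above is a one-liner in pandas; the column names and values below are assumptions for illustration:

```python
import pandas as pd

# Hypothetical slice of the match data (column names and values are assumptions).
matches = pd.DataFrame({
    "time_slot": ["Sat night", "Fri night", "Sun afternoon", "Sat night"],
    "stadium": ["MCG", "Marvel", "MCG", "Adelaide Oval"],
    "attendance_pct": [0.82, 0.64, 0.71, 0.93],
})

# One-hot encode the categorical variables. drop_first=True drops one category
# per variable to act as the baseline; pandas drops the first level it sees,
# so reorder the categories if you want a specific baseline such as
# Saturday night and the MCG, as in the article's model.
features = pd.get_dummies(matches[["time_slot", "stadium"]], drop_first=True)
print(features)
```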



KL Divergence Python Example

  • In other words, we can use the KL divergence to tell whether a Poisson distribution or a normal distribution is better at approximating the data.
  • For distributions P and Q of a continuous random variable, the Kullback-Leibler divergence is computed as an integral.
  • On the other hand, if P and Q represent the probability distribution of a discrete random variable, the Kullback-Leibler divergence is calculated as a summation.
  • Therefore, as in the case of t-SNE and Gaussian Mixture Models, we can estimate the Gaussian parameters of one distribution by minimizing its KL divergence with respect to another.
  • Let’s see how we could go about minimizing the KL divergence between two probability distributions using gradient descent.
  • Just like before, we define a function to compute the KL divergence that excludes probabilities equal to zero.
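
Following that description, a discrete KL divergence D(P‖Q) = Σᵢ P(i) log(P(i)/Q(i)) that skips zero-probability terms can be written as below (the probability vectors here are made up for illustration):

```python
import numpy as np

def kl_divergence(p, q):
    """Discrete KL divergence D(P || Q), skipping terms where either probability is zero."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = (p > 0) & (q > 0)
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Compare how well two candidate distributions approximate the same (made-up) histogram.
p = np.array([0.10, 0.40, 0.30, 0.20])            # empirical distribution
q_normal_like = np.array([0.15, 0.35, 0.35, 0.15])
q_poisson_like = np.array([0.30, 0.35, 0.20, 0.15])

print("D(P || normal-like): ", kl_divergence(p, q_normal_like))
print("D(P || poisson-like):", kl_divergence(p, q_poisson_like))
```

The candidate with the smaller divergence is the better approximation, and the same quantity can serve as a loss to be minimized with gradient descent, as described above.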



GeoVec: word embeddings for geosciences

  • In order to use the valuable information contained in descriptions, my colleague Ignacio Fuentes and I decided to experiment with natural language processing (NLP) to generate domain-specific word embeddings.
  • Since most language models take into account the co-occurrence of words, we used a relatively large corpus of 280,764 full-text articles related to geosciences.
  • After pre-processing the text (tokenisation, removing stopwords, etc.), we fitted a GloVe model to generate word embeddings that we could use in other applications.
  • For example, from the left panel, it is possible to generate the analogy “claystone is to clay as sandstone is to ___?”.
  • In the left panel it is possible to observe simple analogies, mostly syntactic since “claystone” contains the word “clay”.
  • The right panel shows an example of how the embeddings encode information from different aggregation levels.
  • The resulting interpolated embeddings actually correspond (are close) to those particle sizes, in the same order!
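
An analogy query like “claystone is to clay as sandstone is to ___?” reduces to vector arithmetic on the embeddings; with gensim it might look like this (the vectors file name and format are assumptions, since the summary does not say how GeoVec is distributed):

```python
from gensim.models import KeyedVectors

# Hypothetical path/format: pre-trained GeoVec-style vectors in word2vec text format.
kv = KeyedVectors.load_word2vec_format("geovec_vectors.txt", binary=False)

# "claystone is to clay as sandstone is to ___?"
# Vector arithmetic: clay - claystone + sandstone.
result = kv.most_similar(positive=["clay", "sandstone"], negative=["claystone"], topn=3)
for word, similarity in result:
    print(word, round(similarity, 3))
```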



Boston Job Market for Data Analysts and Scientists : August 2019 Update

  • This is an August 2019 update of my original project where I simply aim to explore the job market for data analysts and data scientists in the Greater Boston Area.
  • Here we simply have the top 10 most common job titles.
  • One thing to note here is the high number of ‘senior’ titles here in the Boston market.
  • This is an interesting representation of the job market in Boston.
  • Liberty Mutual, based out of Boston, leads the charge with 8 distinct positions, primarily in data science.
  • Excel is still required (or at least mentioned) in over 60% of the job listings.
  • Of course, Python and SQL follow Excel, with R trailing a bit more than I would have expected (hoped), being mentioned in only a quarter of the listings.



TARGET HK: A Quick Dive Into China’s Disinformation Campaign On Twitter

  • This is a quick dive into the trove of Chinese state troll tweets released by Twitter on Aug 19.
  • Twitter said the tweets it released came from “936 accounts originating from within the People’s Republic of China (PRC).”
  • The number of troll tweets was whittled down from the initial 3.6 million to 581,070 after I filtered out retweets (RTs) and filtered by language (as sketched below).
  • The phrasing of some tweets would be immediately familiar to those who follow official Chinese rhetoric and its state-driven Internet commentary.
  • This troll account sent out just 1,059 unique tweets in my filtered dataset.
  • Second, it is unclear if the Chinese state agency has some capability to alter the account creation/tweet date in order to mask its activities.
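
The filtering step mentioned above is a plain pandas operation; the file and column names below are assumptions, as the released CSV schema is not described in this summary:

```python
import pandas as pd

# Hypothetical file and column names for the released tweet dump.
tweets = pd.read_csv("china_082019_tweets.csv", low_memory=False)

# Drop retweets and keep a single language (assuming a boolean `is_retweet`
# column and a `tweet_language` column; the actual schema may differ).
filtered = tweets[(~tweets["is_retweet"]) & (tweets["tweet_language"] == "zh")]

print(len(tweets), "->", len(filtered), "tweets after filtering")
```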



Selecting the right weight initialization for your deep neural network

  • With the vanishing gradient problem, the weight update is minor and results in slower convergence — this makes the optimization of the loss function slow and in a worst case scenario, may stop the network from converging altogether.
  • However, this normal random initialization approach does not work for training very deep networks, especially those that use the ReLU (rectified linear unit) activation function, because of the vanishing and exploding gradient problem referenced earlier.
  • The authors of ‘The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks’, Carbin and Frankle, tested their lottery ticket hypothesis and the existence of such subnetworks by performing a process called pruning, which eliminates unneeded connections from trained networks, based on their priority or weight, so that the networks fit on low-power devices.
  • In other words, the units in the neural network will learn the same features during training if their weights are initialized to be the same value.
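
The summary notes that plain normal random initialization breaks down in deep ReLU networks and that identical initial weights make units learn identical features. One widely used remedy for the ReLU case (not necessarily the article's final recommendation) is He initialization, sketched here in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def he_init(n_in, n_out):
    """He (Kaiming) initialization: zero-mean normal weights scaled by
    sqrt(2 / fan_in), which keeps activation variance roughly constant
    across ReLU layers and mitigates vanishing/exploding gradients."""
    return rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))

# Contrast with initializing every unit to the same value: all units would then
# compute identical outputs and receive identical gradient updates, so they
# would keep learning the same features (the symmetry problem noted above).
W_bad = np.full((784, 128), 0.01)
W_good = he_init(784, 128)
print(W_good.std(), "should be close to", np.sqrt(2.0 / 784))
```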



Perceptron Algorithms for Linear Classification

  • Figure 1 illustrates the aforementioned concepts with the 2-D case, where x = [x₁ x₂]ᵀ, θ = [θ₁ θ₂] and θ₀ is an offset scalar.
  • The perceptron algorithm updates θ and θ₀ only when the decision boundary misclassifies the data points.
  • Given a set of data points that are linearly separable through the origin, the initialization of θ does not impact the perceptron algorithm’s ability to eventually converge.
  • The fact that the number of iterations k is finite implies that, as long as the data points are linearly separable through the origin, the perceptron algorithm eventually converges no matter what the initial value of θ is.
  • However, this perceptron algorithm may encounter convergence problems once the data points are linearly non-separable.
  • Note that the given data are linearly non-separable, so the decision boundary drawn by the perceptron algorithm does not converge.
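
A compact sketch of the update rule described above, on toy linearly separable data (the data and epoch limit are assumptions for illustration):

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Perceptron with offset: theta and theta_0 are updated only when a point
    is misclassified, i.e. when y * (theta . x + theta_0) <= 0."""
    theta = np.zeros(X.shape[1])
    theta_0 = 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for x_i, y_i in zip(X, y):
            if y_i * (np.dot(theta, x_i) + theta_0) <= 0:
                theta += y_i * x_i
                theta_0 += y_i
                mistakes += 1
        if mistakes == 0:   # no misclassifications: the algorithm has converged
            break
    return theta, theta_0

# Toy linearly separable data with labels in {-1, +1}.
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
print(perceptron(X, y))
```

On linearly non-separable data the inner loop never reaches zero mistakes, which is the non-convergence behaviour noted in the last bullet.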
