
Articles related to "ml"


Machine Learning Basics: Polynomial Regression

  • In this article, we will walk through the program for building a Polynomial Regression model on non-linear data.
  • In the previous Linear Regression examples, plotting the data showed a linear relationship between the dependent and independent variables.
  • In this problem, we train a Polynomial Regression model on this data to capture the correlation between the Level and Salary of employees in the company, so that we can predict the salary of a new employee.
  • An instance of the class “LinearRegression” is assigned to the variable “lin_reg” and fitted on X_poly and y to build the model.
  • In this step, we predict Salary values with the Polynomial Regression model we built; a minimal sketch of these steps follows this list.
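
A minimal sketch of these steps with scikit-learn, matching the X_poly and lin_reg names mentioned above; the Level/Salary values below are illustrative placeholders, not the article's actual dataset:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Illustrative employee data: Level 1..10 and a non-linear Salary curve
X = np.arange(1, 11).reshape(-1, 1)
y = np.array([45, 50, 60, 80, 110, 150, 200, 300, 500, 1000]) * 1000.0

# Expand Level into polynomial terms (1, x, x^2, x^3, x^4)
poly = PolynomialFeatures(degree=4)
X_poly = poly.fit_transform(X)

# Fit an ordinary linear model on the polynomial features
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)

# Predict the salary of a new employee at level 6.5
print(lin_reg.predict(poly.transform([[6.5]])))
```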



Labeling Data with Pandas

  • Data labeling is the process of assigning informative tags to subsets of data.
  • Data containing x-ray images of cancerous and healthy lungs along with their respective tags is an example of labeled data.
  • Once a labeled data set is obtained, machine learning models can be trained on it and used to make predictions on new, unlabeled examples.
  • In this post, we will discuss the process of generating meaningful labels using the Python Pandas library.
  • The data is now appropriately labeled for training a ternary classification model.
  • To summarize, in this post we discussed how to use Pandas for labeling data.
  • First, we considered the task of assigning binary labels to wine data, indicating whether a wine is above 10% alcohol by volume.
  • We then took a look at assigning ternary labels that indicate the level of fixed acidity in the wines; both labeling steps are sketched after this list.
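
A hedged sketch of both labeling steps with Pandas; the column names follow the post's wine data, but the values and bin edges here are illustrative assumptions:

```python
import numpy as np
import pandas as pd

# Illustrative wine data; the post uses a real wine data set with these
# column names, but the values below are made up.
df = pd.DataFrame({
    "alcohol": [9.4, 10.2, 11.0, 9.8],
    "fixed acidity": [7.4, 6.3, 8.1, 11.2],
})

# Binary label: is the wine above 10% alcohol by volume?
df["high_alcohol"] = np.where(df["alcohol"] > 10, 1, 0)

# Ternary label: level of fixed acidity (bin edges are assumptions)
df["acidity_level"] = pd.cut(
    df["fixed acidity"],
    bins=[-np.inf, 7.0, 9.0, np.inf],
    labels=["low", "medium", "high"],
)
print(df)
```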



5 Python Code Smells You Should Be Wary Of

  • Code smells are signs of weakness or design flaws that can cause readability, maintainability, and scalability issues.
  • In the next few sections, we’ll look at a few quintessential Python code smell cases and how to avoid them.
  • Using default arguments is fairly common practice in Python: you set predefined values in the function signature and can opt to override them at call time.
  • This is useful when setting literals, numbers, or booleans as it helps you prevent a long list of parameters with redundant values.
  • Hence, using None as the default value and assigning the mutable value inside the function is a safer bet, as you won’t end up with maintainability issues.
  • Looping with an explicit index variable is not only less Pythonic but also hurts readability; both smells are sketched after this list.
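
A short sketch of the two smells described above and their fixes; the function and variable names are illustrative:

```python
def append_item_bad(item, items=[]):   # smell: the default list persists across calls
    items.append(item)
    return items

def append_item(item, items=None):     # safer: use None and create the list inside
    if items is None:
        items = []
    items.append(item)
    return items

print(append_item_bad(1))  # [1]
print(append_item_bad(2))  # [1, 2]  <- surprising shared state
print(append_item(2))      # [2]

# Index-based loops are less Pythonic; iterate directly instead
names = ["ada", "grace", "alan"]
for i in range(len(names)):            # smell: explicit index bookkeeping
    print(names[i])
for name in names:                     # Pythonic
    print(name)
```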



Tableau: Unleashing the Power of Visual Analytics

  • Confused, I consulted multiple online resources on how to use Tableau and I came across Ben Jones’ Communicating Data with Tableau: Designing, Developing, and Delivering Data Visualizations book.
  • Moreover, I learnt how to use features like filters, pages, and marks to build high-quality worksheets.
  • Finally, I got the hang of utilizing the dashboard feature to build interactive visual dashboards that can be used to tell stories to people.
  • Below is a short summary of how Tableau is used to build beautiful interactive visual interfaces.
  • Another aspect that I love about Tableau is the fact that I can easily build my charts by dragging fields from the table section and integrating them with the features used to create the worksheets.
  • I used my knowledge of Tableau to build a scatter plot of Life Expectancy vs Income and make an interactive dashboard.



Regularization — Part 3

  • At the input of the layer, you start measuring the mean and the standard deviation of the batch.
  • You compute the current mean and standard deviation over the mini-batch and use them to normalize the activations of the input.
  • To move on to test time, you compute, after training finishes, the mean and standard deviation in the batch normalization layer once for the entire training set, and you keep them constant for all future applications of the network; this train/test behavior is sketched after this list.
  • But if you want to have independent features that allow you to recognize different things, then you somehow have to break the correlation between the features and this can actually be performed by dropout.
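
A minimal NumPy sketch of the batch normalization behavior described above; the learnable scale and shift parameters of a full batch normalization layer are omitted for brevity, and the names are illustrative:

```python
import numpy as np

def batch_norm_train(x, eps=1e-5):
    # Normalize with the current mini-batch statistics (computed over axis 0)
    mu = x.mean(axis=0)
    sigma = x.std(axis=0)
    return (x - mu) / (sigma + eps), mu, sigma

def batch_norm_test(x, train_mu, train_sigma, eps=1e-5):
    # At test time, reuse statistics computed once over the entire training
    # set and kept constant for all future applications of the network
    return (x - train_mu) / (train_sigma + eps)

batch = np.random.randn(32, 4) * 3.0 + 1.0   # a mini-batch of activations
normed, mu, sigma = batch_norm_train(batch)
```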



How Floating Point Numbers Work

  • In this article we’ll dig into the nuts and bolts of floating point numbers, cover the edge cases (numerical underflow and overflow), and close with applications: TPU’s bfloat16 format and HDR imaging.
  • For example, if we have an 8 bit unsigned integer, we can represent numbers between 00000000 and 11111111.
  • Since most recently produced personal computers use a 64 bit processor, it’s pretty common for the default floating-point implementation to be 64 bit.
  • The exponent is an 11-bit biased (signed) integer like we saw before, but with some caveats.
  • With a standard half-precision float (5 exponent bits, 10 significand bits), the smallest number bigger than 1 is about 1.001; this is demonstrated in the sketch after this list.
  • If you read the Google blog post about their custom 16-bit float format, you’ll see they talk about “dynamic range.” In fact, something similar is going on with HDR images (like the ones you can capture on your phone).
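
A short NumPy demonstration of the half-precision spacing claim above; the printed values are approximate:

```python
import numpy as np

# With 10 significand bits, the spacing of float16 just above 1.0 is 2**-10,
# so the smallest representable number bigger than 1 is about 1.001
step = np.float16(2.0 ** -10)
print(np.float16(1.0) + step)                  # 1.001
print(np.float16(1.0) + np.float16(2.0**-11))  # 1.0 (too small: rounds back down)

# Underflow/overflow edges of float16
info = np.finfo(np.float16)
print(info.tiny, info.max)                     # ~6.1e-05 (smallest normal), 65504.0
```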



What is a Full Stack Data Scientist?

  • The scope of a full stack data scientist covers every component of a data science business initiative, from identifying business problems to training and deploying machine learning models that benefit stakeholders.
  • A full stack data scientist must be able to identify and understand business problems (or opportunities) that can be solved using the data science toolkit.
  • This skill is essential because useful machine learning models cannot be built without data understanding.
  • Lastly, a full stack data scientist must have the skill to deploy model pipelines to production.
  • Two key underlying skills of the full stack data scientist are the ability to design a system or process and the ability to quickly pick up new technologies.
  • These two elements are the keys to any organization extracting value from data science — solving the right problems and making them accessible to the end-user.



Model with TensorFlow and Serve on Google Cloud Platform

  • In this guide, we learn how to develop a TensorFlow model and serve it on the Google Cloud Platform (GCP).
  • We consider a regression problem of predicting the earnings of products using a three-layer neural network implemented with the TensorFlow and Keras APIs. In this guide, we use TensorFlow 2.1.0 and the Google Colab runtime environment.
  • Google Colab offers free GPU and TPU runtimes for training machine learning models.
  • To define the deep learning model, we use the Keras API module shipped with TensorFlow 2.1.0. We start with a basic model with two intermediate ReLU layers; a minimal sketch follows this list.
  • We then read the sample input data, test the model’s predictions, and save the model locally for future use.
  • In this guide, we have learned how to train a deep learning model with TensorFlow 2.1.0 and deploy the trained model on the Google Cloud Platform.
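
A minimal sketch of such a model in TensorFlow 2.x Keras; the layer sizes, feature count, and training data below are illustrative assumptions, not the guide's actual values:

```python
import numpy as np
from tensorflow import keras

# Illustrative data: 9 product features -> earnings
X_train = np.random.rand(500, 9).astype("float32")
y_train = np.random.rand(500, 1).astype("float32")

# Three-layer regression network with two intermediate ReLU layers
model = keras.Sequential([
    keras.layers.Dense(50, activation="relu", input_shape=(9,)),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(1, activation="linear"),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=10, batch_size=32, verbose=0)

# Save in the SavedModel format used when deploying to GCP for serving
model.save("exported_model")
```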



My 10 favorite resources for learning data science online

  • That is why in this article, I want to share my 10 favorite data science resources (online ones), which I frequently use for learning and trying to keep up with the current developments.
  • In the talks you can find a mix of general Python best practices, examples of real-life cases the data scientists worked on (for example, how they model churn or what tools they use to generate an uplift in their marketing campaigns), and introductions to some new libraries.
  • I started my data science journey with R, and even after switching my main programming language to Python I still follow R-bloggers.
  • The list of people to follow depends largely on the scope of your interests, for example, whether you focus on deep learning for computer vision or on NLP.



Deep Learning in Healthcare — X-Ray Imaging (Part 4-The Class Imbalance problem)

  • Also, it should be noted that when dealing with medical images, the final accuracy of the model (both training and validation accuracy) is not the right metric to judge the model’s performance by.
  • Important note: oversampling should be done on the training data only, not on the test data; if the test data contained artificially generated images, the classifier’s results would not properly reflect how much the network actually learned.
  • This function creates artificially augmented images for the normal and viral pneumonia classes until their counts match the total number of bacterial pneumonia images; a sketch of such a function follows this list.
  • Now that the class imbalance problem is dealt with, in the next part we will look at image normalization and data augmentation using Keras and TensorFlow.
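
A hedged sketch of such an oversampling function using Keras’ ImageDataGenerator; the augmentation parameters, function name, and class counts are assumptions, not the article's exact settings:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation settings for generating artificial images
datagen = ImageDataGenerator(rotation_range=10,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             zoom_range=0.1)

def oversample(images, target_count):
    """Augment a minority class (e.g. normal or viral pneumonia) until its
    count matches the majority (bacterial pneumonia) class. Apply to the
    training split only, never the test split."""
    augmented = list(images)
    flow = datagen.flow(np.array(images), batch_size=1)
    while len(augmented) < target_count:
        augmented.append(flow.next()[0])
    return np.array(augmented)
```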
