In continuation of the AI/ML talk series, here is a short talk on U-Net. U-Net is a neural network architecture used for the computer vision task of image segmentation. If you are reading on a mobile phone, watching the video below might be your best bet. If you are on a laptop, the notebook after the video …
In this new series, I will hand-pick some of my favorite AI papers and just talk about them, as best as I can. I will be starting with GAN. A generative adversarial network (GAN) is a type of generative model, a model that generates things, as opposed to a predictive model, which predicts things… If you are reading …
In a support vector machine, the goal is to fit a gutter to (two) linearly separable groups of samples. To achieve this goal, a margin is defined by the support vectors, and an optimal hyperplane sits somewhere in the ‘middle’.
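As a minimal sketch of the idea, assuming a hyperplane w·x + b = 0 has already been fit (the weights and toy points below are hypothetical, not from any real training run), classification is just a matter of which side of the hyperplane a sample falls on, and the gutter's width is 2/‖w‖:

```python
import math

def svm_predict(w, b, x):
    """Classify x by which side of the hyperplane w.x + b = 0 it falls on."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

def margin_width(w):
    """The gutter between the two supporting hyperplanes is 2 / ||w||."""
    return 2.0 / math.sqrt(sum(wi * wi for wi in w))

# Toy hyperplane separating points above/below the line x1 + x2 = 1
w, b = [1.0, 1.0], -1.0
print(svm_predict(w, b, [2.0, 2.0]))   # 1  (above the line)
print(svm_predict(w, b, [0.0, 0.0]))   # -1 (below the line)
print(round(margin_width(w), 4))       # 1.4142
```

Training is, of course, the hard part: it is the search for the w and b that maximize this margin subject to the samples staying outside the gutter.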
High-dimensional datasets, as they are called, are, as expected, resistant to traditional statistical treatments, which has galvanized the development of dimensionality reduction techniques. One of the most popular of these methods is what we will speak to in this short notebook: principal component analysis.
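For two dimensions, the core of PCA fits in a few lines: center the data, form the covariance matrix, and take its leading eigenvector (available in closed form for a symmetric 2x2 matrix). This is a sketch on hypothetical data, not the notebook's own implementation:

```python
import math

def first_principal_component(data):
    """Leading eigenvector of the 2x2 covariance matrix of 2-D samples."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    # Covariance matrix entries [[a, b], [b, c]]
    a = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
    c = sum((y - my) ** 2 for _, y in data) / (n - 1)
    b = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
    if abs(b) < 1e-12:
        # No covariance: the component is whichever axis has more variance
        return (1.0, 0.0) if a >= c else (0.0, 1.0)
    # Largest eigenvalue of a symmetric 2x2 matrix, in closed form
    lam = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    # Corresponding eigenvector (b, lam - a), normalized to unit length
    vx, vy = b, lam - a
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm

# Hypothetical points scattered along y = x: the component should be ~(0.71, 0.71)
points = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 3.8)]
print(first_principal_component(points))
```

Projecting each sample onto this direction reduces the dataset from two dimensions to one while keeping the direction of greatest variance.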
One of the exciting things I found about learning machine intelligence is its amenability to very visual analogies. Once again, ensemble learning affords us such comfort.
Here, we have another very intuitive machine learning algorithm. In one sentence: A decision tree is a tree describing how a decision is made. No more, no less.
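That one sentence can be taken literally. Here is a sketch of a hand-built tree (the weather example and its thresholds are hypothetical) where each internal node asks a question and each leaf is the decision:

```python
# A decision tree is literally a tree of questions; each leaf is a decision.
# Hypothetical example: decide whether to play tennis from the weather.
tree = {
    "question": lambda s: s["outlook"] == "sunny",
    "yes": {
        "question": lambda s: s["humidity"] > 70,
        "yes": "stay in",
        "no": "play",
    },
    "no": "play",
}

def decide(node, sample):
    """Walk the tree until a leaf (a plain string) is reached."""
    while isinstance(node, dict):
        node = node["yes"] if node["question"](sample) else node["no"]
    return node

print(decide(tree, {"outlook": "sunny", "humidity": 85}))  # stay in
print(decide(tree, {"outlook": "rain", "humidity": 85}))   # play
```

Learning algorithms like CART automate the part done by hand here: choosing which question to ask at each node.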
k-Nearest Neighbors (kNN) is one of the simplest and most intuitive machine learning algorithms out there. It simply argues that a new sample should be classified based on the identity of its k (to be defined) nearest neighbors. In other words, neighbors should have the same identity. Note that this is a kind of inductive bias, that …
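The whole algorithm is short enough to sketch in full: measure distances, keep the k closest training samples, and take a majority vote (the two toy clusters below are hypothetical):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Label a query point by majority vote among its k nearest neighbors."""
    # train: list of ((x, y), label) pairs; distance is plain Euclidean
    by_distance = sorted(train, key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D data: two well-separated clusters
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(knn_classify(train, (0.5, 0.5)))  # a
print(knn_classify(train, (5.5, 5.5)))  # b
```

There is no training step at all; the entire dataset is the model, which is why kNN is often called a lazy learner.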
. . . and this will work by definition for a regression problem but not for a classification problem. To achieve the aim of the latter, we will need a different function from which we could get something like the probability of class membership, i.e. 𝑓 : ℝ → [0, 1]. If the probability is greater than 0.5, we predict …
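The standard choice for such a squashing function is the logistic sigmoid. A minimal sketch of the squash-then-threshold rule:

```python
import math

def sigmoid(z):
    """Squash any real number into (0, 1): the logistic function."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_class(z, threshold=0.5):
    """Logistic-regression style decision rule on the squashed score."""
    return 1 if sigmoid(z) > threshold else 0

print(round(sigmoid(0.0), 2))   # 0.5 -- exactly on the decision boundary
print(predict_class(2.0))       # 1
print(predict_class(-2.0))      # 0
```

Note that a score of 0 maps to a probability of exactly 0.5, so the decision boundary in score space sits at z = 0.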
In my last notebook, we looked at a classification problem and defined many classification metrics. In this notebook, we will go through some regression metrics. Recall that in regression the response value is continuous (not categorical); as such, a different kind of prediction assessment comes into play.
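Three of the most common regression metrics, sketched from their definitions on a hypothetical set of predictions:

```python
def mae(y_true, y_pred):
    """Mean absolute error: the average size of the residuals."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean squared error: penalizes large residuals more heavily."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 minus residual over total variance."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical true values and model predictions
y_true = [3.0, 5.0, 2.0, 7.0]
y_pred = [2.5, 5.0, 3.0, 8.0]
print(mae(y_true, y_pred))              # 0.625
print(mse(y_true, y_pred))              # 0.5625
print(round(r2(y_true, y_pred), 3))     # 0.847
```

MAE and MSE are errors (smaller is better), while R² is a score: 1.0 means perfect predictions and 0.0 means no better than always predicting the mean.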
Now, say you have built a machine learning model; the question you ask is: ‘how well does this thing work, anyway?’. To answer this question, we will need to define performance metrics. As you might have imagined, the metrics will depend on the kind of machine learning problem in view.
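For a classification problem, for instance, three of the usual starting points can be sketched directly from their definitions (the labels below are hypothetical):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred, positive=1):
    """Of all predicted positives, how many are truly positive?"""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    predicted_pos = sum(p == positive for p in y_pred)
    return tp / predicted_pos if predicted_pos else 0.0

def recall(y_true, y_pred, positive=1):
    """Of all actual positives, how many did we catch?"""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    actual_pos = sum(t == positive for t in y_true)
    return tp / actual_pos if actual_pos else 0.0

# Hypothetical binary labels and predictions
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(precision(y_true, y_pred))  # 0.75
print(recall(y_true, y_pred))     # 0.75
```

Accuracy alone can mislead on imbalanced classes, which is exactly why precision and recall are worth computing separately.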