In a support vector machine, the goal is to fit a 'gutter' to two linearly separable groups of samples. To achieve this, a margin is defined by the support vectors, with an optimal hyperplane somewhere in the 'middle'.
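A minimal sketch of that idea, assuming scikit-learn's `SVC` with a linear kernel on a made-up blob dataset (neither the data nor the parameters come from the original notebook):

```python
# Minimal sketch: fit a linear SVM to two separable blobs and inspect the support vectors.
# The synthetic data and C=1.0 are illustrative assumptions, not from the original notebook.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=0, cluster_std=0.8)

clf = SVC(kernel="linear", C=1.0)   # smaller C widens the margin, larger C narrows it
clf.fit(X, y)

print("support vectors:\n", clf.support_vectors_)          # the samples that define the margin
print("hyperplane: w =", clf.coef_, " b =", clf.intercept_)  # the 'middle' of the gutter
```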

High-dimensional datasets, as they are called, tend (as expected) to resist traditional statistical treatments, which has galvanized the development of dimensionality reduction techniques. In this short notebook we will speak to one of the most popular of these methods: principal component analysis.
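As a taste of what the notebook covers, here is a minimal sketch assuming scikit-learn's `PCA`; the iris data and the choice of two components are illustrative assumptions only:

```python
# Minimal sketch: project a dataset onto its first two principal components.
# The iris data and n_components=2 are illustrative assumptions, not from the original notebook.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = load_iris(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)   # PCA is sensitive to feature scales

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X_scaled)

print(X_reduced.shape)                  # (150, 2): four features reduced to two
print(pca.explained_variance_ratio_)    # variance captured by each component
```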
One of the exciting things I have found about learning machine intelligence is its amenability to very visual analogies. Once again, ensemble learning affords us such comfort.
Here, we have another very intuitive machine learning algorithm. In one sentence: A decision tree is a tree describing how a decision is made. No more, no less.
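To make that one sentence concrete, here is a minimal sketch assuming scikit-learn's `DecisionTreeClassifier`; the iris data and `max_depth=3` are illustrative assumptions, and `export_text` simply prints the learned tree as nested if/else decisions:

```python
# Minimal sketch: a decision tree really is just a tree of if/else splits.
# The iris data and max_depth=3 are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)

# Print the learned tree as a series of nested decisions.
print(export_text(tree))
```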
k-Nearest Neighbors (kNN) is one of the simplest and most intuitive machine learning algorithms out there. It simply argues that a new sample should be classified based on the identity of its k (to be defined) nearest neighbors. In other words, neighbors should have the same identity. Note that this is a kind of inductive bias, that …
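A minimal sketch of that voting idea, assuming scikit-learn's `KNeighborsClassifier`; the data, the choice of k = 5, and the query point are illustrative assumptions:

```python
# Minimal sketch: classify a new sample by a majority vote of its k nearest neighbors.
# The iris data, k=5, and the query point are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

knn = KNeighborsClassifier(n_neighbors=5)   # k has to be chosen by the user
knn.fit(X, y)

new_sample = [[5.1, 3.5, 1.4, 0.2]]
print(knn.predict(new_sample))        # predicted class of the new sample
print(knn.predict_proba(new_sample))  # share of the 5 neighbors belonging to each class
```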
. . . and this will work by definition for a regression problem but not for a classification problem. To achieve the aim of the latter, we will need a different function that gives us something like the probability of class membership, i.e. 𝑓 : ℝ → [0, 1]. If the probability is greater than 0.5, we predict …
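A minimal sketch of one such function, the sigmoid, which squashes any real number into (0, 1) and is how logistic regression turns a linear score into a class probability; the toy scores below are made up purely for illustration, and the 0.5 threshold follows the text:

```python
# Minimal sketch: the sigmoid maps any real-valued score to a probability in (0, 1).
# The scores below are made up for illustration; the 0.5 threshold follows the text.
import numpy as np

def sigmoid(z):
    """Map z in (-inf, inf) to a value in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

scores = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
probs = sigmoid(scores)

print(probs)         # roughly [0.047, 0.378, 0.5, 0.622, 0.953]
print(probs > 0.5)   # predict class 1 when the probability exceeds 0.5
```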
In my last notebook we looked at a classification problem, and we defined many classification metrics. In this notebook, we will go through some regression metrics. Recall that in regression, the response value is continuous (not categorical), so a different kind of prediction assessment comes into play.
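A minimal sketch of a few such metrics using scikit-learn's `metrics` module; the true and predicted values below are made up purely for illustration:

```python
# Minimal sketch: common regression metrics on a toy set of true vs. predicted values.
# The numbers are made up purely for illustration.
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5,  0.0, 2.0, 8.0]

print("MAE:", mean_absolute_error(y_true, y_pred))   # average absolute error
print("MSE:", mean_squared_error(y_true, y_pred))    # average squared error
print("R^2:", r2_score(y_true, y_pred))              # variance explained by the model
```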
Now, say, you have built a machine learning model; the question you ask is: 'how well does this thing work anyway?'. To answer this question, we will need to define performance metrics. As you might have imagined, the metrics will depend on the kind of machine learning problem in view.
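A minimal sketch of the kind of metrics a classification problem calls for, using scikit-learn; the toy labels are made up purely for illustration:

```python
# Minimal sketch: a few classification metrics on toy true vs. predicted labels.
# The labels are made up purely for illustration.
from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
```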
These are the methods involved in sampling during machine learning. In my last notebook-blog, I hinted at the idea of an analogy between a 12-year-old girl studying for an exam, and our machine trying to learn…
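A minimal sketch of the most basic such method, a hold-out split, assuming scikit-learn's `train_test_split`; the iris data and the 80/20 split are illustrative assumptions:

```python
# Minimal sketch: hold out part of the data as an "exam" the model has never seen.
# The iris data, the 80/20 split, and random_state are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

print(X_train.shape, X_test.shape)   # (120, 4) (30, 4): study material vs. exam questions
```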
In our ML blog-syllabus, the mathematical foundations of ML should be the next stop; however, I have decided to postpone this until later in the blog in order to write something more comprehensive. The reader should note that getting the maths 'out of the way' is essential to deeply understanding a lot of the ML algorithms out …