High-dimensional datasets, as they are called, tend to resist traditional statistical treatment, and this has galvanized the development of dimensionality reduction techniques. In this short notebook we will speak to one of the most popular of them: principal component analysis.
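As a minimal sketch of the idea (illustrative only, not the notebook's code): PCA can be computed by centering the data and taking the SVD; the right singular vectors are the principal components, and projecting onto the top few reduces the dimensionality. The toy data here is made up for the example.

```python
import numpy as np

# Hypothetical toy data: 100 samples in 5 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# Center the data, then take the SVD; rows of Vt are the
# principal directions, ordered by explained variance.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project onto the top 2 components: 5 dimensions become 2.
X_reduced = Xc @ Vt[:2].T
print(X_reduced.shape)  # (100, 2)
```

The singular values `S` come out in decreasing order, so the first components always capture the most variance.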
One of the exciting things I have found about learning machine intelligence is its amenability to very visible analogies. Once again, ensemble learning affords us such comfort.
Imhotep came around the other night, and here is the bit I can remember. On Sacrifice. Me: Why do we have to sacrifice? Why do we have to give up something for something else? Imhotep: The act is merely a friendly reminder that you folks aren't God. On the World in Turmoil. Me: Imhotep, look Read More
Here, we have another very intuitive machine learning algorithm. In one sentence: A decision tree is a tree describing how a decision is made. No more, no less.
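To make the one-sentence definition concrete: a decision tree really is just nested questions. A hypothetical toy tree (the features and labels are invented for illustration) deciding whether to play tennis might read:

```python
# A decision tree written out as the questions it asks.
# Each branch point tests one feature; each leaf is a decision.
def decide(outlook, humidity, wind):
    if outlook == "sunny":
        return "stay in" if humidity == "high" else "play"
    elif outlook == "rain":
        return "stay in" if wind == "strong" else "play"
    else:  # overcast
        return "play"

print(decide("sunny", "high", "weak"))  # stay in
```

Learning a tree from data just means choosing which question to ask at each node, but the final model is exactly this kind of structure.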
k-Nearest Neighbors (kNN) is one of the simplest and most intuitive machine learning algorithms out there. It simply argues that a new sample should be classified based on the identity of its k (to be defined) nearest neighbors. In other words, neighbors should have the same identity. Note that this is a kind of inductive bias, that Read More
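The whole algorithm fits in a few lines. Here is a minimal sketch using Euclidean distance and majority vote (the training points and labels are made up for the example):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)  # distance to every sample
    nearest = np.argsort(dists)[:k]              # indices of the k closest
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Hypothetical toy data: two clusters, labeled "a" and "b".
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array(["a", "a", "b", "b"])

print(knn_predict(X_train, y_train, np.array([0.05, 0.1])))  # a
```

Note there is no training step at all; kNN simply memorizes the data and defers all the work to prediction time.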
. . . and this will work, by definition, for a regression problem but not for a classification problem. To achieve the aim of the latter, we will need a different function, one that gives us something like the probability of class membership, i.e. f : ℝ → [0, 1]. If the probability is greater than 0.5, we predict Read More
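The standard choice for such a function is the logistic (sigmoid) function, which squashes any real number into (0, 1). A minimal sketch, with illustrative weights chosen only for the example:

```python
import numpy as np

def sigmoid(z):
    # Maps any real z into (0, 1); read as P(class = 1).
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b, threshold=0.5):
    # Linear score, squashed to a probability, then thresholded.
    p = sigmoid(np.dot(w, x) + b)
    return int(p > threshold)

# Hypothetical weights, for illustration only.
w, b = np.array([2.0, -1.0]), 0.5
print(predict(np.array([1.0, 0.0]), w, b))  # 1
```

The 0.5 threshold is the conventional default; it can be moved when false positives and false negatives carry different costs.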
The Book of Why By Judea Pearl CAUSATION | How do we deal with causation, especially in the context of big data? There is perhaps no better scientist to turn to than Judea Pearl. The Deep Learning Revolution By Terrence J. Sejnowski MACHINE INTELLIGENCE | A historical and contemporary treatment of one of the greatest scientific breakthroughs of Read More
Abacha ti ku oooo – They Danced, and if my Memory Serves me Right, I Danced Too – I Say, Baloney! – The Placenta of Yoghurt – Of Ishango Bones and Nubian Antibiotics – The African Dream and the Black Problem – The Unknown-Knowns General Sani Abacha, if you don’t know him (and I am Read More
In my last notebook we looked at a classification problem, and we defined many classification metrics. In this notebook, we will go through some regression metrics. Recall that in regression the response value is continuous (not categorical), so a different kind of prediction assessment comes into play.
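As a sketch of the kind of metrics this involves, here are three common ones, mean absolute error, mean squared error, and R², written out from their textbook definitions (the toy targets and predictions are invented for the example):

```python
import numpy as np

def mae(y_true, y_pred):
    # Average absolute deviation; same units as the response.
    return np.mean(np.abs(y_true - y_pred))

def mse(y_true, y_pred):
    # Average squared deviation; punishes large errors more.
    return np.mean((y_true - y_pred) ** 2)

def r2(y_true, y_pred):
    # 1 minus (residual variance / total variance); 1.0 is perfect.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([2.5, 5.0, 8.0])
print(mae(y_true, y_pred))  # 0.5
```

MAE stays in the units of the response, MSE amplifies outliers, and R² is unitless, which is why all three are usually reported together.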
Now, say you have built a machine learning model; the question you ask is: 'how well does this thing work, anyway?' To answer this question, we will need to define performance metrics. As you might have imagined, the metrics will depend on the kind of machine learning problem in view.
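For a classification problem, the most immediate such metric is accuracy, the fraction of predictions that match the true labels. A minimal sketch (the labels are made up for the example):

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the true labels.
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# 3 of 4 hypothetical predictions are correct.
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```

Accuracy alone can mislead on imbalanced classes, which is exactly why the richer metrics that follow are needed.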