The Mouse to Elephant Curve – Dinner with Thomas Robert Malthus – The Kidney of Your Business – COVID-19 and the Intellectuals – In Case You Are Wondering, Nero Killed Saint Peter and Paul – Virtue-cum-Ethics Scaling. Abstract: Friends, in this eclectic essay, I explore the implications of scaling in 1) metabolism/biology, 2) business, 3) …
Link to Vlog: YouTube | Blog I probably started playing goalie when I was 10, and then went on and off as a striker. I am still pretty good on my feet, but I grew to love being a goalie. In secondary school, my classmates would kick the ball at me after school hours, …
In a support vector machine, the goal is to fit a ‘gutter’ between (two) linearly separable groups of samples. To achieve this goal, a margin is defined by the support vectors, and an optimal hyperplane sits somewhere in the ‘middle’.
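A minimal sketch of the idea, assuming scikit-learn and a made-up pair of separable clusters (not data from the post):

```python
# Linear SVM sketch: fit the widest 'gutter' between two separable clusters.
import numpy as np
from sklearn.svm import SVC

np.random.seed(0)
X = np.vstack([np.random.randn(20, 2) - 2,   # cluster for class 0
               np.random.randn(20, 2) + 2])  # cluster for class 1
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# The support vectors define the margin; the hyperplane w.x + b = 0
# sits in the 'middle' of the gutter.
print("support vectors:\n", clf.support_vectors_)
print("w =", clf.coef_[0], "b =", clf.intercept_[0])
```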
High-dimensional datasets, as they are called, are – as expected – resistant to traditional statistical treatments, which has galvanized the innovation of dimensionality reduction techniques. It is one of the most popular of these methods that we will speak to in this short notebook: principal component analysis.
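As an illustration of what that looks like in practice, here is a sketch using scikit-learn's PCA on random data (the dataset and component count are placeholders, not the notebook's):

```python
# PCA sketch: project a 50-dimensional dataset onto its top 2 components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))   # 100 samples, 50 features

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)

print(X_2d.shape)                     # (100, 2)
print(pca.explained_variance_ratio_)  # variance each component explains
```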
One of the exciting things I have found about learning machine intelligence is its amenability to very visible analogies. Once again, ensemble learning affords us such comfort.
Imhotep came around the other night, and here is the bit I can remember. On Sacrifice – Me: Why do we have to sacrifice, why do we have to give up something for something else? Imhotep: The act is merely a friendly reminder that you folks aren’t God. On the World in Turmoil – Me: Imhotep, look …
Here, we have another very intuitive machine learning algorithm. In one sentence: A decision tree is a tree describing how a decision is made. No more, no less.
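To make that one sentence concrete, here is a sketch with scikit-learn (the iris data and the depth cap are illustrative choices, not necessarily the post's):

```python
# Decision tree sketch: the fitted model is literally a tree of questions.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Each node asks a yes/no question about a feature -- a decision being made.
print(export_text(tree))
```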
k-Nearest Neighbors (kNN) is one of the simplest and most intuitive machine learning algorithms out there. It simply argues that a new sample should be classified based on the identity of its k (to be defined) nearest neighbors. In other words, neighbors should share the same identity. Note that this is a kind of inductive bias, that …
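A minimal sketch of that argument, assuming scikit-learn and toy points (not the post's data):

```python
# kNN sketch: a new sample takes the majority identity of its k neighbors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y = np.array([0, 0, 0, 1, 1, 1])

knn = KNeighborsClassifier(n_neighbors=3)  # k = 3
knn.fit(X, y)

print(knn.predict([[0.5, 0.5]]))  # [0]: its 3 nearest neighbors are class 0
print(knn.predict([[5.5, 5.5]]))  # [1]: its 3 nearest neighbors are class 1
```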
. . . and this will work by definition for a regression problem, but not for a classification problem. To achieve the aim of the latter, we will need a different function from which we can get something like the probability of class membership, i.e. 𝑓 : ℝ → [0, 1]. If the probability is greater than 0.5, we predict …
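The excerpt does not name the function, but the standard choice for this job is the logistic sigmoid; a sketch under that assumption:

```python
# Sigmoid sketch: squash any real number into (0, 1), then threshold at 0.5.
import numpy as np

def sigmoid(z):
    """sigma(z) = 1 / (1 + exp(-z)), mapping R into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def predict(z, threshold=0.5):
    """Predict class 1 when the 'probability' exceeds the threshold."""
    return int(sigmoid(z) > threshold)

print(sigmoid(0.0))   # 0.5 -- the decision boundary
print(predict(2.3))   # 1
print(predict(-1.7))  # 0
```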
The Book of Why By Judea Pearl CAUSATION | How do we deal with causation, especially in the context of big data? There is perhaps no better scientist to turn to than Judea Pearl. The Deep Learning Revolution By Terrence J. Sejnowski MACHINE INTELLIGENCE | A historical and contemporary treatment of one of the greatest scientific breakthroughs of …