Machine Learning Quick Start – Don’t Fear the Machines

Toon Pool. Computer Self Programming

Not long ago, Hadoop was supposed to solve all the world’s complex data problems.

“From the time it went open source in 2007, Hadoop and its related technologies have been profound drivers of the growth of data science.”

While Hadoop continues to solve some thorny data problems, pundits are now asking, “Is Hadoop dead?” It’s a sad state of affairs: most organizations have yet to fully understand and take advantage of Hadoop, yet it is already seen as obsolete. Things are moving awfully fast!


In my eighteen years as a data professional, I’ve experienced many data transformations and technological advances. In talking with organizations about their data, it’s clear that while they are extremely enthusiastic about wanting to leverage their data, they often don’t know where to start. Worse, their understanding of specific data technologies is often inaccurate. This is especially true when it comes to machine learning.

Machine learning is touted as today’s must-have technology. However, most organizations don’t really understand what machine learning is or why it is important.

Machine learning is not this:

Terminator 3. Rise of the Machines. Warner Bros.

While it is tempting to equate machine learning with its ultimate incarnation – strong AI (otherwise known as making computers as intelligent as people) – this isn’t expected until 2050 at the earliest.

A common misconception is that machine learning and artificial intelligence are one and the same. While they share similarities, they are not identical.

Machine learning (ML) enables artificial intelligence (AI), but AI can exist without ML; rule-based AI and expert systems are examples.

The term artificial intelligence was coined in 1956 by John McCarthy, professor emeritus at Stanford University. Prior to his death, Professor McCarthy worked for five decades defining the discipline of artificial intelligence.

Image by Stanford News.


Formally defined, Artificial Intelligence is:

“Intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of “intelligent agents” (any device that perceives its environment and takes actions that maximize its chance of success at some goal). Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving” (known as Machine Learning).”

The simplest way to compare AI to ML is that ML enables AI. ML is the doer. It learns. AI is the thinker. It makes decisions based on what it has learned through ML.






With that understanding of AI and its relationship to ML, let’s turn our attention to ML. The term machine learning was coined by Arthur Samuel in 1959 while he was working at IBM.

Arthur Samuel. AI and Gaming Pioneer. IBM Corporation

Samuel began his seminal work in ML in 1949. Considered a pioneer in gaming and AI, Samuel is believed to have created the first self-learning program, which played checkers.

The Samuel Checkers Playing program was a very early demonstration of AI.

Image by IBM.

While standing on the shoulders of Samuel’s monumental work, the discipline of machine learning has evolved and refined its definition:

“A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.”

– Tom Mitchell, Professor of Machine Learning, Carnegie Mellon University

Keeping with checkers:

Task (T) = Playing checkers
Performance (P) = Percentage of games won against an arbitrary opponent
Experience (E) = Playing practice games against self
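Mitchell’s T/P/E framing can be sketched in a few lines of Python. Everything below is a hypothetical toy, not a real checkers engine: the `skill` number stands in for an actual model, and the “learning update” is just a nudge. The point is only to show performance P on task T improving with experience E.

```python
import random

random.seed(0)  # make the toy run deterministic

def play_game(skill):
    """Task T: one toy checkers match; higher skill wins more often."""
    return random.random() < skill  # True means the program won

def performance(skill, n_games=1000):
    """Measure P: percentage of games won against an arbitrary opponent."""
    return sum(play_game(skill) for _ in range(n_games)) / n_games

skill = 0.2                     # an untrained model
before = performance(skill)
for practice_game in range(5):  # experience E: practice games against itself
    skill += 0.1                # toy stand-in for a real learning update
after = performance(skill)

print(before, after)            # P measured before and after experience E
```

Run it and the win rate climbs from roughly 20% to roughly 70% – a direct (if simplistic) reading of “its performance on T, as measured by P, improves with experience E.”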

Machine learning itself sits at the natural intersection of computer science and statistics.

Let’s look at how machine learning can be used in a program that plays checkers. In a traditional approach, the program has no prior knowledge and does not leverage experience to improve accuracy. Writing a checkers program by traditional means is an intractable problem because checkers has over 500,000,000,000,000,000,000 (500 quintillion) possible positions.

Image by Zeiss.

Writing a traditional program to play checkers must consider all the combinations in its handcrafted model.

Machine learning mitigates this problem: instead of hand-coding every combination, the program learns a model from experience and improves it by measuring how well that model performs.

Image by Zeiss.

Depicted above is a simplified view of a machine learning process. Learning occurs by providing sample data with the expected result; for example, several rounds of checkers along with their outcomes. This data is used to create the initial model, which the computer then uses to play checkers. The computer will win, lose, or draw. The machine learning model takes the results and tunes itself so it can leverage that knowledge in its next match.

A machine learning process is not a one-and-done event. The checkers example above demonstrates that the model needs to be tuned after every game played. If you want to build a checkers program that cannot be beaten, there will be a lot of model re-training. The good news is that re-training is almost always automated.
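The train-play-tune cycle just described can be sketched as a short loop. Again, this is a hypothetical toy rather than a real implementation: `weights` stands in for the model, and the “tuning” step is simply a nudge after each loss. What it illustrates is the feedback cycle – play a match, fold the result back into the model, repeat – with no manual intervention between rounds.

```python
import random

random.seed(42)  # make the toy run deterministic

def play_match(weights):
    """One toy match: a stronger model wins more often."""
    win_prob = min(0.9, 0.3 + 0.05 * weights)
    roll = random.random()
    if roll < win_prob:
        return "win"
    if roll < win_prob + 0.05:
        return "draw"
    return "loss"

def tune(weights, outcome):
    """Fold the result of the last match back into the model (toy update)."""
    return weights + 1 if outcome == "loss" else weights

weights = 0                      # initial model, built from sample games
results = []
for match in range(30):          # re-training is automated: every game feeds back
    outcome = play_match(weights)
    weights = tune(weights, outcome)
    results.append(outcome)

print(weights, results.count("win"))
```

Early matches are mostly losses, each loss tunes the model, and wins become more frequent as the loop runs – the same win/lose/draw-then-retrain cycle from the diagram above.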

To summarize, machine learning:

  • Is a subset of artificial intelligence
  • Is the natural intersection between computer science and statistics
  • Enables AI

In Part II of this series I present the various categories of machine learning:

  • Supervised
  • Unsupervised
  • Semi-supervised
  • Reinforcement

As always, I encourage your thoughts, feedback, and experiences.

Regards, Louis

Giving Credit

Wikipedia. Machine Learning. Accessed 03/23/2017 via URL
Mitchell, T. (2006). The Discipline of Machine Learning. Carnegie Mellon University. Accessed 03/23/2017 via URL
