A while back, I completed Udacity’s Data Analyst Nanodegree. As part of the coursework, I worked on a project on Exploratory Data Analysis (EDA): the numerical and graphical examination of a dataset’s characteristics and relationships before applying more formal, rigorous statistical analysis. In that project, I explored a dataset on red wine quality (using R and ggplot2) based on its physicochemical properties. The objective was to identify the physicochemical properties that distinguish good-quality wines from lower-quality ones. I felt a strong sense of satisfaction when I finished, and since I had already uploaded the source code to GitHub, I decided to write about the thought process behind the study. If you were looking for a qualitative explanation of the subject and accidentally ended up on this post, I suggest you read this article instead.
It’s been a while since I enrolled in Udacity’s Artificial Intelligence Nanodegree (which I genuinely rate above all the other online learning experiences I have had). While studying game-playing agents during the coursework, one of the assignments was to summarize a research paper, and for it I read about one of the most crucial breakthroughs in the history of Artificial Intelligence: Deep Blue.
Deep Blue was a chess-playing computer developed by IBM. It is known as the first machine to win a chess match against a reigning world champion under regular time controls.
When IBM’s Deep Blue beat Grandmaster Garry Kasparov in a six-game match in 1997, Kasparov came to believe that he was facing a machine that could experience human intuition.