A while back, I completed Udacity’s Data Analyst Nanodegree. As part of the coursework, I worked on a project on Exploratory Data Analysis (EDA): the numerical and graphical examination of data characteristics and relationships before applying more formal, rigorous statistical analysis. In this project, I explored a dataset on red wine quality (using R & ggplot2) based on its physicochemical properties. The objective was to identify the physicochemical properties that distinguish good-quality wines from lower-quality ones. I felt a great sense of satisfaction when I finished the work, and having already uploaded the source code to GitHub, I decided to write about the thought process behind the study. If you were looking for a qualitative explanation of the subject and ended up on this post by accident, I suggest you read this article instead.

#### Blog Posts

In this post, we are going to talk about mathematical optimization. This term is not to be confused with the everyday sense of ‘optimization’, as in improving the efficiency of a workflow. Mathematical optimization means finding an optimal solution from a set of candidate solutions. An optimization problem is generally stated in two parts: one, there is a set of variables we can adjust, and two, there is an objective function that we wish to minimize or maximize.

Let’s build a better understanding of this concept through an example. Imagine that we have to cook a meal for our friends from a given set of ingredients. The question is how much salt, how many vegetables, and how much meat go into the pan. These are the variables we can adjust, and the goal is to choose the optimal amounts of these ingredients to maximize the tastiness of the meal. Tastiness will be our objective function, and for a moment, we shall pretend that tastiness is an objective measure of a meal.
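The cooking analogy can be sketched in a few lines of code. The `tastiness` function below is entirely made up for illustration (its coefficients and shape are assumptions, not anything from a real recipe), and the solver is a naive grid search over candidate ingredient amounts rather than a serious optimization algorithm:

```python
# Toy optimization problem: the ingredient amounts are the decision
# variables, and a made-up "tastiness" score is the objective function.

def tastiness(salt, vegetables, meat):
    """Hypothetical objective: peaks at moderate salt and a veg/meat balance."""
    return -(salt - 1.0) ** 2 - (vegetables - 3.0) ** 2 - (meat - 2.0) ** 2

def grid_search():
    """Try every candidate combination and keep the best-scoring one."""
    best_score, best_combo = float("-inf"), None
    steps = [i * 0.5 for i in range(11)]  # candidate amounts: 0.0, 0.5, ..., 5.0
    for salt in steps:
        for veg in steps:
            for meat in steps:
                score = tastiness(salt, veg, meat)
                if score > best_score:
                    best_score, best_combo = score, (salt, veg, meat)
    return best_combo, best_score

combo, score = grid_search()
print(combo)  # the amounts of salt, vegetables, and meat that maximize tastiness
```

Real problems are rarely solved by brute force like this; the point is only to show the two ingredients of any optimization problem: adjustable variables and an objective to maximize.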

Lately, a constant fear has been lurking around the AI landscape, raising several debates about the technology. Many fear that AI may *soon* exceed human intelligence, a worry that has given rise to fearmongers who mislead society about artificial intelligence.

Until a few weeks ago, I never anticipated writing this article, but now I hope to make a sincere effort to counter the fearmongering by presenting a far more measured picture than the one mainstream media might, unfortunately, be suggesting.

Artificial Intelligence has jumped from sci-fi movie plots into mainstream news headlines in just a few years. Why are we talking about it now? Multiple factors have converged to push AI to relevance.

It’s been a while since I enrolled in Udacity’s Nanodegree on Artificial Intelligence (which I genuinely rate above all the online learning experiences I have had). While studying ‘game-playing agents’ during the coursework, one of the assignments was to summarize a research paper, for which I read about one of the most crucial breakthroughs in the history of Artificial Intelligence: Deep Blue.

Deep Blue was a chess-playing computer developed by IBM. It is known for being the first computing machine to have won a chess match against a reigning world champion under regular time controls.

When IBM’s Deep Blue beat chess Grandmaster Garry Kasparov in 1997 in a six-game chess match, Kasparov came to believe that he was facing a machine that could experience human intuition.

Scientists may not be as exciting as they were in the early 20th century. In an era where the world is wavering in its commitment to the natural sciences, it is reassuring to hear Stephen Hawking defend this esoteric field for its own sake. As the title implies, ‘A Brief History of Time’ is a succinct treatment of a challenging subject, providing the reader with a summary of key cosmological ideas including multidimensional space, the inflationary universe, and the cosmic fates that explain the construction and potential destruction of the universe. He discusses the two major theories, relativity and quantum physics, that modern scientists use to describe the universe. Finally, he talks about the search for a unifying theory that coherently explains everything in the universe.