In this post, we are going to talk about mathematical optimization. The term is not to be confused with ‘optimization’ as we use it in everyday life, for instance, improving the efficiency of a workflow. Mathematical optimization means finding an optimal solution from a set of candidate solutions. An optimization problem is generally stated in two parts: one, there is a set of variables we can adjust, and two, there is an objective function that we wish to minimize or maximize.

Let’s build a better understanding of this concept through an example. Imagine that we have to cook a meal for our friends from a given set of ingredients. The question is how much salt, how many vegetables, and how much meat go into the pan. These are the variables we can adjust, and the goal is to choose the amounts of these ingredients that maximize the tastiness of the meal. Tastiness will be our objective function, and for a moment we shall pretend that tastiness is an objective measure of a meal.
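To make this concrete, here is a minimal sketch of the cooking problem in Python. The `tastiness` function is entirely made up for illustration (it simply peaks at one particular combination of amounts); a brute-force grid search then plays the role of the optimizer, trying every combination of ingredient amounts and keeping the best one.

```python
import itertools

# Hypothetical objective function: tastiness peaks at a "just right"
# amount of each ingredient and drops off as we move away from it.
# This is a made-up model, not a real culinary measure.
def tastiness(salt, vegetables, meat):
    return 10 - (salt - 2) ** 2 - (vegetables - 5) ** 2 - (meat - 3) ** 2

# The variables: amounts of each ingredient, searched over a coarse grid.
amounts = range(0, 8)

# Brute-force optimization: evaluate the objective at every candidate
# solution and keep the one with the highest tastiness.
best = max(
    itertools.product(amounts, amounts, amounts),
    key=lambda combo: tastiness(*combo),
)

print(best)              # → (2, 5, 3)
print(tastiness(*best))  # → 10
```

Grid search is only feasible here because the candidate set is tiny; real optimization problems usually call for smarter methods, but the structure (variables plus an objective function) is exactly the same.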

## Checkmate: Artificial Intelligence’s Game Playing Challenge

It’s been a while since I enrolled in Udacity’s Artificial Intelligence Nanodegree (which I genuinely rate above all the online learning experiences I have had). While studying game-playing agents during the coursework, one of the assignments was to summarize a research paper, for which I read about one of the most crucial breakthroughs in the history of Artificial Intelligence: Deep Blue.

Deep Blue was a chess-playing computer developed by IBM. It is known for being the first computer to win a chess match against a reigning world champion under regular time controls.

When IBM’s Deep Blue beat chess Grandmaster Garry Kasparov in 1997 in a six-game chess match, Kasparov came to believe that he was facing a machine that could experience human intuition.
