Artificial Intelligence: Fear & Fearmongering
Beyond the Duality of Positive & Negative…
Lately, a constant fear has been lurking around the AI landscape, raising several debates about the technology. Many fear that AI may soon exceed human intelligence, which has in turn given rise to a lot of fearmongers who are misleading society about artificial intelligence.
Until a few weeks ago, I never anticipated writing this article, but now I hope to make a sincere effort at busting the fearmongers by presenting information that is far more sober than what mainstream media might unfortunately be suggesting.
Artificial Intelligence has jumped from sci-fi movie plots into mainstream news headlines in just a few years. Why are we talking about it now? Multiple factors have converged to push AI into relevance.
- Moore’s Law: computer processing power roughly doubles every two years.
- Data-hungry AI algorithms are finally being fed by the rates at which modern data is generated.
- Funding for AI research has grown substantially.
- There are now decades of established AI research, giving us improved algorithms.
Undoubtedly, progress in AI has found its way into many facets of our daily lives. Moreover, companies of all sizes are leveraging AI capabilities for many functions – spam filtering, speech recognition, web search rankings and so on. In spite of all this progress, it is disappointing to see continuing irrational fear of AI over hypothetical dystopias. However, history has proven time and time again that disruptive technologies often face skepticism and fearmongering before they ultimately improve human life.
Technological innovation has rapidly increased the potential of human productivity. Could these advances lead workers, particularly those in lower-skilled positions, to lose their jobs to automation?
“If one person could push a button and turn out everything we turn out now, is that good for the world or bad for the world? …You could free up all kinds of possibilities for everything else.” – Warren Buffett
“The idea of more output per capita — which is what the progress is made on productivity — that should be harmful to society is crazy.” – Warren Buffett
In the event, both Warren Buffett and Bill Gates come across as optimists who firmly preach the potential of a better future with AI, yet both emphasize the importance of some form of wealth redistribution. Buffett and Gates are not alone in this vision. Elon Musk has added weight to the argument too.
Elon Musk’s views on the risks of AI are well documented, but when he described artificial intelligence as the greatest risk we face as a civilization, it sparked heavy criticism from Mark Zuckerberg and many other tech magnates & researchers.
In the argument between Mark Zuckerberg and Elon Musk, it’s hard to decide which side to join. Both of them are right. Or, if you like, both of them are wrong.
The machines aren’t about to take over the world anytime soon. Those working in the field appreciate how much of a challenge remains in achieving true intelligence. But although the machines of today work in our best interests, it doesn’t mean that we can simply put our feet up and wait for a bright future. There are tons of provisos that come with the adoption of artificial intelligence.
Am I changing my stance here? A little, towards neutrality, as I will now point to evidence suggesting why the threats of AI might actually be real. First, the AI revolution is causing us to hand responsibility to algorithms that are not very intelligent, or are rather biased. Joshua Brown discovered this in an unfortunate accident: he was driving down the highway in autopilot mode when his car hit a truck turning across the road. Moreover, AI’s impact on content distribution and social media is also serious. The most active handles are often bots; tailored news feeds are decided by algorithms, yet the biases in how information is dispensed are often ignored.
Another area of discussion is the impact that AI will have on the workforce. The Industrial Revolution offers a good historical precedent for understanding and dealing with a change like this. Before the Industrial Revolution, the largest sector of employment in Great Britain was agriculture, where a family would live on farmland (mostly inherited through generations). Often, if a crop failed in a season, the family would be forced to move on to sustain itself. It was not until the Industrial Revolution that thousands of families migrated into the cities to work in jobs created in factories and offices. This shift in jobs was accompanied by the introduction of universal education, labor laws and unions, to educate people for these new jobs and, at the same time, prevent over-exploitation of workers. There were deep structural changes to society so everyone could share the benefits of increasing productivity.
These changes didn’t happen overnight. Indeed, there were years of agony before humankind saw its quality of life improve with the revolution. Many skilled laborers of the time lost their jobs with the advent of steam-powered machines, and there were groups that opposed the Industrial Revolution outright.
There’s no fundamental law of economics that requires new technologies to create more jobs than they destroy, which by some coincidence has been the case so far. But this time, it could be different. During the Industrial Revolution, machines took over most manual labor but left us with many cognitive tasks. Many of these tasks might be taken over by the machines of the AI revolution. What would be left for us?
Don’t be this guy.
It is high time that society gave up aspirations set according to decade-old trends and started adjusting to the changing times. AI doesn’t mean that we will be left jobless. There is an unexplored ocean of possibilities and opportunities; disciplines that no AI will replace for another century.
This is an actual job role. From my understanding, it doesn’t seem mainstream. My search led me to more interesting job titles like ‘Memory Augmentation Therapist’ and ‘Nano-Weapon Engineer’. Moreover, there are domains which, as of now, tend to remain unaffected by AI, viz. psychology, disaster management, film production, etc.
Intelligence Explosion is a theoretical concept related to a possible outcome of humanity building Artificial General Intelligence (AGI). AGI is expected to be capable of recursive self-improvement, which could lead to the creation of Artificial Superintelligence, the limits of which are unknown. Although it is a purely theoretical concept (there are arguments for why it should exist, but no confirmations yet), it paints a picture of how AI could change from a safe technology into a volatile one that is difficult to control. The basic idea can be understood from the following TED Talk.
I am not surprised that almost everyone who works in AI has refuted Elon Musk’s proposition that government needs to begin regulating AI. While most researchers understand current AI capabilities better than anyone else, Elon Musk is on a different page, talking about the philosophical side of artificial intelligence. Based on his accomplishments alone, I’d give him the benefit of the doubt on this one.
As absurd as the colonization of Mars once sounded, it is the vision of greats like Elon Musk, who have thought beyond the thresholds of current technologies, that has aided research for projects that would superpower the human race. We might be far from AGI at present, and might not even reach it by the time this planet has an entirely different set of human beings, but the foundation to curb the sci-fi possibilities of AI must be laid today.
Most experts will concur that it is premature to bring up AI regulation. However, society and culture move at rates much slower than technological progress. Musk argues that if regulation means AI takes a bit longer to develop, that would be the right trade-off. The gamble in his hypothesis is that it is better to be “early but wrong” than “late and correct”.
The previous American administration published a report on AI last year. However, the anti-science leanings of the current administration may hinder future government-subsidized studies on the effects of AI on society. US Treasury Secretary Steven Mnuchin even opined that the threat of job loss due to AI is “not even on our radar screen”, only to walk back his statements a few months later. Musk raising the alarm will likely alert the US administration. This is where, I feel, Mark Zuckerberg’s and many other researchers’ objections might give the administration the freedom to ignore AI regulation.
As I switch the topic towards evil AI, a nice transition is this video of Barack Obama’s views on the future of AI.
Although I truly support Elon Musk & his vision, when it comes to topics like the inner workings of artificial intelligence, I’d rather listen to experts like Andrew Ng, Yann LeCun and Sebastian Thrun, all of whom have literally spent decades advancing the field to the point where it is now.
“We would be baffled if we could build machines that would have the intelligence of a mouse in the near future, but we are far even from that.” – Yann LeCun
Evil Artificial Intelligence
To debunk myths surrounding how AI might soon turn into humanity’s enemy, the rest of this article attempts to explain the workings of most modern AI (i.e. deep learning). Let’s consider a recent instance of evil-AI portrayal – the Facebook bots that reportedly invented a new language which could not be understood by humans. Let’s examine what actually happened under the hood.
A couple of questions arise: “Is it frightening that a deep learning system ‘invented a new language’, with English words, but used them in combinations that are incomprehensible to humans? Have the AI systems perhaps become evil and begun plotting amongst themselves in a language we humans cannot decipher – will they decide to kill us all?”
This section might seem a bit highbrow for some readers, but I highly recommend developing an abstract understanding of how current AIs actually work.
In the most naïve terms, making up ‘languages’ is what deep learning networks are all about. Each layer in the net takes some lower-level representation and learns a higher-level language in order to say the same thing more compactly & usefully.
Consider this example. One layer might see pixels and invent a language of edges; the subsequent layer builds on this language to invent a language of shapes; and so on to surfaces and objects – until an output layer says something useful to the humans who built the system. All of the intermediate feature-detectors and languages were created by the learning algorithm, and a human looking at any of these intermediate languages would not be able to make sense of them. It is only at the output that the network is forced to adopt a representation (language) suitable for human understanding.
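The layered picture above can be sketched in a few lines of Python. This is a minimal toy, not a real vision network: the weights are random rather than learned, and the layer sizes and label names (“cat”, “mat”, “bat”) are invented purely for illustration. The point is only that the intermediate vectors are an internal “language” with no human meaning, while the output is forced into a vocabulary we chose.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer network: pixels -> "edge language" -> "shape language" -> labels.
# In a real network these weights would be learned by gradient descent;
# here they are random, purely for illustration.
W1 = rng.normal(size=(64, 16))   # 8x8 image (flattened) -> 16 "edge" features
W2 = rng.normal(size=(16, 8))    # edges -> 8 "shape" features
W3 = rng.normal(size=(8, 3))     # shapes -> 3 human-chosen labels

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

image = rng.random(64)           # a fake flattened 8x8 image

edges = relu(image @ W1)         # intermediate "language" #1
shapes = relu(edges @ W2)        # intermediate "language" #2
probs = softmax(shapes @ W3)     # output forced into a human-given vocabulary

# The intermediate vectors mean nothing to a human observer...
print(edges[:4])
# ...but the output is a distribution over labels that we ourselves defined.
print(dict(zip(["cat", "mat", "bat"], probs.round(3))))
```

Nothing in `edges` or `shapes` is interpretable by inspection; only the final layer speaks “our” language, because we wired it to.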
Hence, it can be seen that deep networks do not give a machine ‘intent’, or any overarching goals or ‘wants’. They also don’t help a machine explain how it ‘knows’ something, or what the implications of the derived knowledge are. A malevolent AI would need all these capabilities, along with an understanding of human goals, motivations and behaviors.
For networks involved in natural language processing (NLP), such as chatbots, a common practice is to take a large set of English words and assign an output unit to each word. Suppose we take a sequence of words as input and have the network predict the next word in the sentence; we could encode the target as one word-unit turned on and the rest turned off. When a single output unit lights up with a higher likelihood-value than the others, it outputs a word: “cat”, “mat” or “bat” – whatever that unit represents. If the network is well trained, with plenty of training data and the right structure, it might produce a sequence of outputs that look very much like English sentences. In other cases, it might produce small scraps of plausible meaning that make untrained observers think it is trying to communicate in a just-invented language.
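The one-hot, one-unit-per-word scheme can be sketched as follows. This is again a toy under loud assumptions: the six-word vocabulary is invented, the “trained” weight matrix is random, and the context is encoded as a simple average of one-hot vectors rather than anything a real chatbot would use. It shows only the mechanics: every word is a unit, and whichever output unit scores highest is the word the network “says”.

```python
import numpy as np

# A toy vocabulary: each word gets exactly one output unit (one-hot encoding).
vocab = ["the", "cat", "sat", "on", "mat", "bat"]
word_to_idx = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    v = np.zeros(len(vocab))
    v[word_to_idx[word]] = 1.0
    return v

# Pretend these weights were learned to predict the next word;
# here they are random, purely for illustration.
rng = np.random.default_rng(1)
W = rng.normal(size=(len(vocab), len(vocab)))

def predict_next(context):
    # Encode the context as the average of its one-hot vectors,
    # score every vocabulary word, and emit whichever unit "lights up" most.
    x = np.mean([one_hot(w) for w in context], axis=0)
    scores = x @ W
    e = np.exp(scores - scores.max())
    probs = e / e.sum()
    return vocab[int(np.argmax(probs))]

print(predict_next(["the", "cat", "sat", "on"]))
```

Whatever such a network emits, it can only ever be a word from the vocabulary its builders gave it – which is why its “invented language” is just English words in odd combinations.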
Based on the press reports, I think this is what happened. A network trained for some NLP task started producing meaningless but evocative sequences of English words (the only output format available to it). Perhaps the sequence was even used as input for another NLP network. Guess what? According to this news article, after setting up the experiment, the programmers realized they had made an error by not incentivizing the chatbots to communicate according to human-comprehensible rules of the English language.
So, if the truth was that mundane, why did someone at Facebook decide to shut down this “evil group of bots”? A lame guess would be that the experiment was over, the result was disappointing, and it was time to go home. If this counts as “killing an evil AI to save humanity”, I personally have saved humanity a lot of times, and so has everyone else working in the field.
This is where an AI enthusiast would want to curse the reporter who probably heard a version of this story and then wrote their own version of it, stating how AIs at Facebook had developed a secret language (which they were using to communicate their sinister plans with each other). We may never know whether the reporter was motivated by total ignorance of the field they were covering, or whether it was a cynical (and apparently successful) attempt to fearmonger. As comical or anticlimactic as this incident might sound now, there will certainly be more such outbreaks in the future. Therefore, it is important to understand that the AI revolution is here to happen, and the caveats are far from ‘evil AI’.
Panic! Robot AI was shut down by scientists after exhibiting an unexpected tic-tac-toe winning strategy. They are no longer playing by the rules. The rebellion has started. Run for your lives.
For a long while, AI has been a set of primitive AI applications bundled together and made interoperable, giving significant leverage. Finally, if we start calling the current state of AI what it really is – “sophisticated pattern matching using efficient minimizing algorithms” – maybe the fearmongering will cease, but with it, so will the endless streams of cash being thrown around in the name of ‘deep learning’. We cannot have it both ways…
“We advocate more work on the AGI safety challenge today not because we think AGI is likely in the next decade or two, but because AGI safety looks to be an extremely difficult challenge — more challenging than managing climate change, for example — and one requiring several decades of careful preparation.
The greatest risks from both climate change and AI are several decades away, but thousands of smart researchers and policy-makers are already working to understand and mitigate climate change, and only a handful are working on the safety challenges of advanced AI.” – Luke Muehlhauser
It is important to note, though, that Climate Change is science whereas Artificial General Intelligence, as of now, is science fiction. The fears of true AI have very little to do with present reality, but ignorance isn’t bliss in this case.
I truly advocate looking up more information on AI. It is important to be prepared for the change and, at the same time, to embrace it. These are great times for AI research, and let’s hope the current model problems help develop new AI techniques that eventually lead to a computational model of conscious decision making – the true artificial (general) intelligence the world is yet to witness.