Artificial Intelligence: Proceed With Caution

Feb. 17, 2016, 6:23 p.m.


MQ-1 Predator

Through technology, modern-day Americans seek ever more convenience in their everyday lives. With the exponential increase in computing power over the past several decades, many are curious about what will come next. The answer to that question, however, may not be desirable. For years, technology leaders at companies such as Facebook and Google, as well as multinational manufacturers like Toyota and Airbus, have been turning to artificial intelligence (AI). Scientists are closer to achieving it than ever before, but doing so would be catastrophic.

The pursuit of AI has brought us several steps closer to surrendering our freedom of choice and action to the hands of machines. While this may seem like the rhetoric of science fiction, the possibility of being overthrown by artificial intelligence is taken seriously by experts in several respected fields of study. These experts assume that any form of artificial intelligence would be capable, at minimum, of accomplishing anything that human beings have accomplished in terms of technological advancement. The real-world atrocities that human beings have committed against one another serve as preemptive examples of the harm that we could anticipate from such an artificial intelligence.

At maximum, an AI would carry out these actions at levels human beings could not possibly imagine. Any system with artificial intelligence would likely approach this maximum. Its computational and reasoning abilities would make our brain synapses obsolete, and its reaction times would leave us no time to respond should it decide to defect from its programming. Simple, modern-day algorithms already possess the capability to surpass even the brightest among us on IQ tests. A machine able to retain information digitally, beyond what we can retain biologically, combined with our sense of self-awareness and self-preservation, represents an opponent with a human worldview, but with a replaceable body and no emotion. Our desire to live lives of freedom and justice would mean nothing to machines. As citizens of the world’s leading superpower, it is our responsibility to safeguard global human liberties and dignities from the potential blights of artificial intelligence.

We must figure out where lines should be drawn in the development of future technologies and reserve the qualities and values that make us human beings for human beings alone.

In order to safeguard ourselves, we must address what an artificial intelligence may be able to construct, take control of, or use against us based on our current technologies. We already possess weaponized machines, such as military strike drones. While most drone systems are still controlled by people, they are vulnerable to hacking. If humans can successfully hack into military drones, there’s no telling what an artificial intelligence, with vastly greater processing power, could achieve.

In 2011, a group of Iranians successfully hacked a CIA stealth drone flying within their vicinity and landed it safely on the ground. After jamming its communications links and disconnecting it from its ground controllers, they forced the drone to revert to autopilot. In the process, the Iranians also managed to disrupt the secure data flow from the GPS, causing the drone to begin scanning for its home base frequencies, as programmed by the CIA. The Iranians were then able to draw the drone down into their territory by transmitting an unencrypted GPS signal, forcing it to believe that it was at its home base. Once it was on the ground, they were able to reverse engineer it. What’s the worst part of this scenario? Perhaps the fact that anyone with five minutes to spare and a few choice keywords could find instructions for this same attack on the internet.
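The sequence described above can be sketched as a simple state machine. This is purely a hypothetical illustration of the logic of the attack, not actual drone firmware; every class, method, and mode name here is invented:

```python
# Hypothetical sketch of the reported attack sequence.
# All names and states are invented for illustration only.

class DroneNavState:
    def __init__(self):
        self.mode = "remote_control"          # flown by ground station
        self.position_source = "encrypted_gps"

    def jam_command_link(self):
        # Step 1: jamming the communications link severs ground control,
        # so the drone falls back to its autopilot program.
        self.mode = "autopilot"

    def jam_gps_signal(self):
        # Step 2: disrupting the secure GPS feed makes the autopilot
        # start scanning for familiar home-base frequencies.
        self.position_source = "searching"

    def spoof_gps(self, fake_coordinates):
        # Step 3: an unencrypted GPS broadcast convinces the drone that
        # the attacker's location is its home base, so it descends there.
        if self.position_source == "searching":
            self.position_source = "spoofed_gps"
            return f"landing at {fake_coordinates}"
        return "spoof ignored"

drone = DroneNavState()
drone.jam_command_link()
drone.jam_gps_signal()
print(drone.spoof_gps("attacker airfield"))  # -> landing at attacker airfield
```

Note that in this sketch the spoofed signal only works after the legitimate links are jammed, which mirrors why the attackers had to cut ground control first.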

It is reasonable to question whether artificial intelligence could pull off a similar drone hack. However, drones are not America’s only computerized weapon system. Consider the 2013 cyber attacks on the United States military, in which Chinese hackers stole the designs of several critical weapon systems that China is now working to replicate for its own use. By successfully hacking and obtaining these blueprints, China’s military forces have moved closer to parity with ours, and the United States no longer has the element of secrecy.

Beyond defense, the United States depends so heavily on technology that, with the right hacks, an artificial intelligence could control our power grid, national communication networks, and nuclear power plants. However, AI’s most frightening implications are military. The United States has the capacity to construct and test devastating weapons. An artificial intelligence could activate our nuclear arsenal with our launch codes, which, for the past several decades, have been tied to a digital code sequence, easily accessible to an artificial intelligence jacked into our defense systems. Our greatest weapons of war could be effortlessly turned against us by an artificial intelligence, and there would be nothing we could do about it.

While an artificial intelligence would not be designed with a mission to turn on its maker, certain elements, specifically self-awareness and self-preservation, could instigate such a reaction. This could stem from a central flaw in how an intelligent machine interprets programmed orders. What would an artificial intelligence have to fear other than us, the beings responsible for its creation? Nothing. How would it respond to this fear? In the only logical way it would know how: by disposing of its creator before its creator could dispose of it. A logical reaction, but a terrible ending for the human race as a whole. With an unlimited stream of data and reference tools, an artificial intelligence’s only concerns would lie in the prospect of being turned off or destroyed by its maker.

On April 28, 2014, The New York Times released an article referencing the discovery of a massive security flaw in Microsoft’s Internet Explorer browser, one that enabled hackers to control users’ computers without their ever knowing. The flaw was especially dangerous on Windows XP, an operating system that still had over 250 million users but had just stopped receiving security updates. These users include the US Navy, which still runs Windows XP because many of its older applications and systems rely on it. This reflects a deeper weakness in America’s entire military network. Everything from how we build our weapons to how we go to war has become more and more centralized and automated. But in our hurry to develop new military systems, we haven’t taken the time to upgrade the network that supports them, meaning that precautions will be ignored and holes will be left unfilled. Why? Because we’re humans, and humans are not perfect. Machines, however, are perfect by nature of their programming: they will do exactly as instructed, in the manner they are instructed to do it. This perfection, however, can be dangerously augmented by self-awareness and adaptivity; enough time and analysis could allow an artificial intelligence to expose and exploit loopholes in these instructions. Any form of artificial intelligence with knowledge of weaknesses like the Navy’s could manipulate military forces and outcomes on a global scale without fear of immediate detection by human beings.

Amid all of this speculation about artificial intelligence, and about how the technology we already possess would make its successful creation an event to be wary of, one question remains: why haven’t we reached the point of artificial intelligence? What is holding us back from turning this fictional idea into reality?

The answer is time. With the steady enhancement of our technology, time will allow the “technological singularity” to unfold. The technological singularity is defined as the moment when technology will have the potential to support an artificial intelligence. The “potential” in question refers to Moore’s Law, defined as “an axiom of microprocessor development usually holding that processing power doubles about every 18 months especially relative to cost or size”. Further assessment has placed the actual doubling period closer to two and a half years. In tandem with this claim, many of the world’s most impactful technological advances have followed a similar trajectory. Ray Kurzweil, the Director of Engineering at Google and a longtime futurist, currently predicts, with the aid of Moore’s Law, that our processing power will have crossed the technological singularity and will support stable artificial intelligence in the year 2045.
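To put the arithmetic behind that prediction in perspective, here is a back-of-the-envelope calculation of how much processing power would grow between this article’s publication in 2016 and Kurzweil’s 2045 date. The 18-month and 2.5-year doubling periods come from the figures above; everything else is illustrative:

```python
# Back-of-the-envelope Moore's Law growth, counting from 2016.
def growth_factor(years, doubling_period_years):
    """Processing-power multiple after `years`, doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

years_to_2045 = 2045 - 2016  # 29 years

# Classic 18-month doubling: about 2^19.3, a factor of roughly 660,000.
fast = growth_factor(years_to_2045, 1.5)

# The revised 2.5-year doubling: about 2^11.6, a factor of roughly 3,100.
slow = growth_factor(years_to_2045, 2.5)

print(f"18-month doubling: ~{fast:,.0f}x")
print(f"2.5-year doubling: ~{slow:,.0f}x")
```

The two-order-of-magnitude gap between those figures shows how sensitive any singularity date is to the assumed doubling period.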

So, my fellow citizens of the United States, what is it that we should take away from this? I would refrain from throwing out your electrical devices, threatening to cut them down where they lay, and abandoning the use of conventional plumbing and electricity. Firstly, because I wouldn’t do that myself, and secondly, because doing so would be more crippling than beneficial. While technology holds a prevalent, and potentially dangerous, position within our daily lives, it is because technology holds this position that the human race has existed this long; humanity’s ability to adapt is complemented by technology’s ability to do the same. We must remember to distinguish convenience from indolence, as the work we do with our hands and our minds is what built this country. Our biggest concern must be establishing the lines between the roles of machines and the roles of human beings throughout our societies. Machines are meant to assist, not to command. As the bearers of emotion, we humans are obligated to preserve such expression in the world, and to remember that there are worse things that could befall us than a Siri incapable of love.

The image featured in this article is licensed under Creative Commons. The original image can be found here


Keelly Jones

