Artificial Intelligence Needs Discussion Before Further Development



By Bernice Chen, Opinions Editor

Artificial intelligence (AI) refers to machines that can display intelligence similar to that of humans, and it has recently become more ubiquitous. Computer scientists and companies all over the world are pouring funds into developing the best technology for every possible field, as well as for everyday devices such as Alexa and Siri that make our lives more convenient. Varied experiments are conducted to train machines to learn and improve on their own for more efficient application. It is guaranteed that AI will change the world, but will it be beneficial or harmful for humanity?

There are already many areas in which scientists are beginning to employ this technology to help the human race. Programmers are building machines that can help the disabled complete daily tasks through voice recognition, process data and draw conclusions at incredibly fast speeds, or track and diagnose medical conditions to save lives. However, one rising application of AI could ultimately be dangerous for humans: its use in the military.

More than 100 leaders in the technology industry cautioned in a letter that weaponized AI could be the catalyst for a third revolution in warfare and pledged to discourage its development. The United Nations, urged on by multiple campaigns, has been debating whether or not it should ban Lethal Autonomous Weapon Systems (LAWS) in the face of the possible repurposing of AI to build more of these weapons. The worry about these independently operating systems is that, in the wrong hands, they could initiate conflicts and quickly escalate death tolls.

The concerns expressed by these parties are all the more justified considering that nations are already trying to build various types of these machines. Russia’s spending on robotic weapons programs has been rising, and the government has made it clear that even if the UN bans LAWS, it will still develop AI weapons. “Whoever leads in AI will lead the world,” Russian President Vladimir Putin stated. In terms of actual machines built, the country has already begun to implement AI-powered unmanned aerial vehicles (UAVs) and guns in its military.

Russia is not alone; China is also researching and developing LAWS in the form of underwater combat vehicles, combat aircraft, missiles, and cyber warfare tools. The exact details of what is being built, and how far the military has progressed with weaponized AI, remain strictly guarded secrets in China.

Support for autonomous weapons from superpowers like Russia and China has been concerning for the United States, which is why the U.S. government has also gotten involved. On Sept. 10, the Defense Advanced Research Projects Agency (DARPA) announced that it was directing $2 billion over the next five years toward continued research into implementing artificial intelligence in U.S. weaponry.

If the United States continues to develop this technology, it will be able to build plenty of dangerous weapons. One of these is the autonomous armed drone, which can spot and target people. Earlier this year, Google faced heavy backlash for allowing the military to use its AI technologies in Project Maven, a program that works to take advantage of advances in artificial intelligence for combat and national security. Project Maven has already begun to use surveillance gathered from AI drones against the terrorist group ISIS. Employees conveyed their concerns about the close proximity this effort established between AI and combat, and expressed anger over the company’s lack of transparency about its involvement with the Defense Department. Facing this response, Google later told its workers that it would discontinue its cooperation with the Pentagon.

Google’s involvement in Project Maven is not the only partnership that has raised speculation about the potential rise of AI drones in the military. On September 5, the Drone Racing League (DRL) and Lockheed Martin, the largest contractor for the U.S. government in 2015, announced a collaboration challenging teams to design an AI drone that could fly without human intervention. Although the DRL said that the contest wouldn’t have anything to do with the military, many are still justifiably worried that the resulting technology will end up becoming a weaponized tool.

Algorithm-controlled UAVs that can carry out strikes or reconnaissance missions against enemies of the U.S. may be useful for defense, but they carry both the risk of innocent lives being taken if the AI is imperfect and the loss of humane behavior in conflict. If wars are fought only with robots instead of humans, warfare may become more heartless. Attackers will become desensitized to deaths because machines don’t feel the emotions that human soldiers do when they kill enemies or civilians. As Business Insider states, this would consequently cause the perception of war to shift “from fighting to extermination.”

On a larger scale, this type of rivalry between Russia, China, the U.S., and other countries will not end well either. While some might argue that the U.S. must develop weaponized AI to defend against other major powers, arms races like the one artificial intelligence has arguably already started often produce international tensions that escalate into conflict. Since a war between global superpowers would end poorly for everyone, it is a situation best avoided. Artificial intelligence could even make nuclear weapons more likely to be used in the event of a dispute, which is dangerous because, according to the Financial Times, “judging the capability of AI-enabled military forces will be next to impossible.”

There may be no realistic way to stop artificial intelligence from entering the military, but that doesn’t necessarily mean its progress shouldn’t at least be slowed. Scientists and campaigners devoted to petitioning against weaponized AI are encouraging people to join their protests, and worldwide conferences, leaders, companies, and international organizations are thoroughly discussing the ethics of AI in combat.

None of this is to say that artificial intelligence is an inherently evil tool. When applied in the right instances, as it often is and will continue to be, technology at such an advanced level benefits humans in many ways. However, some applications of AI, such as those in the military, will require much caution and discussion as development continues. This is crucial to ensure that humanity doesn’t end up suffering because of its own creations.