Artificial Intelligence In Warfare


     Artificial Intelligence (AI) is becoming a part of modern warfare and defense measures.
While artificial intelligence is often hyped up as a business savior and derided as a job killer, the question of AI ethics also comes to mind whenever people discuss military technology, particularly in the wake of the furore at Google over Project Maven, a US Department of Defense artificial intelligence program. AI has broadened the scope of modern applications to include war machines. It is because of this competency the technology offers that scientists have started applying AI in the defense sector to patch up the limitations of human beings.

AI in military applications is no distant prospect. The reality is that AI already forms a growing part of the modern military strategy of many countries; NATO members and other powers such as China and Russia are increasingly embracing it for national defense and security. Just this month, the Pentagon released a memo that calls for the rapid adoption of AI in all aspects of the military and asked for the collaborative help of big tech firms.

Earlier in the year, the US had sought clearer ethical guidelines for the use of AI. Dana Deasy, CIO at the US Department of Defense, told press: “We must adopt AI to maintain our strategic position and prevail on future battlefields.” Oracle, IBM, Google and SAP have all indicated interest in working on future Department of Defense AI projects. When people think of the use of AI by the military, they may first think of the ‘killer robots’ or autonomous weapons that many have warned about. While AI weapons are a stark reality, many deployments involve less headline-grabbing uses of the latest tech, such as automated diagnostics, defensive cybersecurity, and hardware maintenance assistance.

The contentious use of facial recognition by US immigration authority ICE can also be considered a deployment of AI in an increasingly militarised landscape. The uses of AI in defense are plentiful. Antony Edwards is COO of Eggplant, a provider of continuous intelligent test automation services with clients in the defense space. Its services are used by NASA to ensure all the systems in the Orion spacecraft digital cockpit are behaving correctly. “That these instruments are showing the correct information and entering information into the instrument has the correct effect, is clearly critical to mission success,” Edwards explained. The Federal Aviation Administration also uses Eggplant to ensure its digital displays are correct: “ie if an aircraft comes into the monitored airspace, it shows on the appropriate screen in the appropriate way.”
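The kind of display-verification check Edwards describes can be sketched in plain Python. Everything below — the `Aircraft` structure, the airspace bounds, and the `verify_display` helper — is a hypothetical illustration of the idea, not Eggplant's or the FAA's actual API:

```python
# Hypothetical sketch of a display-verification check: any aircraft that
# enters the monitored airspace must appear on the display.
from dataclasses import dataclass

@dataclass
class Aircraft:
    callsign: str
    lat: float
    lon: float

# Illustrative bounding box for the monitored airspace (not real FAA data).
AIRSPACE = {"lat_min": 40.0, "lat_max": 41.0,
            "lon_min": -75.0, "lon_max": -74.0}

def in_airspace(a: Aircraft) -> bool:
    """True if the aircraft's position falls inside the monitored box."""
    return (AIRSPACE["lat_min"] <= a.lat <= AIRSPACE["lat_max"]
            and AIRSPACE["lon_min"] <= a.lon <= AIRSPACE["lon_max"])

def verify_display(aircraft: list[Aircraft],
                   display_state: list[dict]) -> list[str]:
    """Return callsigns that should be on screen but are missing."""
    expected = {a.callsign for a in aircraft if in_airspace(a)}
    shown = {blip["callsign"] for blip in display_state}
    return sorted(expected - shown)
```

A test harness would feed simulated traffic into the real system, read back the rendered display state, and fail the run whenever `verify_display` returns a non-empty list.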

How should AI be approached? 

According to an Electronic Frontier Foundation (EFF) white paper geared towards militaries, there are certain things that can be done to approach AI in a thoughtful way. These include supporting civilian leadership of AI research, supporting international agreements and institutions on the issues, focusing on predictability and robustness, encouraging open research and dialogue between nations, and placing a higher priority on defensive cybersecurity measures. Looking at ethical codes, some legal experts argue that ethics themselves are too subjective to govern the use of AI, according to the MIT Technology Review.

AI applications

With giant leaps in the domains of AI and robotics, drones and intensive hacking toolkits aimed at national defense systems are no longer limited to sci-fi movies. The applications of AI in the military environment are seeing rapid advancements with every passing day.
  • 1. Military drones for surveillance: The popularity of military drones has skyrocketed in recent years. Drone technology has come a long way since its inception, and unmanned aerial vehicles now carry out tasks ranging from terrain inspection to reconnaissance. Military units across the world employ drones to: channel remote communication, both video and audio, to ground troops and military bases; track enemy movement and conduct reconnaissance in unknown areas of a war zone; assist with mitigation procedures after a war by searching for lost or injured soldiers and providing recovery insights for a terrain; and aid operations such as peacekeeping and border surveillance.
  • 2. Robot soldiers for combat: While drones help guard aerial zones, robots can be deployed on land to assist soldiers in ground operations. These highly functional, intelligent robots, designed with strategic goals in mind, give the defense sector a technological edge. With advancements in machine learning and robot building, scientists have succeeded in building bipedal humanoid robots to execute a variety of search and rescue operations, as well as to assist soldiers during combat. Robot fleets function like soldier units and carry out coordinated armed activities using multiple techniques. They are self-reliant, adaptable, and have their own fault-tolerant systems, all of which contribute to their ability to make and execute decisions swiftly and competently.
  • 3. Intelligent management: While military tactics are continuously improved, the way information is analyzed at army bases also needs improvement. The data collected by drones and robots on the battlefield needs to be structured and grouped in an organized manner to make it insightful. Satellite imagery, terrain information, and data from multiple sensors can be combined to create situational awareness by applying deep learning, statistical analysis, and probabilistic algorithms to such data.
  • 4. Cybersecurity: With many military systems being digitized, it is necessary to secure the information stored on them. AI comes to the rescue by offering cybersecurity measures in response to malware, phishing, and brute-force attacks on data centers and government websites.
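The cybersecurity use case lends itself to a small sketch. A minimal, entirely illustrative anomaly detector might flag brute-force login attempts by comparing each source's failure count against a historical baseline; the function name, threshold, and data shapes here are assumptions for illustration, and real systems use far richer features and models:

```python
# Illustrative brute-force detector: flag sources whose failed-login counts
# sit more than `z_threshold` standard deviations above the baseline mean.
from collections import Counter
from statistics import mean, stdev

def brute_force_suspects(failed_logins: list[str],
                         baseline_counts: list[int],
                         z_threshold: float = 3.0) -> list[str]:
    """failed_logins: source IPs of failed attempts in the current window.
    baseline_counts: per-source failure counts from past, benign windows."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts) or 1.0  # guard against a zero-variance baseline
    counts = Counter(failed_logins)
    return sorted(ip for ip, n in counts.items()
                  if (n - mu) / sigma > z_threshold)
```

In practice such a statistical rule would be one signal among many, feeding an alerting pipeline rather than blocking traffic on its own.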
Human rights issues

Many leading human rights organizations argue that the use of weapons such as armed drones will lead to an increase in civilian deaths and unlawful killings. Others are concerned that unregulated AI will lead to an international arms race. This is a concern for many who are not convinced that AI, as it exists now, should be deployed in certain circumstances, due to vulnerabilities and a lack of knowledge of the weaknesses in certain models.
AI expert David Gunning spoke about the issues: “We don’t want there to be a military arms race on creating the most vicious AI system around … But, to some extent, I’m not sure how you avoid it. “Like any technology arms race, as soon as our enemies use it, we don’t want to be left behind. We certainly don’t want to be surprised.” Edwards believes that more awareness of AI among software acquirers is an important element when it comes to using it in these contexts. “AI breaks many of the assumptions that people make about software and its potential negative impacts, so anyone acquiring a product that includes AI must understand what that AI is doing, how it works, and how it is going to impact the behavior of the software.

 “They must also understand what safety mechanisms have been built in to protect against errant algorithms.”

AI ethics can be unclear

Luca De Ambroggi is senior research director of AI at IHS Markit, with decades of experience in AI and machine learning. He says that when it comes to military projects, ethics “can get very muddy”. He added: “AI ethics are generally complex at a global level precisely because we share different cultures and have different values.

 “AI usage will remain with the human operator for now, as it is still intended to aid humans at a tactical and command level. For this reason, it is vital a code is developed and adhered to. However, we must continue to research the benefits and pitfalls of widespread AI application and implementation within military usage, to further inform the ethics of AI.”

Who makes the call?

Principal technology strategist at Quest, Colin Truran, got to the core of the issue when it comes to AI ethics in a general sense: “The current overarching conundrum surrounding AI ethics is really in who decides what is ‘ethical’.

AI is developing in a global economy, and there is a high likelihood of data exchange between multiple AI solutions.” Ultimately these are ethical quandaries that will likely take years to answer, if such a feat is even possible. As the EFF notes, the next few years will be a critical period in determining how militaries will use AI: “The present moment is pivotal: in the next few years either the defense community will figure out how to contribute to the complex problem of building safe and controllable AI systems…”

In January 2019, the head of U.S. Army acquisitions said that allowing artificial intelligence to control some weapons systems may be the only way to defeat enemy weapons. The U.S. military has embraced AI, arguing that America cannot compete against potential adversaries such as Russia and China without the futuristic technology. Concern over placing machines in charge of deadly weapons has prompted military officials to adopt a conservative approach to AI, one that involves a human in the decision-making process for the use of deadly force.

Bruce Jette, assistant secretary of the Army for Acquisitions, Logistics and Technology (ASAALT), said it may not be wise to put too many restrictions on AI teamed with weapons systems. "People worry about whether an AI system is controlling the weapon, and there are some constraints on what we are allowed to do with AI," he said at a Jan. 10 Defense Writers Group breakfast in Washington, D.C. There are a number of public organizations that have gotten together and said, "We don't want to have AI tied to weapons," Jette explained. 

The problem with this policy is that it may hinder the Army's ability to use AI to increase reaction time in weapon systems, he said. "Time is a weapon," Jette said. "If I can't get AI involved with being able to properly manage weapons systems and firing sequences then, in the long run, I lose the time deal." "Let's say you fire a bunch of artillery at me, and I can shoot those rounds down, and you require a man in the loop for every one of the shots," he said. "There are not enough men to put in the loop to get the job done fast enough."

 Jette's office is working with the newly formed Army Futures Command (AFC) to find a clearer path forward for AI on the battlefield. AFC, which is responsible for developing Army requirements for artificial intelligence, has established a center for AI at Carnegie Mellon University. Jette added that ASAALT will establish a "managerial approach" to AI for the service.