The Pentagon sees artificial intelligence as a way to outwit, outmaneuver and dominate future opponents. But the brittle nature of AI means that, without care, the technology could offer enemies a new way to attack.
The Joint Artificial Intelligence Center (JAIC), created by the Pentagon to help the US military use AI, recently formed a unit to collect, vet and distribute open source and industrial machine learning models to groups across the Department of Defense. Part of that effort highlights a key challenge of using AI for military purposes. A machine learning “red team,” known as the Test and Evaluation Group, will probe pretrained models for weaknesses. A separate cybersecurity team examines AI code and data for hidden vulnerabilities.
Machine learning, the technique behind modern AI, represents a fundamentally different, often more powerful, way of writing computer code. Instead of being given rules to follow, a machine learning system generates its own rules by learning from data. The problem is that this learning process, along with artifacts or errors in the training data, can lead AI models to behave in strange or unpredictable ways.
“For some applications, machine learning software is just a billion times better than traditional software,” says Gregory Allen, director of strategy and policy at JAIC. But, he adds, machine learning “also breaks down in different ways than traditional software.”
A machine learning algorithm trained to recognize certain vehicles in satellite images, for example, could also learn to associate the vehicle with a certain color of the surrounding landscape. An opponent could potentially trick the AI by altering the landscape around their vehicles. With access to training data, the opponent might also be able to plant images, such as a particular symbol, which would disrupt the algorithm.
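The spurious-correlation failure described above can be sketched with a toy model. Everything here is invented for illustration: a weak “vehicle signature” feature plus a “background color” feature that happens to correlate perfectly with the label in the training data. A simple least-squares model latches onto the background, so an adversary who “repaints the landscape” flips its predictions:

```python
import numpy as np

# Made-up demo: a "vehicle classifier" trained on satellite-style features
# where the background color spuriously correlates with the label.
rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, n)
vehicle_sig = labels + rng.normal(0.0, 2.0, n)   # genuine but very noisy signal
background = labels.astype(float)                # spurious, perfect in training
X = np.column_stack([np.ones(n), vehicle_sig, background])

# "Train" a linear model by least squares and threshold at 0.5.
weights, *_ = np.linalg.lstsq(X, labels, rcond=None)

def accuracy(features, y):
    return float(np.mean((features @ weights > 0.5) == y))

clean_acc = accuracy(X, labels)

# The adversary alters the landscape: identical vehicles, flipped background.
X_attacked = X.copy()
X_attacked[:, 2] = 1.0 - X_attacked[:, 2]
attacked_acc = accuracy(X_attacked, labels)

print(f"clean accuracy:    {clean_acc:.2f}")     # looks perfect in testing
print(f"attacked accuracy: {attacked_acc:.2f}")  # collapses under attack
```

The model scores perfectly on unmodified data precisely because it learned the wrong cue, which is what makes this class of failure hard to catch with ordinary testing.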
Allen says the Pentagon follows strict rules regarding the reliability and security of the software it uses. He says the approach can be extended to AI and machine learning, and notes that the JAIC is working to update the DoD’s software standards to cover issues specific to machine learning.
AI is transforming the way some businesses operate because it can be an efficient and powerful way to automate tasks and processes. Instead of writing an algorithm to predict what products a customer will buy, for example, a company can have an AI algorithm examine thousands or millions of previous sales and devise its own model for predicting who will buy what.
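As a minimal sketch of that idea, with invented order data and a deliberately naive co-purchase count standing in for a real recommendation model, the rules are derived from the sales history rather than written by hand:

```python
# Hypothetical sketch: "customers who bought X also bought Y", learned
# from past sales instead of hand-coded rules. All data is made up.
from collections import Counter
from itertools import combinations

past_sales = [
    {"laptop", "mouse"},
    {"laptop", "mouse", "keyboard"},
    {"phone", "charger"},
    {"laptop", "keyboard"},
    {"phone", "case"},
    {"laptop", "mouse"},
]

# Derive co-purchase counts from the history.
co_counts = Counter()
for order in past_sales:
    for a, b in combinations(sorted(order), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(product):
    """Predict the item most often bought alongside `product`."""
    scored = {other: c for (p, other), c in co_counts.items() if p == product}
    return max(scored, key=scored.get) if scored else None

print(recommend("laptop"))   # -> mouse (3 co-purchases vs. 2 for keyboard)
```

Nothing in the code says “laptops go with mice”; that rule emerges from the data, which is the trade-off the article describes: the model is only as good, and as trustworthy, as its training data.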
The United States and other militaries see similar benefits and are rushing to use AI to improve logistics, intelligence gathering, mission planning, and weapon technology. China’s growing technological capability has sparked a sense of urgency within the Pentagon to adopt AI. Allen says DoD is moving “in a responsible manner that prioritizes safety and reliability.”
Researchers are developing ever more creative ways to hack, subvert or break AI systems in the wild. In October 2020, researchers in Israel showed how carefully altered images can confuse the AI algorithms that allow a Tesla to interpret the road ahead. This type of “adversarial attack” involves tweaking the input of a machine learning algorithm to find small changes that cause big errors.
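A toy version of such an attack can be shown with a hand-picked linear classifier and a perturbation in the style of the fast gradient sign method (FGSM) — an illustration of the general principle, not the technique actually used against Tesla. For a linear model the gradient of the score with respect to the input is just the weight vector, so nudging every feature slightly against the sign of the weights flips the decision while keeping each change small:

```python
import numpy as np

# Toy stand-in for a perception model: a linear classifier whose
# weights are hand-picked for illustration (not any real system).
w = np.array([0.4, -0.3, 0.8, 0.1])
b = -0.2

x = np.array([0.9, 0.1, 0.2, 0.5])   # benign input, classified positive
score = w @ x + b                    # 0.34 -> decision "yes"

# FGSM-style step: for a linear model the gradient of the score w.r.t.
# the input is w itself, so move each feature eps against sign(w).
eps = 0.25                           # maximum change per feature
x_adv = x - eps * np.sign(w)
adv_score = w @ x_adv + b            # 0.34 - 0.25 * sum(|w|) = -0.06

print(f"benign score:      {score:+.2f}")
print(f"adversarial score: {adv_score:+.2f}")
print(f"max perturbation:  {np.max(np.abs(x_adv - x)):.2f}")
```

A bounded, almost invisible change to each input feature is enough to push the score across the decision boundary — the same effect the researchers achieved with altered road images.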
Dawn Song, a professor at UC Berkeley who has conducted similar experiments on Tesla’s sensors and other AI systems, says attacks on machine learning algorithms are already a problem in areas such as fraud detection. Some companies offer tools to test AI systems used in finance. “Of course, there is an attacker who wants to evade the system,” she says. “I think we’ll see more of these types of issues.”
A simple example of a machine learning attack involved Tay, Microsoft’s chatbot that infamously went wrong after debuting in 2016. The bot used an algorithm that learned to respond to new queries by examining previous conversations. Redditors quickly realized that they could exploit this to get Tay to spew hateful messages.