In this article, we’ll take a look at several military drones and UAVs with AI capabilities. There are a variety of use cases for AI in drone technology, but the military most commonly appears to apply AI to allow its drones to fly on their own, a capability that requires machine vision.
The US military appears to have started using drones in the early 2000s, and it is now beginning to apply AI to these drones for what may be a greater advantage. The military uses drones for ISR (Intelligence, Surveillance, and Reconnaissance) missions every day.
This article covers four real-world use cases from four military defense contractors that claim to offer either drones or the AI software behind them:
Shield.AI offers Nova, an autonomous drone for indoor navigation
AeroVironment offers the Raven series, the world’s most widespread military drone
Lockheed Martin offers Desert Hawk III, a drone that can purportedly train operators on how to use it in the field
Neurala offers the Neurala Brain, AI software that could allow military drones to conduct reconnaissance and patrol missions
We’ll begin this report with a general overview of AI military drones and UAVs:
AI Drones and UAVs in the Military – Insights Up Front
Drone technology itself is a relatively new area of military technology, but military engineers took to combining drones with artificial intelligence rather quickly, creating products that in some cases might be comparable to the performance of human reconnaissance teams. Shield.AI, AeroVironment, and Lockheed Martin all showcase how military defense contractors are combining current computer vision and image recognition technology with drones to solve military problems without endangering human life. Shield.AI’s drone can purportedly navigate unknown terrain without the need for GPS tracking.
Autonomous drones could allow military operators to focus their own efforts on more pressing actions that engage their skillsets. As an example, autonomous drones can be used to monitor the territory a squad of soldiers has just cleared in combat, making sure enemy reinforcements are not planning to catch them by surprise.
There may be many benefits to autonomous drones on the battlefield, and those benefits, if harnessed correctly, could save the lives of operators and conserve a military’s human resources. This may be why the field of drones and artificial intelligence is bursting with real-world applications and companies with strong traction.
All of the companies in this report employ CTOs from premier universities, and the US Department of Defense is already using some of their products.
We’ll start our analysis of the state of AI-based drone technology in the military with a general explanation of how a drone might fly on its own:
How Autonomous Drones and UAVs Work Using Machine Vision
All of the drones highlighted in this report can fly without a human operator. Although there are differences in the exact design and abilities of each drone, in general, autonomous drones are similarly trained.
Implementing artificial intelligence for drones involves a combination of mechanical devices, navigational instruments, and machine vision. The AI behind the drone needs to be trained using a supervised learning process.
First, a human operator pilots the drone themselves to collect visual and spatial data from the cameras and lidars; this operation is recorded. People then label objects in the resulting recordings, such as a wall, mountain, or cliffside. The newly labeled recordings are then run through the machine learning algorithm that is planned to operate the drone.
This would train the drone to distinguish between objects within the field of vision of its mounted camera. The algorithm would also correlate instances of turns and stops to the objects that the drone sees in its camera’s field of vision. This would in essence train the drone to stop or turn when it encountered certain objects.
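As a rough illustration of the supervised step described above, the sketch below trains a small image classifier on operator-labeled camera frames. The folder layout, class names, and network architecture are our own assumptions for the example, not any vendor’s actual pipeline; it uses PyTorch and torchvision.

```python
# Minimal sketch: train a frame classifier on labeled recordings from piloted flights.
# The directory "labeled_frames/train" and its class folders ("wall", "mountain",
# "cliffside", ...) are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])
# Frames captured while a human pilot flew the drone, labeled after the fact.
train_data = datasets.ImageFolder("labeled_frames/train", transform=transform)
train_loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Small convolutional classifier: camera frame in, predicted object class out.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, len(train_data.classes)),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for frames, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(frames), labels)
        loss.backward()
        optimizer.step()
```

A real system would pair the classifier’s outputs with the recorded turn-and-stop commands, so the drone learns which maneuvers correspond to which objects in view.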
The vehicle could then receive a command to move to a new location. The software would then fly the drone and its operational payload (for example, the listening devices it is equipped with) safely to the designated location. Many autonomous drones use GPS tracking to allow operators to plot the general path of the drone’s flight; because the drone is operating autonomously, the exact flight pattern and maneuvers are left to the artificial intelligence.
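To make that division of labor concrete, below is a purely illustrative control-loop sketch in which the operator supplies GPS waypoints and an onboard model handles the moment-to-moment maneuvers. Every function and object name here (the autopilot interface, classify_frame, and so on) is hypothetical and stands in for vendor-specific flight software.

```python
# Illustrative only: operator plots waypoints, the autonomy stack flies between them.
import math

WAYPOINT_RADIUS_M = 10.0  # close enough to count a waypoint as reached

def distance_m(a, b):
    # Rough flat-earth approximation of distance between two (lat, lon) points,
    # adequate for short hops.
    dlat = (a[0] - b[0]) * 111_000
    dlon = (a[1] - b[1]) * 111_000 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def fly_route(waypoints, autopilot, camera, model):
    for target in waypoints:                     # general path chosen by the operator
        while distance_m(autopilot.get_gps_fix(), target) > WAYPOINT_RADIUS_M:
            obstacle = model.classify_frame(camera.read())
            if obstacle in ("wall", "cliff", "tree_line"):
                # Local avoidance maneuver chosen by the onboard model,
                # not plotted by the operator.
                autopilot.avoid(obstacle)
            else:
                autopilot.steer_toward(target)
    autopilot.hold_position()
```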
For more on how machine vision systems are trained for self-driving cars, read the relevant section of our report on crowdsourced machine vision training.
We’ll begin our analysis of autonomous drones and the machine vision technology behind them with AeroVironment’s Raven series of UAVs:
Autonomous Drones Available to the Military
AeroVironment
AeroVironment offers a series of Unmanned Aerial Vehicles (UAVs), which the company claims can fly along a route using computer vision and GPS coordination. The UAV’s GPS controller can map the drone’s routes. AeroVironment claims its Raven series of UAVs is the most widely used UAV in the world.
The below photo shows an Army sergeant launching a drone in the Raven series:
An Army sergeant launches the RQ-11 Raven, courtesy of Fire Aviation
Most of AeroVironment’s line of UAV products, including the Raven series, is designed to be operated and used in combat situations by a single person in the field. According to the company, military personnel in the field can assemble the drone in 5 minutes and toss it into the air to deploy it, as shown in the image above. Traditionally, teams of military operators work together on UAV missions; some team members maintain the drone, while others pilot it or plot its flight course.
Such a UAV could give troops the capability to collect data during tactical reconnaissance, tracking, combat assessment, and cartographic missions, allowing them to move faster and check whether they are being pursued. Drones could allow operators to make decisions without being concerned that they might be ambushed from the rear, for example.
The US military started using the Raven series in 2004, and it is still fielded today. The US Army, Navy, Air Force, and Marine Corps all use the Raven series. The Royal Canadian Navy has also recently ordered AeroVironment’s Puma series of UAVs for maritime use.
Andrew Browning is Program Director of Autonomy at AeroVironment. He holds a PhD in Statistical and Computational Monitoring from Boston University. Previously, Browning served as an autonomous technologies consultant.
Lockheed Martin
We covered Lockheed Martin’s numerous AI endeavors in our report on Lockheed Martin’s AI Applications for the Military – An Overview.
Lockheed Martin offers the Desert Hawk III, a drone developed for use in the field by a single operator, similar to AeroVironment’s Raven series. Lockheed Martin’s Desert Hawk III can purportedly plan its flight path ahead of time and can train a new operator on how to operate and use it in the field. It is unclear what this training looks like or how effective it might be.
Below is a short 2-minute video demonstrating the launch of the Desert Hawk III:
Lockheed Martin was awarded a contract with the United Kingdom’s Ministry of Defence during the War in Afghanistan. The Desert Hawk III currently holds “Full Operating Capacity” status with the UK Ministry of Defence.
UK Defence Minister Philip Dunne said, “Desert Hawk has proved its worth on operations in Afghanistan, providing our Armed Forces with vital intelligence and allowing our commanders to stay one step ahead of the enemy.”
Shield.AI
Shield.AI offers Nova, an autonomous quadrotor drone that runs the company’s Hivemind software. The quadrotor design uses four rotors to generate thrust, as opposed to the fixed-wing design of the larger UAVs commonly used in the military.
Shield.AI claims Hivemind can help military operators use drones to explore GPS-denied areas, such as the interiors of buildings and caves, to gather mission intelligence.
Below is a short 2-minute video demonstrating how Nova flies and the display system Nova shares with its operator:
We could not determine if any military uses Hivemind yet, but Shield.AI has raised $13.1 million in venture funding from Andreessen Horowitz and Bloomberg Beta.
Nathan Michael is CTO at Shield.AI. He holds a PhD in Mechanical Engineering from the University of Pennsylvania. Previously, Michael served as an associate research professor at Carnegie Mellon. He has authored 140 publications on control, perception, and cognition for robotic systems.
Machine Vision and Image Recognition for Autonomous Drones and UAVs
Neurala Brain
Neurala offers the Neurala Brain, its namesake software, which the company claims helps drones equipped with cameras recognize and identify predetermined targets and then alert a human operator. Neurala claims its software can comb through crowds to find and identify persons of interest. It can also inspect large industrial equipment, such as telephone towers, and generate a real-time damage report.
Below is a short 1-minute video demonstrating how Neurala Brain could find a missing person in a crowd:
The company states the machine learning model behind the software was trained on thousands of labeled images, such as photos of people in a crowd. These labeled images would then be run through the software’s machine learning algorithm. This would have trained the algorithm to discern the patterns of 1s and 0s that, to the human eye, form the image of the content it has been trained on, such as a person in a crowd.
This also likely requires facial recognition, which comes with an additional layer of training. Photos of specific people, likely taken from social media or government databases, would need to be run through the machine learning algorithm as well. This would essentially train the software to identify persons of interest amongst a crowd.
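Below is a hedged sketch of what that matching step might look like: face crops detected in the video feed are converted to embedding vectors and compared against a watchlist of reference embeddings. The embed_face function and the similarity threshold are assumptions for illustration, not Neurala’s actual implementation.

```python
# Illustrative matching step: compare face embeddings from drone footage
# against reference embeddings of known persons of interest.
import numpy as np

MATCH_THRESHOLD = 0.6  # cosine similarity above which a possible match is flagged

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_matches(detected_faces, watchlist, embed_face):
    """detected_faces: cropped face images from the video feed.
    watchlist: dict mapping person_id -> reference embedding vector.
    embed_face: hypothetical stand-in for a trained face-encoder network."""
    alerts = []
    for face in detected_faces:
        embedding = embed_face(face)
        for person_id, reference in watchlist.items():
            if cosine_similarity(embedding, reference) > MATCH_THRESHOLD:
                alerts.append(person_id)  # a human operator reviews each alert
    return alerts
```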
When installed on a military drone, the software could identify a specific target in the field. The system would then alert a human operator that the target may have been located.
Neurala claims its software requires less data and less time to train, “requiring only 20% of images per class typically needed and 20 minutes compared to days.” Image recognition training typically requires large quantities of labeled image or video data. It’s unclear how accurate the software could be if it is trained on fewer images.
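One generic way to train with far fewer labeled images per class is to fine-tune a pretrained network while freezing most of its layers, as sketched below. We do not know whether this resembles Neurala’s proprietary approach; the model choice and class count are assumptions for illustration.

```python
# Generic transfer-learning sketch: freeze a pretrained backbone and fit only a
# small classification head, which cuts both data requirements and training time.
# This is not Neurala's actual method.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze every pretrained layer so the small dataset only has to fit the head.
for param in model.parameters():
    param.requires_grad = False

num_classes = 5  # hypothetical number of target classes
model.fc = nn.Linear(model.fc.in_features, num_classes)  # only this layer trains

# Optimize only the new head's parameters.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```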
Neurala lists Motorola as one of their partners for the creation of the Neurala Brain.
Anatoli Gorchet is a co-founder and CTO at Neurala. He holds a PhD in cognitive and neural systems from Boston University. Previously, Gorchet served as a research professor at Boston University. We’ve previously interviewed Neurala’s CEO Massimiliano Versace on the topic of machine vision in heavy industry.