
What’s Next For Robotics: In The Field, Inferencing On The Edge


Robots are a key application for AI, and in addition to an excellent plenary talk by Julie Shah of MIT, a whole track was dedicated to AI in robotics applications. Dan Kara, VP of robotics and intelligent systems for WTWH Media, outlined some of the challenges in building robots—not chatbots, he clarified, but robots that act in the physical world. “It seems like every year it’s just around the corner,” he said, but this year the tailwinds are picking up.

Robotics is the foundation for much of our work thus far in artificial intelligence and machine learning, Kara argued. “It’s only been fairly recently that you’ve started getting artificial intelligence or machine learning moving off into different labs,” he said. “At one time, they were considered the same thing because that’s where the work was going.” Early research in facial recognition, accelerometers, natural language processing, very small cameras, and more “came out of work done in robotics labs. They’re naturally synergistic,” he said.

In the past decade, AI and machine learning have exploded, and those advances in cognitive capabilities, IoT, data at scale, and ubiquitous connectivity are funneling back into robotics, Kara said, creating pathways for robots that “think, sense, and act.”

Over the past five years, robotics has been shifting from an emphasis on hardware to an emphasis on software. “If you ask, for example, the people who run iRobot, most of their engineers—over three-quarters of them—are software engineers. You’re seeing more and more and more of that occur throughout the world in the leading robotics centers.” The same trend shows up in journal articles: an explosion in work dedicated to the intersection of machine learning and robotics, he observed.

Dan Kara, VP of Robotics and Intelligent Systems, WTWH Media

Robots in the physical world performing tasks such as grasping, manipulation, and autonomous navigation and localization need reinforcement learning, Kara said, as opposed to the supervised, unsupervised, or semi-supervised learning that may be more appropriate for other applications. “There’s a greater emphasis on learning that is happening in real time and the software that’s needed to support that,” Kara explained.
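To make the contrast concrete, here is a minimal, self-contained sketch of the trial-and-error loop that reinforcement learning implies. The task is a toy stand-in (a gripper reaching along a one-dimensional track), not any particular robot stack, and the tabular Q-learning is only meant to illustrate learning from the robot’s own experience rather than from a labeled dataset.

```python
import random
from collections import defaultdict

# Toy stand-in for a grasping/reaching task: the agent moves a gripper along a
# 1-D track (states 0..9) and is rewarded for reaching the object at position 9.
# A real robot would replace this with sensor readings and motor commands, but
# the learning loop has the same shape.
N_STATES, ACTIONS = 10, (-1, +1)

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else -0.01  # small time penalty
    done = next_state == N_STATES - 1
    return next_state, reward, done

# Tabular Q-learning: the robot improves its policy from its own experience,
# trial by trial, rather than from a labeled dataset.
q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.95, 0.2

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

print("learned action at state 0:", max(ACTIONS, key=lambda a: q[(0, a)]))
```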

In fact, Kara explained, the de facto standard is becoming the Robot Operating System (ROS) linked to cloud services. Amazon launched AWS RoboMaker last November: a cloud extension for ROS with a development environment, simulation tools, and fleet management. RoboMaker offers extensions from ROS into Amazon’s backend service packages, for example Amazon Polly for speech synthesis. These sorts of packages could easily include facial recognition and object recognition, but they haven’t yet ventured into manipulation and grasping or navigation. “I suspect that will be coming for this particular product,” Kara predicted.
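The pattern Kara describes can be sketched roughly as follows, assuming a Python ROS node and the boto3 client for Amazon Polly. This is not the RoboMaker sample code; the node name, topic name, and voice are illustrative assumptions, and the snippet presumes AWS credentials are already configured.

```python
# Rough sketch: a ROS node subscribes to a text topic and hands the text off to
# a cloud service (Amazon Polly via boto3) for speech synthesis.
import boto3
import rospy
from std_msgs.msg import String

polly = boto3.client("polly")  # assumes AWS credentials are configured

def on_say(msg):
    # Send the text to the cloud and get raw PCM audio back.
    response = polly.synthesize_speech(
        Text=msg.data, OutputFormat="pcm", VoiceId="Joanna"
    )
    audio = response["AudioStream"].read()
    rospy.loginfo("received %d bytes of audio for: %s", len(audio), msg.data)
    # On a real robot this buffer would be pushed to the audio driver.

rospy.init_node("cloud_speech_bridge")          # hypothetical node name
rospy.Subscriber("/robot/say", String, on_say)  # hypothetical topic name
rospy.spin()
```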

Amazon isn’t alone. Through Microsoft’s Visual Studio you can access ROS nodes to get capabilities while feeding off Azure for natural language processing, object recognition, fleet management, and other tools. Facebook has a cloud product called SciRobotics, developed with Carnegie Mellon University, that hosts these packages for ROS use. Google has also teased a product: the Google Cloud Robotics Platform.

A Model of Programmed Teamwork

“We’ve moved from an era where we’re focusing on single robotic systems for use to now multi-robot systems and how they work together in operation,” Kara said.

While it may look like a fleet of warehouse robots is cooperating in an intricate dance, we actually aren’t there yet. It’s all just obstacle avoidance, explained Michael Franklin, assistant professor, College of Computing, Kennesaw State University. There is no communication at all about what the obstacles are, he pointed out. While the robots are programmed not to run into each other, they have no idea what they are avoiding, and they are not working in concert.
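A simplified sketch of what “just obstacle avoidance” looks like: each robot reacts to raw range readings and steers away from anything close, with no notion of what the obstacle is and no coordination with its neighbors. The function, thresholds, and sensor format below are hypothetical, for illustration only.

```python
import math

def avoid(own_heading, detections, safe_distance=1.0):
    """Purely reactive avoidance.

    detections: list of (distance, bearing) readings from a proximity sensor.
    The robot never learns what it detected or what other robots intend.
    """
    for distance, bearing in detections:
        if distance < safe_distance:
            # Steer away from a close return; no coordination, no identity.
            return own_heading - math.copysign(math.pi / 4, bearing)
    return own_heading  # nothing close, keep going

# A person, a shelf, and another robot all look the same here:
print(avoid(0.0, [(0.5, 0.3)]))   # close return on the left, turn away
print(avoid(0.0, [(5.0, -0.2)]))  # far away, heading unchanged
```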

Multi-agent, multi-team situations are complicated, Franklin argued—with a quick history lesson on the Battle of Waterloo as evidence.

AI—in robots or elsewhere—does not strategize, Franklin said. It is reactive and always maximizes its own mission. We haven’t yet built AI that understands teamwork, he argued. In teams, individuals sacrifice themselves for the greater goal.

He proposed a hierarchical agent-based model, with intelligent agents at the edge of the field. Each agent has access to knowledge gleaned from teammates; above that sit policies for the task; above that, an overarching strategy; and finally, intelligence. But reasoning doesn’t only move top-down. Each agent is an intelligent actor that feeds data back up the model. Even if communications are cut off, the edge agents can carry on with the last, best data.
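A rough, hypothetical sketch of that layered idea: guidance flows down from strategy and policy, observations flow back up, and each edge agent caches the last guidance it received so it can keep acting if the link drops. The class and field names are illustrative, not taken from Franklin’s talk.

```python
from dataclasses import dataclass, field

@dataclass
class EdgeAgent:
    name: str
    # The last guidance received from the layers above; used if comms are cut.
    cached_policy: dict = field(default_factory=lambda: {"goal": "hold position"})
    observations: list = field(default_factory=list)

    def receive_guidance(self, policy):
        # Top-down: strategy -> policy -> edge agent.
        self.cached_policy = policy

    def report(self):
        # Bottom-up: each agent feeds its data back up the model.
        return {"agent": self.name, "observations": self.observations}

    def act(self, connected):
        # With or without a link, the agent acts on the best policy it has.
        source = "live" if connected else "cached (comms lost)"
        return f"{self.name} pursuing '{self.cached_policy['goal']}' [{source}]"

team = [EdgeAgent("scout-1"), EdgeAgent("scout-2")]
for agent in team:
    agent.receive_guidance({"goal": "survey east sector"})
print(team[0].act(connected=True))
print(team[1].act(connected=False))  # link cut: carries on with the last, best data
```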

It’s a vision we are still some distance from realizing, but Kara agrees that intelligence on the edge is the future of robotics research and development. Robots, as highly sensored physical devices, are acting as edge hubs: collecting feedback from other sensors, consolidating that data, and sending it on.

“If you talk to some of the leaders in the cloud/AI or cloud/machine learning infrastructure players—the Googles of the world or the Microsofts of the world—they consider robotic systems to be just hyper-sensored, hyper-intelligent edge devices,” Kara said.

There’s an emphasis now on edge inferencing, and Google is researching federated learning. “It all ties into this notion that we want to actually have the inferencing not done in the cloud, but in fact done on the device itself. You see specialized processors coming from Google and from NVIDIA and Qualcomm and a variety of other players to emphasize that,” he said.
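The federated idea can be sketched in a few lines: each device trains on its own local data, and only the resulting model updates, not the raw sensor data, are sent back and averaged. The linear model and the numbers below are deliberately toy assumptions, not anyone’s production system.

```python
import numpy as np

def local_update(weights, X, y, lr=0.01, steps=50):
    """One device's local training pass; raw data never leaves the device."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(5):  # five robots, five private datasets
    X = rng.normal(size=(40, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=40)
    devices.append((X, y))

global_w = np.zeros(2)
for _ in range(10):  # each round: local training on-device, then central averaging
    updates = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(updates, axis=0)

print("recovered weights:", global_w)  # close to [2, -1] without pooling any data
```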

This will be all the more useful as robots emerge from cages in warehouses. “What about the other 99% of the world outside of buildings, where you’re dealing with sparse data, much more experience driven, combining different types of modalities?” Kara asked.

While most of the emphasis thus far has been on post-facto business intelligence, field robotics will require different types of learning and intelligence. “The types of systems that will add the most value are ones that can reduce the time and response between when something happens and the time to react.” Field robotics is where the big investments are, he observed.

New models of learning, such as federated, emergent, and continual learning, will be needed to train systems that exist in the field, and new classes of hardware and software will be needed to support inferencing on the edge.
