A team of five undergraduate seniors spent the spring 2021 semester finishing work on a design project that integrated robotics, haptic feedback, and augmented reality. At the conclusion of the project, they collected their findings for submission, and that paper was published in the October 2021 edition of the Institute of Electrical and Electronics Engineers (IEEE) Robotics and Automation Letters.

Human teaching, robotic learning

Machine learning has been an area of rapid growth over the past decade, fueled by its rising use in everything from Netflix’s movie recommendations to self-driving cars. Machine learning doesn’t happen in a vacuum; it is powered by feeding data into algorithms. A computer receives feedback that allows it to verify whether its actions were correct, and those actions can be amended over time. The “learning” is the result of trial and error: keeping actions that work and discarding those that don’t.
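
To make that trial-and-error loop concrete, here is a minimal sketch in Python. It is an invented toy example, not the team’s code: a learner tries different grip actions, receives a success-or-failure signal as feedback, and keeps a running estimate of which action works best. The action names and their hidden success rates are assumptions made for illustration.

```python
import random

# Toy trial-and-error learner: try actions, receive feedback (a reward),
# and keep running estimates of which actions work.
# The actions and their hidden success rates are invented for this sketch.
ACTIONS = ["grip_light", "grip_medium", "grip_firm"]
TRUE_SUCCESS_RATE = {"grip_light": 0.2, "grip_medium": 0.9, "grip_firm": 0.5}

estimates = {a: 0.0 for a in ACTIONS}  # learned value of each action
counts = {a: 0 for a in ACTIONS}       # how often each action was tried

for trial in range(500):
    # Mostly exploit the best-looking action, but sometimes explore.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: estimates[a])

    # Feedback: did the action work this time?
    reward = 1.0 if random.random() < TRUE_SUCCESS_RATE[action] else 0.0

    # Keep what works: update the running estimate for the tried action.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(ACTIONS, key=lambda a: estimates[a]))  # typically "grip_medium"
```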

Actions can take on different forms. In the case of Netflix’s recommendation algorithm, a user’s viewing history and ratings are analyzed to push similar programming. For the students in this project, the action was physical: a robot was taught to do a task. Human operators programmed and recorded the movements, correcting errors as they appeared. Through this fine-tuning, the robot converged on the correct behavior.
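
The snippet below gives a rough sense of how a human correction can fine-tune a programmed motion. It is a simplified sketch under invented assumptions, not the team’s implementation: the straight-line trajectory, the correction, and the blending rule are all placeholders for illustration.

```python
import numpy as np

# Sketch of learning from a physical correction: the robot replays a planned
# trajectory, and when the operator nudges one waypoint, nearby waypoints are
# shifted proportionally so the whole motion is fine-tuned.
# The trajectory, correction, and smoothing width are invented for this example.

trajectory = np.linspace([0.0, 0.0], [1.0, 0.0], num=11)  # straight-line plan

def apply_correction(traj, index, delta, width=3):
    """Blend a human correction at one waypoint into its neighbors."""
    corrected = traj.copy()
    for i in range(len(traj)):
        # The correction's influence falls off with distance from the waypoint.
        weight = max(0.0, 1.0 - abs(i - index) / width)
        corrected[i] += weight * np.asarray(delta)
    return corrected

# The operator pushes waypoint 5 upward by 0.2 m, e.g. to avoid an obstacle.
trajectory = apply_correction(trajectory, index=5, delta=[0.0, 0.2])
print(np.round(trajectory, 2))
```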

Passive displays and active prompts

Much of machine learning relies on similar human interaction, enabling intelligent systems to learn from people. But how does the human know what their robot has learned? Think back to Netflix’s recommendation algorithm: if you indicate that you don’t like a scary film, does the system learn that you dislike that specific film, or does it conclude that you dislike the entire horror genre?

The senior design team focused on the problem of communicating robot learning back to nearby humans. To communicate this learning, the robot needed to provide feedback, and a key question was how the robot should provide it. The students implemented four different alternatives: showing the robot’s learning on a computer screen, notifying the human with a haptic wristband, displaying the robot’s plans in augmented reality, or a combination of the above. Overall, the purpose of this feedback was to enable the human and robot to learn from one another. The robot arm learns from the human how to perform complex motions, and the human learns from the robot’s feedback when the robot understands the task and when it needs more help.
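
As a rough sketch of how a robot might route such feedback, the Python below dispatches a learned plan and a confidence estimate to one or more channels. The channel names, print statements, and confidence-to-vibration rule are hypothetical stand-ins; the study’s actual screen, wristband, and augmented reality interfaces are not reproduced here.

```python
from enum import Enum, auto

# Hypothetical feedback channels mirroring the four alternatives above.
class FeedbackChannel(Enum):
    SCREEN = auto()    # show the robot's learning in an on-screen display
    HAPTIC = auto()    # notify the human through a vibrating wristband
    AR = auto()        # overlay the robot's planned motion in augmented reality
    COMBINED = auto()  # all of the above

def send_feedback(channel, confidence, plan_summary):
    """Relay what the robot has learned over the selected channel(s)."""
    if channel in (FeedbackChannel.SCREEN, FeedbackChannel.COMBINED):
        print(f"[screen] plan: {plan_summary} (confidence {confidence:.0%})")
    if channel in (FeedbackChannel.HAPTIC, FeedbackChannel.COMBINED):
        # Low confidence -> stronger vibration, signaling a need for help.
        print(f"[haptic] vibration intensity {1.0 - confidence:.2f}")
    if channel in (FeedbackChannel.AR, FeedbackChannel.COMBINED):
        print(f"[ar] overlaying intended motion: {plan_summary}")

send_feedback(FeedbackChannel.COMBINED, confidence=0.4,
              plan_summary="move cup to shelf")
```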

As a robot learns, it processes a great deal of data internally. In the student experiment, the robot picks up an object. Through programming, the robot has been told where to move that object, where to put it down, and the force needed to hold it, among many other inputs. Putting these commands together accomplishes the task, but there are greater possibilities in play. A robot might be calculating distances, forces, and obstacles, any of which could influence its decisions, and all of that data could also be valuable to a human user for fine-tuning the input.
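
One way to picture that internal data is as a structured snapshot the robot could share with its operator. The sketch below is hypothetical: the field names, units, and threshold are invented for illustration and are not the state used in the students’ system.

```python
from dataclasses import dataclass

# A hypothetical snapshot of quantities a robot might compute while
# performing a pick-and-place task.
@dataclass
class RobotState:
    gripper_force: float     # newtons applied to the held object
    distance_to_goal: float  # meters remaining to the drop-off point
    nearest_obstacle: float  # meters to the closest detected obstacle
    confidence: float        # 0-1 estimate that the current plan is correct

    def needs_human_help(self, threshold: float = 0.5) -> bool:
        """Flag states where the robot should ask the operator for input."""
        return self.confidence < threshold

state = RobotState(gripper_force=4.2, distance_to_goal=0.35,
                   nearest_obstacle=0.10, confidence=0.42)
print(state, "->", "ask human" if state.needs_human_help() else "proceed")
```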

The student team took on the challenge of relaying that data to a human user and creating tools that visualize additional options for new actions. These feedback mechanisms included augmented reality displays and wearable haptic feedback devices. Together, the tools gave the human a more complete view of the new actions the machine might choose to take and relayed those options to the user for fine-tuning.

The team finalized their results and submitted their paper around spring commencement. IEEE accepted the project for publication in the October 2021 issue of Robotics and Automation Letters.

Assistant Professor Dylan Losey, who advised the group, commented on their accomplishments.

“These students went above and beyond my expectations,” he said. “I’ve never seen a team of undergraduates perform and publish research at this level. At the start of this project so many of the core concepts were new to them, but by the end, they were teaching me about their haptic devices and feedback algorithms! The students found something that they were passionate about, and that passion and their hard work led to an amazing result. Now the scientific community can benefit from their findings.”

Students contributing to the article included James F. Mullen, Josh Mosier, Sounak Chakrabarti, Anqi Chen, and Tyler White.