
Indian-origin researcher leads study to accelerate robot learning

This research, partially funded by the MIT-IBM Watson AI Lab, was presented at the Conference on Neural Information Processing Systems.


To improve the training of AI agents, scientists from MIT, Harvard University, and the University of Washington developed a novel technique known as Human Guided Exploration (HuGE). 

Led by Pulkit Agrawal, an assistant professor in the MIT Department of Electrical Engineering and Computer Science, the research demonstrated a scalable and effective method for robot learning that could change the way robots acquire new skills, a release stated.

Unlike traditional reinforcement learning, which relies heavily on expertly designed reward functions, HuGE leverages crowdsourced feedback from nonexpert users worldwide to guide AI agents in learning complex tasks. Agrawal, who leads the Improbable AI Lab in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), emphasized the current challenges in designing reward functions.

“One of the most time-consuming and challenging parts in designing a robotic agent today is engineering the reward function. Today reward functions are designed by expert researchers — a paradigm that is not scalable if we want to teach our robots many different tasks. Our work proposes a way to scale robot learning by crowdsourcing the design of reward function and by making it possible for nonexperts to provide useful feedback,” he said.

The approach could profoundly influence the field by enabling robots to learn specific tasks in their users' homes. In Agrawal's vision, robots of the future would explore and learn on their own, guided by crowdsourced feedback from people who are not robotics experts.

According to the release, the researchers decoupled the learning process into two parts: a goal selector algorithm that is continuously updated with crowdsourced feedback, and an AI agent that explores autonomously, guided by the goal selector. This dual approach ensures that the agent can keep learning even when feedback is delayed, sparse, or inaccurate.
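
To make the decoupled structure concrete, here is a minimal, hypothetical sketch in Python. It is not the authors' HuGE implementation; the names (GoalSelector, add_comparison, explore_towards) and the toy 2-D grid task are illustrative assumptions. The point it shows is the separation of roles: human comparisons only reweight which goals the agent aims for, while the agent's exploration loop runs on its own and never waits on feedback.

```python
# Hypothetical sketch of a crowdsourced goal selector decoupled from
# autonomous exploration. Not the published HuGE code; names are illustrative.
import random
from dataclasses import dataclass, field


@dataclass
class GoalSelector:
    """Keeps a score per candidate goal state, updated from crowdsourced
    pairwise comparisons ("which of these two states looks closer to done?")."""
    scores: dict = field(default_factory=dict)

    def add_comparison(self, preferred, other):
        # Nonexpert feedback only nudges scores; it may be sparse or noisy,
        # so the exploration loop below never depends on it being correct.
        self.scores[preferred] = self.scores.get(preferred, 0.0) + 1.0
        self.scores[other] = self.scores.get(other, 0.0) - 1.0

    def select_goal(self, candidates):
        # Mostly pick highly rated goals, but keep some probability on the
        # rest so the agent continues to explore new regions.
        weights = [2.0 ** self.scores.get(c, 0.0) for c in candidates]
        return random.choices(candidates, weights=weights, k=1)[0]


def explore_towards(state, goal):
    """Toy 'agent': take one noisy step on a 2-D grid towards the goal.
    A real agent would run its own self-supervised policy here."""
    x, y = state
    gx, gy = goal
    x += (1 if gx > x else -1 if gx < x else 0) + random.choice([-1, 0, 1])
    y += (1 if gy > y else -1 if gy < y else 0) + random.choice([-1, 0, 1])
    return (x, y)


if __name__ == "__main__":
    selector = GoalSelector()
    visited = [(0, 0)]      # states the agent has reached so far
    target = (5, 5)         # the task the human raters have in mind

    for step in range(200):
        goal = selector.select_goal(visited)          # feedback steers *where* to explore
        visited.append(explore_towards(visited[-1], goal))

        # Occasionally simulate a nonexpert answering a pairwise comparison.
        if step % 10 == 0:
            a, b = random.sample(visited, 2)
            closer = min((a, b), key=lambda s: abs(s[0] - target[0]) + abs(s[1] - target[1]))
            selector.add_comparison(closer, b if closer == a else a)

    best = min(visited, key=lambda s: abs(s[0] - target[0]) + abs(s[1] - target[1]))
    print("closest state reached:", best)
```

Because feedback arrives only occasionally in this sketch, the agent still makes progress between comparisons, which mirrors the release's point that learning continues even without immediate or accurate feedback.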

Real-world tests involved training robotic arms to draw the letter "U" and to pick and place objects, using crowdsourced data from 109 nonexpert users in 13 countries.
The results showed that HuGE outperformed other methods, enabling faster learning in both simulated and real-world experiments. Additionally, data crowdsourced from nonexperts proved more effective than synthetic data produced and labeled by researchers.
 
