AI model masters new terrain at NASA facility one scoop at a time

Extraterrestrial landers sent to gather samples from the surfaces of distant moons and planets have limited time and battery power to complete their missions. Aerospace and computer science researchers at The Grainger College of Engineering, University of Illinois Urbana-Champaign, trained a model to autonomously assess terrain and scoop samples quickly, then watched it demonstrate its skill on a robot at a NASA facility.

Aerospace Ph.D. student Pranay Thangeda said the team trained their robotic lander arm by collecting scooping data on a variety of materials, from sand to rocks, resulting in a dataset of 6,700 scooping samples. The two terrains in NASA's Ocean World Lander Autonomy Testbed (OWLAT) at the Jet Propulsion Laboratory were brand new to the model, which operated the JPL robotic arm remotely.

“We just had a network link over the internet,” Thangeda said. “I connected to the test bed at JPL and got an image from their robotic arm’s camera. I ran it through my model in real time. The model chose to start with the rock-like material and learned on its first try that it was an unscoopable material.”

Based on what it learned from the image and that first attempt, the robotic arm moved to another, more promising area and successfully scooped the other terrain, a finer-grained material. Because one of the mission requirements is that the robot scoop a specific volume of material, the JPL team measured the volume of each scoop until the robot had collected the full amount.
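
In outline, the behavior described above is a simple sense-act-adapt loop: take an image, pick a scoop site, try it, measure the result, and fold the outcome back into the next choice. The Python sketch below is purely illustrative; DummyArm, DummyModel, and the 400 cc target volume are hypothetical stand-ins rather than the actual U. of I. or OWLAT interfaces, and the real model's site scoring is far more sophisticated than this avoid-failed-sites heuristic.

import random

TARGET_VOLUME_CC = 400.0  # hypothetical total sample volume the mission requires

class DummyArm:
    """Stand-in for the network link to a lander arm: fake images and outcomes."""
    def get_camera_image(self):
        return [[random.random() for _ in range(4)] for _ in range(4)]

    def execute_scoop(self, site):
        scoopable = site[0] > 0.3  # pretend one corner of the workspace is rock
        return scoopable, 60.0 if scoopable else 0.0

class DummyModel:
    """Stand-in scorer that remembers failed sites and steers away from them."""
    def __init__(self):
        self.failed = []

    def best_scoop_location(self, image):
        # Rank candidate sites by distance from previously failed attempts.
        candidates = [(x / 10.0, y / 10.0) for x in range(10) for y in range(10)]
        def margin(c):
            return min((abs(c[0] - f[0]) + abs(c[1] - f[1]) for f in self.failed),
                       default=1.0)
        return max(candidates, key=margin)

    def update(self, image, site, success, volume):
        if not success:
            self.failed.append(site)  # adapt online from the failure

def collect_sample(model, arm):
    collected = 0.0
    while collected < TARGET_VOLUME_CC:
        image = arm.get_camera_image()
        site = model.best_scoop_location(image)
        success, volume = arm.execute_scoop(site)
        model.update(image, site, success, volume)  # learn even from failures
        collected += volume
    return collected

print(f"collected {collect_sample(DummyModel(), DummyArm()):.0f} cc")

In this toy run the first attempt lands in the "rock" region and fails; the failure is stored, every later choice steers away from it, and the loop still reaches the target volume, mirroring the fail-once-then-adapt behavior in the demonstration.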

Watch a video of the demonstration at NASA's JPL. https://youtu.be/B9t6Cr_UW-c?si=oMR0LhTsDYuyUrzu

Thangeda said that although this work was originally motivated by exploration of ocean worlds, their model can be used on any surface.

“Usually, when you train models based on data, they only work on the same data distribution. The beauty of our method is that we didn't have to change anything to work on NASA’s test bed because in our method, we are adapting online. Even though we never saw any of the terrains at the NASA test bed, without any fine tuning on their data, we managed to deploy the model trained here directly over there, and the model deployment happened remotely—exactly what autonomous robot landers will do when deployed on a new surface in space.”
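
One generic way to get this kind of online adaptation without touching any network weights is to keep the pretrained model frozen and let a small nonparametric memory of in-deployment outcomes reweight its predictions. The sketch below illustrates that idea with a hypothetical frozen feature extractor named embed; it is not necessarily the mechanism the published model uses.

import numpy as np

def embed(patch):
    """Hypothetical frozen feature head (here, a trivial two-number summary)."""
    return np.asarray([patch.mean(), patch.std()])

class ScoopabilityMemory:
    """Predicts P(scoopable) from distance-weighted votes of past outcomes."""
    def __init__(self):
        self.feats = []     # embeddings of terrain patches tried so far
        self.outcomes = []  # 1.0 for a successful scoop, 0.0 for a failure

    def add(self, patch, success):
        self.feats.append(embed(patch))
        self.outcomes.append(float(success))

    def predict(self, patch, prior=0.5):
        if not self.feats:
            return prior  # nothing observed yet: fall back to the trained prior
        d = np.linalg.norm(np.stack(self.feats) - embed(patch), axis=1)
        w = np.exp(-d)    # nearby past experiences count more
        return float(w @ np.asarray(self.outcomes) / w.sum())

On a new terrain the memory starts empty, so the first prediction comes entirely from the trained prior; a single failed scoop then immediately lowers the predicted scoopability of similar-looking patches, matching the learned-on-its-first-try behavior described above.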

Thangeda's adviser, Melkior Ornik, leads one of four projects awarded grants under NASA's COLDTech program, each addressing a different problem. The only commonality among them is that all are part of the Europa program and use this lander as a test bed.

“We were one of the first to demonstrate something meaningful on their platform designed to mimic a Europa surface. It was great to finally see something you worked on for months being deployed on a real, high-fidelity platform. It was cool to see the model being tested on a completely different terrain and a completely different platform robot that we’d never trained on. It was a boost of confidence in our model and our approach.”

Thangeda said the feedback they received from the JPL team was positive, too. “They were happy that we were able to deploy the model without a lot of changes. There were some issues when we were just starting out, but I learned that was because we were the first to try to deploy a model on their platform, so there were network issues and some simple bugs in the software that they had to fix. Once we got it working, people were surprised that it was able to learn within one or two samples. Some didn't believe it until they were shown the exact results and methodology.”

Thangeda said one of the most significant issues he and his team had to overcome was bringing their setup into parity with NASA's.

“Our model was trained on a camera in a particular location with a particular scoop shape. The location and the shape of the scoop were two things we had to address. To make sure their robot had the exact same scoop shape, we sent them a CAD design and they 3D printed it and attached it to their robot.

“For the camera, we took their RGB-D point cloud information and reprojected it in real time to a different viewpoint, so that it matched what we had in our robot before we sent it to the model. That way, what the model saw was a similar viewpoint to what it saw during training.”
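
The reprojection step is standard pinhole-camera math: deproject each depth pixel to a 3-D point, rigidly transform the point cloud into the training camera's frame, and project it back into a virtual image. The numpy sketch below assumes placeholder intrinsics K_src and K_dst and an extrinsic transform T_src_to_dst; JPL's actual calibration values and pipeline are not public here, so treat this as an illustration of the idea only.

import numpy as np

def reproject_depth(depth, K_src, K_dst, T_src_to_dst, out_shape):
    """Warp a depth image into a virtual (training) camera viewpoint."""
    h, w = depth.shape
    v, u = np.indices((h, w))          # pixel row and column grids
    z = depth.ravel()
    valid = z > 0                      # skip pixels with no depth reading
    # Deproject pixels to 3-D points in the source camera frame.
    x = (u.ravel() - K_src[0, 2]) * z / K_src[0, 0]
    y = (v.ravel() - K_src[1, 2]) * z / K_src[1, 1]
    pts = np.stack([x, y, z, np.ones_like(z)])[:, valid]
    # Rigidly transform the point cloud into the virtual camera frame.
    pts = T_src_to_dst @ pts
    pts = pts[:, pts[2] > 0]           # keep only points in front of the camera
    # Project into the virtual camera and z-buffer the nearest surface.
    zd = pts[2]
    ud = np.round(K_dst[0, 0] * pts[0] / zd + K_dst[0, 2]).astype(int)
    vd = np.round(K_dst[1, 1] * pts[1] / zd + K_dst[1, 2]).astype(int)
    out = np.full(out_shape, np.inf)
    oh, ow = out_shape
    keep = (ud >= 0) & (ud < ow) & (vd >= 0) & (vd < oh)
    np.minimum.at(out, (vd[keep], ud[keep]), zd[keep])
    out[np.isinf(out)] = 0.0           # leave unobserved pixels empty
    return out

Color can be carried along the same pixel mapping, and a production pipeline would splat points and fill holes more carefully than this nearest-depth version.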

Thangeda said they plan to build on this research toward broader autonomous excavation and toward automating construction work such as digging a canal. These tasks are much easier for humans than for machines; it is hard for a model to learn to perform them autonomously because the interactions with the material are highly nuanced.

The study, “Learning and Autonomy for Extraterrestrial Terrain Sampling: An Experience Report from OWLAT Deployment,” was written by Pranay Thangeda, Yifan Zhu, Kris Hauser and Melkior Ornik of the University of Illinois Urbana-Champaign, and by Ashish Goel, Erica L. Tevere, Adriana Daca, Hari D. Nayar and Erik Kramer of NASA JPL. It is published in the proceedings of the AIAA SciTech Forum. DOI: 10.2514/6.2024-1962

Kris Hauser is in the Siebel School of Computing and Data Science at Illinois. Hauser's former graduate student Yifan Zhu earned his Ph.D. in 2023.
Attached files
  • A snapshot of the policy's scooping preferences during testing on NASA's Ocean World Lander Autonomy Testbed at the Jet Propulsion Laboratory.
  • Melkior Ornik and his Ph.D. student Pranay Thangeda
  • "This demonstration shows our adaptive scooping model operating on the NASA OWLAT Testbed. The model was trained at U. of I. using 6,700+ terrain scooping samples and designed to adapt online to unknown terrains with minimal demonstrations. The video shows the policy autonomously learning from an initial failed attempt and adjusting its scooping locations in unfamiliar terrain. The color overlay visualizes the policy's real-time assessment of optimal scooping locations," said Melkior Ornik's Ph.D. student Pranay Thangeda.