(ORDO NEWS) — NASA engineers are training instruments to navigate the lunar surface by recognizing features on the Moon's horizon.
“For safety and scientific geotagging, it's important for explorers to know exactly where they are as they traverse the lunar landscape,” said Alvin Yew, a research engineer at NASA's Goddard Space Flight Center. “Equipping an on-board unit with a local map would support any mission, whether robotic or human.”
NASA is currently developing LunaNet, a communications and navigation architecture for Moon missions that will give explorers an “Internet-like” experience, including location services.
However, explorers in some regions of the Moon may need backup systems to stay safe when communications signals are unavailable.
Yew started with data from NASA's Lunar Reconnaissance Orbiter mission, specifically from the Lunar Orbiter Laser Altimeter (LOLA). LOLA measures the slopes and roughness of the lunar surface and produces high-resolution topographic maps of the Moon.
Using LOLA's digital elevation models, Yew is training an artificial intelligence to recreate features on the lunar horizon as they would appear to an explorer standing on the Moon's surface.
These digital panoramas can be matched against known boulders and mountain ridges visible in images taken by a rover or astronaut, yielding an accurate position fix for a given region.
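As a rough illustration of the idea (not NASA's actual pipeline), a horizon profile can be rendered from a digital elevation model and compared against an observed profile; the candidate position with the smallest mismatch is the best location estimate. The function names, grid scale, and matching metric below are invented for this sketch.

```python
import numpy as np

def horizon_profile(dem, x0, y0, n_azimuths=90, max_range=40):
    """Apparent horizon elevation angle at each azimuth, as seen from
    grid cell (x0, y0) of a digital elevation model (DEM).

    Simplified sketch: the real LOLA-based pipeline works on
    georeferenced terrain at far higher resolution."""
    h0 = dem[y0, x0]
    azimuths = np.linspace(0, 2 * np.pi, n_azimuths, endpoint=False)
    profile = np.empty(n_azimuths)
    for i, az in enumerate(azimuths):
        best = -np.inf
        for r in range(1, max_range):
            x = int(round(x0 + r * np.cos(az)))
            y = int(round(y0 + r * np.sin(az)))
            if not (0 <= x < dem.shape[1] and 0 <= y < dem.shape[0]):
                break
            # The horizon is the steepest elevation angle along this ray.
            best = max(best, np.arctan2(dem[y, x] - h0, r))
        profile[i] = best
    return profile

def locate(observed, dem, candidates):
    """Pick the candidate viewpoint whose rendered horizon best matches
    the observed profile (sum of squared differences)."""
    errors = [np.sum((horizon_profile(dem, x, y) - observed) ** 2)
              for x, y in candidates]
    return candidates[int(np.argmin(errors))]
```

In a real system the candidate set would come from a coarse prior position estimate, and the comparison would have to tolerate camera noise and partial horizons rather than assume an exact profile match.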
“It’s like going outside and trying to figure out where you are by looking at the horizon and surrounding landmarks,” Yew said.
By leveraging LOLA data, a handheld device can be programmed with a local subset of terrain and elevation data to conserve memory.
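A minimal sketch of that memory-saving step, assuming the global map is held as a NumPy grid (the function name and windowing scheme are hypothetical):

```python
import numpy as np

def local_subset(dem, cx, cy, half):
    """Crop a square window of the global DEM around the explorer's
    estimated grid position (cx, cy), clamped to the map bounds, so a
    handheld unit only has to store nearby terrain."""
    y0, y1 = max(0, cy - half), min(dem.shape[0], cy + half + 1)
    x0, x1 = max(0, cx - half), min(dem.shape[1], cx + half + 1)
    # .copy() detaches the tile from the full map so the latter
    # can be released from memory.
    return dem[y0:y1, x0:x1].copy()
```

The window size would be chosen so the tile comfortably covers the horizon visible from anywhere in the explorer's expected operating area.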
Yew’s geolocation system will draw on the capabilities of GIANT, the Goddard Image Analysis and Navigation Tool.
This optical navigation tool, developed principally by Goddard engineer Andrew Liounis, previously cross-checked and verified navigation data for NASA’s OSIRIS-REx mission, which collected a sample from the asteroid Bennu.
Unlike radar or laser rangefinders, which fire radio signals or light at a target and analyze the return, GIANT quickly and accurately analyzes images to measure the distance to, and between, visible landmarks.
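To see how passive, image-based ranging differs from active rangefinding, consider a toy pinhole-camera calculation: a landmark of known physical size subtends a measurable angle in the image, which fixes its distance. This illustrates the general principle only, not GIANT's actual algorithm; all parameter names are invented.

```python
import math

def range_from_angular_size(true_width_m, pixel_width, focal_px):
    """Estimate distance to a landmark of known physical width from its
    apparent width in an image (pinhole-camera approximation).

    true_width_m -- real-world width of the landmark, in meters
    pixel_width  -- apparent width in the image, in pixels
    focal_px     -- camera focal length, in pixels
    """
    # Angle subtended by the landmark on the sensor.
    angular_width = 2.0 * math.atan(pixel_width / (2.0 * focal_px))
    # Invert the same geometry to recover the range.
    return true_width_m / (2.0 * math.tan(angular_width / 2.0))
```

For example, a 10 m boulder that appears 20 pixels wide through a 1000-pixel focal length resolves to a range of about 500 m; no signal ever has to be transmitted, which is the practical appeal over radar or laser ranging.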
The portable version is cGIANT, a derivative library of Goddard’s Autonomous Navigation Guidance and Control System (autoGNC), which provides autonomous solutions for all phases of spacecraft and rover operations.
Combining artificial intelligence interpretations of visual panoramas with a known lunar terrain model could be a powerful navigational tool for future explorers.