Automated Control Systems on the Mars Rover

NASA’s Perseverance rover completed its seven-month journey on 18 February 2021 (travelling roughly 4.7 × 10^8 km) and successfully landed in Jezero Crater, a geologically rich area that could hold momentous clues about ancient life on this mysterious red planet.

As dramatic as it sounds, landing on this rock-strewn body is no easy feat.

The Entry, Descent, and Landing (EDL) phase, commonly known as the “seven minutes of terror”, is by far the most demanding phase of the mission. The spacecraft enters the thin Martian atmosphere at nearly 20,000 km/h and, in a mere seven minutes (compared to the previous seven months), must decelerate to a standstill on the Martian surface without being destroyed on impact. Even if the rover survives the myriad complications of this complex EDL choreography and joins the roughly 40% of Mars missions that have succeeded, it must do so entirely autonomously, without any assistance from its human companions at Mission Control. The radio signal delay of around 11 minutes means that the mission team cannot control Perseverance during the landing, so it must complete the entire EDL process by itself. With the help of several instruments and the avant-garde Terrain-Relative Navigation system, the rover must avoid the innumerable hazards waiting on the complex Martian terrain, such as steep cliffs, sand dunes, boulder fields and smaller impact craters: without a doubt, the most challenging terrain ever targeted!
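To get a feel for those numbers, here is a rough back-of-the-envelope calculation of the average deceleration implied by slowing from about 20,000 km/h to a standstill in seven minutes. It assumes a constant deceleration, which the real entry profile certainly is not, so treat it purely as a sanity check on the figures quoted above.

```python
# Back-of-the-envelope check: average deceleration implied by the EDL numbers
# quoted above (a crude constant-deceleration estimate, not the real profile).

entry_speed_kmh = 20_000          # entry speed, km/h
duration_s = 7 * 60               # "seven minutes of terror", in seconds

entry_speed_ms = entry_speed_kmh * 1000 / 3600   # convert to m/s (~5,556 m/s)
avg_decel = entry_speed_ms / duration_s          # average deceleration, m/s^2

print(f"Entry speed:          {entry_speed_ms:,.0f} m/s")
print(f"Average deceleration: {avg_decel:.1f} m/s^2 (~{avg_decel / 9.81:.1f} g)")
```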

All roads to reaching Mars and landing on it safely lead to the autonomous navigation technology that single-handedly controls the EDL for the rover. Handing this much control to the rover’s own intelligent systems is the only way to land on Mars, simply because the mission team cannot “joystick” the rover from Earth. Using numerous sensors to perceive and represent the spacecraft’s position in space, the rover must combine several aerospace navigation techniques into its descent, namely satellite navigation, integrated navigation, visual navigation, celestial navigation and inertial navigation.
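As a toy illustration of what “integrated navigation” means, the sketch below fuses two hypothetical position estimates, an inertial one that has drifted and a sharper vision-based fix, by weighting each according to its uncertainty. The numbers and the simple inverse-variance weighting are assumptions for illustration only, not the rover’s actual filter.

```python
import numpy as np

def fuse_estimates(est_a, var_a, est_b, var_b):
    """Combine two independent position estimates by inverse-variance weighting.
    A toy stand-in for 'integrated navigation': each sensor's estimate is
    trusted in proportion to how precise it is."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Hypothetical numbers: an inertial estimate that has drifted (large variance)
# and a vision-based fix that is sharper (small variance), both in metres.
inertial_est, inertial_var = np.array([1012.0, 498.0]), 25.0
vision_est, vision_var     = np.array([1000.0, 500.0]), 4.0

pos, var = fuse_estimates(inertial_est, inertial_var, vision_est, vision_var)
print(f"Fused position: {pos}, variance: {var:.2f} m^2")
```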

Inertial navigation is a commonly used method in the vast navigation research field because of four main advantages over traditional techniques. Firstly, it can provide the rover’s position, attitude and velocity accurately and with high precision, thanks to devices incorporated within the navigation system such as the piezoelectric accelerometer, the optical accelerometer and the fibre-optic gyroscope, which measures the rover’s rotation rate. Secondly, it is independent of the environment, working at any time and in any weather because it relies on no external signals. Thirdly, it has an exceptionally high data update rate. Finally, it is highly resistant to interference, again because it needs nothing from outside the spacecraft.
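The core idea behind inertial navigation can be sketched in a few lines: integrate the gyroscope’s rotation rate to track orientation, rotate the accelerometer readings into a fixed frame, then integrate them to update velocity and position. The 2-D toy below is only meant to show that principle; a real system works in 3-D, compensates for gravity and corrects sensor biases.

```python
import numpy as np

def dead_reckon(gyro_rates, accels_body, dt, heading0=0.0,
                vel0=(0.0, 0.0), pos0=(0.0, 0.0)):
    """Toy 2-D strapdown inertial navigation: integrate gyro rates to get
    heading, rotate body-frame accelerations into the world frame, then
    integrate twice to get velocity and position."""
    heading = heading0
    vel = np.array(vel0, dtype=float)
    pos = np.array(pos0, dtype=float)
    for omega, acc_body in zip(gyro_rates, accels_body):
        heading += omega * dt                              # integrate rotation rate
        c, s = np.cos(heading), np.sin(heading)
        acc_world = np.array([c * acc_body[0] - s * acc_body[1],
                              s * acc_body[0] + c * acc_body[1]])
        vel += acc_world * dt                              # integrate acceleration
        pos += vel * dt                                    # integrate velocity
    return heading, vel, pos

# One second of made-up data at 100 Hz: constant forward thrust and a gentle turn.
dt = 0.01
gyro = [0.05] * 100                      # rad/s
acc = [np.array([2.0, 0.0])] * 100       # m/s^2, in the body frame
print(dead_reckon(gyro, acc, dt))
```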

Not only must the spacecraft know its velocity, position and course; it must also be able to change them using control devices such as the reaction control system thrusters, which are vital for changing the spacecraft’s rotational motion in space. They are designed to make swift turns and to settle into new orientations rapidly, reducing the chance of a crash. Additionally, the thrusters are fired in pairs on opposite sides of the spacecraft, so it can be spun around without being given any lateral velocity.
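The reason a thruster pair can spin the spacecraft without pushing it sideways is that two equal and opposite forces applied on opposite sides of the centre of mass cancel as a force but add up as a torque. The small sketch below checks this with made-up thruster positions and forces; the geometry and numbers are purely illustrative.

```python
import numpy as np

def net_force_and_torque(thrusters):
    """Sum the forces and torques (about the centre of mass) produced by a set
    of thrusters. Each thruster is (position_vector, force_vector) in 2-D."""
    force = np.zeros(2)
    torque = 0.0
    for r, f in thrusters:
        force += f
        torque += r[0] * f[1] - r[1] * f[0]   # 2-D cross product r x f
    return force, torque

# A hypothetical pair mounted on opposite sides, firing in opposite directions:
pair = [(np.array([ 1.0, 0.0]), np.array([0.0,  10.0])),
        (np.array([-1.0, 0.0]), np.array([0.0, -10.0]))]

force, torque = net_force_and_torque(pair)
print(f"Net force: {force} N (zero), net torque: {torque} N·m (pure spin)")
```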

During the EDL phase, the spacecraft descends on a supersonic parachute, fires small thrusters on its backshell to steer during entry, and finally its rocket-powered descent stage lowers the rover to the surface on strong tethers, like a sky crane. These features work in collaboration with a new EDL technology called Range Trigger, which calculates the spacecraft’s distance to the landing target and opens the 21.5-metre-wide parachute at the ideal moment to enable a slow, safe landing.
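Conceptually, Range Trigger replaces “open the parachute as soon as it is safe” with “open the parachute when the remaining distance to the target is right”. The toy function below captures that idea; the thresholds and descent samples are invented for illustration and bear no relation to the real flight software.

```python
def should_deploy_parachute(distance_to_target_km, altitude_km,
                            trigger_distance_km=10.0, max_altitude_km=13.0):
    """Toy version of the Range Trigger idea: wait until the remaining distance
    to the landing target drops below a threshold, so the chute opens at the
    moment that shrinks the landing ellipse the most. All numbers are made up."""
    return (distance_to_target_km <= trigger_distance_km
            and altitude_km <= max_altitude_km)

# Simulated descent samples: (distance to target, altitude), both in km.
for dist, alt in [(60, 20), (25, 15), (12, 12.5), (9.5, 11.8)]:
    print(dist, alt, "deploy!" if should_deploy_parachute(dist, alt) else "hold")
```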

NASA has developed another autonomous, vision-based control system for obstacle avoidance and spacecraft position estimation, called Terrain Relative Navigation (TRN). During the Apollo Moon landings, astronauts looked out of their windows, identified landmarks and steered around boulders and craters in order to land safely. Technology has advanced since then, and the revolutionary TRN control system uses sensors, algorithms and onboard computing (the Vision Compute Element) to do what no human pilot could, enabling safe, autonomous landings. Mars has a rough, threatening terrain, so TRN provides a map-relative position estimate to precisely locate a landing point and steer clear of potential landing hazards.

This ‘autopilot’ system quickly works out where the spacecraft is above the surface while a landing radar continuously pings the ground. The radar works alongside a component of TRN called the Lander Vision System (LVS), whose job is to handle the various possible terrain conditions and keep the landing under control at high speed. The LVS uses its downward-facing camera to take multiple photos of the ground below and sends them to the onboard computer, which matches them against an onboard map to pinpoint the rover’s position and identify potential hazards at that location.
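One simple way to picture what the Lander Vision System does is template matching: slide a patch from the descent image over an onboard orbital map and keep the offset where the two agree best. The sketch below does exactly that with synthetic data; the real system matches many landmarks and fuses the result with inertial measurements, so this is only the core idea, not the flight algorithm.

```python
import numpy as np

def locate_patch(descent_patch, orbital_map):
    """Toy landmark matching: slide a small descent-camera patch over an
    onboard orbital map and return the offset with the highest normalised
    correlation."""
    ph, pw = descent_patch.shape
    mh, mw = orbital_map.shape
    best_score, best_pos = -np.inf, (0, 0)
    p = (descent_patch - descent_patch.mean()) / (descent_patch.std() + 1e-9)
    for i in range(mh - ph + 1):
        for j in range(mw - pw + 1):
            window = orbital_map[i:i + ph, j:j + pw]
            w = (window - window.mean()) / (window.std() + 1e-9)
            score = float((p * w).mean())
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score

# Synthetic example: cut a patch out of a random "map" and recover its location.
rng = np.random.default_rng(0)
orbital_map = rng.random((60, 60))
true_pos = (22, 37)
patch = orbital_map[true_pos[0]:true_pos[0] + 8, true_pos[1]:true_pos[1] + 8]
print(locate_patch(patch, orbital_map))   # should report position (22, 37)
```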

Once the rover has landed safely, innovative software developed by NASA engineers ensures that it does not inadvertently crash into obstacles while driving, letting it look after its own safety and ultimately “think for itself”, much like a human driver. Using Hazcam images (short for hazard avoidance cameras), taken by cameras mounted directly on the rover’s body, the rover maps out the rocky terrain about 3 metres ahead of it. The hazard avoidance software reassesses the rover’s position by stopping approximately every 10 seconds, re-evaluating the situation and then computing the manoeuvre for the next 45 seconds before setting off again. Although this sounds tedious, the software keeps the rover safe, letting it travel roughly 30 centimetres at a time before re-examining its ever-changing surroundings.
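The stop-and-assess driving style described above can be caricatured as a simple loop: move a short segment, stop, re-check the terrain, and either plan the next move or wait. The distances and the fake hazard check below are placeholders for illustration, not the rover’s actual planner.

```python
import random

def hazard_ahead():
    """Stand-in for processing Hazcam images: randomly report a hazard.
    In reality this would come from a terrain map built from stereo images."""
    return random.random() < 0.2

def drive_autonomously(total_distance_m, step_m=0.3):
    """Toy stop-and-assess driving loop: advance a short distance, stop,
    re-check the terrain, and either plan the next short move or wait."""
    travelled = 0.0
    while travelled < total_distance_m:
        if hazard_ahead():
            print(f"{travelled:4.1f} m: hazard detected, re-evaluating before moving")
            continue                       # stay put and reassess
        travelled += step_m                # move roughly 30 cm, then stop again
        print(f"{travelled:4.1f} m: segment complete, re-examining surroundings")

drive_autonomously(3.0)
```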

Altogether, these control systems work together, without any input from humans, to pick the safest reachable landing site, land safely and explore the Martian surface, aiding Perseverance’s search for signs of ancient life.


Did you know?

The planet that can come closest to Earth is Venus; the second closest is Mars; BUT the planet that spends the most time closest to Earth is in fact MERCURY! Mercury is the closest planet to Earth 46% of the time, Venus is closest 36% of the time, and Mars is closest ONLY 18% of the time!

