Why reliable positioning in- & outdoors requires cameras
Challenges of self-localization in mixed in- and outdoor environments
Knowing its own location within an operating environment (a.k.a. self-localization) is one of the key capabilities any mobile robot must master to get from A to B. Yet, depending on the environment the robot operates in, self-localization poses vastly different challenges, making the choice of sensor technology and software algorithms far from trivial.
Indoors, 2D LiDAR technology is widely used, for example on AGVs and AMRs for material handling operations. These sensors measure the geometric structure of the nearby environment (e.g. walls, shelves) to localize within pre-built maps. However, they show their limitations in environments that lack unique geometric features (e.g. a long corridor) or are highly dynamic, especially when shared with people.
For autonomous vehicles operating outdoors, by contrast, satellite navigation systems such as GPS or Galileo are often the first choice for self-localization, and for good reasons: GPS receivers are inexpensive, and they yield an easily interpretable position in a known global reference coordinate system (e.g., WGS84).
However, an accurate GPS position requires an unobstructed view of the sky, a requirement that is violated in many settings: urban surroundings, company campuses, logistics yards, and airport tarmacs. These outdoor places often require robots to operate close to or even inside buildings or tunnels. Due to these restrictions, mobile robots operating outdoors or in mixed indoor-outdoor settings cannot rely solely on GPS for self-localization.
Even though 2D LiDAR sensors can be employed outdoors as well, using them in the open only works in well-structured spaces of limited extent, or requires additional infrastructure such as laser reflectors. 3D LiDARs, on the other hand, remain very expensive sensors, despite all efforts to cut their cost in recent years.
The strengths and challenges of visual localization
As an alternative to GPS and LiDAR sensors, cameras can reliably sense the environment both indoors and outdoors. Vision thus bridges the gap left by GPS and LiDAR, offering highly accurate and reliable self-localization in both indoor and outdoor environments with a very cost-effective sensor setup.
Visual self-localization does not come without its own challenges. Cameras naturally provide good images in well-illuminated surroundings, while outdoors at night, or in dim indoor areas, visual self-localization becomes more difficult. However, selecting camera sensor chips with a high dynamic range has been shown to allow proper images to be taken even under challenging illumination conditions.
Further, and in contrast to GPS or LiDAR data, images of the same environment can look vastly different under varying seasonal, weather, or lighting conditions. This change in appearance makes accurate localization difficult. Additionally, the geometric characteristics outdoors are often less favorable for visual localization than indoors: less stable, human-made structure is visible, and it is often considerably farther away. Therefore, smart algorithms for working with visual data are paramount.
Tackling the challenges of visual localization with AI
To master these challenges, Sevensense employs an AI algorithm based on lifelong visual mapping that builds and maintains always-up-to-date maps of the environment. These maps incorporate data from multiple appearances of the environment, such as before and after remodeling of a room, during cloudy and sunny weather, or under bright and dark lighting conditions. This allows the maps to be used for visual localization under practically any appearance condition.
At the core of this lifelong mapping process is a software algorithm capable of merging visual data from multiple traversals through the same environment, recorded under potentially very different appearance conditions, into a single, geometrically consistent map. The process is fully automated: a multi-appearance map is ready for reliable localization after only a handful of traversals through the environment.
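The geometric core of such a merge can be illustrated with a toy example. The sketch below is purely illustrative and is not the Alphasense Position implementation: it assumes two mapping sessions that already share matched landmarks (identified here by hypothetical integer IDs, whereas a real system would match visual features), and aligns them with a least-squares 2D rigid transform before fusing them into one map frame.

```python
import math

# Two toy mapping sessions; each stores landmark positions in its
# own local 2D frame. Hypothetical data for illustration only.
session_a = {1: (0.0, 0.0), 2: (4.0, 0.0), 3: (4.0, 3.0)}
# Session B saw the same landmarks, but its frame is rotated and
# shifted relative to A; it also saw one new landmark (id 4).
session_b = {1: (1.0, 1.0), 2: (1.0, 5.0), 3: (-2.0, 5.0), 4: (0.0, 7.0)}

def align_2d(src, dst):
    """Least-squares rigid transform (rotation + translation)
    mapping the common landmarks of src onto dst (2D Kabsch)."""
    ids = sorted(set(src) & set(dst))
    cx_s = sum(src[i][0] for i in ids) / len(ids)
    cy_s = sum(src[i][1] for i in ids) / len(ids)
    cx_d = sum(dst[i][0] for i in ids) / len(ids)
    cy_d = sum(dst[i][1] for i in ids) / len(ids)
    s_cos = s_sin = 0.0
    for i in ids:  # accumulate terms of the closed-form 2D solution
        ax, ay = src[i][0] - cx_s, src[i][1] - cy_s
        bx, by = dst[i][0] - cx_d, dst[i][1] - cy_d
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return c, s, tx, ty

c, s, tx, ty = align_2d(session_b, session_a)
merged = dict(session_a)
for i, (x, y) in session_b.items():
    if i not in merged:  # bring only new landmarks into the common frame
        merged[i] = (c * x - s * y + tx, s * x + c * y + ty)
# landmark 4 now sits at roughly (6.0, 1.0) in session A's frame
```

In a production system this alignment would be one constraint inside a much larger optimization over many sessions, but the principle is the same: shared observations anchor all traversals into one consistent frame.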
Further, continuously updating this map over time naturally captures structural change in the environment. This way, events like the remodeling of rooms, construction sites, or temporary modifications to street or room layouts don’t disrupt everyday operations. Such changes are detected automatically and swiftly incorporated into the lifelong map. No expert knowledge or intervention is necessary at any point in the process: just as the initial map of the environment can be generated by anyone without prior training or expert know-how, updating the maps at any later point is handled automatically by Alphasense Position. This considerably lowers the cost of maintaining the system over the long term.
Vision is the optimal solution for indoor and outdoor self-localization
By employing this lifelong mapping technique, vision overcomes its limitations and turns out to be the go-to technology for cost-effective, yet reliable and accurate self-localization both indoors and outdoors. In particular, it offers a seamless transition between indoors and outdoors, as the same camera sensors and the same map can be used to localize in both types of environments. This is a fundamental advantage over any technology relying primarily on GPS for outdoor localization and navigation, and it can revolutionize logistics, delivery, cleaning, and supply chain processes.
Sevensense has already successfully demonstrated the unique benefits of visual localization in a proof-of-concept study in partnership with KYBURZ Switzerland AG. By equipping a mail delivery robot with Alphasense Position, accurate and reliable self-localization of the vehicle became possible at all times, both indoors and outdoors, where LiDAR and GPS solutions alone had failed.
In a similar vein, the indoor and outdoor capabilities of Alphasense Position open up countless other opportunities for bringing cost-effective autonomy to robotic platforms operating in challenging environments. Delivering goods across multiple buildings on a campus, or handling material in warehouses undergoing frequent layout changes, has never been simpler.
With vision towards a fully autonomous future
Self-localization is essential for robots to safely navigate changing environments and achieve full autonomy. By employing cameras for this task, together with smart algorithms for building and maintaining maps, autonomy can be provided at a fraction of the cost, both for the sensor hardware itself and for operating expenses over the robot’s lifetime. At Sevensense, we are constantly pushing cutting-edge technology forward to deliver affordable autonomy solutions for new application fields.