IDTechEx Discusses How Sensors Are Used to Achieve Safety and Accurate Control in 21st-Century Robotics

With increasing demand for automation in the 21st century, robots have seen rapid and unprecedented growth across many industries, including logistics, warehousing, manufacturing, and food delivery.

Human-robot interaction (HRI), precise control, and safe collaboration between humans and robots are the cornerstones of adopting automation. In the context of robotics, safety encompasses multiple tasks, with collision detection, obstacle avoidance, navigation and localization, force detection, and proximity detection being a few examples. All these tasks are enabled by a suite of sensors, including LiDAR, imaging/vision sensors (cameras), tactile sensors, and ultrasonic sensors. With the advancement of machine vision technology, cameras are becoming increasingly important in robots.

Working Principle of Sensors in Robotics – Vision Sensors/Cameras

CCD (charge-coupled device) and CMOS (complementary metal-oxide-semiconductor) sensors are the two common types of vision sensor. A CMOS sensor is a digital device that converts the charge of each pixel to a corresponding voltage, and the sensor typically integrates amplifiers, noise-correction, and digitization circuits. In contrast, a CCD sensor is an analog device containing an array of photosensitive sites. Although each has its strengths, with the development of CMOS technology, CMOS sensors are now widely considered an appropriate fit for machine vision in robots thanks to their smaller footprint, lower cost, and lower power consumption compared with CCD sensors.

Vision sensors can be used for motion and distance estimation, object identification, and localization. Their key benefit is that they collect significantly more information at high resolution than other sensors such as LiDAR and ultrasonic sensors. The diagram below compares different sensors against nine benchmarks. Vision sensors offer high resolution and low cost; however, they are inherently susceptible to adverse weather and poor lighting, so other sensors are often needed to increase overall system robustness when robots operate in unpredictable weather or difficult terrain. A more detailed analysis and comparison of these benchmarks is included in IDTechEx’s latest report, “Sensors for Robotics 2023-2043: Technologies, Markets, and Forecasts”.


How Are Vision Sensors Used for Safety in Mobile Robots?

Mobile robotics is one of the largest robotic applications in which cameras are used for object classification, safety, and navigation. Mobile robots primarily refer to automated guided vehicles (AGVs) and autonomous mobile robots (AMRs), but autonomous mobility also plays an important role in many other robots, ranging from food delivery robots to autonomous agricultural robots (e.g., mowers). Autonomous mobility is an inherently complicated task requiring obstacle avoidance and collision detection.
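The obstacle-avoidance logic described above can be illustrated with a minimal reactive policy. The sketch below is purely hypothetical: the `avoidance_command` function, the three-sensor layout, and the distance thresholds are illustrative assumptions, not any vendor's API. It simply maps range readings to a stop, slow-and-steer, or go command.

```python
def avoidance_command(ranges_m, stop_distance_m=0.4, slow_distance_m=1.0):
    """Map (left, center, right) range readings in meters to a drive command.

    Illustrative reactive policy: emergency-stop when any obstacle is
    inside the stop zone, slow down and steer away from the nearer side
    when inside the slow zone, otherwise drive straight at full speed.
    """
    left, center, right = ranges_m
    closest = min(ranges_m)
    if closest < stop_distance_m:
        return {"speed": 0.0, "turn": 0.0}    # emergency stop
    if closest < slow_distance_m:
        turn = 0.5 if left < right else -0.5  # steer away from the nearer side
        return {"speed": 0.3, "turn": turn}
    return {"speed": 1.0, "turn": 0.0}        # path is clear


# Example: obstacle 0.8 m away on the left -> slow down and steer right
print(avoidance_command((0.8, 2.0, 2.0)))
```

A real system would of course fuse many more readings (and sensor modalities) and apply smoothing, but the stop/slow/go structure is the core of most reactive collision-avoidance layers.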

Depth estimation is one of the key steps in obstacle avoidance. The task takes one or more RGB images collected from vision sensors and uses machine vision algorithms to reconstruct a 3D point cloud, thereby estimating the distance between an obstacle and the robot. At this stage (2023), the majority of mobile robots (e.g., AGVs, AMRs, food delivery robots, robotic vacuums) are still used indoors, such as in warehouses, factories, shopping malls, and restaurants, where the environment is well controlled with stable internet connectivity and illumination. Cameras can therefore achieve their best performance, and machine vision tasks can be performed in the cloud, significantly reducing the computational power required on the robot itself and thereby lowering cost. For grid-based AGVs, for example, cameras are only needed to monitor the magnetic tape or QR codes on the floor.

While this approach is widely used today, it does not work well for outdoor sidewalk robots or inspection robots operating in areas with limited Wi-Fi coverage (e.g., under tree canopies). To solve this problem, in-camera computer vision is emerging: as the name indicates, all image processing is completed within the camera itself. Given the increasing demand for outdoor robots, IDTechEx believes that in-camera computer vision will be increasingly needed in the long term, especially for robots designed to work in difficult terrain and harsh environments (e.g., exploration robots). In the short term, however, IDTechEx believes that the high power consumption of onboard computer vision, along with the high cost of chips, will likely hold back adoption. Many robot original equipment manufacturers (OEMs) would likely prefer to incorporate other sensors (e.g., ultrasonic sensors, LiDAR) as a first step to enhance the safety and robustness of their products' environment-perception abilities.
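As a concrete illustration of the depth-estimation step, stereo cameras commonly recover metric depth from pixel disparity via depth = f · B / d, where f is the focal length in pixels, B the stereo baseline, and d the per-pixel disparity. The sketch below is a minimal example under assumed camera parameters (the function names and numbers are illustrative, not from the report): it converts a disparity map to depth and flags obstacles inside a stop distance.

```python
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline_m):
    """Convert a stereo disparity map (pixels) to metric depth (meters).

    Uses depth = f * B / d. Pixels with zero or negative disparity have
    no valid depth and are mapped to infinity.
    """
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

def nearest_obstacle(depth, stop_distance_m=0.5):
    """Return the closest depth value and whether it breaches the stop zone."""
    closest = float(np.min(depth))
    return closest, closest < stop_distance_m


# Example: assumed f = 700 px, B = 0.12 m; a disparity of 84 px -> 1.0 m depth
depth_map = disparity_to_depth(np.array([[84.0, 42.0]]),
                               focal_length_px=700.0, baseline_m=0.12)
print(depth_map)                      # depths in meters
print(nearest_obstacle(depth_map, stop_distance_m=1.5))
```

In practice the disparity map itself would come from a stereo-matching algorithm (or a learned monocular-depth model), and the depth image would be back-projected into the 3D point cloud mentioned above; this sketch only shows the disparity-to-depth conversion at the heart of that pipeline.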

A more detailed analysis of these trends, and of how different sensors are used together, is included in IDTechEx’s latest report, “Sensors for Robotics 2023-2043: Technologies, Markets, and Forecasts”.

About IDTechEx

IDTechEx guides your strategic business decisions through its Research, Subscription and Consultancy products, helping you profit from emerging technologies. For more information, contact or visit
