Computer vision technology is increasingly used in areas such as automated surveillance systems, self-driving cars, facial recognition, healthcare, and social distancing. Users need accurate and reliable visual data to take full advantage of video analytics applications, but video quality is often degraded by environmental factors such as rain, night-time conditions, or crowds, where multiple people overlap one another in a scene. Using computer vision and deep learning, a team of researchers led by Robby Tan, an Associate Professor at Yale-NUS College who also holds an appointment in the Faculty of Engineering at the National University of Singapore (NUS), has developed novel approaches that resolve the problem of low-visibility video caused by rain and night-time conditions, and improve the accuracy of 3D human pose estimation from videos.

The study was presented at the 2021 Computer Vision and Pattern Recognition Conference (CVPR).

Combating visibility problems caused by rain and night-time conditions

Night-time scenes suffer from low light and human-made light effects such as glare, glow, and floodlights, while rain scenes are affected by rain streaks and rain accumulation (the veiling effect).

“Many computer vision systems, such as automated surveillance and self-driving cars, rely on clear visibility of the input videos to work well,” said Assoc Prof Tan.

In two separate studies, Assoc Prof Tan and his team introduced deep learning algorithms to enhance the quality of night-time videos and rain videos. In the first study, they boosted the brightness while simultaneously suppressing noise and light effects (glare, glow, and floodlights) to produce clear night-time images. The technique is novel in that it addresses the challenge of clarity in night-time images and videos when glare cannot be ignored; by comparison, existing state-of-the-art methods fail to handle glare.
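The published method is a learned deep network, but the basic idea of brightening dark regions without amplifying already-saturated light effects can be illustrated with a minimal, hypothetical sketch (the function name, gamma curve, and glare threshold below are illustrative assumptions, not the authors' algorithm):

```python
import numpy as np

def enhance_night_frame(frame, gamma=0.5, glare_threshold=0.95):
    """Toy night-video enhancement: brighten dark pixels with a gamma
    curve while leaving near-saturated (glare) pixels untouched.
    Illustrative sketch only -- not the published deep learning method."""
    img = frame.astype(np.float64) / 255.0
    # Gamma < 1 lifts dark pixels much more than bright ones.
    bright = np.power(img, gamma)
    # Suppress glare amplification: pixels already near saturation
    # (glow, floodlights) keep their original value.
    glare_mask = img > glare_threshold
    bright[glare_mask] = img[glare_mask]
    return np.rint(np.clip(bright, 0.0, 1.0) * 255.0).astype(np.uint8)

# A dark pixel (30) is brightened; a glare pixel (250) is left alone.
frame = np.array([[30, 250]], dtype=np.uint8)
out = enhance_night_frame(frame)
```

A fixed gamma curve cannot separate noise from signal the way a trained network can, which is one reason the study relies on deep learning rather than hand-tuned curves.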

In tropical countries like Singapore, where heavy rain is common, rain can significantly degrade the visibility of videos. In the second study, the researchers introduced a method that employs frame alignment, allowing them to recover better visual information that is not affected by the rain streaks, which appear randomly in different frames and degrade image quality. They then used a moving camera to estimate depth, which allowed them to remove the veiling effect caused by accumulated rain droplets. Unlike existing methods, which focus only on removing rain streaks, the new method removes both rain streaks and the veiling effect simultaneously.
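The intuition behind exploiting aligned frames can be sketched with a toy temporal filter (an illustrative assumption, not the paper's method): because a rain streak hits different pixels in different frames, a per-pixel vote across a short stack of aligned frames recovers the clean background.

```python
import numpy as np

def remove_transient_streaks(frames):
    """Toy temporal filter: per-pixel median over a stack of already
    aligned frames, so transient rain streaks are voted out.
    Illustrative only; the published work also estimates depth from a
    moving camera to remove the veiling (rain accumulation) effect,
    which a simple median cannot handle."""
    stack = np.stack(frames, axis=0).astype(np.float64)
    return np.median(stack, axis=0).astype(np.uint8)

# Background value 100; each frame has a bright streak (255) at a
# different position, so the median recovers the clean background.
f1 = np.full((1, 3), 100, dtype=np.uint8); f1[0, 0] = 255
f2 = np.full((1, 3), 100, dtype=np.uint8); f2[0, 1] = 255
f3 = np.full((1, 3), 100, dtype=np.uint8); f3[0, 2] = 255
clean = remove_transient_streaks([f1, f2, f3])
```

A plain median fails on the veiling effect, which dims every frame consistently, which is why the researchers additionally use depth estimation from a moving camera.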

3D human pose estimation: Tackling occlusion and multiple overlapping people in videos

At the CVPR conference, Assoc Prof Tan also presented his team’s research on 3D human pose estimation, which can be used in areas such as video surveillance, video gaming, and sports broadcasting.

In recent years, 3D multi-person pose estimation from monocular video (video taken from a single camera) has drawn increasing attention from researchers and developers. Instead of using multiple cameras to capture video from different locations, monocular video offers more flexibility, as it can be captured with a single ordinary camera – even a mobile phone camera.

However, the accuracy of human pose estimation is affected by occlusion, i.e., when multiple individuals appear in the same scene, especially when they interact closely or overlap one another in the monocular video.

In this third study, the researchers estimated 3D human poses from video by combining two existing approaches, namely the top-down approach and the bottom-up approach. By integrating the two, the new method produces more reliable pose estimates in multi-person settings and handles people at different distances from the camera (scale variations) more robustly.
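One simple way to picture combining two pose estimates is a per-joint, confidence-weighted average, sketched below. This is a hypothetical illustration (the function and the fixed weighting are my assumptions); the CVPR work integrates the top-down and bottom-up branches with learned networks rather than a hand-set average.

```python
import numpy as np

def fuse_pose_estimates(kp_topdown, kp_bottomup, conf_td, conf_bu):
    """Toy fusion of per-joint 3D keypoints from a top-down and a
    bottom-up pose estimator: confidence-weighted average per joint.
    kp_* have shape (num_joints, 3); conf_* have shape (num_joints,).
    Illustrative sketch, not the published integration network."""
    conf_td = conf_td[:, None]  # broadcast per-joint weights over x, y, z
    conf_bu = conf_bu[:, None]
    weight_sum = conf_td + conf_bu + 1e-8  # avoid division by zero
    return (kp_topdown * conf_td + kp_bottomup * conf_bu) / weight_sum

# Two joints in 3D: the bottom-up branch is more confident on joint 0,
# the top-down branch on joint 1, so each joint leans toward the
# branch that is more certain.
td = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
bu = np.array([[2.0, 2.0, 2.0], [3.0, 3.0, 3.0]])
fused = fuse_pose_estimates(td, bu, np.array([0.2, 0.8]), np.array([0.8, 0.2]))
```

The appeal of combining the branches is complementary strengths: top-down methods handle individual people well but struggle with overlap, while bottom-up methods are more robust to crowding and scale variation.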

The three research studies involve Assoc Prof Tan’s team at the NUS Department of Electrical and Computer Engineering, where he holds a joint appointment, and his collaborators from City University of Hong Kong, ETH Zurich, and Tencent Game AI Research Center. His laboratory focuses on research in computer vision and deep learning, particularly in vision under poor visibility, human pose and motion analysis, and applications of deep learning in healthcare.

“As the next step in our 3D human pose estimation research, which is supported by the National Research Foundation, we are looking at how to protect the privacy of the videos, and at further improving our visibility enhancement methods,” said Assoc Prof Tan.
