Academic Papers



Inside-out Multi-person 3D Pose Estimation Using the Panoramic Camera Capture System

Qinhai Dong, Yanran Dai, Yuqi Jiang, Dongdong Li, Hongwei Liu, Yong Zhang, Jing Li, Tao Yang. IEEE Transactions on Instrumentation & Measurement. 2023; SCI Q2, IF: 5.6


Inside-out Multi-person 3D Pose Estimation Using the Panoramic Camera Capture System

Estimating the 3D human poses of multiple individuals using multiple cameras is a significant research topic within the field of vision-based measurement. Contrary to the classical outside-in camera capture system, the inside-out panoramic camera capture system can cover larger scenes with fewer cameras for 3D human pose estimation. This advancement extends the application of 3D human pose measurement beyond small spaces like motion capture studios. For example, this approach can be utilized for intelligent security surveillance in broad outdoor squares or capturing athletes' movements in large-scale sports scenes. However, existing inside-out 3D human pose estimation methods that utilize panoramic cameras encounter challenges, particularly in multi-person occlusion scenarios. To address these problems, this paper presents a novel inside-out multi-person 3D human pose estimation method using just a few calibrated panoramic cameras. Specifically, we first propose a cross-view multi-person matching algorithm based on panoramic camera epipolar geometry constraints to improve human body matching robustness across viewpoints. Then, we take advantage of multiple panoramic cameras and introduce a multi-view human pose clustering and fusion algorithm to improve the average recall of 3D human pose estimation. In addition, we propose a multi-view human pose nonlinear optimization algorithm to jointly optimize the weighted reprojection errors of estimated 3D human poses, which can further improve the average precision. We have conducted extensive experiments on the public Panoptic Studio dataset and self-built real and simulated datasets to demonstrate that our method can estimate 3D human poses inside-out using multiple panoramic cameras. Compared to state-of-the-art methods, the omission of 3D human poses is greatly reduced through the complementarity of multiple cameras, and the precision of 3D human pose estimation is largely improved by utilizing multi-camera observation information.
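As a rough illustration of the weighted reprojection-error optimization mentioned in this abstract, the minimal sketch below refines a set of 3D joints against multiple calibrated views; the function names, weights, and data layouts are assumptions for the example, not the paper's implementation.

```python
# Hypothetical sketch: jointly refine 3D joints by minimizing weighted
# reprojection errors over several calibrated cameras (illustrative only).
import numpy as np
from scipy.optimize import least_squares

def project(P, X):
    """Project 3D points X (N,3) with a 3x4 projection matrix P -> (N,2)."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])      # homogeneous coordinates
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

def residuals(joints_flat, projections, observations, weights):
    """Stack weighted 2D reprojection residuals over all cameras."""
    X = joints_flat.reshape(-1, 3)
    res = []
    for P, obs, w in zip(projections, observations, weights):
        res.append((w[:, None] * (project(P, X) - obs)).ravel())
    return np.concatenate(res)

def refine_pose(initial_joints, projections, observations, weights):
    """initial_joints: (J,3); per-camera observations (J,2) and weights (J,)."""
    sol = least_squares(residuals, initial_joints.ravel(),
                        args=(projections, observations, weights))
    return sol.x.reshape(-1, 3)
```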

Read More
LD-SLAM: A Robust and Accurate GNSS-Aided Multi-Map Method for Long-Distance Visual SLAM

Dongdong Li, Fangbing Zhang, Jiaxiao Feng, Zhijun Wang, Jinghui Fan, Ye Li, Jing Li, Tao Yang. Remote Sensing. 2023; SCI Q2, IF: 5.0


LD-SLAM: A Robust and Accurate GNSS-Aided Multi-Map Method for Long-Distance Visual SLAM

Continuous, robust, and precise localization is pivotal in enabling the autonomous operation of robots and aircraft in intricate environments, particularly in the absence of GNSS (global navigation satellite system) signals. However, commonly employed approaches, such as visual odometry and inertial navigation systems, encounter hindrances in achieving effective navigation and positioning due to error accumulation. Additionally, the challenge of managing extensive map creation and exploration arises when deploying these systems on unmanned aerial vehicle terminals. This study introduces an innovative system capable of conducting long-range and multi-map visual SLAM (simultaneous localization and mapping) using monocular cameras equipped with pinhole and fisheye lens models. We formulate a graph optimization model integrating GNSS data and graphical information through multi-sensor fusion navigation and positioning technology. We propose partitioning SLAM maps based on map health status to augment accuracy and resilience in large-scale map generation. We introduce a multi-map matching and fusion algorithm leveraging geographical positioning and visual data to address excessive discrete mapping, which leads to resource wastage and reduced map-switching efficiency. Furthermore, a multi-map-based visual SLAM online localization algorithm is presented, adeptly managing and coordinating distinct geographical maps in different temporal and spatial domains. We employ a quadcopter to establish a testing system and generate an aerial image dataset spanning several kilometers. Our experiments exhibit the framework's noteworthy robustness and accuracy in long-distance navigation. For instance, our GNSS-assisted multi-map SLAM achieves an average accuracy of 1.5 m within a 20 km range during unmanned aerial vehicle flights.
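The idea of a graph optimization model that fuses GNSS data with visual constraints can be pictured with a toy least-squares sketch like the one below; it uses positions only, and the edge/prior structures and weights are assumptions rather than the LD-SLAM implementation.

```python
# Illustrative sketch: fuse relative visual-odometry constraints with absolute
# GNSS position priors in one least-squares graph (assumed setup, positions only).
import numpy as np
from scipy.optimize import least_squares

def graph_residuals(x, odo_edges, gnss_priors, w_odo=1.0, w_gnss=0.1):
    """x: flattened (N,3) trajectory; odo_edges: list of (i, j, dp);
    gnss_priors: list of (i, p) absolute position measurements."""
    P = x.reshape(-1, 3)
    res = []
    for i, j, dp in odo_edges:            # relative translation between keyframes
        res.append(w_odo * ((P[j] - P[i]) - dp))
    for i, p in gnss_priors:              # absolute GNSS position prior
        res.append(w_gnss * (P[i] - p))
    return np.concatenate(res)

def optimize_trajectory(initial_traj, odo_edges, gnss_priors):
    sol = least_squares(graph_residuals, initial_traj.ravel(),
                        args=(odo_edges, gnss_priors))
    return sol.x.reshape(-1, 3)
```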

Read More
Calibration-Free Cross-Camera Target Association Using Interaction Spatiotemporal Consistency

Congcong Li, Jing Li, Yuguang Xie, Jiayang Nie, Tao Yang, Zhaoyang Lu. IEEE Transactions on Multimedia. 2022; SCI Q1, IF: 6.513


Calibration-Free Cross-Camera Target Association Using Interaction Spatiotemporal Consistency

In this paper, we propose a novel calibration-free cross-camera target association algorithm that aims to relate local visual data of the same object across cameras with overlapping FOVs. Unlike other methods that use an object's own characteristics, our approach makes full use of the interactions between objects and explores their spatiotemporal consistency under projection transformation to associate cameras. It has wider applicability in deployed overlapping multi-camera systems with unknown or rarely available calibration data, especially when there is a large perspective gap between cameras. Specifically, we first extract trajectory intersections, one of the typical object-object interactive behaviors, from each camera for feature vector construction. Then, based on the consistency of object-object interactions, we propose a multi-camera spatiotemporal alignment method via wide-domain cross-correlation analysis. It realizes time synchronization and spatial calibration of the multi-camera system simultaneously. After that, we introduce a cross-camera target association approach using the aligned object-object interactions. The local data of the same target are successfully associated across cameras without any additional calibration. Extensive experimental evaluations on different databases verify the effectiveness and robustness of our proposed method.
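A minimal sketch of the temporal-alignment step, assuming each camera contributes a binary per-frame series of observed trajectory-intersection events; this is a plain cross-correlation stand-in for illustration, not the wide-domain cross-correlation analysis described above.

```python
# Toy sketch: estimate the frame lag between two cameras from binary
# "interaction event" series via cross-correlation (assumed signal form).
import numpy as np

def estimate_time_offset(events_a, events_b):
    """events_a/events_b: 1D arrays, 1 at frames where a trajectory
    intersection is observed, 0 otherwise. Returns the lag (in frames)
    that maximizes the cross-correlation."""
    a = events_a - events_a.mean()
    b = events_b - events_b.mean()
    corr = np.correlate(a, b, mode="full")
    lags = np.arange(-len(b) + 1, len(a))
    return lags[np.argmax(corr)]
```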

Read More
Time Series Fusion-based Multi-camera Self-calibration For Free-view Video Generation in Low-texture Sports Scene

Feng Zhou, Jing Li, Yanran Dai, Lichen Liu, Haidong Qin, Yuqi Jiang, Shikuan Hong, Bo Zhao, Tao Yang. IEEE Sensors Journal. 2022; IF: 4.325


Time Series Fusion-based Multi-camera Self-calibration For Free-view Video Generation in Low-texture Sports Scene

Multicamera calibration is an important technique for generating free-view video. By arranging multiple cameras in a scene after camera calibration and image processing, a multidimensional viewing experience can be presented to the audience. To address the problem that low-texture scenes, which are common in sports, cannot be robustly self-calibrated when placing artificial markers or towers during calibration is impractical, this article proposes a robust multicamera calibration method based on sequence feature matching and fusion. Additionally, to validate the effectiveness of the proposed calibration algorithm, a virtual axis fast bullet-time synthesis algorithm is proposed for generating a free-view video. First, camera self-calibration is performed in low-texture situations by fusing dynamic objects in time series to enrich geometric constraints in scenes without the use of calibration panels or additional artificial markers. Second, a virtual-axis bullet-time video synthesis method based on the calibration result is proposed. In the calibrated multicamera scenario, a fast bullet-time video is generated by constructing a virtual axis. Qualitative and quantitative experiments in comparison with a state-of-the-art calibration method demonstrate the validity and robustness of the proposed calibration algorithm for free-view video synthesis tasks.
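To give a concrete flavor of calibrating from accumulated dynamic content, the hedged sketch below pools feature matches over a frame sequence from two cameras and estimates a fundamental matrix with RANSAC; it relies on standard OpenCV calls and is a simplification, not the paper's sequence feature matching and fusion pipeline.

```python
# Assumed simplification: accumulate matches on moving content over time to
# enrich constraints, then estimate the two-view epipolar geometry.
import cv2
import numpy as np

def accumulate_and_calibrate(frame_pairs):
    """frame_pairs: list of (img_a, img_b) grayscale frames from two cameras."""
    orb = cv2.ORB_create(1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    pts_a, pts_b = [], []
    for img_a, img_b in frame_pairs:
        ka, da = orb.detectAndCompute(img_a, None)
        kb, db = orb.detectAndCompute(img_b, None)
        if da is None or db is None:
            continue
        for m in matcher.match(da, db):
            pts_a.append(ka[m.queryIdx].pt)
            pts_b.append(kb[m.trainIdx].pt)
    # RANSAC fundamental matrix over all accumulated correspondences
    F, mask = cv2.findFundamentalMat(np.float32(pts_a), np.float32(pts_b),
                                     cv2.FM_RANSAC, 1.0, 0.999)
    return F
```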

Read More
Real-time Distance Field Acceleration based Free Viewpoint Video Synthesis in Large Sport Field

Yanran Dai, Jing Li, Yuqi Jiang, Haidong Qin, Bang Liang, Shikuan Hong, Haozhe Pan, Tao Yang. Computational Visual Media. 2022; SCI Q2, IF: 4.127


Real-time Distance Field Acceleration based Free Viewpoint Video Synthesis in Large Sport Field

Free-viewpoint video allows the user to view objects from any virtual perspective, creating an immersive visual experience. This technology enhances the interactivity and freedom of multimedia performances. However, many free-viewpoint video synthesis methods struggle to satisfy the requirements of real-time performance and high precision, particularly in sports competitions with large areas and numerous objects. To address these issues, we propose a free-viewpoint video synthesis method based on distance field acceleration. The central idea is to fuse multi-view distance field information and use it to adjust the search step size adaptively. Adaptive step size search is applied in two aspects: fast estimation of multi-object three-dimensional surfaces and synthetic view rendering based on global occlusion judgement. We implement parallel computing and interactive display of this method on the CUDA and OpenGL interoperability frameworks. Afterward, we build real-world and simulated experimental platforms to obtain sufficient multi-view image datasets and evaluate the performance of our method. Experimental results show that the proposed method can render free-viewpoint videos with multiple objects in large-scale sports fields at 25 fps. Furthermore, the visual quality of our synthetic novel viewpoint images outperforms that of state-of-the-art neural-rendering-based methods.
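The adaptive step-size idea can be illustrated with a short sphere-tracing-style ray march, in which the distance field bounds each safe step; this CPU/NumPy sketch is illustrative only, not the CUDA/OpenGL implementation described above.

```python
# Minimal sketch: the distance field gives a conservative bound on how far a
# ray can advance, so the step size adapts to the scene geometry.
import numpy as np

def march_ray(origin, direction, distance_field, max_steps=128, eps=1e-3):
    """distance_field(p) -> conservative distance from point p to the nearest
    surface. Returns the hit point or None if no surface is reached."""
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    p = np.asarray(origin, dtype=float)
    for _ in range(max_steps):
        d = distance_field(p)
        if d < eps:                 # close enough: treat as a surface hit
            return p
        p = p + d * direction       # adaptive step: large strides in empty space
    return None

# Example: a sphere of radius 1 centered at the origin
hit = march_ray([0, 0, -5], [0, 0, 1], lambda p: np.linalg.norm(p) - 1.0)
```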

Read More
Bullet-time Video Synthesis Based on Virtual Dynamic Target Axis

Haidong Qin, Jing Li, Yuqi Jiang, Yanran Dai, Shikuan Hong, Feng Zhou, Zhijun Wang, Tao Yang. IEEE Transactions on Multimedia. 2022; SCI Q1, IF: 6.513


Bullet-time Video Synthesis Based on Virtual Dynamic Target Axis

Bullet-time videos have been widely used in movies, TV advertisements, and computer games, and can produce an immersive and smooth orbital free-viewpoint of frozen action. However, existing bullet-time video synthesis methods remain challenging in practical applications, especially in complex situations with poor camera calibration and a variety of camera array structures. This paper proposes a novel bullet-time video synthesis method based on a virtual dynamic target axis. We adopt an image similarity transformation strategy to eliminate image distortion in the bullet-time video. We use a high-order polynomial curve fitting strategy to preserve more bullet-time video frame content. The proposed dynamic target axis strategy can support various camera array structures, including camera arrays with and without a common field of view. In addition, this strategy can also tolerate poor camera calibration situations with unevenly distributed reprojection errors to some extent and synthesize smooth bullet-time videos without high-precision camera calibration. Qualitative and quantitative experiments in real environments and on simulation platforms demonstrate the high performance of our bullet-time video synthesis method. Compared with the state-of-the-art methods, the proposed method shows its superiority.
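A minimal sketch of the virtual-axis idea under assumed inputs: a polynomial is fitted through per-camera target centers to obtain a smooth axis, and each frame is warped with a similarity (scale plus translation) transform so the target lands on that axis. Names, parameters, and data layouts are illustrative, not the paper's method.

```python
# Illustrative sketch: polynomial smoothing of per-camera target centers plus
# a per-frame similarity warp onto the fitted "virtual axis".
import numpy as np
import cv2

def virtual_axis(centers, degree=3):
    """centers: (N,2) detected target centers over the N cameras of the array.
    Returns smoothed centers lying on fitted polynomial curves."""
    t = np.arange(len(centers))
    cx = np.polyval(np.polyfit(t, centers[:, 0], degree), t)
    cy = np.polyval(np.polyfit(t, centers[:, 1], degree), t)
    return np.stack([cx, cy], axis=1)

def align_frame(frame, center, axis_point, scale=1.0):
    """Similarity warp (scale + translation) moving the detected target center
    onto the corresponding virtual-axis point."""
    M = np.float32([[scale, 0, axis_point[0] - scale * center[0]],
                    [0, scale, axis_point[1] - scale * center[1]]])
    return cv2.warpAffine(frame, M, (frame.shape[1], frame.shape[0]))
```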

Read More
Online Ground Multitarget Geolocation Based on 3D Map Construction Using a UAV Platform

Fangbing Zhang, Tao Yang, Yi Bai, Yajia Ning, Ye Li, Jinghui Fan, Dongdong Li. IEEE Transactions on Geoscience and Remote Sensing. 2022; SCI Q1, IF: 5.6


Online Ground Multitarget Geolocation Based on 3D Map Construction Using a UAV Platform

Geolocating multiple targets of interest on the ground from an aerial platform is an important activity in many applications, such as visual surveillance. However, due to the limited measurement accuracy of commonly used airborne sensors (including altimeters, accelerometers, gyroscopes, etc.) and the small size, complex motion, and large number of ground targets in aerial images, most current unmanned aerial vehicle (UAV)-based ground target geolocation algorithms have difficulty obtaining accurate geographic location coordinates online, especially at middle and high altitudes. To solve these problems, this paper proposes a novel online ground multitarget geolocation framework using a UAV platform. The framework minimizes the introduction of sensor error sources and uses only monocular aerial image sequences and Global Positioning System (GPS) data to perform target detection, rapid three-dimensional (3D) sparse geographic map construction, and target geographic location estimation in parallel, thereby improving the accuracy and speed of online ground multitarget geolocation.
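As a simplified stand-in for the geolocation step, the sketch below back-projects a detected pixel through assumed camera intrinsics and pose and intersects the ray with a flat ground plane; the real framework instead uses a 3D sparse geographic map rather than a plane.

```python
# Toy geolocation sketch under assumed inputs: pixel ray / ground-plane
# intersection in world coordinates.
import numpy as np

def geolocate_pixel(pixel, K, R, t, ground_z=0.0):
    """pixel: (u, v); K: 3x3 intrinsics; R, t: world-to-camera rotation and
    translation. Returns the 3D ground point in world coordinates."""
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    cam_center = -R.T @ t                 # camera center in the world frame
    ray_world = R.T @ ray_cam             # ray direction in the world frame
    s = (ground_z - cam_center[2]) / ray_world[2]
    return cam_center + s * ray_world
```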

Read More
Multi-camera Joint Spatial Self-organization for Intelligent Interconnection Surveillance

Congcong Li, Jing Li, Yuguang Xie, Jiayang Nie, Tao Yang, Zhaoyang Lu. Engineering Applications of Artificial Intelligence. 2021; SCI Q2, IF: 6.212


Multi-camera Joint Spatial Self-organization for Intelligent Interconnection Surveillance

The construction of smart cities makes information interconnection play an increasingly important role in intelligent surveillance systems. In particular, the interconnection among massive numbers of cameras is the key to realizing the evolution from the current fragmented monitoring to interconnected surveillance. However, it remains a challenging problem in practical systems due to the large sensor quantity, various camera types, and complex spatial layouts. To address this problem, this paper proposes a novel multi-camera joint spatial self-organization approach, which realizes interconnection surveillance by unifying cameras into one imaging space. Differing from existing back-end data association strategies, our method takes front-end data calibration as a breakthrough to relate surveillance data...

Read More
Image-only Real-time Incremental UAV Image Mosaic for Multi-strip Flight

Fangbing Zhang, Tao Yang, Linfeng Liu, Bang Liang, Yi Bai, Jing Li. IEEE Transactions on Multimedia. 2020; SCI Q1, IF: 5.452


Image-only Real-time Incremental UAV Image Mosaic for Multi-strip Flight

Limited by aircraft flight altitude and camera parameters, it is necessary to obtain wide-angle panoramas quickly by stitching aerial images, which is helpful in rapid disaster investigation, recovery after earthquakes, and aerial reconnaissance. However, most existing stitching algorithms do not simultaneously meet practical real-time, robustness, and accuracy requirements, especially in the case of a long-distance multistrip flight. In this paper, we propose a novel image-only real-time UAV image mosaic framework for long-distance multistrip flights that does not require any auxiliary information, such as GPS or GCPs...
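A much-simplified, image-only mosaicking sketch in the same spirit: pairwise homographies from ORB matches are chained frame by frame and warped onto a common canvas. This is an assumed baseline for illustration, not the real-time multistrip framework described above.

```python
# Naive incremental mosaic: chained pairwise homographies, overwrite blending.
import cv2
import numpy as np

def pairwise_homography(img_prev, img_cur, orb, matcher):
    """Homography mapping img_cur pixels into img_prev coordinates."""
    g1 = cv2.cvtColor(img_prev, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img_cur, cv2.COLOR_BGR2GRAY)
    kp1, d1 = orb.detectAndCompute(g1, None)
    kp2, d2 = orb.detectAndCompute(g2, None)
    matches = matcher.match(d2, d1)                      # query: current, train: previous
    src = np.float32([kp2[m.queryIdx].pt for m in matches])
    dst = np.float32([kp1[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def incremental_mosaic(frames, canvas_size=(4000, 4000)):
    """frames: list of BGR images from one flight; returns a naive mosaic."""
    orb = cv2.ORB_create(2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    H_total = np.eye(3)
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), np.uint8)
    for i, frame in enumerate(frames):
        if i > 0:   # chain the new pairwise homography onto the running one
            H_total = H_total @ pairwise_homography(frames[i - 1], frame, orb, matcher)
        warped = cv2.warpPerspective(frame, H_total, canvas_size)
        canvas = np.where(warped > 0, warped, canvas)    # naive overwrite blend
    return canvas
```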

Read More
UAV-Assisted Wide Area Multi-Camera Space Alignment Based on Spatiotemporal Feature Map

Jing Li, Yuguang Xie, Congcong Li, Yanran Dai, Jiaxin Ma, Zheng Dong, Tao Yang. Remote Sensing. 2021, 13(6), 1117; https://doi.org/10.3390/rs13061117; IF: 4.118


UAV-Assisted Wide Area Multi-Camera Space Alignment Based on Spatiotemporal Feature Map

In this paper, we investigate the problem of aligning multiple deployed cameras into one unified coordinate system for cross-camera information sharing and intercommunication. However, the difficulty is greatly increased when facing large-scale scenes with chaotic camera deployment. To address this problem, we propose a UAV-assisted wide area multi-camera space alignment approach based on a spatiotemporal feature map. It employs the strong global perception of Unmanned Aerial Vehicles (UAVs) to meet the challenge of wide-range environments. Concretely, we first present a novel spatiotemporal feature map construction approach to represent the input aerial and ground monitoring data...

Read More
Deep Image-to-Video Adaptation and Fusion Networks for Action Recognition

Yang Liu, Zhaoyang Lu, Jing Li, Tao Yang, Chao Yao. IEEE Transactions on Image Processing. 2020; IF: 6.790


Deep Image-to-Video Adaptation and Fusion Networks for Action Recognition

Existing deep learning methods for action recognition in videos require a large number of labeled videos for training, which is labor-intensive and time-consuming. For the same action, the knowledge learned from different media types, e.g., videos and images, may be related and complementary. However, due to the domain shifts and heterogeneous feature representations between videos and images, the performance of classifiers trained on images may be dramatically degraded when directly deployed to videos. In this paper, we propose a novel method, named Deep Image-to-Video Adaptation and Fusion Networks (DIVAFN)...

Read More
Joint Deep and Depth for Object-level Segmentation and Stereo Tracking in Crowds

Jing Li, Lisong Wei, Fangbing Zhang, Tao Yang, Zhaoyang Lu. IEEE Transactions on Multimedia. 2019; DOI: 10.1109/TMM.2019.2908350; IF: 5.452


Joint Deep and Depth for Object-level Segmentation and Stereo Tracking in Crowds

Tracking multiple people in crowds is a fundamental and essential task in the multimedia field. It is often hindered by difficulties such as dynamic occlusion between objects, cluttered backgrounds and abrupt illumination changes. To address these challenges, in this paper, we combine deep and depth to build a stereo tracking system for crowds. The core of the system is the fusion of the advantages of deep learning and depth information, which is exploited to achieve object segmentation and improve the multi-object tracking performance under severe occlusion...

Read More
An Adaptive Framework for Multi-Vehicle Ground Speed Estimation in Airborne Videos

Jing Li, Shuo Chen, Fangbing Zhang, Erkang Li, Tao Yang, Zhaoyang Lu. Remote Sensing. 2019, 11(10), 1241; https://doi.org/10.3390/rs11101241; IF: 4.118


An Adaptive Framework for Multi-Vehicle Ground Speed Estimation in Airborne Videos

With the rapid development of unmanned aerial vehicles (UAVs), UAV-based intelligent airborne surveillance systems represented by real-time ground vehicle speed estimation have attracted wide attention from researchers. However, there are still many challenges in extracting speed information from UAV videos, including the dynamic moving background, small target size, complicated environment, and diverse scenes. In this paper, we propose a novel adaptive framework for multi-vehicle ground speed estimation in airborne videos. Firstly, we build a traffic dataset based on UAV...

Read More
Visual Detail Augmented Mapping for Small Aerial Target Detection

Jing Li, Yanran Dai, Congcong Li, Junqi Shu, Dongdong Li, Tao Yang, Zhaoyang Lu. Remote Sensing. 2019, 11(1), 14; IF: 4.118


Visual Detail Augmented Mapping for Small Aerial Target Detection

Moving target detection plays a primary and pivotal role in avionics visual analysis, which aims to completely and accurately detect moving objects from complex backgrounds. However, due to the relatively small sizes of targets in aerial video, many deep networks that achieve success in normal size object detection are usually accompanied by a high rate of false alarms and missed detections. To address this problem, we propose a novel visual detail augmented mapping approach for small aerial target detection. Concretely, we first present a multi-cue foreground segmentation algorithm including motion and grayscale information to extract potential regions...
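A toy fusion of motion and grayscale cues for candidate region extraction is sketched below; the thresholds and morphology are assumptions, and the code is only loosely inspired by the multi-cue segmentation step described above.

```python
# Assumed simplification: frame-differencing motion cue AND grayscale-contrast
# cue, fused into a binary mask and grouped into candidate boxes.
import cv2
import numpy as np

def candidate_regions(prev_gray, cur_gray, motion_thresh=15, gray_thresh=30):
    motion = cv2.absdiff(cur_gray, prev_gray) > motion_thresh                 # motion cue
    contrast = cv2.absdiff(cur_gray, cv2.medianBlur(cur_gray, 21)) > gray_thresh  # grayscale cue
    mask = np.uint8(motion & contrast) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))  # remove speckle
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [stats[i, :4] for i in range(1, n)]   # (x, y, w, h) candidate boxes
```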

Read More
Data-Driven Variable Synthetic Aperture Imaging Based on Semantic Feedback

Congcong Li, Jing Li, Yanran Dai, Tao Yang, Yuguang Xie, Zhaoyang Lu. IEEE Access. 2019; DOI: 10.1109/ACCESS.2019.2953560; IF: 4.098


Data-Driven Variable Synthetic Aperture Imaging Based on Semantic Feedback

Synthetic aperture imaging, which has been proved to be an effective approach for occluded object imaging, is one of the challenging problems in the field of computational imaging. Currently, most related research focuses on a fixed synthetic aperture, which is usually accompanied by mixed observation angles and foreground defocus blur. These factors frequently reduce the perspective effect and degrade the imaging quality of occluded objects. To solve this problem, we propose a novel data-driven variable synthetic aperture imaging method based on semantic feedback...

Read More
Multiple-Object-Tracking Algorithm Based on Dense Trajectory Voting in Aerial Videos

Tao Yang, Dongdong Li, Yi Bai, Fangbing Zhang, Sen Li, Miao Wang, Zhuoyue Zhang, Jing Li. Remote Sensing. 2019, 11(19), 2278; https://doi.org/10.3390/rs11192278; IF: 4.118


Multiple-Object-Tracking Algorithm Based on Dense Trajectory Voting in Aerial Videos

In recent years, UAV technology has developed rapidly. Due to the mobility, low cost, and variable monitoring altitude of UAVs, multiple-object detection and tracking in aerial videos has become a research hotspot in the field of computer vision. However, due to camera motion, small target size, target adhesion, and unpredictable target motion, it is still difficult to detect and track targets of interest in aerial videos, especially in the case of a low frame rate where the target position changes too much. In this paper, we propose a multiple-object-tracking algorithm based on dense-trajectory voting in aerial videos...

Read More
Panoramic UAV Surveillance and Recycling System based on Structure-free Camera Array

Tao Yang, Zhi Li, Fangbing Zhang, Bolin Xie, Jing Li, Linfeng Liu. IEEE Access. 2019; DOI: 10.1109/ACCESS.2019.2900167; IF: 4.098


Panoramic UAV Surveillance and Recycling System based on Structure-free Camera Array

In recent years, unmanned aerial vehicles (UAVs) have rapidly developed, but the illegal use of UAVs by civilians has resulted in disorder and security risks and has increasingly triggered community concern and worry. Therefore, the monitoring and recycling of UAVs in key regions is of great significance. This paper presents a novel panoramic UAV surveillance and autonomous recycling system that is based on a unique structure-free fisheye camera array and has the capability of real-time UAV detection...

Read More
Hierarchically Learned View-Invariant Representations for Cross-View Action Recognition

Yang Liu, Zhaoyang Lu, Jing Li, Tao Yang. IEEE Transactions on Circuits and Systems for Video Technology. 2018; DOI: 10.1109/TCSVT.2018.2868123; IF: 5.452


Hierarchically Learned View-Invariant Representations for Cross-View Action Recognition

Recognizing human actions from varied views is challenging due to huge appearance variations in different views. The key to this problem is to learn discriminant view-invariant representations generalizing well across views. In this paper, we address this problem by learning view-invariant representations hierarchically using a novel method, referred to as Joint Sparse Representation and Distribution Adaptation (JSRDA)...

Read More
Cross-domain Co-occurring Feature for Visible-infrared Image Matching

Jing Li, Congcong Li, Tao Yang, Zhaoyang Lu. IEEE Access. 2018; DOI: 10.1109/ACCESS.2018.2820680; IF: 4.098


Cross-domain Co-occurring Feature for Visible-infrared Image Matching

As the two most commonly used imaging devices, infrared and visible sensors play vital and essential roles in the field of heterogeneous image matching. Therefore, visible-infrared image matching, which aims to search images across the two modalities, has important practical and theoretical significance. However, due to the vastly different imaging principles, accurately matching visible and infrared images remains a challenge...

Read More
Global Temporal Representation Based CNNs for Infrared Action Recognition

Yang Liu, Zhaoyang Lu, Jing Li, Tao Yang, Chao Yao. IEEE Signal Processing Letters. 2018; DOI: 10.1109/LSP.2018.2823910; IF: 2.582


Global Temporal Representation Based CNNs for Infrared Action Recognition

Infrared human action recognition has many advantages, i.e., it is insensitive to illumination change, appearance variability, and shadows. Existing methods for infrared action recognition are based on either spatial or local temporal information; however, the global temporal information, which can better describe the movements of body parts across the whole video...

Read More
Hybrid Camera Array-Based UAV Auto-Landing on Moving UGV in GPS-Denied Environment

Tao Yang, Qiang Ren, Fangbing Zhang, Bolin Xie, Hailei Ren, Jing Li, Yanning Zhang. Remote Sensing. 2018, 10(11), 1829; https://doi.org/10.3390/rs10111829; IF: 4.118


Hybrid Camera Array-Based UAV Auto-Landing on Moving UGV in GPS-Denied Environment

With the rapid development of Unmanned Aerial Vehicle (UAV) systems, the autonomous landing of a UAV on a moving Unmanned Ground Vehicle (UGV) has received extensive attention as a key technology. At present, this technology is confronted with such problems as operating in GPS-denied environments, a low accuracy of target location, the poor precision of the relative motion estimation, delayed control responses, slow processing speeds, and poor stability. To address these issues, we present a hybrid camera array-based autonomous landing system that enables a UAV to land on a moving UGV in a GPS-denied environment...

Read More
Monocular Vision SLAM-Based UAV Autonomous Landing in Emergencies and Unknown Environments

Tao Yang, Peiqi Li, Huiming Zhang, Jing Li, Zhi Li. Electronics. 2018; DOI: 10.3390/electronics7050073; IF: 2.110


Monocular Vision SLAM-Based UAV Autonomous Landing in Emergencies and Unknown Environments

With the popularization and wide application of drones in military and civilian fields, the safety of drones must be considered. At present, the failure and drop rates of drones are still much higher than those of manned aircraft. Therefore, it is imperative to improve the research on the safe landing and recovery of drones. However, most drone navigation methods rely on global positioning system (GPS) signals. . .

Read More
Real-Time Ground Vehicle Detection in Aerial Infrared Imagery Based on Convolutional Neural Network

Xiaofei Liu, Tao Yang, Jing Li. Electronics. 2018, 7(6), 78; https://doi.org/10.3390/electronics7060078; IF: 2.110


Real-Time Ground Vehicle Detection in Aerial Infrared Imagery Based on Convolutional Neural Network

An infrared sensor is a commonly used imaging device, and unmanned aerial vehicles are among the most promising moving platforms; each plays a vital role in its own field. However, the two are seldom combined in automatic ground vehicle detection tasks. Therefore, how to make full use of them, especially for ground vehicle detection based on aerial imagery, has aroused wide academic concern...

Read More
Nighttime Foreground Pedestrian Detection Based on Three-Dimensional Voxel Surface Model

Jing Li, Fangbing Zhang, Lisong Wei, Tao Yang, Zhaoyang Lu. Sensors. 2017; DOI: 10.3390/s17102354; IF: 2.677


Nighttime Foreground Pedestrian Detection Based on Three-Dimensional Voxel Surface Model

Pedestrian detection is among the most frequently-used preprocessing tasks in many surveillance application fields, from low-level people counting to high-level scene understanding. Even though many approaches perform well in the daytime with sufficient illumination, pedestrian detection at night is still a critical and challenging problem for video surveillance systems...

Read More
A Novel Visual Vocabulary Translator based Cross-domain Image Matching

Jing Li, Congcong Li, Tao Yang, Zhaoyang Lu. IEEE Access. October 2017; DOI: 10.1109/ACCESS.2017.2759799; IF: 3.244


A Novel Visual Vocabulary Translator based Cross-domain Image Matching

Cross-domain image matching, which investigates the problem of searching images across different visual domains such as photo, sketch or painting, has attracted intensive attention in computer vision due to its widespread application. Unlike intra-domain matching, cross-domain images appear quite different in various characteristics.

Read More
Random sampling and model competition for guaranteed multiple consensus sets estimation

Jing Li, Tao Yang, Jingyi Yu. International Journal of Advanced Robotic Systems. 2017; DOI: 10.1177/1729881416685673; IF: 0.987


Random sampling and model competition for guaranteed multiple consensus sets estimation

Robust extraction of consensus sets from noisy data is a fundamental problem in robot vision. Existing multimodel estimation algorithms have shown success on large consensus set estimation. One remaining challenge is to extract small consensus sets in a cluttered multimodel data set. In this article, we present an effective multimodel extraction method to solve this challenge.
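For context, a generic sequential-RANSAC sketch on 2D line fitting shows the basic "sample, score, remove inliers, repeat" loop for extracting multiple consensus sets; it is a stand-in for illustration, not the model-competition scheme proposed in the article.

```python
# Generic multi-model extraction sketch: repeatedly fit the best line by random
# sampling, keep its consensus set, then remove those inliers and continue.
import numpy as np

def fit_line(p1, p2):
    """Return (a, b, c) with a*x + b*y + c = 0, normalized, or None if degenerate."""
    a, b = p2[1] - p1[1], p1[0] - p2[0]
    n = np.hypot(a, b)
    if n < 1e-12:
        return None
    c = -(a * p1[0] + b * p1[1])
    return a / n, b / n, c / n

def extract_consensus_sets(points, thresh=0.05, min_inliers=20, iters=500):
    remaining = np.asarray(points, dtype=float)
    models = []
    while len(remaining) >= min_inliers:
        best = None
        for _ in range(iters):
            i, j = np.random.choice(len(remaining), 2, replace=False)
            line = fit_line(remaining[i], remaining[j])
            if line is None:
                continue
            a, b, c = line
            d = np.abs(a * remaining[:, 0] + b * remaining[:, 1] + c)
            inliers = d < thresh
            if best is None or inliers.sum() > best[1].sum():
                best = (line, inliers)
        if best is None or best[1].sum() < min_inliers:
            break
        models.append((best[0], remaining[best[1]]))
        remaining = remaining[~best[1]]
    return models
```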

Read More
An improved Fisher discriminant vector employing updated between-scatter matrix

Chao Yao, Zhaoyang Lu, Jing Li, Wei Jiang, Jungong Han. Neurocomputing. 2016, Volume 173, Part 2, Pages 154-162; DOI: 10.1016/j.neucom.2014.11.102; IF: 4.072


An improved Fisher discriminant vector employing updated between-scatter matrix

Discriminant analysis is an important and well-studied algorithm in pattern recognition area, and many linear discriminant analysis methods have been proposed over the last few decades. However, in the previous works, the between-scatter matrix is not updated when seeking the discriminant vectors, which causes redundancy for the well separated pairs.

Read More
Kinect based real-time synthetic aperture imaging through occlusion

Tao Yang, Wenguang Ma, Sibing Wang, Jing Li, Jingyi Yu, Yanning Zhang. Multimedia Tools and Applications. 2016, 75: 6925-6943; DOI: 10.1007/s11042-015-2618-1


Kinect based real-time synthetic aperture imaging through occlusion

Real-time and high-performance occluded object imaging is a big challenge for many computer vision applications. In recent years, camera array synthetic aperture theory has proved to be a potentially powerful way to solve this problem.

Read More
A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment

Tao Yang, Guangpo Li, Jing Li, Yanning Zhang, Xiaoqiang Zhang, Zhuoyue Zhang, Zhi Li. Sensors. 2016, 16(9): 1393; IF: 2.677


A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment

This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during the landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module;

Read More
Small Moving Vehicle Detection in a Satellite Video of an Urban Area

Tao Yang, Xiwen Wang, Bowei Yao, Jing Li, Yanning Zhang, Zhannan He, Wencheng Duan. Sensors. 2016, 16(9): 1528; IF: 2.677


Small Moving Vehicle Detection in a Satellite Video of an Urban Area

Vehicle surveillance of a wide area allows us to learn much about daily activities and traffic information. With the rapid development of remote sensing, satellite video has become an important data source for vehicle detection, as it provides a broader field of surveillance. Existing work generally focuses on aerial video with moderately sized objects based on feature extraction.

Read More
Diverse Scene Stitching from a Large-Scale Aerial Video Dataset

Tao Yang, Jing Li, Jingyi Yu, Sibing Wang, Yanning Zhang. Remote Sensing. 2015, 7(6), 6932-6949; DOI: 10.3390/rs70606932; IF: 3.244


Diverse Scene Stitching from a Large-Scale Aerial Video Dataset

Diverse scene stitching is a challenging task in aerial video surveillance. This paper presents a hybrid stitching method based on the observation that aerial videos captured in real surveillance settings are neither totally ordered nor completely unordered.

Read More
Multiple-Layer Visibility Propagation-Based Synthetic Aperture Imaging through Occlusion

Tao Yang, Jing Li, Jingyi Yu, Yanning Zhang, Wenguang Ma, Xiaomin Tong, Rui Yu, Lingyan Ran. Sensors. 2015, 15(8), 18965-18984; DOI: 10.3390/s150818965; IF: 2.677


Multiple-Layer Visibility Propagation-Based Synthetic Aperture Imaging through Occlusion

Heavy occlusions in cluttered scenes impose significant challenges to many computer vision applications. Recent light field imaging systems provide new see-through capabilities through synthetic aperture imaging (SAI) to overcome the occlusion problem.

Read More
A subset method for improving Linear Discriminant Analysis

Chao Yao, Zhaoyang Lu, Jing Li, Yamei Xu, Jungong Han. Neurocomputing. 2014, Volume 138, Pages 310-315; DOI: 10.1016/j.neucom.2014.02.004; IF: 4.072


A subset method for improving Linear Discriminant Analysis

Linear Discriminant Analysis (LDA) is one of the most popular methods for dimension reduction. However, it suffers from the class separation problem for C classes when the reduced dimensionality is less than C−1. To cope with this problem, we propose a subset improving method in this paper.

Read More
Fast Aerial Video Stitching

Jing Li, Tao Yang*, Jingyi Yu, Zhaoyang Lu, Ping Lu, Xia Jia, Wenjie Chen. International Journal of Advanced Robotic Systems. 2014, 11:167; DOI: 10.5772/59029; IF: 0.987


Fast Aerial Video Stitching

The highly efficient and robust stitching of aerial video captured by unmanned aerial vehicles (UAVs) is a challenging problem in the field of robot vision. Existing commercial image stitching systems have seen success with offline stitching tasks, but they cannot guarantee high-speed performance when dealing with online aerial video sequences.

Read More
People tracking through occlusion

Tao Yang, Jing Li, Quan Pan, Yanning Zhang. Acta Automatica Sinica. 2010, 36(3): 375-384


People tracking through occlusion

This paper presents a novel real-time multiple object tracking algorithm, which contains three parts: region correlation based foreground segmentation, merging-splitting based data association and greedy searching based occluded object localization.

Read More
A multiple layer background model for foreground detection

Tao Yang, Jing Li, Quan Pan, Yongmei Cheng. Chinese Journal of Image and Graphics. 2008, 13(7): 1303-1308


A multiple layer background model for foreground detection

Foreground detection is an important research problem in visual surveillance. In this paper, we present a novel multiple layer background model to detect and classify foreground into three classes: moving object, static object, and ghost. The background is divided into two layers, a reference background and a dynamic background.
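One possible reading of the two-layer labeling, written as a toy sketch with assumed thresholds: each pixel is compared against a slowly updated reference background and a faster dynamic background, and labeled as moving object, static object, or ghost accordingly. This is an interpretation for illustration, not the paper's model.

```python
# Toy two-layer classification sketch (assumed thresholds and update policy).
import numpy as np

def classify_pixels(frame, reference_bg, dynamic_bg, thresh=25):
    """All inputs are float grayscale arrays of the same shape.
    Returns an integer label map: 0 background, 1 moving, 2 static, 3 ghost."""
    diff_ref = np.abs(frame - reference_bg) > thresh
    diff_dyn = np.abs(frame - dynamic_bg) > thresh
    labels = np.zeros(frame.shape, dtype=np.uint8)
    labels[diff_ref & diff_dyn] = 1    # differs from both layers: moving object
    labels[diff_ref & ~diff_dyn] = 2   # absorbed into dynamic layer: static object
    labels[~diff_ref & diff_dyn] = 3   # dynamic layer stale: ghost
    return labels
```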

Read More
All-In-Focus Synthetic Aperture Imaging

Tao Yang, Yanning Zhang, Jingyi Yu, Jing Li, Wenguang Ma, Xiaomin Tong, Rui Yu, Lingyan Ran. ECCV (6) 2014: 1-15


All-In-Focus Synthetic Aperture Imaging

Heavy occlusions in cluttered scenes impose significant challenges to many computer vision applications. Recent light field imaging systems provide new see-through capabilities through synthetic aperture imaging (SAI) to overcome the occlusion problem.


Read More
Real-time multiple object tracking with occlusion handling in dynamic scenes

Tao Yang, Stan Z. Li, Quan Pan, Jing Li. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, USA, 2005, 970-975 (Cited by Google Scholar: 200+)


Real-time multiple object tracking with occlusion handling in dynamic scenes

This work presents a real-time system for multiple object tracking in dynamic scenes. A unique characteristic of the system is its ability to cope with long-duration and complete occlusion without prior knowledge about the shape or motion of objects. The system produces good segmentation and tracking results at a frame rate of 15-20 fps for an image size of 320x240...


Read More
DOTS: Support for effective video surveillance

Andreas Girgensohn, Don Kimber, Jim Vaughan, Tao Yang, Frank Shipman, Thea Turner, Eleanor Rieffel, Lynn Wilcox, Francine Chen, Tony Dunnigan. ACM Multimedia 2007 (ACM MM, Full Paper), Augsburg, Germany, September 2007, 423-432


DOTS: Support for effective video surveillance

DOTS (Dynamic Object Tracking System) is an indoor, real-time, multi-camera surveillance system, deployed in a real office setting. DOTS combines video analysis and user interface components to enable security personnel to effectively monitor views of interest and to perform tasks such as tracking a person.


Read More
Active learning based pedestrian detection in real scenes

Tao Yang, Jing Li, Quan Pan, Chunhui Zhao, Yiqiang Zhu. 18th International Conference on Pattern Recognition (ICPR), Hong Kong, China, 2006, 904-907


Active learning based pedestrian detection in real scenes

This work presents an active learning based method for pedestrian detection in complicated real-world scenes. By analyzing the distribution of all positive and negative samples under every possible feature, a highly efficient weak classifier selection method is presented. Moreover, a novel boosting architecture is given to achieve a satisfactory False Positive Rate (FPR) and False Negative Rate (FNR) with few weak classifiers.


Read More
Multiple Pedestrian Tracking Based on Multi-layer Graph with Tracklet Segmentation and Merging

Wencheng Duan, Tao Yang, Jing Li, Yanning Zhang. CCBR. 2016: 728-735


Multiple Pedestrian Tracking Based on Multi-layer Graph with Tracklet Segmentation and Merging

Multiple pedestrian tracking is regarded as a challenging task due to difficulties such as occlusion, abrupt motion, and changes in appearance. In this paper, we propose a multi-layer graph based data association framework to address the occlusion problem. Our framework is hierarchical with three association layers, and each layer has its corresponding association method.

Read More