CN110211156B - Time-space information combined online learning method - Google Patents

Time-space information combined online learning method

Info

Publication number
CN110211156B
CN110211156B
Authority
CN
China
Prior art keywords
network
target tracking
pedestrian
target
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910480901.8A
Other languages
Chinese (zh)
Other versions
CN110211156A (en)
Inventor
赵佳琦
马丁
周勇
夏士雄
姚睿
杜文亮
陈莹
朱东郡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xuzhou Guanglian Technology Co ltd
Original Assignee
China University of Mining and Technology CUMT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology CUMT filed Critical China University of Mining and Technology CUMT
Priority to CN201910480901.8A priority Critical patent/CN110211156B/en
Publication of CN110211156A publication Critical patent/CN110211156A/en
Application granted granted Critical
Publication of CN110211156B publication Critical patent/CN110211156B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a time-space information combined online learning method, which improves efficiency by combining a target tracking algorithm with a pedestrian search algorithm and interactively training the tracking network and the pedestrian search network. The method comprises the following steps: (1) inputting video stream data; (2) running the networks to expand samples; (3) the pedestrian search network and the target tracking network simultaneously take actions according to their network states. By combining the pedestrian search network and the target tracking network, the invention achieves strong robustness and high operation speed.

Description

Time-space information combined online learning method
Technical Field
The invention relates to a time-space information combined online learning method, which is an image processing technology related to pedestrian search and target tracking.
Background
At present, monitoring cameras are installed in crowded public places, government departments, enterprises and institutions, residential districts and even many residents' homes, providing reliable video monitoring resources for maintaining social security and protecting people's lives and property. In video monitoring, parameters such as camera resolution and shooting angle vary greatly, so it is difficult to stably acquire high-quality face images, and target tracking based on face recognition technology is therefore unstable. In contrast, pedestrian search (Person Search) technology can provide a more robust target tracking solution for video surveillance.
Traditional pedestrian search technology is divided into two parts: target detection and pedestrian re-identification. Target detection aims to find an object of interest in a picture and locate it accurately; because targets are shot at different angles and distances, their shape, posture and relative size change, and imaging is further disturbed by factors such as illumination and occlusion, so target detection has always been one of the most challenging problems in computer vision. Pedestrian re-identification is a computer vision technique that determines whether a particular pedestrian exists in an image or video library, confirming the pedestrian's identity on the basis of target detection. At present, feature learning, metric learning and generative adversarial network models are widely applied in pedestrian re-identification. Pedestrian search mainly exploits the spatial structure information of images and makes little use of the inter-frame information of a video sequence. Video target tracking, in contrast, mainly uses inter-frame information to locate the object of interest efficiently; however, target deformation, sudden motion, environmental change and other factors greatly affect tracking performance.
Scholars at home and abroad have carried out systematic and in-depth research on target detection, pedestrian re-identification and target tracking, and many methods have been proposed. However, in practical application scenarios, cameras are often placed high up to increase the effective monitoring range, so pedestrian targets occupy only a small part of the monitoring picture and are easily occluded by objects such as trees and buildings. In areas with heavy, dense pedestrian traffic, multiple pedestrian targets easily overlap and occlude one another. Under the influence of low picture definition, varying illumination and shooting angles, and similar clothing, pedestrians with different identities may have similar features. Target tracking mainly processes single-camera data, while pedestrian search can process video data from multiple cameras and is therefore better suited to practical application scenarios. The target tracking method can provide technical assistance and methodological reference for pedestrian search.
Disclosure of Invention
The purpose of the invention is as follows: to overcome the defects of the prior art, the invention provides a time-space information combined online learning method, which exploits the temporal-spatial characteristics of target tracking and the re-identification capability of pedestrian search, updates the pedestrian search network and the target tracking network by reinforcement learning, runs quickly, and can improve the accuracy and timeliness of both pedestrian search and target tracking.
The technical scheme is as follows: in order to achieve the purpose, the invention adopts the technical scheme that:
a time-space information combined online learning method comprises the following steps:
(1) inputting video stream data;
(2) simultaneously operating a pedestrian search network and a target tracking network, comprising the steps of:
(21) tracking a tracking target in the video stream by using a target tracking network, and simultaneously setting the tracking target of the target tracking network as a search target of a pedestrian search network;
(22) sampling the search target every n frames, and storing the sampling results in time order in the extended sample set C1 = {c1_t, c1_t-1, c1_t-2, …}; sampling the tracking target every n frames, and storing the sampling results in time order in the extended sample set C2 = {c2_t, c2_t-1, c2_t-2, …};
(3) And (3) reinforcement learning strategy:
(31) reinforcement learning for the target tracking network is divided into the following two cases:
Case one: if the target tracking accuracy of the target tracking network is lower than 90%, the extended sample set C1 is expanded into the target tracking network sample set according to the reinforcement learning strategy, and the current target tracking network is optimized;
Case two: if the target tracking accuracy of the target tracking network is higher than or equal to 90%, the current target tracking network is maintained;
(32) reinforcement learning for the pedestrian search network is divided into the following two cases:
Case one: if the search accuracy of the pedestrian search network is lower than 90%, the extended sample set C2 is expanded into the pedestrian search network sample set according to the reinforcement learning strategy, and the current pedestrian search network is optimized;
Case two: if the search accuracy of the pedestrian search network is higher than or equal to 90%, the current pedestrian search network is maintained.
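The per-frame sampling and accuracy-triggered expansion of steps (21)-(32) can be sketched in Python as follows. This is a minimal illustration under assumptions: the class `OnlineNet`, the function `joint_step`, and the externally supplied accuracy values are hypothetical names, not part of the patent; note that in the patent C1, sampled from the search target, feeds the tracking network's set U1, and C2, sampled from the tracking target, feeds the search network's set U2.

```python
from collections import deque

N = 30            # sampling interval n (claim 3 suggests n >= 30)
THRESHOLD = 0.90  # accuracy threshold from steps (31)/(32)

class OnlineNet:
    """Bookkeeping for one network (tracking or pedestrian search)."""
    def __init__(self):
        self.extended = deque()  # extended sample set C_j, newest first
        self.samples = []        # training sample set U_j

def joint_step(frame_idx, search_crop, track_crop, searcher, tracker,
               search_acc, track_acc):
    """One pass over a frame, following steps (21)-(32)."""
    if frame_idx % N == 0:                         # step (22): sample every n frames
        tracker.extended.appendleft(search_crop)   # C1: samples of the search target
        searcher.extended.appendleft(track_crop)   # C2: samples of the tracking target
    if track_acc < THRESHOLD:                      # step (31): expand C1 into U1
        tracker.samples.extend(tracker.extended)
    if search_acc < THRESHOLD:                     # step (32): expand C2 into U2
        searcher.samples.extend(searcher.extended)
```

In the full method the expansion is governed by the reward-driven strategy of step (3) rather than a single unconditional extend; this sketch only shows the data flow between the two networks.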
In this scheme, whether the pedestrian search network sample set needs to be adjusted is judged from the search accuracy of the pedestrian search network, and the pedestrian search network is then optimized with the reinforcement learning strategy; likewise, whether the target tracking network sample set needs to be adjusted is judged from the target tracking accuracy of the target tracking network, and the target tracking network is then optimized with the reinforcement learning strategy.
Pedestrian search is a computer vision technique that, given the identity of a pedestrian, finds the pedestrian's motion trajectory in a video stream captured by one or more cameras. In an actual video monitoring environment, however, a single camera can hardly cover the whole target monitoring area because of limits on its position and field of view; even multiple cameras can hardly achieve seamless, overlapping, all-round coverage, so existing pedestrian search technology still struggles to meet the real-time target matching requirements of large-scale intelligent monitoring systems. Pedestrian search is also susceptible to factors such as differing resolutions, similarly dressed pedestrians, illumination change, viewing-angle change and occlusion by foreign objects. Although a large number of monitoring cameras have accumulated massive video data, the collected data corresponding to each individual pedestrian is very scarce. The pedestrian search task is therefore a typical big-data, small-sample problem; the key to solving it is to mine the important information hidden in the massive video data while learning the distinguishing features of pedestrians from a small amount of pedestrian data.
The target tracking network, given the size and position of a target in the initial frame of a video sequence, predicts the size and position of that target in subsequent frames. In general, target tracking faces several difficulties: appearance deformation, illumination change, fast motion and motion blur, background similarity interference, and so on, all of which can cause the tracked target to be lost. Aiming at the problem of target loss, the invention introduces a pedestrian search network, which by its nature finds the pedestrian's motion trajectory in a video stream captured by one or more cameras from the pedestrian's identity.
Specifically, in the step (3), the reinforcement learning strategy specifically includes:
(41) initialization: the target tracking network sample set is U1 with action set A1 = {a1_0, a1_1, a1_2, a1_3, …}; the pedestrian search network sample set is U2 with action set A2 = {a2_0, a2_1, a2_2, a2_3, …}. Action aj_i means expanding the latest i·m sample frames of the extended sample set Cj into Uj, i.e. Uj = {Uj, cj_t, cj_t-1, cj_t-2, …, cj_t-i·m+1}; action aj_i has a reward value rj_ai, initialized to rj_ai = 0, where m is a positive integer and j = 1, 2; go to step (42);
(42) track the target tracking accuracy g1 of the target tracking network and the search accuracy g2 of the pedestrian search network respectively: if gj is lower than 90%, go to step (43); otherwise repeat step (42), i.e. keep tracking gj until gj falls below 90%;
(43) execute the action aj_i corresponding to max{rj_ai}, i.e. the action with the highest reward value, and continue tracking gj: if gj is lower than 90%, go to step (44); otherwise, go to step (45);
(44) set i = i + 1, execute action aj_i, and continue tracking gj: if gj is lower than 90%, repeat step (44); otherwise, go to step (45);
(45) reaching this step indicates that executing action aj_i raised gj to the set threshold of 90%; action aj_i is therefore rewarded: rj_ai = rj_ai + 1; return to step (42).
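The flow of steps (41)-(45) can be sketched as follows. This is a minimal Python illustration under assumptions: the callback `accuracy_after` (returning the accuracy once a given number of frames has been merged) and the flat-list representation of Cj are hypothetical, not part of the patent.

```python
M = 30            # m: frames added per action increment (claim 4 suggests m >= 30)
THRESHOLD = 0.90  # the 90% accuracy threshold of steps (42)-(45)

def recover_accuracy(rewards, extended, sample_set, accuracy_after):
    """Steps (43)-(45) for one network: pick the highest-reward action,
    escalate i until accuracy recovers, then reward the action that worked.
    `rewards` maps action index i (i >= 1) to its reward value r_{j,a_i};
    `extended` is C_j newest-first; `sample_set` is U_j."""
    best = max(rewards.values())
    # Step (43): among equal-reward actions, prefer the smallest i (claim 2).
    i = min(k for k, r in rewards.items() if r == best)
    while True:
        take = extended[: i * M]                # latest i*m sample frames of C_j
        if accuracy_after(len(take)) >= THRESHOLD:
            sample_set.extend(take)             # U_j = {U_j, c_t, ..., c_{t-i*m+1}}
            rewards[i] = rewards.get(i, 0) + 1  # step (45): reward action a_{j,i}
            return i
        i += 1                                  # step (44): escalate to a_{j,i+1}
```

After a successful recovery, the rewarded action is selected first the next time accuracy drops, which is how the strategy learns the expansion size that suits the current scene.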
The reinforcement learning strategies aiming at the target tracking network and the pedestrian searching network are the same, but the target tracking network and the pedestrian searching network are mutually non-interfering and independent. According to the whole process of the reinforcement learning strategy, the reinforcement learning strategy is a method for optimizing a pedestrian search network or a target tracking network by selecting the action with the highest reward value, so that the pedestrian search network and the target tracking network can maintain a higher accuracy.
Specifically, in step (43), if two or more actions aj_i correspond to max{rj_ai}, the action with the smallest i is executed, where i ≠ 0. This restriction means that when several actions have the same reward value, the action adding fewer sample frames is preferred, so as to keep the system running as fast as possible.
Preferably, in step (22), n ≥ 30. Because the features of adjacent frames are similar, a sampling interval that is too small cannot ensure the diversity of sample features, while an interval that is too large risks losing the target.
Preferably, in step (41), m ≥ 30. When the accuracy drops, the target features of the current frame are not obvious, so at least 30 frames of samples before the current frame are expanded into the sample set; since features within these 30 frames are highly similar, they supplement the features of the low-accuracy frame well and improve the search accuracy.
The method exploits reinforcement learning's ability to map environmental states to actions, and collects data on the pedestrian search target through the target tracking algorithm. During pedestrian search, the pedestrian search network has two actions: fine-tuning the model, and keeping the model unchanged. It also has two states: the model performing normally, and the model under-performing. The two actions adjust the state of the pedestrian search network, moving it from the under-performing state to the normal state. Based on feedback from the pedestrian search network, the method judges whether samples gathered by the target tracking network need to be collected, then screens the collected hard-sample set with reinforcement learning so as to fine-tune the model. The target tracking network is updated in the same way.
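The state-to-action mapping described above can be illustrated with a small sketch; the function name, the string labels, and the dictionary interface are assumptions for illustration only, and the hard samples each network fine-tunes on come from its partner network, as the paragraph describes.

```python
def interactive_update(nets, accuracies, threshold=0.90):
    """One feedback round: each network's state (normal vs. under-performing)
    selects one of the two actions (maintain vs. fine-tune), and a fine-tune
    action collects hard samples gathered by the partner network.
    `nets` maps a network name to its extended sample list (illustrative);
    `accuracies` maps a network name to its current accuracy."""
    partner = {"search": "tracking", "tracking": "search"}
    actions = {}
    for name, acc in accuracies.items():
        if acc < threshold:
            # Under-performing state -> fine-tune on the partner's hard samples.
            actions[name] = ("fine_tune", list(nets[partner[name]]))
        else:
            # Normal state -> keep the model unchanged.
            actions[name] = ("maintain", [])
    return actions
```

In the full method the collected hard samples would then be screened by the reward-driven strategy of step (4) before fine-tuning; this sketch only shows the two-state, two-action structure.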
Has the advantages that: compared with the prior art, the time-space information combined online learning method provided by the invention has the following advantages:
(1) For large-scale video data, target tracking is used to automatically generate a labeled data set for the pedestrian search method, and reinforcement learning is used to screen the data, improving the performance of pedestrian search; conversely, pedestrian search is used to automatically generate a labeled data set for the target tracking method, and because of the nature of pedestrian search, lost targets can be quickly re-found and recorded as hard samples, which are again screened with reinforcement learning, improving the performance of target tracking.
(2) The invention combines the pedestrian search network with the target tracking network; because of the nature of the pedestrian search network, a target lost during tracking can be re-found, which improves tracking efficiency, so that the target tracking network and the pedestrian search network promote each other and form a virtuous circle.
Drawings
FIG. 1 is a block diagram of an implementation of the present invention;
fig. 2 is a logic block diagram for implementing the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
Fig. 1 is a block diagram illustrating an implementation flow of a time-space information combined online learning method, which is described in detail below with reference to the accompanying drawings.
Step one, inputting video stream data
As noted above, the monitoring cameras installed in crowded public places, government departments, enterprises and institutions, residential districts and even many residents' homes provide reliable video monitoring resources for maintaining social security and protecting people's lives and property. We therefore have a large amount of raw video data, which is input as the test sample set.
Step two, simultaneously operating a pedestrian searching network and a target tracking network
Pedestrian search technology makes little use of the information between video frames, while the target tracking method can capture inter-frame information effectively but understands image-space information poorly. Therefore, reinforcement learning is adopted to combine pedestrian search and target tracking, which allows effective temporal-spatial information mining on large-scale video data.
(21) Tracking a tracking target in the video stream by using a target tracking network, and simultaneously setting the tracking target of the target tracking network as a search target of a pedestrian search network;
(22) sampling the search target every n frames, and storing the sampling results in time order in the extended sample set C1 = {c1_t, c1_t-1, c1_t-2, …}; sampling the tracking target every n frames, and storing the sampling results in time order in the extended sample set C2 = {c2_t, c2_t-1, c2_t-2, …};
Step three, application of reinforcement learning strategy
(31) Reinforcement learning for the target tracking network is divided into the following two cases:
Case one: if the target tracking accuracy of the target tracking network is lower than 90%, the extended sample set C1 is expanded into the target tracking network sample set according to the reinforcement learning strategy, and the current target tracking network is optimized;
Case two: if the target tracking accuracy of the target tracking network is higher than or equal to 90%, the current target tracking network is maintained;
(32) reinforcement learning for the pedestrian search network is divided into the following two cases:
Case one: if the search accuracy of the pedestrian search network is lower than 90%, the extended sample set C2 is expanded into the pedestrian search network sample set according to the reinforcement learning strategy, and the current pedestrian search network is optimized;
Case two: if the search accuracy of the pedestrian search network is higher than or equal to 90%, the current pedestrian search network is maintained.
Step four, concrete flow of reinforcement learning strategy
(41) Initialization: the target tracking network sample set is U1 with action set A1 = {a1_0, a1_1, a1_2, a1_3, …}; the pedestrian search network sample set is U2 with action set A2 = {a2_0, a2_1, a2_2, a2_3, …}. Action aj_i means expanding the latest i·m sample frames of the extended sample set Cj into Uj, i.e. Uj = {Uj, cj_t, cj_t-1, cj_t-2, …, cj_t-i·m+1}; action aj_i has a reward value rj_ai, initialized to rj_ai = 0, where m is a positive integer and j = 1, 2; go to step (42);
(42) track the target tracking accuracy g1 of the target tracking network and the search accuracy g2 of the pedestrian search network respectively: if gj is lower than 90%, go to step (43); otherwise repeat step (42), i.e. keep tracking gj until gj falls below 90%;
(43) execute the action aj_i corresponding to max{rj_ai}, i.e. the action with the highest reward value, and continue tracking gj: if gj is lower than 90%, go to step (44); otherwise, go to step (45);
(44) set i = i + 1, execute action aj_i, and continue tracking gj: if gj is lower than 90%, repeat step (44); otherwise, go to step (45);
(45) reaching this step indicates that executing action aj_i raised gj to the set threshold of 90%; action aj_i is therefore rewarded: rj_ai = rj_ai + 1; return to step (42).
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.

Claims (4)

1. A time-space information combined online learning method is characterized in that: the method comprises the following steps:
(1) inputting video stream data;
(2) simultaneously operating a pedestrian search network and a target tracking network, comprising the steps of:
(21) tracking a tracking target in the video stream by using a target tracking network, and simultaneously setting the tracking target of the target tracking network as a search target of a pedestrian search network;
(22) sampling the search target every n frames, and storing the sampling results in time order in the extended sample set C1 = {c1_t, c1_t-1, c1_t-2, …}; sampling the tracking target every n frames, and storing the sampling results in time order in the extended sample set C2 = {c2_t, c2_t-1, c2_t-2, …};
(3) And (3) reinforcement learning strategy:
(31) reinforcement learning for the target tracking network is divided into the following two cases:
Case one: if the target tracking accuracy of the target tracking network is lower than 90%, the extended sample set C1 is expanded into the target tracking network sample set according to the reinforcement learning strategy, and the current target tracking network is optimized;
Case two: if the target tracking accuracy of the target tracking network is higher than or equal to 90%, the current target tracking network is maintained;
(32) reinforcement learning for the pedestrian search network is divided into the following two cases:
Case one: if the search accuracy of the pedestrian search network is lower than 90%, the extended sample set C2 is expanded into the pedestrian search network sample set according to the reinforcement learning strategy, and the current pedestrian search network is optimized;
Case two: if the search accuracy of the pedestrian search network is higher than or equal to 90%, the current pedestrian search network is maintained;
the reinforcement learning strategy is specifically as follows:
(41) initialization: the target tracking network sample set is U1 with action set A1 = {a1_0, a1_1, a1_2, a1_3, …}; the pedestrian search network sample set is U2 with action set A2 = {a2_0, a2_1, a2_2, a2_3, …}. Action aj_i means expanding the latest i·m sample frames of the extended sample set Cj into Uj, i.e. Uj = {Uj, cj_t, cj_t-1, cj_t-2, …, cj_t-i·m+1}; action aj_i has a reward value rj_ai, initialized to rj_ai = 0, where m is a positive integer and j = 1, 2; go to step (42);
(42) track the target tracking accuracy g1 of the target tracking network and the search accuracy g2 of the pedestrian search network respectively: if gj is lower than 90%, go to step (43); otherwise repeat step (42), i.e. keep tracking gj until gj falls below 90%;
(43) execute the action aj_i corresponding to max{rj_ai}, i.e. the action with the highest reward value, and continue tracking gj: if gj is lower than 90%, go to step (44); otherwise, go to step (45);
(44) set i = i + 1, execute action aj_i, and continue tracking gj: if gj is lower than 90%, repeat step (44); otherwise, go to step (45);
(45) reaching this step indicates that executing action aj_i raised gj to the set threshold of 90%; action aj_i is therefore rewarded: rj_ai = rj_ai + 1; return to step (42).
2. The time-space information combined online learning method according to claim 1, characterized in that: in step (43), if two or more actions aj_i correspond to max{rj_ai}, the action with the smallest i is executed, and i ≠ 0.
3. The time-space information combined online learning method according to claim 1, characterized in that: in the step (22), n is more than or equal to 30.
4. The time-space information combined online learning method according to claim 1, characterized in that: in the step (41), m is more than or equal to 30.
CN201910480901.8A 2019-06-04 2019-06-04 Time-space information combined online learning method Active CN110211156B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910480901.8A CN110211156B (en) 2019-06-04 2019-06-04 Time-space information combined online learning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910480901.8A CN110211156B (en) 2019-06-04 2019-06-04 Time-space information combined online learning method

Publications (2)

Publication Number Publication Date
CN110211156A CN110211156A (en) 2019-09-06
CN110211156B true CN110211156B (en) 2021-02-12

Family

ID=67790530

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910480901.8A Active CN110211156B (en) 2019-06-04 2019-06-04 Time-space information combined online learning method

Country Status (1)

Country Link
CN (1) CN110211156B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130339353A1 (en) * 2010-12-06 2013-12-19 Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V. Method for Operating a Geolocation Database and a Geolocation Database System
CN106408610A (en) * 2015-04-16 2017-02-15 西门子公司 Method and system for machine learning based assessment of fractional flow reserve
CN108932840A (en) * 2018-07-17 2018-12-04 北京理工大学 Automatic driving vehicle urban intersection passing method based on intensified learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107729953B (en) * 2017-09-18 2019-09-27 清华大学 Robot plume method for tracing based on continuous state behavior domain intensified learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130339353A1 (en) * 2010-12-06 2013-12-19 Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V. Method for Operating a Geolocation Database and a Geolocation Database System
CN106408610A (en) * 2015-04-16 2017-02-15 西门子公司 Method and system for machine learning based assessment of fractional flow reserve
CN108932840A (en) * 2018-07-17 2018-12-04 北京理工大学 Automatic driving vehicle urban intersection passing method based on intensified learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Multiagent-Based Simulation of Temporal-Spatial Characteristics of Activity-Travel Patterns Using Interactive Reinforcement Learning"; Min Yang et al.; Mathematical Problems in Engineering; 20140130; sections 2-3 *
"Target detection algorithm based on regression and deep reinforcement learning" (in Chinese); Shu Lang et al.; Software Guide; 20181231; Vol. 17, No. 12, pp. 56-59 *
"Reinforcement learning method based on node-growing k-means clustering algorithm" (in Chinese); Chen Zonghai et al.; Journal of Computer Research and Development; 20060430; pp. 661-665 *

Also Published As

Publication number Publication date
CN110211156A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
Ren et al. Collaborative deep reinforcement learning for multi-object tracking
CN106856577B (en) Video abstract generation method capable of solving multi-target collision and shielding problems
CN103077539B (en) Motion target tracking method under a kind of complex background and obstruction conditions
CN107833239B (en) Optimization matching target tracking method based on weighting model constraint
CN109948474A (en) AI thermal imaging all-weather intelligent monitoring method
CN111161309B (en) Searching and positioning method for vehicle-mounted video dynamic target
CN105913452A (en) Real-time space debris detection and tracking method
CN102663362A (en) Moving target detection method t based on gray features
CN112614159A (en) Cross-camera multi-target tracking method for warehouse scene
Han et al. A method based on multi-convolution layers joint and generative adversarial networks for vehicle detection
Li et al. Intelligent transportation video tracking technology based on computer and image processing technology
He et al. Motion pattern analysis in crowded scenes by using density based clustering
Casagrande et al. Abnormal motion analysis for tracking-based approaches using region-based method with mobile grid
Li et al. The integration adjacent frame difference of improved ViBe for foreground object detection
CN107729811B (en) Night flame detection method based on scene modeling
Jiang et al. Surveillance from above: A detection-and-prediction based multiple target tracking method on aerial videos
CN110211156B (en) Time-space information combined online learning method
Gao et al. Moving object detection for video surveillance based on improved ViBe
CN110210405B (en) Pedestrian search sample expansion method based on target tracking
CN107122762A (en) A kind of processing method for compound movement image
CN110197163B (en) Target tracking sample expansion method based on pedestrian search
Sharma Intelligent Querying in Camera Networks for Efficient Target Tracking.
CN103020981A (en) Rapid key frame extraction algorithm based on video moving target
Cao et al. Improved YOLOv5s Network for Traffic Object Detection with Complex Road Scenes
Qi et al. Research on Improved YOLO and DeepSORT Ship Detection and Tracking Algorithms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221129

Address after: 221000 Building A13, Safety Technology Industrial Park, Tongshan District, Xuzhou City, Jiangsu Province

Patentee after: XUZHOU GUANGLIAN TECHNOLOGY Co.,Ltd.

Address before: 221008 Tongshan University Road, Xuzhou City, Jiangsu Province, Institute of Scientific Research, China University of Mining and Technology

Patentee before: CHINA University OF MINING AND TECHNOLOGY

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: An Online Learning Method Based on Temporal Spatial Information Union

Effective date of registration: 20231108

Granted publication date: 20210212

Pledgee: Xuzhou Huaichang Investment Co.,Ltd.

Pledgor: XUZHOU GUANGLIAN TECHNOLOGY Co.,Ltd.

Registration number: Y2023980064378