CN113362369A - State detection method and detection device for moving object - Google Patents

State detection method and detection device for moving object

Info

Publication number
CN113362369A
CN113362369A
Authority
CN
China
Prior art keywords
moving object
detected
inputting
outputting
optical flow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110635008.5A
Other languages
Chinese (zh)
Inventor
秦家虎
周文华
王帅
张展鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2021-06-07
Filing date: 2021-06-07
Publication date: 2021-09-07
Application filed by University of Science and Technology of China (USTC)
Priority to CN202110635008.5A
Publication of CN113362369A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/215 Motion-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a state detection method for a moving object, comprising the following steps: acquiring an image frame sequence shot by a monocular camera, wherein the image frames of the image frame sequence contain a moving object to be detected; performing target detection on the image frame sequence to obtain image block samples representing the moving object to be detected; inputting the image block samples into a feature extraction module, and outputting optical flow information and tracking information of the moving object to be detected; and inputting the optical flow information and the tracking information into a neural network model, and outputting the state information of the moving object to be detected.

Description

State detection method and detection device for moving object
Technical Field
The invention relates to the technical field of traffic safety, and in particular to a state detection method and a state detection device for a moving object.
Background
In the past few years, intelligent driving systems have developed rapidly. Sensing the dynamic environment around an autonomous vehicle is a key task in implementing autonomous driving, and relative vehicle speed estimation is a basic function required by modern intelligent driving systems. Traditionally, dynamic environmental information around a vehicle has been perceived by distance sensors (e.g., LiDAR or millimeter-wave radar).
Currently, distance sensors (e.g., LiDAR or millimeter-wave radar) are among the most representative solutions in intelligent driving applications. These sensors can directly measure the distance and speed of other vehicles, but they are susceptible to adverse environmental factors such as rain, snow, or fog.
Recent studies have shown that estimating ego-motion and disparity maps from monocular camera images via structure from motion in an autonomous driving scenario is possible, but still limited. Methods based on dynamic scene flow work well but rely on stereo image datasets. Furthermore, they come at a very high computational cost: estimating a single pair of time frames on one CPU core may take 5-10 minutes. In an autonomous driving scenario, computational resources are typically very limited, which currently makes object scene flow impractical.
In addition, in dynamic application scenarios, predicting optical flow over the entire image is undesirable due to the imbalance in motion distribution between the static background and moving vehicles. The flow predicted by a full-image flow network tends toward zero when the actual flow of a moving vehicle spans only a small fraction of a pixel. Furthermore, due to perspective projection, a vehicle traveling at high speed may produce a small optical flow, while a slow, nearby vehicle may produce a large flow on the image. This situation exacerbates the drawbacks of full-image flow networks.
Disclosure of Invention
Technical problem to be solved
The invention discloses a state detection method and a state detection device for a moving object, which aim to at least partially solve the above technical problems.
(II) technical scheme
To achieve the above object, an embodiment of the present invention provides a method for detecting the state of a moving object, including: acquiring an image frame sequence shot by a monocular camera, wherein the image frames of the image frame sequence contain a moving object to be detected; performing target detection on the image frame sequence to obtain image block samples for representing the moving object to be detected; inputting the image block samples into a feature extraction module, and outputting optical flow information and tracking information of the moving object to be detected; and inputting the optical flow information and the tracking information into a neural network model, and outputting the state information of the moving object to be detected.
According to an embodiment of the present invention, performing target detection on the image frame sequence to obtain image block samples for characterizing the moving object to be detected includes: inputting the image frames in the image frame sequence into a target detector, and outputting a bounding box B_i = (l_i, t_i, r_i, b_i) for representing the moving object to be detected, where l_i is the left boundary coordinate of the bounding box, t_i is the upper boundary coordinate, r_i is the right boundary coordinate, and b_i is the lower boundary coordinate;
clipping the bounding box according to a preset clipping rule to obtain an image block corresponding to the bounding box;
and constructing the image block samples from the image blocks.
According to an embodiment of the present invention, clipping the bounding box according to a preset clipping rule to obtain an image block corresponding to the bounding box includes:
for a bounding box B_i = (l_i, t_i, r_i, b_i), the clipping region is defined by expanding the box by an expansion factor σ [the defining equation appears only as an image in the original and is not reproduced here].
According to an embodiment of the present invention, the feature extraction module includes an optical flow calculation module and a tracker, inputting the image block samples into the feature extraction module, and outputting the optical flow information and the tracking information of the moving object to be detected includes: inputting the image block sample into the optical flow calculation module, and outputting the optical flow information of the moving object to be detected; and inputting the image block samples into the tracker and outputting the tracking information of the moving object to be detected.
According to an embodiment of the present invention, the optical flow information includes a magnitude and an angle of a pixel displacement vector of the moving object to be detected.
According to an embodiment of the present invention, the tracking information is obtained using the DeepSORT tracking algorithm.
According to an embodiment of the invention, the neural network model includes a long-term recurrent convolutional network (LRCN), a first multilayer perceptron, and a second multilayer perceptron.
According to an embodiment of the present invention, inputting the optical flow information and the tracking information into a neural network model and outputting the state information of the moving object to be detected includes: inputting the optical flow information into the long-term recurrent convolutional network, and outputting a first one-dimensional vector; inputting the tracking information into the first multilayer perceptron, and outputting a second one-dimensional vector; pooling and stacking the first one-dimensional vector and the second one-dimensional vector to obtain a 1×n-dimensional vector, where n is the dimension; and inputting the 1×n-dimensional vector into the second multilayer perceptron, and outputting the state information.
According to an embodiment of the present invention, the first multilayer perceptron is a 6-layer multilayer perceptron and the second multilayer perceptron is a 3-layer multilayer perceptron.
According to an embodiment of the present invention, a state detection apparatus for a moving object implemented based on any one of the above methods includes: an acquisition module for acquiring an image frame sequence shot by a monocular camera, wherein the image frames of the image frame sequence contain a moving object to be detected; a detection module for performing target detection on the image frame sequence to obtain image block samples for representing the moving object to be detected; a first processing module for inputting the image block samples into the feature extraction module and outputting optical flow information and tracking information of the moving object to be detected; and a second processing module for inputting the optical flow information and the tracking information into a neural network model and outputting the state information of the moving object to be detected.
(III) advantageous effects
The detection method and detection device for estimating the state of a moving object according to the present invention have the following beneficial effects:
(1) Compared with conventional moving object detection algorithms, the present invention proposes a data-based method to estimate the relative velocity of a moving object using a monocular camera, thereby eliminating the need for expensive sensors such as lidar.
(2) The method proposed by the present invention is a lightweight architecture that depends on pre-computed optical-flow features, which are easy to compute and can be obtained in real time. Speed estimation relies only on camera calibration, so no additional hardware is required.
(3) The invention adopts a sampling mode centered on the moving object, reducing the influence of unbalanced motion distribution and perspective projection on optical flow cue estimation.
Drawings
Fig. 1 is a flowchart illustrating a method for detecting a state of a moving object according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of bounding box sampling of a moving object (a vehicle) according to an embodiment of the present invention.
Detailed Description
In order that the objects, technical solutions and advantages of the present invention will become more apparent, the present invention will be further described in detail with reference to the accompanying drawings in conjunction with the following specific embodiments.
To implement a monocular-vision-based solution for surrounding-vehicle speed estimation, techniques for vehicle target detection and depth estimation are needed. Object detection is required to predict the state (e.g., position, speed, and direction) of surrounding vehicles.
To estimate ego-motion and the velocity of surrounding vehicles, optical flow is commonly used in computer vision as a depth estimation technique, since it can be used to calculate velocity relative to the ground. Conventional optical flow estimation algorithms fall into two categories: feature-based methods and variational methods. Feature-based methods find image displacements by detecting features, for example edges, corners, and other locally distinctive structures, and tracking them over a series of frames. The main limitation of these methods is that flow is difficult to estimate in areas lacking salient features. Variational methods provide a more accurate estimate by minimizing an energy function that couples brightness-constancy and spatial-smoothness assumptions.
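As a concrete illustration of the feature-based category (an illustrative sketch, not taken from the patent), the snippet below detects Shi-Tomasi corners with OpenCV and tracks them into the next frame with pyramidal Lucas-Kanade; the file names and parameter values are placeholders.

```python
import cv2

# Two consecutive grayscale frames; file names are placeholders.
prev = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_t2.png", cv2.IMREAD_GRAYSCALE)

# Detect locally distinctive structures (corners) in the first frame.
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

# Track those features into the next frame with pyramidal Lucas-Kanade.
nxt, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None,
                                            winSize=(21, 21), maxLevel=3)

# Displacements of successfully tracked features give a sparse flow field.
mask = status.flatten() == 1
displacement = nxt[mask] - pts[mask]
```

As the text notes, such sparse estimates degrade in regions without salient features, which is what motivates the dense flow used later in this document.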
Compared with a distance sensor, the camera sensor can provide richer scene texture and structure information even under adverse conditions, and can be used as an economical, efficient and powerful substitute product for the distance sensor.
In view of economic efficiency and environmental suitability, the present invention proposes a data-based method to estimate the relative speed and position of a vehicle using a monocular camera.
An embodiment of the present invention provides a method for detecting the state of a moving object, including: acquiring an image frame sequence shot by a monocular camera, wherein the image frames of the image frame sequence contain a moving object to be detected; performing target detection on the image frame sequence to obtain image block samples for representing the moving object to be detected; inputting the image block samples into a feature extraction module, and outputting optical flow information and tracking information of the moving object to be detected; and inputting the optical flow information and the tracking information into a neural network model, and outputting the state information of the moving object to be detected, as shown in Fig. 1.
In the embodiment of the present invention, the moving object may be, for example, a running vehicle, a crowd, or another moving object, and the monocular camera may be disposed on the moving object. Taking a running vehicle as an example, a vehicle-centered sampling strategy can be adopted, with the monocular camera capturing the image frame sequence, to handle the influence of unbalanced motion distribution and perspective projection. Specifically, an RGB image I_{t1} is acquired at time t1 and an RGB image I_{t2} at time t2; by this method, the velocity of the moving object in the current frame image I_{t2} can be estimated relative to the camera coordinate system.
According to an embodiment of the present invention, performing target detection on the image frame sequence to obtain image block samples for characterizing the moving object to be detected includes: inputting the image frames in the image frame sequence into a target detector, and outputting a bounding box B_i = (l_i, t_i, r_i, b_i) for representing the moving object to be detected, where l_i is the left boundary coordinate of the bounding box, t_i is the upper boundary coordinate, r_i is the right boundary coordinate, and b_i is the lower boundary coordinate, as shown in Fig. 2; clipping the bounding box according to a preset clipping rule to obtain an image block corresponding to the bounding box; and constructing the image block samples from the image blocks.
In an embodiment of the present invention, the target detector may be, for example, Faster-RCNN or YOLO; l_i, t_i, r_i, and b_i may be coordinates in units of pixels.
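A minimal sketch of this detection step, assuming torchvision's pretrained Faster R-CNN as a stand-in for the Faster-RCNN or YOLO detector named above; the helper name and score threshold are illustrative, not from the patent. The returned box format (l, t, r, b) in pixels matches the B_i convention.

```python
import torch
import torchvision

# Pretrained detector as a stand-in; any detector emitting (l, t, r, b) works.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_boxes(frame_tensor, score_thresh=0.5):
    """frame_tensor: CxHxW float tensor in [0, 1]; returns (l, t, r, b) rows."""
    with torch.no_grad():
        out = model([frame_tensor])[0]
    keep = out["scores"] > score_thresh
    return out["boxes"][keep]  # each row is a bounding box B_i
```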
According to an embodiment of the present invention, clipping the bounding box according to a preset clipping rule to obtain an image block corresponding to the bounding box includes:
for a bounding box B_i = (l_i, t_i, r_i, b_i), the clipping region is defined by expanding the box by an expansion factor σ [the defining equation appears only as an image in the original and is not reproduced here].
In the embodiment of the present invention, taking a running vehicle as the moving object, the bounding box B_i = (l_i, t_i, r_i, b_i) of the running vehicle is acquired, the bounding box is clipped according to the clipping rule to obtain the image block corresponding to the bounding box, and the image block is resized to a fixed size to obtain an image block sample; meanwhile, the corresponding image block is cropped at the same position in the previous frame in the same manner and resized to the fixed size to obtain its image block sample.
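A sketch of this clipping-and-resizing step under an assumed rule, since the patent's exact clipping equation survives only as an image: here the box is expanded symmetrically by σ times its width and height, clipped to the frame, and resized to a fixed size; the function name, σ value, and output size are illustrative.

```python
import cv2

def crop_expanded(frame, box, sigma=0.25, out_size=(224, 224)):
    """Crop B_i = (l, t, r, b) expanded by sigma, then resize.

    Assumption: symmetric expansion by sigma * width / height on each side,
    clipped to the frame borders (the patent's equation is not reproduced).
    """
    l, t, r, b = box
    w, h = r - l, b - t
    H, W = frame.shape[:2]
    l2 = max(int(l - sigma * w), 0)
    t2 = max(int(t - sigma * h), 0)
    r2 = min(int(r + sigma * w), W)
    b2 = min(int(b + sigma * h), H)
    return cv2.resize(frame[t2:b2, l2:r2], out_size)

# The same region is cropped at the same position in the previous frame so the
# two patches form the image block sample pair used for optical flow.
```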
According to an embodiment of the present invention, the feature extraction module includes an optical flow calculation module and a tracker, inputting the image block samples into the feature extraction module, and outputting the optical flow information and the tracking information of the moving object to be detected includes: inputting the image block sample into the optical flow calculation module, and outputting the optical flow information of the moving object to be detected; and inputting the image block samples into the tracker and outputting the tracking information of the moving object to be detected.
In the embodiment of the present invention, optical flow specifically refers to the amount by which a pixel representing the same object moves from one video frame to the next, and can be represented by a two-dimensional vector. Information in the spatiotemporal domain is extracted using dense optical flow, which emphasizes the motion information of the vehicle throughout the image frame sequence.
In an embodiment of the present invention, the tracker may be, for example, a tracker with a built-in DeepSORT tracking algorithm.
According to an embodiment of the present invention, the optical flow information includes a magnitude and an angle of a pixel displacement vector of the moving object to be detected.
In an embodiment of the present invention, the moving object may be, for example, a running vehicle, and the optical flow and depth features of the vehicle in the clipped and resized image block samples may be predicted using an OpenCV function, yielding the magnitude and angle of the vehicle's pixel displacement vectors; this pixel displacement serves as the basic feature of the moving object state information.
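A sketch of this step; the patent refers to an OpenCV function without naming one, so Farneback dense flow is assumed here, with cv2.cartToPolar converting the displacement field into the magnitude and angle features described above.

```python
import cv2

def flow_mag_angle(patch_prev, patch_curr):
    """Dense flow on a cropped vehicle patch pair -> (magnitude, angle)."""
    g1 = cv2.cvtColor(patch_prev, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(patch_curr, cv2.COLOR_BGR2GRAY)
    # Farneback dense optical flow; parameter values are illustrative.
    flow = cv2.calcOpticalFlowFarneback(g1, g2, None, pyr_scale=0.5, levels=3,
                                        winsize=15, iterations=3, poly_n=5,
                                        poly_sigma=1.2, flags=0)
    # Per-pixel displacement vectors -> magnitude and angle.
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return mag, ang
```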
According to the embodiment of the invention, the tracking information can be obtained using the DeepSORT tracking algorithm.
In an embodiment of the present invention, the tracking information obtained from the DeepSORT tracking algorithm is, for example, a state vector defined as s = (x, y, r, h, vx, vy, vr, vh), where (x, y) represents the center point of the bounding box, h the height of the bounding box, r the aspect ratio of the bounding box, and (vx, vy, vr, vh) the corresponding velocities of the center, aspect ratio, and height in the image coordinate system. When the detector detects information associated with the target vehicle, the state vector of the target vehicle is updated using the detected bounding box.
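A minimal numpy sketch of how such a state vector can be propagated and corrected, assuming the constant-velocity Kalman model conventionally used by DeepSORT-style trackers; the noise covariances are illustrative placeholders.

```python
import numpy as np

# State s = (x, y, r, h, vx, vy, vr, vh): box center, aspect ratio, height,
# and their velocities in the image coordinate system.
dt = 1.0
F = np.eye(8)
F[:4, 4:] = dt * np.eye(4)                     # position integrates velocity
H = np.hstack([np.eye(4), np.zeros((4, 4))])   # detector observes (x, y, r, h)

def predict(s, P, Q=np.eye(8) * 1e-2):
    """Constant-velocity prediction of state s and covariance P."""
    return F @ s, F @ P @ F.T + Q

def update(s, P, z, R=np.eye(4) * 1e-1):
    """Correct the state with a detected box z = (x, y, r, h)."""
    y = z - H @ s                                # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    return s + K @ y, (np.eye(8) - K @ H) @ P
```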
According to an embodiment of the invention, the neural network model includes a long-term recurrent convolutional network (LRCN), a first multilayer perceptron, and a second multilayer perceptron.
According to an embodiment of the present invention, inputting the optical flow information and the tracking information into a neural network model and outputting the state information of the moving object to be detected includes: inputting the optical flow information into the long-term recurrent convolutional network, and outputting a first one-dimensional vector; inputting the tracking information into the first multilayer perceptron, and outputting a second one-dimensional vector; pooling and stacking the first one-dimensional vector and the second one-dimensional vector to obtain a 1×n-dimensional vector, where n is the dimension; and inputting the 1×n-dimensional vector into the second multilayer perceptron, and outputting the state information.
In an embodiment of the present invention, the moving object may be, for example, a vehicle in motion, and the state information output by the above method may be speed and position information of the vehicle.
According to an embodiment of the present invention, the first multilayer perceptron is a 6-layer multilayer perceptron and the second multilayer perceptron is a 3-layer multilayer perceptron.
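A hedged PyTorch sketch of this fusion: an LRCN-style branch (per-frame CNN plus LSTM) encodes the flow patches into the first one-dimensional vector, a 6-layer MLP encodes the tracking vector into the second, and a 3-layer MLP maps the concatenated 1×n vector to the state output. Channel widths, hidden sizes, and the output dimension are assumptions; the patent fixes only the MLP depths.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, track_dim=8, hidden=128, out_dim=2):
        super().__init__()
        # Per-frame encoder for 2-channel (magnitude, angle) flow patches.
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lstm = nn.LSTM(32, hidden, batch_first=True)  # LRCN-style branch
        layers, d = [], track_dim
        for _ in range(5):          # plus the final layer: 6 linear layers
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, hidden))
        self.track_mlp = nn.Sequential(*layers)
        self.head = nn.Sequential(  # 3-layer MLP on the fused 1 x n vector
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim))

    def forward(self, flow_seq, track_vec):
        # flow_seq: (B, T, 2, H, W); track_vec: (B, track_dim)
        B, T = flow_seq.shape[:2]
        feats = self.cnn(flow_seq.flatten(0, 1)).view(B, T, -1)
        _, (h_n, _) = self.lstm(feats)
        fused = torch.cat([h_n[-1], self.track_mlp(track_vec)], dim=1)
        return self.head(fused)     # state info, e.g. relative velocity
```

For example, with flow_seq built from the magnitude/angle patches and track_vec the 8-dimensional tracking state, FusionNet()(flow_seq, track_vec) yields one state estimate per sample.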
Another embodiment of the present invention provides a state detection apparatus for a moving object based on any one of the above methods, including: an acquisition module for acquiring an image frame sequence shot by a monocular camera, wherein the image frames of the image frame sequence contain a moving object to be detected; a detection module for performing target detection on the image frame sequence to obtain image block samples for representing the moving object to be detected; a first processing module for inputting the image block samples into the feature extraction module and outputting optical flow information and tracking information of the moving object to be detected; and a second processing module for inputting the optical flow information and the tracking information into a neural network model and outputting the state information of the moving object to be detected.
It should be noted that the state detection device in the embodiments of the present disclosure corresponds to the state detection method in the embodiments of the present disclosure; for details of the device, refer to the description of the method, which is not repeated here.
The method and device disclosed by the invention provide a lightweight architecture that requires no expensive hardware, reduce the influence of severe weather, unbalanced motion distribution, and perspective projection on optical flow cue estimation, and improve the detection of moving object state information.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention and are not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A state detection method of a moving object, comprising:
acquiring an image frame sequence obtained by shooting by a monocular camera, wherein the image frames of the image frame sequence comprise a moving object to be detected;
performing target detection on the image frame sequence to obtain an image block sample for representing the moving object to be detected;
inputting the image block samples into a feature extraction module, and outputting optical flow information and tracking information of the moving object to be detected; and
and inputting the optical flow information and the tracking information into a neural network model, and outputting the state information of the moving object to be detected.
2. The method of claim 1, wherein the performing target detection on the sequence of image frames to obtain image block samples for characterizing the moving object to be detected comprises:
inputting image frames in the image frame sequence into a target detector, and outputting a bounding box B_i = (l_i, t_i, r_i, b_i) for representing the moving object to be detected, where l_i is the left boundary coordinate of the bounding box, t_i is the upper boundary coordinate, r_i is the right boundary coordinate, and b_i is the lower boundary coordinate;
cutting the boundary box according to a preset cutting rule to obtain an image block corresponding to the boundary box;
and constructing the image block samples according to the image blocks.
3. The method according to claim 2, wherein clipping the bounding box according to a preset clipping rule to obtain an image block corresponding to the bounding box comprises:
for a bounding box B_i = (l_i, t_i, r_i, b_i), the clipping region is defined by expanding the box by an expansion factor σ [the defining equation appears only as an image in the original and is not reproduced here].
4. The method of claim 1, wherein the feature extraction module comprises an optical flow calculation module and a tracker, the inputting the image block samples into the feature extraction module, and the outputting the optical flow information and the tracking information of the moving object to be detected comprises:
inputting the image block samples into the optical flow calculation module, and outputting optical flow information of the moving object to be detected;
and inputting the image block samples into the tracker and outputting the tracking information of the moving object to be detected.
5. The method of claim 1, wherein the optical flow information comprises a magnitude and an angle of a pixel displacement vector of the object to be detected.
6. The method of claim 1, further comprising:
and obtaining the tracking information by using the DeepSORT tracking algorithm.
7. The method of claim 1, wherein the neural network model comprises a long-term recurrent convolutional network, a first multilayer perceptron, and a second multilayer perceptron.
8. The method of claim 7, wherein inputting the optical flow information and the tracking information into a neural network model and outputting the state information of the moving object to be detected comprises:
inputting the optical flow information into the long-term recurrent convolutional network, and outputting a first one-dimensional vector;
inputting the tracking information into the first multilayer perceptron, and outputting a second one-dimensional vector;
pooling and stacking the first one-dimensional vector and the second one-dimensional vector to obtain a 1×n-dimensional vector, wherein n is the dimension;
and inputting the 1×n-dimensional vector into the second multilayer perceptron, and outputting the state information.
9. The method of claim 8, wherein the first multilayer perceptron is a 6-layer multilayer perceptron and the second multilayer perceptron is a 3-layer multilayer perceptron.
10. A state detection apparatus for a moving object implemented based on the method of any one of claims 1 to 9, comprising:
the system comprises an acquisition module, a detection module and a processing module, wherein the acquisition module is used for acquiring an image frame sequence obtained by shooting by a monocular camera, and the image frame of the image sequence comprises a moving object to be detected;
the detection module is used for carrying out target detection on the image frame sequence to obtain an image block sample for representing the moving object to be detected;
the first processing module is used for inputting the image block samples into the feature extraction module and outputting optical flow information and tracking information of the moving object to be detected; and
and the second processing module is used for inputting the optical flow information and the tracking information into a neural network model and outputting the state information of the moving object to be detected.
CN202110635008.5A 2021-06-07 2021-06-07 State detection method and detection device for moving object Pending CN113362369A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110635008.5A CN113362369A (en) 2021-06-07 2021-06-07 State detection method and detection device for moving object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110635008.5A CN113362369A (en) 2021-06-07 2021-06-07 State detection method and detection device for moving object

Publications (1)

Publication Number Publication Date
CN113362369A true CN113362369A (en) 2021-09-07

Family

ID=77532989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110635008.5A Pending CN113362369A (en) 2021-06-07 2021-06-07 State detection method and detection device for moving object

Country Status (1)

Country Link
CN (1) CN113362369A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100033579A1 (en) * 2008-05-26 2010-02-11 Sanyo Electric Co., Ltd. Image Shooting Device And Image Playback Device
US20190042850A1 (en) * 2017-08-07 2019-02-07 Mitsubishi Electric Research Laboratories, Inc. Method and System for Detecting Actions in Videos using Contour Sequences
CN109035167A (en) * 2018-07-17 2018-12-18 北京新唐思创教育科技有限公司 Method, apparatus, equipment and the medium that multiple faces in image are handled
US20210112208A1 (en) * 2019-10-14 2021-04-15 Kt Corporation Device, method and computer program for extracting object from video

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
D. K. Jain et al., "Relative Vehicle Velocity Estimation Using Monocular Video Stream", 2020 International Joint Conference on Neural Networks (IJCNN). *
Liu Yuan, "Research on 'Micro-Action' Recognition Technology Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology. *

Similar Documents

Publication Publication Date Title
WO2021196294A1 (en) Cross-video person location tracking method and system, and device
Zhou et al. Efficient road detection and tracking for unmanned aerial vehicle
US20210042929A1 (en) Three-dimensional object detection method and system based on weighted channel features of a point cloud
CN108648161B (en) Binocular vision obstacle detection system and method of asymmetric kernel convolution neural network
CN103325112B (en) Moving target method for quick in dynamic scene
CN113286194A (en) Video processing method and device, electronic equipment and readable storage medium
CN110688905B (en) Three-dimensional object detection and tracking method based on key frame
US8503730B2 (en) System and method of extracting plane features
CN112215074A (en) Real-time target identification and detection tracking system and method based on unmanned aerial vehicle vision
CN104794737A (en) Depth-information-aided particle filter tracking method
CN111738071B (en) Inverse perspective transformation method based on motion change of monocular camera
CN111292369B (en) False point cloud data generation method of laser radar
CN113744315B (en) Semi-direct vision odometer based on binocular vision
CN113763427B (en) Multi-target tracking method based on coarse-to-fine shielding processing
CN113223045A (en) Vision and IMU sensor fusion positioning system based on dynamic object semantic segmentation
CN111783675A (en) Intelligent city video self-adaptive HDR control method based on vehicle semantic perception
CN114038193A (en) Intelligent traffic flow data statistical method and system based on unmanned aerial vehicle and multi-target tracking
Hadviger et al. Feature-based event stereo visual odometry
CN116879870A (en) Dynamic obstacle removing method suitable for low-wire-harness 3D laser radar
Li et al. Weak moving object detection in optical remote sensing video with motion-drive fusion network
Li et al. Vehicle object detection based on rgb-camera and radar sensor fusion
CN114842340A (en) Robot binocular stereoscopic vision obstacle sensing method and system
CN112884803B (en) Real-time intelligent monitoring target detection method and device based on DSP
Wu et al. Registration-based moving vehicle detection for low-altitude urban traffic surveillance
CN116105721B (en) Loop optimization method, device and equipment for map construction and storage medium

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210907)