CN113792598B - Vehicle-mounted camera-based vehicle collision prediction system and method - Google Patents

Vehicle-mounted camera-based vehicle collision prediction system and method


Publication number
CN113792598B
Authority
CN
China
Prior art keywords
motion
layer
scale
neural network
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110915606.8A
Other languages
Chinese (zh)
Other versions
CN113792598A (en)
Inventor
梁雪峰
张松
雷国栋
陈伟烨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Institute of Technology of Xidian University
Original Assignee
Guangzhou Institute of Technology of Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Institute of Technology of Xidian University
Priority to CN202110915606.8A
Publication of CN113792598A
Application granted
Publication of CN113792598B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/166Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention provides a vehicle collision prediction system and method based on a vehicle-mounted camera. The system comprises: a video data acquisition module, which acquires the video data captured by the vehicle-mounted camera; a motion scale change module, which derives scale change features of the video data from its optical flow changes and foreground scale changes; an LGMD spiking neural network module, which derives motion approach features from the scale change features using an LGMD spiking neural network model; a motion direction crossing module, which derives motion crossing features between the host vehicle and foreground objects from the video data; and a collision prediction module, which performs collision prediction with a collision prediction neural network model on the acquired motion approach and motion crossing features, and outputs a collision time prediction result and a collision position prediction result. The invention enables a vehicle to predict collisions with dangerous moving targets, helping to improve the safety of autonomous driving technology.

Description

Vehicle-mounted camera-based vehicle collision prediction system and method
Technical Field
The invention relates to the technical field of autonomous driving, and in particular to a vehicle collision prediction system and method based on a vehicle-mounted camera.
Background
Existing autonomous driving technology is mainly at the L3 to L4 stage, and advancing to the fully automated L5 stage is one of the pressing demands of its development. Amid the wave of intelligent driving, ensuring driving safety and avoiding potential collisions is one of the core autonomous driving technologies. In current autonomous driving technology, the vehicle motion signal usually consists of multiple components, and the vehicle must operate under varied natural conditions, road environment types, traffic conditions, and other scenarios, so predicting collisions with dangerous moving objects in complex and changeable road scenes remains very difficult.
The main technical means for predicting possible collisions in advance in current autonomous driving is the fusion of high-definition maps with radar technology, chiefly millimeter-wave radar and lidar. Millimeter-wave radar works with electromagnetic waves of 1-10 mm wavelength, between centimeter waves and light waves; converted to frequency, this corresponds to 30-300 GHz, and the mainstream automotive millimeter-wave radar bands at home and abroad are 24 GHz (for short-to-medium range, 15-30 m) and 77 GHz (for long range, 100-200 m). Lidar combines laser and radar, and is generally divided into pulsed and continuous-wave types. Pulsed lidar computes the relative distance between vehicles from the time interval between emission and return, while continuous-wave lidar obtains the target distance from the phase difference between the emitted and reflected light. After the lidar emits a laser pulse, the beam is reflected back when it meets an obstacle; the returning beam is analyzed by the receiver inside the radar, and the processor combines the return time with the measurement signal to generate an accurate 3D map and reconstruct the features of the surrounding environment, thereby further predicting danger. Radar technology also includes ultrasonic radar, infrared radar, and so on. In general, radar detects echoes, compares them with the transmitted signal to obtain a pulse or phase difference, computes the time difference between the transmitted and received signals, and combines it with the measurement signal to realize danger prediction.
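To make the two lidar ranging principles concrete, here is a minimal sketch (not part of the patent; function names and the modulation-frequency parameter are illustrative assumptions): pulsed lidar ranges by time of flight, continuous-wave lidar by phase shift of a modulated wave.

```python
import math

C = 299_792_458.0  # speed of light, m/s


def pulse_lidar_distance(round_trip_time_s: float) -> float:
    """Distance from the time between emitting a pulse and receiving its echo
    (the light covers the distance twice, hence the division by 2)."""
    return C * round_trip_time_s / 2.0


def cw_lidar_distance(phase_diff_rad: float, modulation_freq_hz: float) -> float:
    """Distance from the phase shift of a modulated continuous wave; only
    unambiguous within half the modulation wavelength."""
    wavelength = C / modulation_freq_hz
    return (phase_diff_rad / (2.0 * math.pi)) * wavelength / 2.0
```

For example, a 1 microsecond round trip corresponds to roughly 150 m, in the long-range band the patent mentions for 77 GHz radar.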
However, current radar-based collision prediction technology has the following disadvantages: 1. it cannot identify the detected target with high precision; 2. lidar recognition degrades in rain, snow, and fog; 3. radar cannot work effectively without high-precision 3D maps, and since high-definition maps are created by surveying and mapping the environment in advance and by collecting, constructing, and updating 3D data and geographic information, they are costly, which greatly limits the application environments of radar technology. More importantly, if a high-definition map leaks, it may endanger the security of national geographic information, so its use must be effectively restricted.
Besides radar technology, there are also a few computer-vision approaches that use only cameras, such as the Bayesian deep learning method of papers like UString, which predicts by computing an abnormality probability for each frame. However, existing purely visual collision prediction methods have the following defects: 1. methods like UString in fact only perform binary classification on whole driving videos; this is not collision prediction in the practical sense and cannot judge the collision risk of an individual object, so it cannot be applied to real driving scenes; 2. such models struggle with causal reasoning, cannot handle unseen new situations effectively, and require large amounts of training data.
Therefore, a technical solution capable of predicting collisions between the vehicle and dangerous moving objects, so as to improve the safety of autonomous driving and push it toward the L5 stage, is highly valuable in the field of autonomous driving technology.
Disclosure of Invention
In view of the above problems, the present invention aims to provide a vehicle collision prediction system and method based on a vehicle-mounted camera.
In a first aspect, the present invention provides a vehicle collision prediction system based on a vehicle-mounted camera, including:
the video data acquisition module is used for acquiring the video data captured by the vehicle-mounted camera and transmitting it to the motion scale change module and the motion direction crossing module respectively;
the motion scale change module is used for deriving scale change features of the video data from its optical flow changes and foreground scale changes, and outputting them to an LGMD spiking neural network module (LGMD: Lobula Giant Movement Detector);
the LGMD spiking neural network module is used for deriving motion approach features from the acquired scale change features with a trained LGMD spiking neural network model, and outputting them to the collision prediction module;
the motion direction crossing module is used for deriving motion crossing features between the host vehicle and foreground objects from the acquired video data, and outputting them to the collision prediction module;
and the collision prediction module is used for performing collision prediction with the trained collision prediction neural network model on the acquired motion approach and motion crossing features, and outputting a collision time prediction result and a collision position prediction result.
In one embodiment, the lens of the vehicle-mounted camera faces the front of the vehicle and is used for capturing video data from the driver's viewpoint.
In one embodiment, the video data acquisition module transmits the consecutive video frame sequence, in order, to the motion scale change module and the motion direction crossing module according to the received video data.
In one embodiment, the motion scale change module derives the scale change features of the video data from its optical flow changes and foreground scale changes, specifically as follows:
acquiring the video frame sequence transmitted by the video data acquisition module;
based on the acquired frame sequence, computing the optical flow field of the current frame with an optical flow extraction neural network, applying a local linear transformation to the flow field, and taking the determinant of the locally linearized matrix to obtain a coarse pixel-by-pixel scale change feature; then, from this coarse feature, computing a fine pixel-by-pixel optical flow scale change feature with a trained scale change neural network, taken as the first scale feature;
based on the acquired frame sequence, computing the foreground motion information of adjacent frames for each foreground object in the video, then obtaining the scale features of the adjacent frames through a trained scale feature neural network; applying a scale transformation to the scale features of the current frame to obtain its scale transformation features, and applying a contrast transformation between these and the scale features of the previous frame to obtain the foreground scale change features, taken as the second scale feature;
and fusing the first and second scale features to obtain the scale change features.
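The coarse first scale feature above rests on a standard identity: linearizing the frame-to-frame pixel mapping gives a local Jacobian, and its determinant is the local area ratio. A minimal NumPy sketch of this pre-network step (the function name and the use of `np.gradient` are our assumptions, not the patent's implementation):

```python
import numpy as np


def flow_scale_change(flow: np.ndarray) -> np.ndarray:
    """Coarse per-pixel scale-change map from a dense optical flow field.

    flow: (H, W, 2) array of per-pixel displacements (u, v).
    Locally the frame-to-frame mapping is x -> x + w(x); its Jacobian is
    I + dw/dx, and det(I + dw/dx) approximates the local area ratio:
    > 1 for expansion (an approaching surface), < 1 for contraction.
    """
    u, v = flow[..., 0], flow[..., 1]
    du_dy, du_dx = np.gradient(u)  # np.gradient returns (d/drow, d/dcol)
    dv_dy, dv_dx = np.gradient(v)
    return (1.0 + du_dx) * (1.0 + dv_dy) - du_dy * dv_dx
```

On a synthetic purely expanding flow (each pixel moving 10% away from the center) every pixel gets the ratio 1.1 × 1.1 = 1.21, i.e. a uniform enlargement.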
In one embodiment, the LGMD spiking neural network module derives the motion approach features from the acquired scale change features using a trained LGMD spiking neural network model, specifically as follows:
the trained LGMD spiking neural network model comprises a perception layer, an excitation layer, an inhibition layer, a summing layer, a lateral inhibition layer, and an LGMD cell layer; the output of the perception layer is connected to the inputs of the excitation layer, the inhibition layer, and the lateral inhibition layer; the outputs of the excitation layer and the inhibition layer are connected to the input of the summing layer; the outputs of the summing layer and the lateral inhibition layer are connected to the input of the LGMD cell layer;
the perception layer fuses the scale change features of multiple video frames along the time sequence; the excitation layer enhances the perceived motion features; the inhibition layer applies opposing suppression to the perceived motion features; the summing layer balances the excitatory and inhibitory nerve impulses against each other; the lateral inhibition layer suppresses drastic global changes in the scene's motion features; and the LGMD cell layer, from the impulses output by the summing layer and the lateral inhibition layer, outputs the nerve impulse generated by approaching moving objects in the video data as the motion approach feature.
In one embodiment, the motion direction crossing module derives the motion crossing features between the host vehicle and foreground objects from the acquired video frame sequence, specifically as follows:
acquiring the video frame sequence transmitted by the video data acquisition module;
obtaining the horizontal rotation direction features of foreground objects in the video frames through a trained 3D object detection neural network;
obtaining the yaw angle features of the host vehicle's motion in the video frames through a trained ego-motion estimation neural network;
and judging motion direction crossing from the obtained horizontal rotation direction and yaw angle features, computing the angle between the motion rotation vectors to obtain pixel-by-pixel motion crossing features.
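As an illustrative sketch of the crossing judgment (the threshold rule and all names are assumptions, not the patent's trained networks): given a foreground object's horizontal heading and the ego vehicle's yaw, the angle between the two motion directions can be computed and thresholded.

```python
import math


def crossing_angle(obj_heading_rad: float, ego_yaw_rad: float) -> float:
    """Smallest angle between the object's horizontal motion direction and
    the ego vehicle's heading, in [0, pi]."""
    d = abs(obj_heading_rad - ego_yaw_rad) % (2 * math.pi)
    return min(d, 2 * math.pi - d)


def is_crossing(obj_heading_rad: float, ego_yaw_rad: float,
                thresh_rad: float = math.radians(20)) -> bool:
    """Hypothetical rule: directions neither parallel nor anti-parallel
    (within a tolerance) are treated as crossing trajectories."""
    a = crossing_angle(obj_heading_rad, ego_yaw_rad)
    return thresh_rad < a < math.pi - thresh_rad
```

Under this rule a vehicle driving perpendicular to the ego vehicle is flagged as crossing, while traffic moving parallel or head-on (which is an approach case, handled by the LGMD path) is not.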
In one embodiment, the collision prediction neural network model comprises a temporal attention network and a spatial attention network;
the collision prediction module performs collision prediction with the trained collision prediction neural network model on the acquired motion approach and motion crossing features and outputs a collision time prediction result and a collision position prediction result, specifically as follows:
acquiring the motion approach features transmitted by the LGMD spiking neural network module and the motion crossing features transmitted by the motion direction crossing module respectively;
the temporal attention network weights key frames in the time domain according to the motion approach features, yielding a temporal attention weighting result;
the spatial attention network weights spatially abnormal positions in the spatial domain according to the motion crossing features, yielding a spatial attention weighting result;
and, from the temporal and spatial attention weighting results, the collision prediction neural network model fuses the spatio-temporal attention features by a broadcast mechanism in a spatio-temporal fusion layer, and outputs the predicted time and the predicted spatial position at which a collision may occur.
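A rough, self-contained sketch of the weighting and broadcast fusion described above, using plain softmax weights in place of the patent's trained attention networks (all names and shapes are illustrative assumptions):

```python
import numpy as np


def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()


def fuse_and_predict(approach_scores: np.ndarray, crossing_map: np.ndarray):
    """approach_scores: (T,) per-frame motion-approach strength.
    crossing_map: (H, W) per-pixel motion-crossing strength.
    Returns a (T, H, W) risk volume plus the highest-risk frame index
    (collision time prediction) and pixel (collision position prediction)."""
    t_att = softmax(approach_scores)                 # temporal attention weights
    s_att = softmax(crossing_map.ravel()).reshape(crossing_map.shape)
    risk = t_att[:, None, None] * s_att[None, :, :]  # broadcast-mechanism fusion
    t_hat = int(risk.sum(axis=(1, 2)).argmax())
    y_hat, x_hat = np.unravel_index(risk.sum(axis=0).argmax(), s_att.shape)
    return risk, t_hat, (y_hat, x_hat)
```

The broadcast multiply is the key step: a (T, 1, 1) temporal weight vector and a (1, H, W) spatial weight map expand into one (T, H, W) spatio-temporal risk volume whose marginals give the time and position predictions jointly.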
In a second aspect, the invention provides a vehicle collision prediction method based on a vehicle-mounted camera, which comprises the following steps:
acquiring video data acquired by a vehicle-mounted camera;
deriving scale change features of the video data from its optical flow changes and foreground scale changes;
deriving motion approach features from the scale change features using a trained LGMD spiking neural network model;
deriving motion crossing features between the host vehicle and foreground objects from the acquired video data;
and performing collision prediction with the trained collision prediction neural network model on the acquired motion approach and motion crossing features, and outputting a collision time prediction result and a collision position prediction result.
The beneficial effects of the invention are as follows. The invention provides a vehicle collision prediction system and method based on a vehicle-mounted camera:
1. Compared with radar and similar approaches, the method completes vehicle collision prediction from camera video data alone, at lower overall cost.
2. Compared with existing visual methods, it adopts a biologically inspired model structure with biological interpretability and prior knowledge from natural evolution, which helps improve the reliability of vehicle collision prediction and meets the requirements of practical production applications in the field of autonomous driving.
3. It converts the problem of estimating scene depth into estimating the size change ratio of targets in the image, designing the model after a biological structure; this avoids both the need of visual methods to estimate scene depth (distance) for dangerous collision prediction and their low depth estimation accuracy.
4. Compared with radar and existing methods, it can simultaneously locate the targets that may cause collision danger and predict the time at which the danger may occur, providing key danger perception information for further planning and decision-making.
Drawings
The invention is further illustrated by the accompanying drawings, but the embodiments in the drawings do not limit the invention in any way; for a person skilled in the art, other drawings can be obtained from the following drawings without inventive effort.
FIG. 1 is a frame structure diagram of an embodiment of a vehicle-mounted camera-based vehicle collision prediction system according to the present invention;
FIG. 2 is a schematic diagram of the process of acquiring the first scale feature by the motion scale change module according to the embodiment of the present invention;
FIG. 3 is a schematic diagram of a process of acquiring a second scale feature by the motion scale change module according to the embodiment of the present invention;
FIG. 4 is a schematic diagram of a process of obtaining a motion proximity feature by an LGMD spiking neural network module according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a process of acquiring a motion crossing feature by a motion direction crossing module according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a process of acquiring a predicted result of collision time and a predicted result of collision position by a collision prediction module according to an embodiment of the present invention.
Detailed Description
The invention is further described in connection with the following application scenarios.
In nature, biological vision systems respond quickly and accurately to dangerous collision signals through mechanisms such as motion perception. For example, locusts flying in dense swarms do not collide with one another. Predator-prey relationships also exist between organisms: prey can effectively predict the motion of predators or other potentially dangerous targets in advance, even while moving themselves, which is crucial to avoiding danger and improving survivability. Research has found that the neuron locusts use to avoid collisions is the LGMD, the Lobula Giant Movement Detector.
At present, some LGMD-based obstacle avoidance research has been applied in the drone field, but it so far only meets the needs of scientific research. Unlike the drone obstacle avoidance scenario, in autonomous driving of automobiles the road conditions are more complicated, and the safety of the passengers on board is involved, so the required safety performance is higher. Therefore, the key technical problem this invention addresses is to reasonably incorporate the biological motion perception mechanism into computer vision methods within autonomous driving, so as to realize collision prediction for dangerous moving targets, improve the safety of autonomous driving, and push it toward the L5 stage.
Referring to FIG. 1, a vehicle-mounted camera-based vehicle collision prediction system is shown, comprising:
the video data acquisition module is used for acquiring the video data captured by the vehicle-mounted camera and transmitting it to the motion scale change module and the motion direction crossing module respectively;
the motion scale change module is used for deriving scale change features of the video data from its optical flow changes and foreground scale changes, and outputting them to the LGMD spiking neural network module;
the LGMD spiking neural network module is used for deriving motion approach features from the acquired scale change features with a trained LGMD spiking neural network model, and outputting them to the collision prediction module;
the motion direction crossing module is used for deriving motion crossing features between the host vehicle and foreground objects from the acquired video data, and outputting them to the collision prediction module;
and the collision prediction module is used for performing collision prediction with the trained collision prediction neural network model on the acquired motion approach and motion crossing features, and outputting a collision time prediction result and a collision position prediction result.
According to this embodiment, dangerous collision prediction in complex driving environments is carried out using the video data recorded by the vehicle-mounted camera: whether other traffic participants will collide with the ego vehicle can be predicted in advance, and the specific positions of potential collision objects can be predicted even earlier, providing key decision information for avoiding traffic accidents. Without using lidar or millimeter-wave radar, dangerous vehicle collisions from the first-person driving viewpoint can be predicted from vehicle-mounted video data alone.
The system design of the invention comprises four main modules: the motion scale change module, the LGMD spiking neural network module, the motion direction crossing module, and the collision prediction module. The vehicle-mounted real-time video is fed into two extraction paths for typical collision features: one uses the motion scale change module and the LGMD spiking neural network module to extract scale change features and motion approach features of moving objects; the other uses the motion direction crossing module to extract motion direction crossing features between moving objects and the ego vehicle. Based on these two typical collision features, the collision prediction model, under the joint action of a neural network attention mechanism and under temporal prediction and spatial localization constraints, finally outputs the predicted collision time and the position of the object predicted to collide.
The functions of the four main modules are as follows: the motion scale change module estimates the size change ratio of each object in the complex driving scene from vehicle-mounted video data, solving the problem of how to represent object motion scale from the first-person viewpoint; the LGMD spiking neural network module, from this scale representation of object motion, simulates the nerve impulse of a biological vision system toward approaching objects, solving the problem of how to quickly perceive approaching objects in a driving scene; the motion direction crossing module estimates the motion directions of each object and the ego vehicle from vehicle-mounted video data and judges the crossing of motion directions, solving the problem of how to express collision abnormality features; and the collision prediction module (a neural network learning module) uses a spatio-temporal attention mechanism to perform weighted fusion of the typical collision features obtained by the LGMD spiking neural network module and the motion direction crossing module, finally obtaining the predicted possible collision time and position, solving the problems of how to fuse multi-dimensional collision features and jointly predict collision time and position.
For each of the modules presented above:
In one embodiment, the lens of the vehicle-mounted camera faces the front of the vehicle and is used for capturing video data from the driver's viewpoint.
In one embodiment, the onboard camera includes a single camera, dual cameras, or multiple cameras.
In one embodiment, the video data acquisition module transmits the consecutive video frame sequence, in order, to the motion scale change module and the motion direction crossing module according to the received video data.
In one embodiment, the motion scale change module derives the scale change features of the video data from its optical flow changes and foreground scale changes, specifically as follows:
acquiring the video frame sequence transmitted by the video data acquisition module;
referring to FIG. 2, based on the acquired frame sequence, computing the optical flow field of the current frame with an optical flow extraction neural network, applying a local linear transformation to the flow field, and taking the determinant of the locally linearized matrix to obtain a coarse pixel-by-pixel scale change feature; then, from this coarse feature, computing a fine pixel-by-pixel optical flow scale change feature with a trained scale change neural network, taken as the first scale feature;
referring to FIG. 3, based on the acquired frame sequence, computing the foreground motion information of adjacent frames for each foreground object in the video, then obtaining the scale features of the adjacent frames through a trained scale feature neural network; applying a scale transformation to the scale features of the current frame to obtain its scale transformation features, and applying a contrast transformation between these and the scale features of the previous frame to obtain the foreground scale change features, taken as the second scale feature;
and fusing the first and second scale features to obtain the scale change features.
In one embodiment, the trained scale feature neural network is a scale pre-trained weight network composed of convolutional layers, rectified linear unit (ReLU) layers, and pooling layers;
the scale transformation operation comprises combined local scaling and grid sampling;
and the contrast transformation operation comprises corresponding-position division and corresponding-position superposition.
In one scenario, the optical flow field, the coarse pixel-by-pixel scale change feature, and the fine pixel-by-pixel optical flow scale change feature of the current frame are all expressed as feature matrices, as are the scale features of the adjacent frames, the scale transformation features of the current frame, and the foreground scale change features. The scale change features may be obtained as a final feature value matrix by weighted superposition of the first and second scale features.
In one scenario, the motion scale change module represents how the size proportion of foreground objects, such as vehicles, changes in the autonomous driving scene. For each foreground object, the corresponding size contrast change in the image is extracted, representing motion scale relations such as each moving object approaching or receding from the ego vehicle. The module's input is a continuous video frame sequence, and it works along two paths. 1. First, the optical flow field of the video is computed by the optical flow extraction neural network, yielding a pixel-by-pixel motion change matrix of the scene; the flow field is then locally linearized, with the local matrix at each pixel linearly approximating the optical flow change, and the matrix determinant is computed to express a coarse pixel-by-pixel scale change feature matrix; finally, a fine pixel-by-pixel optical flow scale change feature matrix is computed by the scale change neural network, which consists of an encoder-decoder structure with skip connections. 2. The foreground motion information of adjacent frames of each video object is computed, then the scale feature matrices of the adjacent frames are obtained through the scale feature neural network, a scale pre-trained weight network of convolutional, rectified linear unit, and pooling layers. The scale feature matrix of the current frame then undergoes a scale transformation, followed by a contrast transformation against the scale feature matrix obtained from the previous frame, finally yielding the scale change feature matrix.
The scale change feature matrix is the pixel-by-pixel ratio of the size of a foreground object in the current frame to its size in the previous frame, and represents the enlargement or shrinkage of the object's scale.
The above scale transformation operation consists of a combination of local scaling and grid sampling. The contrast transformation operation consists of corresponding-position division followed by corresponding-position superposition.
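The contrast transformation (corresponding-position division) and the weighted superposition that fuses the two scale features might be sketched as below; `eps` and `alpha` are illustrative placeholders, not values disclosed in the patent:

```python
import numpy as np

def foreground_scale_change(cur_scale, prev_scale, eps=1e-6):
    """Contrast transformation: corresponding-position division of the
    current frame's transformed scale features by the previous frame's,
    giving a per-position enlargement (>1) / shrinkage (<1) ratio."""
    return cur_scale / (prev_scale + eps)

def fuse_scale_features(first, second, alpha=0.5):
    """Weighted superposition of the first (optical-flow based) and
    second (foreground based) scale features; alpha is a placeholder."""
    return alpha * first + (1.0 - alpha) * second
```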
The video data includes foreground objects in the traffic scene, such as vehicles, pedestrians, motorcycles and bicycles.
In one embodiment, referring to fig. 4, in the LGMD impulse neural network module, acquiring a motion approach feature based on a trained LGMD impulse neural network model according to the acquired scale variation feature specifically includes:
the trained LGMD pulse neural network model comprises a perception layer, an excitation layer, an inhibition layer, a convergence layer, a lateral inhibition layer and an LGMD cell layer; wherein the output of the perception layer is connected to the inputs of the excitation layer, the inhibition layer and the lateral inhibition layer respectively; the outputs of the excitation layer and the inhibition layer are connected to the input of the convergence layer; the outputs of the convergence layer and the lateral inhibition layer are connected to the input of the LGMD cell layer;
the perception layer fuses the scale change features of multiple video frames over the time sequence; the excitation layer enhances the perceived motion features; the inhibition layer applies opposing suppression to the perceived motion features; the convergence layer integrates and balances the excitatory and inhibitory nerve impulses; the lateral inhibition layer suppresses drastic global changes of the scene motion features; and the LGMD cell layer, from the nerve impulses output by the convergence layer and the lateral inhibition layer, outputs the nerve impulse corresponding to an approaching moving object in the video data as the motion approach feature.
In one scenario, the scale change features received by the LGMD impulse neural network module are represented as a feature matrix, and the motion approach features output by the module are represented as a feature vector.
In one scenario, the LGMD pulse neural network module computes the temporal motion change relationship as a moving object approaches. Since the effective input of the module is scale change information, the scale change feature matrix output by the motion scale change module is fed into the LGMD pulse neural network module. By imitating the collision-sensing neurons of the locust, the module uses a spiking (pulse) neural network computational model to construct, in order, a perception layer, an excitation layer, an inhibition layer, a convergence layer, a lateral inhibition layer and an LGMD cell layer, establishing a biologically inspired motion perception spiking neural network model. The perception layer fuses multi-frame features over the time sequence; the excitation layer enhances the perceived motion features; the inhibition layer applies opposing suppression to them; the convergence layer balances the excitatory and inhibitory nerve impulses; the lateral inhibition layer suppresses drastic global changes of the scene motion features; and the LGMD cell layer takes the nerve impulses output by the convergence layer and the lateral inhibition layer as input and finally outputs the nerve impulse of the approaching moving object. When an object approaches rapidly, the corresponding scale feature matrix fed into the LGMD pulse neural network module produces a strong impulse response over the time sequence, from which a motion approach feature vector is extracted, so that the rapidly approaching object can be predicted in advance. Here, nerve impulses refer to the time-varying output responses produced by the convergence layer, the lateral inhibition layer and the LGMD cell layer.
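The layer stack described above can be illustrated with a minimal LGMD-style computation. The layer weights, the 3×3 spread of inhibition and the sigmoid squashing below are simplifying assumptions for illustration, not the patent's trained model:

```python
import numpy as np

def _blur3(img):
    """3x3 box blur modelling the lateral spread of inhibition."""
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def lgmd_response(scale_maps, w_inh=0.4, w_lat=0.2):
    """Per-frame membrane potential of a minimal LGMD-style network.

    scale_maps: sequence of HxW scale-change maps (perception-layer input).
    w_inh / w_lat are illustrative weights, not trained values.
    """
    prev = np.zeros_like(scale_maps[0])
    trace = []
    for p_t in scale_maps:
        e = p_t                                        # excitation layer
        i = _blur3(prev)                               # delayed, spread inhibition
        s = np.maximum(e - w_inh * i, 0.0)             # convergence layer (rectified)
        s = np.maximum(s - w_lat * _blur3(s), 0.0)     # lateral inhibition layer
        trace.append(1.0 / (1.0 + np.exp(-s.mean())))  # LGMD cell squashing
        prev = p_t
    return trace
```

Feeding a sequence in which a bright region expands frame by frame (a looming stimulus) produces a rising response trace, matching the "strong impulse response over the time sequence" described above.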
In one embodiment, referring to fig. 5, the motion direction crossing module, acquiring a motion crossing feature between itself and a foreground object according to an acquired video frame sequence, specifically includes:
acquiring a sequence of video frames transmitted by a data acquisition module;
acquiring horizontal rotation direction characteristics of a foreground object in a video frame through a trained three-dimensional target detection neural network;
acquiring the deflection angle characteristics of the ego vehicle's motion in a video frame through a trained ego-motion estimation neural network;
and performing motion direction crossing judgment on the acquired horizontal rotation direction characteristics and deflection angle characteristics, and calculating the angle between the motion rotation vectors to obtain pixel-by-pixel motion crossing characteristics.
In one scenario, the horizontal rotation direction features of the foreground object are represented as a feature vector; the deflection angle features of the ego vehicle's motion are represented as a feature vector; and the pixel-by-pixel motion crossing features are represented as a feature matrix.
In one scenario, the motion direction crossing module computes the motion direction of foreground objects such as vehicles in the automatic driving scene and judges whether it crosses the ego vehicle's motion direction. The input to the module is a sequence of video frames. The module obtains the horizontal rotation direction vector of each foreground object through a three-dimensional object detection neural network, obtains the deflection angle vector of the ego vehicle's motion through an ego-motion estimation neural network, then performs motion direction crossing discrimination between the two vectors and calculates the angle between the motion rotation vectors, obtaining a pixel-by-pixel spatial motion crossing feature matrix.
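The crossing discrimination between the two direction vectors can be sketched as an angle test between unit heading vectors; the 30°–150° crossing band is an illustrative threshold, not a value disclosed in the patent:

```python
import numpy as np

def crossing_angle(obj_yaw, ego_yaw):
    """Angle in [0, pi] between the foreground object's horizontal
    rotation direction and the ego vehicle's motion direction (radians)."""
    d = np.array([np.cos(obj_yaw), np.sin(obj_yaw)])
    e = np.array([np.cos(ego_yaw), np.sin(ego_yaw)])
    return float(np.arccos(np.clip(float(d @ e), -1.0, 1.0)))

def crossing_feature(obj_yaw, ego_yaw,
                     lo=np.deg2rad(30.0), hi=np.deg2rad(150.0)):
    """1.0 when the two motion directions cross transversally
    (angle inside [lo, hi]), else 0.0; thresholds are placeholders."""
    return 1.0 if lo <= crossing_angle(obj_yaw, ego_yaw) <= hi else 0.0
```

A perpendicular heading (e.g. a vehicle crossing an intersection) yields an angle of π/2 and a crossing feature of 1.0, while parallel or head-on motion falls outside the band.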
In one embodiment, referring to fig. 6, in the collision prediction module, according to the obtained motion approach feature and motion cross feature, performing collision prediction processing based on a trained collision prediction neural network model, and outputting a collision time prediction result and a collision position prediction result, specifically including: the collision prediction neural network model comprises a time attention network and a space attention network;
respectively acquiring motion approaching characteristics transmitted by the LGMD pulse neural network module and motion crossing characteristics transmitted by the motion direction crossing module;
the time attention network weights the key frame in a time domain according to the motion approaching characteristics to obtain a time attention weighting result;
the spatial attention network performs weighting of spatial abnormal positions in a spatial domain according to the motion cross characteristics to obtain a spatial attention weighting result;
and based on the temporal attention weighting result and the spatial attention weighting result, the collision prediction neural network model fuses the spatio-temporal attention features via a broadcast mechanism in a spatio-temporal fusion layer, and outputs the predicted specific time and the predicted specific spatial position at which a collision may occur.
In one scenario, the spatial attention weighting result includes a spatial salient region feature; the temporal attention weighting result includes a corresponding temporal attention feature for each frame.
In one scenario, the collision prediction module predicts the specific time point and the specific spatial position at which a foreground object such as a vehicle may collide. The inputs to the module are the time-domain motion approach feature vector and the spatial-domain motion crossing feature matrix. The module uses a multi-task deep learning method: the temporal attention network takes the motion approach feature vector as input and weights key frames in the time domain; the spatial attention network takes the motion crossing feature matrix as input and weights spatially abnormal positions in the spatial domain; broadcast mechanism fusion of the spatio-temporal attention features is then performed in a spatio-temporal fusion layer; and finally the predicted specific time and the predicted specific spatial position of a possible collision are output. The temporal attention network uses a single learnable one-dimensional network parameter layer, and the spatial attention network uses a pyramid structure stacking standard convolutional layers, rectified linear unit layers and pooling layers.
During training of the collision prediction neural network model, the temporal attention network takes the motion approach feature vector as input and weights key frames in the time domain, while the spatial attention network takes the motion crossing feature matrix as input and weights spatially abnormal positions in the spatial domain; the neural network model is jointly optimized under two key prior constraints, a temporal prediction constraint and a spatial localization constraint. The temporal prediction constraint restricts prediction to the period from the appearance of the target until the actual collision occurs. The spatial localization constraint requires the collision region to be a homogeneous image region with an area no smaller than a given threshold. Joint optimization refers to multi-task learning of the neural network, expressed as a weighted combination of multiple loss functions.
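The weighted multi-task combination with the area prior might be sketched as follows; the loss weights and the `min_area` threshold are illustrative placeholders, and the two task losses are assumed to be precomputed scalars:

```python
import numpy as np

def area_prior_penalty(region_mask, min_area=50):
    """Spatial localization prior: penalize predicted collision regions
    whose pixel area falls below min_area (threshold is a placeholder)."""
    return max(0.0, (min_area - float(region_mask.sum())) / min_area)

def joint_loss(l_time, l_space, region_mask, weights=(1.0, 1.0, 0.2)):
    """Weighted multi-task combination of the temporal prediction loss,
    the spatial localization loss and the area prior; weights are illustrative."""
    w_t, w_s, w_p = weights
    return w_t * l_time + w_s * l_space + w_p * area_prior_penalty(region_mask)
```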
In one scenario, the time-domain motion approach feature is a one-dimensional vector for which the LGMD cell layer produces an output with an increasingly steep rising gradient over time. The spatial-domain motion direction crossing feature is a two-dimensional matrix of spatial positions that may cross the ego vehicle's motion direction.
In one scenario, broadcast mechanism fusion specifically multiplies each frame's scalar value of the weighted one-dimensional temporal feature obtained through the temporal attention network by all the values of the corresponding frame of the weighted two-dimensional spatial feature obtained through the spatial attention network.
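This per-frame scalar-times-map product is exactly NumPy broadcasting over the trailing axes; the peak read-out below is an illustrative way to turn the fused volume into a time/position prediction, not the patent's output head:

```python
import numpy as np

def broadcast_fuse(t_weights, s_maps):
    """t_weights: (T,) temporal attention scalar per frame;
    s_maps: (T, H, W) spatial attention maps. Each frame's scalar
    multiplies every value of that frame's spatial map."""
    return t_weights[:, None, None] * s_maps

def predict_time_and_place(t_weights, s_maps):
    """Illustrative read-out: frame index and pixel of the fused peak."""
    fused = broadcast_fuse(t_weights, s_maps)
    t, y, x = np.unravel_index(int(fused.argmax()), fused.shape)
    return int(t), int(y), int(x)
```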
The system also comprises a model training module for training the motion scale change module, the LGMD pulse neural network module, the motion direction crossing module and the collision prediction module. Specifically: two classes of driving data collected by the vehicle-mounted camera, collision and non-collision videos, are annotated with time-sequence label information to train the motion scale change module and the LGMD pulse neural network module. The motion directions of foreground vehicles and of the ego vehicle are annotated on the collected driving data to train the motion direction crossing module. The non-collision and collision data are then passed through the preceding modules to obtain the two classes of feature information (the motion approach features and the motion crossing features). The main neural network of the collision prediction module is trained on these two classes of features together with the temporal and spatial collision labels. Finally, joint fine-tuning of the three parts yields the corresponding neural networks in the modules.
In a second aspect, the invention provides a vehicle collision prediction method based on a vehicle-mounted camera, which comprises the following steps:
acquiring video data acquired by a vehicle-mounted camera;
acquiring scale change characteristics of the video data according to the acquired optical flow change and the foreground scale change of the video data;
obtaining a motion approach characteristic based on a trained LGMD pulse neural network model according to the obtained scale change characteristic;
acquiring the motion cross characteristics of the video data and the foreground object according to the acquired video data;
and performing collision prediction processing based on the trained collision prediction neural network model according to the acquired motion approaching characteristic and the motion crossing characteristic, and outputting a collision time prediction result and a collision position prediction result.
In one embodiment, the method further comprises training a motion scale variation module, an LGMD impulse neural network module, a motion direction crossing module, and a collision prediction module.
It should be noted that the vehicle collision prediction method based on the vehicle-mounted camera proposed in the present application further encompasses the processing methods provided by each module and the corresponding embodiments of the vehicle collision prediction system based on the vehicle-mounted camera, which are not repeated here.
The embodiment of the invention provides an automobile collision prediction system and method based on a vehicle-mounted camera, and the system and method have the following beneficial effects:
1. Compared with radar and similar sensors, the method completes vehicle collision prediction based on video data acquired by a camera, at a lower overall cost.
2. In prior technical schemes that perform collision prediction based on machine vision, an artificial neural network is usually trained directly on the acquired video image data and calibration results. Although the trained artificial neural network can output a corresponding judgment, it cannot be known (explained) on which factors the judgment is based. In the field of automatic driving, when the safety of a model cannot be explained, the model cannot be put into applications that actually involve life safety, and such opacity also hinders further improvement and optimization of the collision prediction model. Compared with existing visual methods, the present method adopts a biologically inspired model structure, which provides biological interpretability and prior knowledge from natural evolution, helps improve the reliability of vehicle collision prediction, and meets the requirements for practical production applications in the field of automatic driving.
3. The method converts the problem of estimating scene depth into estimating the size change ratio of the target in the image, and designs the model with reference to biological structures. This avoids the need of visual methods to estimate scene depth (distance) for dangerous collision prediction, and likewise avoids the low accuracy of depth estimation results in visual methods.
4. Compared with radar and existing methods, the method can simultaneously locate the targets that may cause a collision danger and predict the time points at which the danger may occur, providing key danger perception information for further planning and decision-making.
It should be noted that, functional units/modules in the embodiments of the present invention may be integrated into one processing unit/module, or each unit/module may exist alone physically, or two or more units/modules are integrated into one unit/module. The integrated units/modules may be implemented in the form of hardware, or may be implemented in the form of software functional units/modules.
From the above description of embodiments, it is clear to a person skilled in the art that the embodiments described herein can be implemented in hardware, software, firmware, middleware, code or any appropriate combination thereof. For a hardware implementation, a processor may be implemented in one or more of the following units: an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, other electronic units designed to perform the functions described herein, or a combination thereof. For a software implementation, some or all of the flow of the embodiments may be accomplished by a computer program instructing the associated hardware. In practice, the program may be stored on, or transmitted as one or more instructions or code over, a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. Computer-readable media can include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the protection scope of the present invention. Although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made to the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (6)

1. A vehicle collision prediction system based on a vehicle-mounted camera, characterized by comprising:
the video data acquisition module is used for acquiring video data acquired by the vehicle-mounted camera and respectively transmitting the video data to the movement scale change module and the movement direction crossing module;
the motion scale change module is used for acquiring scale change characteristics of the video data according to the optical flow change and the foreground scale change of the acquired video data and outputting the scale change characteristics to the LGMD pulse neural network module; the method specifically comprises the following steps:
acquiring a sequence of video frames transmitted by a data acquisition module;
based on the obtained video frame sequence, adopting an optical flow extraction neural network to calculate an optical flow field of the current video frame, carrying out local linear transformation on the optical flow field, carrying out determinant calculation on a matrix subjected to local linear transformation to obtain primary pixel-by-pixel scale change characteristics, and based on the obtained primary pixel-by-pixel scale change characteristics, calculating and obtaining fine pixel-by-pixel optical flow scale change characteristics as first scale characteristics by using a trained scale change neural network;
calculating the foreground motion information of adjacent frames of a foreground object in the video based on the acquired video frame sequence, and then acquiring the scale characteristics of the adjacent frames through a trained scale characteristic neural network; carrying out scale transformation on the scale features of the current frame to obtain the scale transformation features of the current frame, and carrying out contrast transformation on the obtained scale transformation features of the current frame and the scale features of the previous frame to obtain foreground scale transformation features serving as second scale features;
the scale change features are obtained by performing fusion processing according to the first scale features and the second scale features;
the LGMD pulse neural network module is used for acquiring motion approaching characteristics based on a trained LGMD pulse neural network model according to the acquired scale change characteristics and outputting the motion approaching characteristics to the collision prediction module; the method specifically comprises the following steps:
the trained LGMD pulse neural network model comprises a perception layer, an excitation layer, an inhibition layer, a convergence layer, a lateral inhibition layer and an LGMD cell layer; wherein the output of the perception layer is respectively connected with the input of the excitation layer, the input of the inhibition layer and the input of the lateral inhibition layer; the output of the excitation layer and the output of the inhibition layer are respectively connected with the input of the convergence layer; the output of the convergence layer and the output of the lateral inhibition layer are respectively connected with the input of the LGMD cell layer;
the perception layer fuses the scale change features corresponding to multiple video frames over the time sequence; the excitation layer enhances the perceived motion features; the inhibition layer applies opposing suppression to the perceived motion features; the convergence layer integrates and balances the excitatory and inhibitory nerve impulses; the lateral inhibition layer suppresses drastic global changes of the scene motion features; the LGMD cell layer outputs, from the nerve impulses output by the convergence layer and the lateral inhibition layer, the nerve impulses generated by an approaching moving object in the video data as the motion approach features;
the motion direction crossing module is used for acquiring motion crossing characteristics of the motion direction crossing module and the foreground object according to the acquired video data and outputting the motion crossing characteristics to the collision prediction module;
and the collision prediction module is used for performing collision prediction processing based on the trained collision prediction neural network model according to the acquired motion approaching characteristic and the motion cross characteristic and outputting a collision time prediction result and a collision position prediction result.
2. The vehicle camera-based collision prediction system for a vehicle as claimed in claim 1, wherein the lens of the vehicle camera is aimed at the front of the vehicle for capturing video data under the driving vision of the vehicle.
3. The vehicle-mounted camera-based vehicle collision prediction system according to claim 1, wherein the video data acquisition module sequentially transmits a continuous video frame sequence to the motion scale change module and the motion direction crossing module in time order according to the received video data.
4. The vehicle-mounted camera-based vehicle collision prediction system according to claim 1, wherein the motion direction crossing module acquires a motion crossing feature between itself and a foreground object according to the acquired video frame sequence, and specifically includes:
acquiring a sequence of video frames transmitted by a data acquisition module;
acquiring horizontal rotation direction characteristics of a foreground object in a video frame through a trained three-dimensional target detection neural network;
acquiring the deflection angle characteristics of the ego vehicle's motion in the video frame through a trained ego-motion estimation neural network;
and carrying out motion direction crossing judgment according to the acquired horizontal rotation direction characteristics and deflection angle characteristics, and calculating the included angle of the motion rotation vector to obtain pixel-by-pixel motion crossing characteristics.
5. The vehicle-mounted camera-based vehicle collision prediction system according to claim 4, wherein the collision prediction neural network model comprises a temporal attention network and a spatial attention network;
the collision prediction module is used for performing collision prediction processing based on a trained collision prediction neural network model according to the acquired motion approaching characteristic and the motion crossing characteristic, and outputting a collision time prediction result and a collision position prediction result, and specifically comprises the following steps:
respectively acquiring motion approaching characteristics transmitted by the LGMD pulse neural network module and motion crossing characteristics transmitted by the motion direction crossing module;
the time attention network weights the key frame in a time domain according to the motion approaching characteristics to obtain a time attention weighting result;
the spatial attention network performs weighting of spatial abnormal positions in a spatial domain according to the motion cross characteristics to obtain a spatial attention weighting result;
and based on the temporal attention weighting result and the spatial attention weighting result, the collision prediction neural network model fuses the spatio-temporal attention features via a broadcast mechanism in a spatio-temporal fusion layer, and outputs the predicted specific time and the predicted specific spatial position at which a collision may occur.
6. A vehicle collision prediction method based on a vehicle-mounted camera, characterized by comprising the following steps:
acquiring video data acquired by a vehicle-mounted camera;
acquiring scale change characteristics of the video data according to the optical flow change and the foreground scale change of the acquired video data; the method specifically comprises the following steps:
acquiring a sequence of video frames transmitted by a data acquisition module;
based on the acquired video frame sequence, adopting an optical flow extraction neural network to calculate an optical flow field of the current video frame, performing local linear transformation on the optical flow field, performing determinant calculation on a matrix subjected to the local linear transformation to acquire a primary pixel-by-pixel scale change feature, and based on the acquired primary pixel-by-pixel scale change feature, calculating and acquiring a fine pixel-by-pixel optical flow scale change feature as a first scale feature by using a trained scale change neural network;
calculating the foreground motion information of adjacent frames of a foreground object in the video based on the acquired video frame sequence, and then acquiring the scale characteristics of the adjacent frames through a trained scale characteristic neural network; carrying out scale transformation on the scale features of the current frame to obtain the scale transformation features of the current frame, and carrying out contrast transformation on the obtained scale transformation features of the current frame and the scale features of the previous frame to obtain foreground scale transformation features serving as second scale features;
the scale change features are obtained by performing fusion processing according to the first scale features and the second scale features;
obtaining a motion approach characteristic based on a trained LGMD pulse neural network model according to the obtained scale change characteristic; the method specifically comprises the following steps:
the trained LGMD pulse neural network model comprises a perception layer, an excitation layer, an inhibition layer, a convergence layer, a lateral inhibition layer and an LGMD cell layer; wherein the output of the perception layer is respectively connected with the input of the excitation layer, the input of the inhibition layer and the input of the lateral inhibition layer; the output of the excitation layer and the output of the inhibition layer are respectively connected with the input of the convergence layer; the output of the convergence layer and the output of the lateral inhibition layer are respectively connected with the input of the LGMD cell layer;
the perception layer fuses the scale change features corresponding to multiple video frames over the time sequence; the excitation layer enhances the perceived motion features; the inhibition layer applies opposing suppression to the perceived motion features; the convergence layer integrates and balances the excitatory and inhibitory nerve impulses; the lateral inhibition layer suppresses drastic global changes of the scene motion features; the LGMD cell layer outputs, from the nerve impulses output by the convergence layer and the lateral inhibition layer, the nerve impulses generated by an approaching moving object in the video data as the motion approach features;
acquiring the motion cross characteristics of the video data and the foreground object according to the acquired video data;
and performing collision prediction processing based on the trained collision prediction neural network model according to the acquired motion approaching characteristic and the motion crossing characteristic, and outputting a collision time prediction result and a collision position prediction result.
CN202110915606.8A 2021-08-10 2021-08-10 Vehicle-mounted camera-based vehicle collision prediction system and method Active CN113792598B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110915606.8A CN113792598B (en) 2021-08-10 2021-08-10 Vehicle-mounted camera-based vehicle collision prediction system and method


Publications (2)

Publication Number Publication Date
CN113792598A CN113792598A (en) 2021-12-14
CN113792598B true CN113792598B (en) 2023-04-14

Family

ID=78875855


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115604434A * 2022-05-13 2023-01-13 深圳时识科技有限公司 (CN) Ultra-low power consumption monitoring device and method
CN115431968B (en) * 2022-11-07 2023-01-13 北京集度科技有限公司 Vehicle controller, vehicle and vehicle control method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110688877A (en) * 2018-07-05 2020-01-14 杭州海康威视数字技术股份有限公司 Danger early warning method, device, equipment and storage medium
CN112349144A (en) * 2020-11-10 2021-02-09 中科海微(北京)科技有限公司 Monocular vision-based vehicle collision early warning method and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8232872B2 (en) * 2009-12-03 2012-07-31 GM Global Technology Operations LLC Cross traffic collision alert system
US20170293837A1 (en) * 2016-04-06 2017-10-12 Nec Laboratories America, Inc. Multi-Modal Driving Danger Prediction System for Automobiles
CN107972662B (en) * 2017-10-16 2019-12-10 华南理工大学 Vehicle forward collision early warning method based on deep learning


Also Published As

Publication number Publication date
CN113792598A (en) 2021-12-14

Similar Documents

Publication Publication Date Title
US10943355B2 (en) Systems and methods for detecting an object velocity
US20240144010A1 (en) Object Detection and Property Determination for Autonomous Vehicles
US11217012B2 (en) System and method for identifying travel way features for autonomous vehicle motion control
US10860896B2 (en) FPGA device for image classification
US10310087B2 (en) Range-view LIDAR-based object detection
US10108867B1 (en) Image-based pedestrian detection
US20200217950A1 (en) Resolution of elevation ambiguity in one-dimensional radar processing
JP7239703B2 (en) Object classification using extraterritorial context
Bai et al. Robust detection and tracking method for moving object based on radar and camera data fusion
US20190310651A1 (en) Object Detection and Determination of Motion Information Using Curve-Fitting in Autonomous Vehicle Applications
Jebamikyous et al. Autonomous vehicles perception (AVP) using deep learning: Modeling, assessment, and challenges
KR20220119396A (en) Estimation of object size using camera map and/or radar information
EP3822852B1 (en) Method, apparatus, computer storage medium and program for training a trajectory planning model
US11827214B2 (en) Machine-learning based system for path and/or motion planning and method of training the same
US20230213643A1 (en) Camera-radar sensor fusion using local attention mechanism
CN113792598B (en) Vehicle-mounted camera-based vehicle collision prediction system and method
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
Wang et al. [Retracted] Sensor‐Based Environmental Perception Technology for Intelligent Vehicles
CN115083088A (en) Railway perimeter intrusion early warning method
CN115879060A (en) Multi-mode-based automatic driving perception method, device, equipment and medium
CN117387647A (en) Road planning method integrating vehicle-mounted sensor data and road sensor data
CN115115084A (en) Predicting future movement of an agent in an environment using occupancy flow fields
Xu et al. [Retracted] Multiview Fusion 3D Target Information Perception Model in Nighttime Unmanned Intelligent Vehicles
CN113569803A (en) Multi-mode data fusion lane target detection method and system based on multi-scale convolution
CN112766100A (en) 3D target detection method based on key points

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant