CN113822221A - Target detection method based on adversarial neural network and multi-sensor fusion - Google Patents

Target detection method based on adversarial neural network and multi-sensor fusion

Info

Publication number
CN113822221A
Authority
CN
China
Prior art keywords
data
millimeter wave radar
neural network
night
Prior art date
Legal status
Pending
Application number
CN202111177982.8A
Other languages
Chinese (zh)
Inventor
***
王展
张自宇
栾众楷
赵万忠
王春燕
周冠
Current Assignee
Nanjing Tianhang Intelligent Equipment Research Institute Co ltd
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing Tianhang Intelligent Equipment Research Institute Co ltd
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing Tianhang Intelligent Equipment Research Institute Co ltd and Nanjing University of Aeronautics and Astronautics
Priority to CN202111177982.8A
Publication of CN113822221A
Legal status: Pending

Classifications

    • G06F16/583: Information retrieval of still image data; retrieval characterised by using metadata automatically derived from the content
    • G06F18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/23: Pattern recognition; clustering techniques
    • G06F18/24: Pattern recognition; classification techniques
    • G06F18/25: Pattern recognition; fusion techniques
    • G06N3/04: Neural networks; architecture, e.g. interconnection topology
    • G06N3/08: Neural networks; learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a target detection method based on an adversarial neural network and multi-sensor fusion, belonging to the field of target detection and environment perception for unmanned driving. To address the problem that obstacles at night cannot be detected accurately by existing methods, the proposed method fuses a millimeter wave radar with a vision sensor and trains an adversarial neural network that converts night images into daytime images of the same scene. Millimeter wave radar processing consists mainly of data clustering and preliminary screening of valid targets. The preprocessed radar data and the daytime image output by the adversarial network are unified in time and in a common spatial coordinate system; registered pre-sampled data of the same scene are then subtracted from both to obtain the final processed radar and image data. The two data streams are finally fused to produce the night obstacle detection result. The method greatly improves the robustness of target detection at night.

Description

Target detection method based on adversarial neural network and multi-sensor fusion
Technical Field
The invention belongs to the field of target detection and environment perception for unmanned driving, and particularly relates to a sensor-fusion target detection method under night working conditions.
Background
With the continuous pursuit of quality of life and ongoing technological innovation, unmanned driving has developed rapidly; as a key technology for unmanned driving, the accuracy of target detection largely determines driving safety. The three sensors mainly used for target detection are lidar, millimeter wave radar and cameras. Cameras are relatively cheap, and camera-based image recognition has developed rapidly; its principle is to train a model in advance with predefined target classes, extract and analyze features from the captured pictures, and quickly classify and detect the targets in them. Image data nevertheless has some fatal weaknesses: targets are hard to detect and classify under poor lighting conditions or severe exposure. Lidar is a newer development; its strong robustness, rich data and immunity to illumination make it a likely mainstream sensor in the future, but it is currently limited by its high price, a data volume too large to process in real time, and severe degradation in rain and fog. Millimeter wave radar, originally applied mainly in the military field, is now widely used in unmanned vehicles, unmanned aerial vehicles, intelligent transportation and many other fields. Its main advantages are strong penetration and insensitivity to dust, rain and illumination, so it can operate all-weather under severe conditions; its main shortcomings are low accuracy and low resolution.
At present, research on target detection focuses mainly on conditions with good illumination. However, future unmanned driving must perform all-weather detection, so road vehicles must still be detected under poor illumination such as at night, and some new research on night target detection has appeared. For example, Chinese invention patent CN111965636A, "A night target detection method based on millimeter wave radar and vision fusion", uses a millimeter wave radar to extract a region of interest, brightens the image region corresponding to that region of interest, and then detects and classifies targets with deep learning. Although that patent combines a millimeter wave radar and a vision sensor, it does not exploit the complementary advantages of the two sensors under changing illumination: the region of interest is extracted by the radar alone, so the radar's limited accuracy easily causes missed detections, and brightening the region-of-interest image before detection tends to distort the data and cause false detections. Chinese patent CN106251355B, "A detection method fusing a visible light image and a corresponding night vision infrared image", processes the visible light image and the infrared image separately to obtain saliency images and then fuses them; however, visible light and night vision images alone cannot provide the distance to an obstacle target, which remains a defect for unmanned driving applications.
Disclosure of Invention
In view of the above deficiencies of the prior art, the invention aims to provide a detection method based on adversarial neural network and multi-sensor fusion, so as to solve the problem that obstacles at night cannot be detected accurately by the prior art. The method fuses a millimeter wave radar and a vision sensor, and samples obstacle-free road images of different road sections under good illumination as a pre-sampling database. Images sampled by the vision sensor at night are processed with an adversarial neural network: after training, the network takes a night image as input and outputs a daytime image of the same scene. Millimeter wave radar processing consists mainly of data clustering and preliminary screening of valid targets. After the preprocessed data of the vision sensor and the millimeter wave radar are obtained, the daytime picture output by the adversarial network is compared with the pre-sampling database of the current road section, the view of the same scene is screened out, and image registration is carried out. The preprocessed radar data and the generated daytime image are then unified in time and in a common spatial coordinate system, and the registered same-scene pre-sampled data are subtracted from both to obtain the final processed radar and image data. The two parts of data are finally fused to obtain the night obstacle detection result.
To achieve this purpose, the invention adopts the following technical scheme:
The invention relates to a night target detection method based on the fusion of an adversarial neural network and multiple sensors, which comprises the following steps:
step 1): establishing an obstacle-free road pre-sampling database under a good illumination condition;
step 2): sampling night driving data of a millimeter wave radar and a vision sensor;
Further, step 2) specifically includes:
Step 21): collecting millimeter wave radar point cloud data of the unmanned vehicle during night driving, including the point cloud distribution and the distances between points;
Step 22): collecting RGB image data of the unmanned vehicle during night driving.
Step 3): and training the antagonistic neural network and the target detection network by utilizing the established database, so that the antagonistic neural network can generate and output daytime images in the same scene after inputting the nighttime images. Enabling the target detection network to identify and classify obstacles in the image.
Further, step 3) specifically includes:
Step 31): training the adversarial neural network requires day and night databases of the same scenes, but capturing day and night images of exactly the same scene is basically impossible. A vehicle-mounted camera is therefore first used to acquire daytime images to establish a daytime data set A, and a matching night same-scene data set B is obtained by adjusting the contrast and brightness of the images in A. During training of the adversarial model, the daytime data set A is used as the real-sample input of the discriminator, and the matching night data set is used as the input of the generator. The network is trained with mini-batch stochastic gradient descent (SGD) at a learning rate of 0.0002 and a mini-batch size of 128. A normal distribution with zero mean and 0.02 standard deviation is used to initialize the weight parameters of each layer.
Step 32): the target detection network adopts a Yolov3 network, and since a database required by the Yolov3 network needs to be labeled, a target detection data set C needs to be established and labeled. Pictures in the data set adopt images taken by a vehicle camera and some high-quality images found by a network to expand the content of the database. And then, making a data set by using the VOC, and training the network through the data set.
Step 4): and clustering the millimeter wave radar sampling data.
Step 5): comparing the generated daytime image with a pre-sampling database, screening out pre-sampling data in the same scene, and carrying out image registration;
step 6): and (3) performing space-time unification on the millimeter wave radar data and the visual data, subtracting the millimeter wave radar data and the visual data from the pre-sampling data after registration, and detecting the millimeter wave radar data and the visual data after subtraction.
Further, step 6) specifically includes:
Step 61): the space-time unification comprises spatial synchronization and time synchronization. Spatial synchronization transforms the point cloud data collected by the millimeter wave radar from the radar coordinate system to the pixel coordinate system. For time synchronization, a pulse generator is set to trigger at the scanning frequency of the millimeter wave radar; each trigger acquires the current frame of radar and camera data, and if no image sample exists at that moment, it is interpolated from the data of the preceding and following moments.
Step 62): obtaining millimeter wave radar data and visual data after space-time unification according to the step 61), then taking the pre-sampling data after registration obtained in the step 5) as background data, respectively subtracting the millimeter wave radar data and the visual data from the pre-sampling data, and detecting the millimeter wave radar data and the visual data by using a trained yolov3 neural network.
Step 7): and carrying out target matching and data fusion on the two detection data, and outputting a final night obstacle detection result.
Further, the pre-sampling database mentioned in step 1) consists of road pictures taken by a vehicle-mounted camera while the vehicle drives on the road under good weather conditions. The sampling vehicle speeds on expressways and first-, second-, third- and fourth-class roads are 100 km/h, 80 km/h, 70 km/h, 60 km/h and 40 km/h respectively, and the sampling frequency is 30 FPS. After the road data are sampled, obstacle information such as vehicles and pedestrians is removed from the images manually, leaving only the road background information.
Further, the millimeter wave radar clustering strategy mentioned in step 4) is as follows (a code sketch follows the list):
1. Selecting any point in a frame of radar scan data as the initial cluster center;
2. calculating, for a data point $P_{i+1}(R_{i+1},\theta_{i+1})$ in the same frame of radar scan data and a cluster center $P_i(R_i,\theta_i)$, the Manhattan distance $\Delta R_i$ and the velocity deviation $\Delta V_i$ between them: $\Delta R_i = |R_{i+1}-R_i|$, $\Delta V_i = |V_{i+1}-V_i|$;
3. comparing $\Delta R_i$ and $\Delta V_i$ with the set thresholds $R_{th}$ and $V_{th}$ respectively; if both are below the thresholds, the two points belong to the same cluster and the cluster center is updated to the mean of the two points; otherwise a new cluster is established with $P_{i+1}(R_{i+1},\theta_{i+1})$ as its center;
4. if several cluster centers already exist, calculating in turn the deviations $\Delta R$ and $\Delta V$ between $P_{i+1}(R_{i+1},\theta_{i+1})$ and each cluster center; if $\Delta R > R_{th}$ and $\Delta V > V_{th}$ hold for all centers, establishing a new cluster with this point as its center, otherwise assigning the point to the cluster of the nearest center;
5. repeating the above steps until all radar data points of the same frame have been processed.
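The sketch announced above: a one-pass Python implementation of this strategy. Each point is assumed to carry (range, azimuth, radial velocity), the thresholds are tuning parameters, and updating the center as the running mean of its members is one reading of step 3.

```python
def cluster_radar_points(points, r_th, v_th):
    """Cluster one frame of radar points. points: list of (R, theta, V)."""
    clusters, centers = [], []
    for p in points:
        best, best_dr = None, None
        for k, c in enumerate(centers):
            dr, dv = abs(p[0] - c[0]), abs(p[2] - c[2])
            if dr <= r_th and dv <= v_th:          # same cluster as center k
                if best is None or dr < best_dr:
                    best, best_dr = k, dr          # remember nearest center
        if best is None:
            clusters.append([p])                   # new cluster, p is its center
            centers.append(p)
        else:
            clusters[best].append(p)
            n = len(clusters[best])                # running mean of members
            centers[best] = tuple((c * (n - 1) + x) / n
                                  for c, x in zip(centers[best], p))
    return clusters
```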
Further, the image registration mentioned in step 5) comprises three steps: key point detection and feature description, feature matching, and image transformation.
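A sketch of those three steps with OpenCV. ORB key points, brute-force Hamming matching and a RANSAC-estimated homography are one plausible realization, chosen here for illustration; the patent does not name the specific detector or transform.

```python
import cv2
import numpy as np

def register_images(moving, fixed, max_matches=200):
    """Warp `moving` onto `fixed`: key point detection and description
    (ORB), feature matching (brute force), image transformation
    (RANSAC homography)."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(moving, None)
    kp2, des2 = orb.detectAndCompute(fixed, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2),
                     key=lambda m: m.distance)[:max_matches]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = fixed.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))
```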
Further, the visual data mentioned in step 6) are the night images taken by the vehicle-mounted camera and the daytime views generated by the generator of the adversarial neural network.
Further, the target matching mentioned in step 7) comprises calculating target similarity, matching targets across different sensors, and matching targets of the same sensor with historical targets.
Further, the data fusion mentioned in step 7) is a linear combination based on weight coefficients; since the millimeter wave radar and the camera observe the target almost independently, the covariance matrices of the two sensors are used to weight the target parameters as follows:
$$X_{ij} = P_j (P_i + P_j)^{-1} X_i + P_i (P_j + P_i)^{-1} X_j$$

$$P = P_i (P_i + P_j)^{-1} P_j$$

where $X_i$ denotes the relevant parameters of the $i$-th millimeter wave radar target obtained through state estimation and $P_i$ the corresponding radar covariance matrix; $X_j$ denotes the relevant parameters of the $j$-th camera target obtained through state estimation and $P_j$ the corresponding camera covariance matrix; $X_{ij}$ denotes the target parameters obtained through the fusion processing.
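A direct NumPy transcription of the two formulas above; the dimensions of the state vectors and covariance matrices are assumptions, and a small hypothetical 2-state example follows the function.

```python
import numpy as np

def fuse_targets(x_i, P_i, x_j, P_j):
    """Covariance-weighted fusion of one radar target and one camera
    target: X_ij = P_j (P_i+P_j)^-1 X_i + P_i (P_j+P_i)^-1 X_j,
            P    = P_i (P_i+P_j)^-1 P_j."""
    S_inv = np.linalg.inv(P_i + P_j)
    x = P_j @ S_inv @ x_i + P_i @ S_inv @ x_j
    P = P_i @ S_inv @ P_j
    return x, P

# Hypothetical 2-state example (longitudinal distance and speed):
x_r, P_r = np.array([20.1, 5.2]), np.diag([0.04, 0.25])  # radar: good range
x_c, P_c = np.array([19.2, 5.0]), np.diag([1.00, 1.00])  # camera: coarser
x_f, P_f = fuse_targets(x_r, P_r, x_c, P_c)               # pulled toward radar
```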
The invention has the following beneficial effects:
aiming at the difficulty of detecting obstacles at night, the invention fuses a millimeter wave radar with a vision sensor; considering the poor robustness of vision-only obstacle detection at night, an adversarial neural network is trained so that, given night data, its generator produces the corresponding daytime image, and the daytime image is then fused with the millimeter wave radar data for detection, which greatly improves detection robustness.
Drawings
FIG. 1 is a flow chart of multi-sensor fusion detection based on the adversarial neural network;
FIG. 2 is a schematic diagram of the adversarial neural network.
Detailed Description
The technical scheme of the invention is explained in further detail below with reference to the attached drawings:
Embodiment
The embodiment carries out the method of steps 1) to 7) described in the Disclosure above: the overall fusion detection flow is shown in FIG. 1, and the adversarial neural network used to convert night images into daytime images of the same scene is shown in FIG. 2.
The above embodiments are merely illustrative of the technical ideas of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like based on the technical ideas of the present invention should be included in the scope of the present invention.

Claims (10)

1. A detection method based on the fusion of an adversarial neural network and multiple sensors, characterized in that a millimeter wave radar and a vision sensor are used for fusion detection; obstacle-free road images of different road sections under good illumination are sampled as a pre-sampling database; the images sampled by the vision sensor at night are processed with an adversarial neural network which, after training, takes the night image as input and outputs a daytime image of the same scene; the millimeter wave radar data processing mainly comprises data clustering and preliminary screening of valid targets; after the preprocessed data of the vision sensor and the millimeter wave radar are obtained, the daytime image output by the adversarial network is compared with the pre-sampling database of the current road section, the view of the same scene is screened out, and image registration is carried out; the preprocessed millimeter wave radar data and the daytime image output by the adversarial network are then unified in time and in a common spatial coordinate system, the registered same-scene pre-sampled data are subtracted from both to obtain the final processed data, and the two parts of data are fused to obtain the final night obstacle detection result.
2. The detection method based on the fusion of an adversarial neural network and multiple sensors as claimed in claim 1, characterized in that the night target detection comprises the following steps:
step 1: establishing an obstacle-free road pre-sampling database under good illumination;
step 2: sampling night driving data with the millimeter wave radar and the vision sensor;
step 3: training the adversarial neural network and the target detection network with the established databases, so that the adversarial network generates and outputs a daytime image of the same scene from a night image input, and the target detection network identifies and classifies obstacles in the image;
step 4: clustering the millimeter wave radar sampling data;
step 5: comparing the generated daytime image with the pre-sampling database, screening out pre-sampled data of the same scene, and carrying out image registration;
step 6: performing space-time unification of the millimeter wave radar data and the visual data, subtracting both from the registered pre-sampled data, and detecting the subtracted data;
step 7: carrying out target matching and data fusion on the two detection results, and outputting the final night obstacle detection result.
3. The detection method based on the fusion of an adversarial neural network and multiple sensors as claimed in claim 1, characterized in that step 2 specifically comprises:
step 21: collecting millimeter wave radar point cloud data of the unmanned vehicle during night driving, including the point cloud distribution and the distances between points;
step 22: collecting RGB image data of the unmanned vehicle during night driving.
4. The detection method based on the fusion of an adversarial neural network and multiple sensors as claimed in claim 1, characterized in that step 3 specifically comprises:
step 31: training the adversarial neural network requires day and night databases of the same scenes, but capturing day and night images of exactly the same scene is basically impossible; a vehicle-mounted camera is therefore first used to acquire daytime images to establish a daytime data set A, and a matching night same-scene data set B is obtained by adjusting the contrast and brightness of the images in A; during training of the adversarial model, the daytime data set A is used as the real-sample input of the discriminator and the matching night data set as the input of the generator; the network is trained with mini-batch stochastic gradient descent at a learning rate of 0.0002 and a mini-batch size of 128, and a normal distribution with zero mean and 0.02 standard deviation is used to initialize the weight parameters of each layer;
step 32: the target detection network adopts a YOLOv3 network; since the database required by YOLOv3 must be labeled, a target detection data set C is established and annotated; the pictures in the data set are images taken by the vehicle camera, supplemented by high-quality images found on the internet to enlarge the database; the data set is then made in the PASCAL VOC format and used to train the network.
5. The detection method based on the fusion of an adversarial neural network and multiple sensors as claimed in claim 1, characterized in that step 6 specifically comprises:
step 61: the space-time unification comprises spatial synchronization and time synchronization; spatial synchronization transforms the point cloud data collected by the millimeter wave radar from the radar coordinate system to the pixel coordinate system; for time synchronization, a pulse generator is set to trigger at the scanning frequency of the millimeter wave radar, each trigger acquires the current frame of radar and camera data, and if no image sample exists at that moment, it is interpolated from the data of the preceding and following moments;
step 62: after the space-time-unified millimeter wave radar data and visual data are obtained according to step 61, the registered pre-sampled data obtained in step 5 are used as background data and subtracted from the radar data and the visual data respectively, and the results are detected with the trained YOLOv3 network.
6. The detection method based on the fusion of an adversarial neural network and multiple sensors as claimed in claim 1, characterized in that the pre-sampling database mentioned in step 1 consists of road pictures taken by a vehicle-mounted camera while the vehicle drives on the road under good weather conditions; the sampling vehicle speeds on expressways and first-, second-, third- and fourth-class roads are 100 km/h, 80 km/h, 70 km/h, 60 km/h and 40 km/h respectively, and the sampling frequency is 30 FPS; after the road data are sampled, obstacle information such as vehicles and pedestrians is removed from the images manually, leaving only the road background information.
7. The detection method based on the fusion of an adversarial neural network and multiple sensors as claimed in claim 1, characterized in that the millimeter wave radar clustering strategy mentioned in step 4 is as follows:
1. selecting any point in a frame of radar scan data as the initial cluster center;
2. calculating, for a data point $P_{i+1}(R_{i+1},\theta_{i+1})$ in the same frame of radar scan data and a cluster center $P_i(R_i,\theta_i)$, the Manhattan distance $\Delta R_i$ and the velocity deviation $\Delta V_i$ between them: $\Delta R_i = |R_{i+1}-R_i|$, $\Delta V_i = |V_{i+1}-V_i|$;
3. comparing $\Delta R_i$ and $\Delta V_i$ with the set thresholds $R_{th}$ and $V_{th}$ respectively; if both are below the thresholds, the two points belong to the same cluster and the cluster center is updated to the mean of the two points; otherwise a new cluster is established with $P_{i+1}(R_{i+1},\theta_{i+1})$ as its center;
4. if several cluster centers already exist, calculating in turn the deviations $\Delta R$ and $\Delta V$ between $P_{i+1}(R_{i+1},\theta_{i+1})$ and each cluster center; if $\Delta R > R_{th}$ and $\Delta V > V_{th}$ hold for all centers, establishing a new cluster with this point as its center, otherwise assigning the point to the cluster of the nearest center;
5. repeating the above steps until all radar data points of the same frame have been processed.
8. The detection method based on the fusion of an adversarial neural network and multiple sensors as claimed in claim 1, characterized in that the image registration mentioned in step 5 comprises three steps: key point detection and feature description, feature matching, and image transformation.
9. The detection method based on the fusion of an adversarial neural network and multiple sensors as claimed in claim 1, characterized in that the visual data mentioned in step 6 are the night images taken by the vehicle-mounted camera and the daytime views generated by the generator of the adversarial neural network.
10. The detection method based on the fusion of an adversarial neural network and multiple sensors as claimed in claim 1, characterized in that the target matching mentioned in step 7 comprises calculating target similarity, matching targets across different sensors, and matching targets of the same sensor with historical targets;
the data fusion mentioned in step 7 is a linear combination based on weight coefficients; since the millimeter wave radar and the camera observe the target almost independently, the covariance matrices of the two sensors are used to weight the target parameters as follows:

$$X_{ij} = P_j (P_i + P_j)^{-1} X_i + P_i (P_j + P_i)^{-1} X_j$$

$$P = P_i (P_i + P_j)^{-1} P_j$$

where $X_i$ denotes the relevant parameters of the $i$-th millimeter wave radar target obtained through state estimation and $P_i$ the corresponding radar covariance matrix; $X_j$ denotes the relevant parameters of the $j$-th camera target obtained through state estimation and $P_j$ the corresponding camera covariance matrix; $X_{ij}$ denotes the target parameters obtained through the fusion processing.
CN202111177982.8A 2021-10-09 2021-10-09 Target detection method based on adversarial neural network and multi-sensor fusion Pending CN113822221A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111177982.8A CN113822221A (en) 2021-10-09 2021-10-09 Target detection method based on adversarial neural network and multi-sensor fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111177982.8A CN113822221A (en) 2021-10-09 2021-10-09 Target detection method based on adversarial neural network and multi-sensor fusion

Publications (1)

Publication Number Publication Date
CN113822221A true CN113822221A (en) 2021-12-21

Family

ID=78920168

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111177982.8A Pending CN113822221A (en) 2021-10-09 2021-10-09 Target detection method based on adversarial neural network and multi-sensor fusion

Country Status (1)

Country Link
CN (1) CN113822221A (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115884006A (en) * 2023-02-23 2023-03-31 启实(烟台)数据技术有限公司 Campus security prevention and control system and method based on AIoT
CN115884006B (en) * 2023-02-23 2023-06-09 启实(烟台)数据技术有限公司 Campus security prevention and control system and method based on AIoT
CN116148801A (en) * 2023-04-18 2023-05-23 深圳市佰誉达科技有限公司 Millimeter wave radar-based target detection method and system
CN116991298A (en) * 2023-09-27 2023-11-03 子亥科技(成都)有限公司 Virtual lens control method based on antagonistic neural network
CN116991298B (en) * 2023-09-27 2023-11-28 子亥科技(成都)有限公司 Virtual lens control method based on antagonistic neural network
CN117093872A (en) * 2023-10-19 2023-11-21 四川数字交通科技股份有限公司 Self-training method and system for radar target classification model
CN117093872B (en) * 2023-10-19 2024-01-02 四川数字交通科技股份有限公司 Self-training method and system for radar target classification model
CN117593620A (en) * 2024-01-19 2024-02-23 中汽研(天津)汽车工程研究院有限公司 Multi-target detection method and device based on fusion of camera and laser radar


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination