CN115546519B - Matching method of image and millimeter wave radar target for extracting pseudo-image features - Google Patents


Info

Publication number
CN115546519B
CN115546519B CN202211545815.9A
Authority
CN
China
Prior art keywords
image
target
matching
matrix
millimeter wave
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211545815.9A
Other languages
Chinese (zh)
Other versions
CN115546519A (en
Inventor
杨超
刘国清
杨广
王启程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Youjia Innovation Technology Co.,Ltd.
Original Assignee
Shenzhen Minieye Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Minieye Innovation Technology Co Ltd filed Critical Shenzhen Minieye Innovation Technology Co Ltd
Priority to CN202211545815.9A priority Critical patent/CN115546519B/en
Publication of CN115546519A publication Critical patent/CN115546519A/en
Application granted granted Critical
Publication of CN115546519B publication Critical patent/CN115546519B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867Combination of radar systems with cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The application relates to a method for matching an image with a millimeter wave radar target by extracting pseudo-image features. The method comprises: projecting the millimeter wave radar target onto a time-synchronized image; marking target areas of the image, putting the target areas into one-to-one correspondence with the millimeter wave radar targets to obtain a matching relation, and constructing a pseudo image from the information of the image target areas and the millimeter wave radar targets; performing supervised training according to the matching relation to obtain a first feature extraction model, extracting image features of the pseudo image, and fusing the image features; extracting target point features and radar point features, reprocessing the features with a second feature extraction model, training it under supervision of the matching relation to obtain a target feature extraction model, and outputting features to be matched; and optimally matching the features to be matched with a matching model to obtain and analyze a matching matrix. The method has the effect of improving matching accuracy.

Description

Matching method of image and millimeter wave radar target for extracting pseudo-image features
Technical Field
The application relates to the technical field of automatic driving, in particular to a matching method for extracting images with pseudo-image features and a millimeter wave radar target.
Background
At present, one of the challenges faced by an autonomous vehicle is accurate target detection and tracking in complex scenes, and current visual detection and tracking algorithms have reached their performance ceiling. Compared with the visual sensor used in such algorithms, the detection performance of a millimeter wave radar is less affected by extreme weather. In addition, the millimeter wave radar can measure distance, can measure velocity vectors by using the Doppler effect of signals reflected from moving objects, and, being penetrative, is less affected by occlusion, so it tracks better under occluded conditions.
However, when a millimeter wave radar detects a 3D target, the target is clustered into a single target point; the resolution is low, so it is difficult to obtain accurate contour and appearance information of the target. Meanwhile, visual target detection is susceptible to occlusion and extreme weather, and cannot measure the distance and relative speed between the vehicle and a target.
In view of the above related art, the inventors find that matching between existing image targets and millimeter wave radar targets suffers from low accuracy.
Disclosure of Invention
In order to improve the matching accuracy of the image target and the millimeter wave radar target, the application provides a matching method for extracting the image with the pseudo-image characteristics and the millimeter wave radar target.
In a first aspect, the application provides a matching method for extracting an image with pseudo-image features and a millimeter wave radar target.
The application is realized by the following technical scheme:
a matching method of an image for extracting a pseudo-image feature and a millimeter wave radar target comprises the following steps,
projecting the millimeter wave radar target onto the image based on a time-synchronized image and the millimeter wave radar target at that time, to obtain the radar point coordinates of the millimeter wave radar target on the image;
marking target areas of the image, putting the image target areas belonging to the same real-world target object into one-to-one correspondence with the millimeter wave radar targets to obtain a matching relation, and constructing a pseudo image according to the information of the image target areas and the millimeter wave radar targets;
carrying out supervision training according to the matching relation to obtain a first feature extraction model, extracting the image features of the pseudo image based on the first feature extraction model, and fusing the image features to obtain global features;
extracting target point characteristics of the target area of the image and radar point characteristics of the millimeter wave radar target in the global characteristics;
based on the target point features and the radar point features, a second feature extraction model is adopted for feature reprocessing, the second feature extraction model is supervised and trained in combination with the matching relation, and features to be matched are output after a target feature extraction model is obtained;
performing optimal matching on the features to be matched by adopting a matching model to obtain a matching matrix;
and analyzing the matching result of the image and the millimeter wave radar target based on the matching matrix.
The present application may be further configured in a preferred example to: the method also comprises the following steps of,
the features to be matched comprise image target point features and radar point features, and the image target point features and the radar point features are subjected to inner product calculation to obtain a score matrix;
carrying out optimal matching on the scoring matrix through an optimized matching layer by adopting a matching model to obtain a target expansion matrix;
and obtaining a matching matrix according to the target expansion matrix, initializing regression parameters, putting the regression parameters into an optimized matching layer for iteration, and obtaining the regression parameters through supervised training.
The application may be further configured in a preferred example to: the step of adopting a matching model to optimally match the scoring matrix through an optimized matching layer to obtain a target expansion matrix comprises the following steps of,
initializing a target constant to which the sum of each row and the sum of each column should converge after optimal matching;
appending one extra row and one extra column after the last row and last column of the scoring matrix, and filling the appended row and column with the regression parameter to obtain a first expansion matrix;
starting the first iteration: calculating the sum of each row of the first expansion matrix to obtain a first constant per row;
dividing each row of the first expansion matrix by its first constant and multiplying by the target constant to obtain a second expansion matrix;
calculating the sum of each column of the second expansion matrix to obtain a second constant per column;
dividing each column of the second expansion matrix by its second constant and multiplying by the target constant to obtain a third expansion matrix, ending the first iteration;
and continuing to iterate over the rows and columns of the third expansion matrix in the same manner as the first iteration until a preset number of iterations is reached, obtaining the target expansion matrix.
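The alternating row and column normalization described above can be sketched in NumPy. The function and parameter names are illustrative, not from the patent, and exponentiating the score block is one common way to guarantee positive entries before normalization:

```python
import numpy as np

def sinkhorn_match(score, reg_param, target_const=1.0, n_iters=20):
    """Alternating row/column normalization of an expanded score matrix.

    A sketch of the iteration steps above; `reg_param` fills the appended
    row and column, and `target_const` is the value each row/column sum
    is driven toward.
    """
    m, n = score.shape
    # Append one extra row and one extra column filled with the
    # regression parameter to obtain the first expansion matrix.
    expanded = np.full((m + 1, n + 1), reg_param, dtype=float)
    expanded[:m, :n] = np.exp(score)  # positive entries for normalization
    for _ in range(n_iters):
        # Divide each row by its sum, then scale by the target constant.
        expanded /= expanded.sum(axis=1, keepdims=True)
        expanded *= target_const
        # Divide each column by its sum, then scale by the target constant.
        expanded /= expanded.sum(axis=0, keepdims=True)
        expanded *= target_const
    return expanded
```

With `target_const=1.0`, the column sums of the returned matrix equal 1 exactly after the final column normalization, and the row sums approach 1 as the iteration converges.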
The present application may be further configured in a preferred example to: also comprises the following steps
finding the row number i of the maximum score in each column of the matching matrix, and judging whether that maximum score is greater than a preset threshold;
if the maximum score is greater than the preset threshold, matching the image target in the current column with the i-th radar point of the millimeter wave radar targets;
and if the maximum score is not greater than the preset threshold, concluding that the image target corresponding to the current column has no matching radar point.
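A minimal sketch of this column-wise analysis, with illustrative names (columns index image targets, rows index radar points):

```python
import numpy as np

def analyze_matches(match_matrix, threshold):
    """For each column (image target), take the row index of the maximum
    score; accept the pair only if that score exceeds the threshold.
    Returns (column, row) pairs, with row = -1 for unmatched targets."""
    pairs = []
    for col in range(match_matrix.shape[1]):
        row = int(np.argmax(match_matrix[:, col]))
        if match_matrix[row, col] > threshold:
            pairs.append((col, row))   # image target col <-> radar point row
        else:
            pairs.append((col, -1))    # no radar point matches this target
    return pairs
```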
The present application may be further configured in a preferred example to: the step of constructing a pseudo image based on the target region and the information of the millimeter wave radar target includes,
selecting an RGB color image at any moment, marking the target center position, the width, the height and the target category of an image target in the RGB color image, and recording camera parameters at the moment;
selecting radar target information output by a millimeter wave radar at the same moment, wherein the radar target information comprises a radar target Id, a target point probability, a first position of the radar target in a radar coordinate system and a speed of the radar target in the radar coordinate system;
converting the first position of the radar target through the camera parameters by using a coordinate system to obtain a second position of the radar target on an image;
creating a first matrix of 7 channels with the same size as the RGB color image, and filling the radar target Id, the target point probability, the first position of the radar target in the radar coordinate system, the speed of the radar target in the radar coordinate system, and the feature value of the radar target heat map into the 7 channels of the first matrix at the second position of the radar target on the image;
creating a second matrix of 4 channels of the same size as the RGB color image, and filling the width, height, object class of image objects and feature values of image object heat map into the 4 channels of the second matrix at the center of the image objects;
and splicing the 3-channel matrix of the RGB color image, the 7-channel matrix of the first matrix and the 4-channel matrix of the second matrix to form a 14-channel pseudo image.
The present application may be further configured in a preferred example to: the first feature extraction model is any one of ResNet50/34/18 or VGG 16/19.
The application may be further configured in a preferred example to: the second feature extraction model is a SuperGlue model.
The present application may be further configured in a preferred example to: the matching model is a sinkhorn model.
In a second aspect, the present application provides a matching device for extracting an image of a pseudo-image feature and a millimeter wave radar target.
The application is realized by the following technical scheme:
a matching device for extracting an image of a pseudo-image feature and a millimeter wave radar target includes,
the mapping module is used for projecting the millimeter wave radar target onto the image based on the image under time synchronization and the millimeter wave radar target under the time to obtain the radar point coordinates of the millimeter wave radar target on the image;
the pseudo-image module is used for marking a target area of the image, corresponding the target area of the image of the same target object belonging to the real world to the millimeter wave radar target one by one to obtain a matching relation, and constructing a pseudo-image according to the target area of the image and the information of the millimeter wave radar target;
the first feature extraction module is used for obtaining a first feature extraction model according to the matching relation supervision training, extracting the image features of the pseudo image based on the first feature extraction model, and fusing the image features to obtain global features;
the second feature extraction module is used for extracting target point features of the target area of the image and radar point features of the millimeter wave radar target in the global features; based on the target point features and the radar point features, a second feature extraction model is adopted for feature reprocessing, the second feature extraction model is supervised and trained in combination with the matching relation, and features to be matched are output after a target feature extraction model is obtained;
the optimization matching module is used for carrying out optimal matching on the features to be matched by adopting a matching model to obtain a matching matrix;
and the matching matrix analysis module is used for analyzing a matching result of the image and the millimeter wave radar target based on the matching matrix.
In a third aspect, the present application provides a computer device.
The application is realized by the following technical scheme:
a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of any one of the above-described matching methods of an image extracted pseudo-image feature and a millimeter wave radar target when executing the computer program.
In summary, compared with the prior art, the beneficial effects brought by the technical scheme provided by the application at least include:
projecting the millimeter wave radar target onto the image, based on the time-synchronized image and the millimeter wave radar target at that time, yields the radar point coordinates of the millimeter wave radar target on the image and prepares the data for deriving the matching relation. Target areas are marked in the image, and the target areas belonging to the same real-world target object are put into one-to-one correspondence with the radar targets to obtain the matching relation, which is used as the label for supervised model training; a pseudo image is constructed from the information of the target areas and the millimeter wave radar targets, providing global information over the whole image range for feature extraction. A first feature extraction model is trained under supervision of the matching relation; an image feature pyramid of the pseudo image is extracted with this model, and the features at different scales are fused to obtain global features under different receptive fields, so the extracted features are more comprehensive. The target point features of the target areas and the radar point features of the millimeter wave radar targets are extracted from these global features; based on them, a second feature extraction model performs feature reprocessing and is trained under supervision of the matching relation, yielding the target feature extraction model that outputs the features to be matched and further extracts point-to-point similarity and difference features, so the extracted features are more detailed. Optimal matching of the features to be matched is performed with the matching model, whose regression parameters are regressed through supervised training with the matching relation, to obtain the matching result of the image and the millimeter wave radar target. Matching the more comprehensive and more detailed features of the image target and the millimeter wave radar target through the matching model with its optimized matching layer improves the matching accuracy, and alleviates the millimeter wave radar's low resolution on 3D targets and its difficulty in providing accurate appearance and contour information. Meanwhile, matching and fusing the image targets with the millimeter wave radar targets can effectively compensate the weaknesses of each sensor's perception and improve overall perception accuracy.
Drawings
Fig. 1 is a schematic main flow diagram of a matching method for an image and a millimeter wave radar target for extracting pseudo-image features according to an exemplary embodiment of the present application.
Fig. 2 is a flowchart of a method for matching an image with a millimeter wave radar target to extract features of a pseudo-image, according to another exemplary embodiment of the present application, to construct a pseudo-image.
Fig. 3 is a flowchart of a matching method for extracting an image of a pseudo-image feature and a millimeter wave radar target according to another exemplary embodiment of the present application, where the matching method employs a matching model to perform optimal matching.
Fig. 4 is a schematic diagram illustrating an effect of a matching method for extracting an image of a pseudo-image feature and a millimeter wave radar target according to an exemplary embodiment of the present application.
Fig. 5 is a block diagram of a matching apparatus for extracting an image of a pseudo-image feature and a millimeter wave radar target according to an exemplary embodiment of the present application.
Detailed Description
The specific embodiments are only for explaining the present application and do not limit it; after reading this specification, those skilled in the art can make modifications to the embodiments without inventive contribution as required, and all such embodiments are protected by patent law within the scope of the claims of the present application.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In addition, the term "and/or" herein merely describes an association between related objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" herein generally indicates that the former and latter related objects are in an "or" relationship, unless otherwise specified.
The embodiments of the present application will be described in further detail with reference to the drawings attached hereto.
With reference to fig. 1, an embodiment of the present application provides a matching method for extracting an image of a pseudo-image feature and a millimeter wave radar target, and main steps of the method are described as follows.
S1, projecting the millimeter wave radar target onto the image based on a time-synchronized image and the millimeter wave radar target at that time, to obtain the radar point coordinates of the millimeter wave radar target on the image;
S2, marking target areas of the image, putting the image target areas belonging to the same real-world target object into one-to-one correspondence with the millimeter wave radar targets to obtain a matching relation, and constructing a pseudo image according to the image target areas and the information of the millimeter wave radar targets;
s3, carrying out supervision training according to the matching relation to obtain a first feature extraction model, extracting the image features of the pseudo image based on the first feature extraction model, and fusing the image features to obtain global features;
s4, extracting target point characteristics of the target area of the image and radar point characteristics of the millimeter wave radar target in the global characteristics;
s5, based on the target point characteristics and the radar point characteristics, carrying out characteristic reprocessing by adopting a second characteristic extraction model, and carrying out supervised training on the second characteristic extraction model by combining the matching relation, so as to obtain a target characteristic extraction model and output characteristics to be matched;
s6, carrying out optimal matching on the features to be matched by adopting a matching model to obtain a matching matrix;
and S7, analyzing the matching result of the image and the millimeter wave radar target based on the matching matrix.
Specifically, a time-synchronized image and the millimeter wave radar targets at that time (for example, 30 images per second) are acquired through two types of sensors installed at different positions, and each radar point target is projected onto the image through a coordinate system conversion to obtain the coordinates of the radar point on the image.
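The coordinate system conversion can be sketched as a standard rigid transform followed by a pinhole projection; the matrices below are placeholders for the actual calibration, which depends on how the sensors are installed:

```python
import numpy as np

def project_radar_to_image(radar_xy, extrinsic, intrinsic):
    """Project a radar point (Rx, Ry on the radar plane) into pixel
    coordinates via a 4x4 radar-to-camera transform and a 3x3 pinhole
    camera matrix. A generic sketch, not the patent's exact calibration."""
    # Homogeneous 3D point in the radar frame (z = 0 on the radar plane).
    p_radar = np.array([radar_xy[0], radar_xy[1], 0.0, 1.0])
    p_cam = extrinsic @ p_radar          # rigid transform to camera frame
    uvw = intrinsic @ p_cam[:3]          # pinhole projection
    return uvw[:2] / uvw[2]              # perspective division -> (u, v)
```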
In this embodiment, the target areas are key target areas containing objects with motion attributes such as pedestrians, vehicles, and animals. The key target areas belonging to the same real-world target object are put into one-to-one correspondence with the millimeter wave radar targets to obtain the matching relation, which serves as the label for supervised training of the ResNet-or-VGG backbone and the SuperGlue matching algorithm, and a pseudo image is constructed based on the rectangular boxes of the key target areas and the information of the millimeter wave radar targets.
Supervised training is performed according to the matching relation to obtain the first feature extraction model, and the image features of the pseudo image are extracted with it to form a feature pyramid. The image feature pyramid is obtained by sampling the convolutional neural network at different strides, yields image features at different scales, and is used for feature fusion. In this embodiment, the first feature extraction model may be a deep-learning CNN model.
In one embodiment, the first feature extraction model may be any one of ResNet50/34/18 or VGG16/19. In this embodiment, taking ResNet50 as an example, the 4 levels of the feature pyramid are fused to obtain a global feature map under different receptive fields with dimensions [1, C, H, W], where H and W are the height and width of the image and C is the length of the feature dimension. The feature at the center point of each key target area's rectangular box and the feature at each radar point of the millimeter wave radar targets are extracted from the global feature map, giving C/2-dimensional features for the m radar points and the n image target points, i.e., each point's feature has dimension [1, C/2, 1].
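The point-feature extraction described above can be sketched as simple indexing into the fused feature map; the names are illustrative, and the channel split follows the halves the text assigns to image targets and radar targets:

```python
import numpy as np

def gather_point_features(feature_map, radar_pts, target_centers):
    """Given a fused feature map of shape [1, C, H, W], pick the C/2-d
    feature vector at each box center and each radar point: the first
    C/2 channels for image targets, the second C/2 for radar targets.
    Points are (x, y) pixel coordinates. Illustrative sketch."""
    _, C, H, W = feature_map.shape
    half = C // 2
    img_feats = np.stack([feature_map[0, :half, y, x] for x, y in target_centers])
    radar_feats = np.stack([feature_map[0, half:, y, x] for x, y in radar_pts])
    return img_feats, radar_feats
```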
And based on the C/2 dimensional features of the central point and the radar points, carrying out feature reprocessing by adopting a second feature extraction model, carrying out supervised training on the second feature extraction model by combining a matching relation, obtaining a target feature extraction model, and outputting D-dimensional features to be matched.
In an embodiment, the second feature extraction model may be a SuperGlue model. The features of the center points and the radar points are re-extracted and reprocessed through the SuperGlue model; the SuperGlue model is trained under supervision of the one-to-one matching relation between the image target areas and the radar targets, computing the negative log-likelihood loss until the model converges. The trained SuperGlue model serves as the target feature extraction model, which then outputs the features to be matched.
And performing optimal matching on the characteristics to be matched by adopting a matching model to obtain a matching matrix.
In one embodiment, the matching model may be a sinkhorn model. And performing inner product calculation on the radar point characteristics and the image target point characteristics through a sinkhorn model to obtain a scoring matrix, and performing iteration to obtain a final matching matrix.
Specifically, let the D-dimensional features of the m radar points be {d_1^D, ..., d_m^D} at positions {p_1, ..., p_m}, and the D-dimensional features of the n image target points be {d_1^D, ..., d_n^D} at positions {p_1, ..., p_n}. After the feature reprocessing of SuperGlue, the feature of the i-th radar point is f_i^D and the feature of the j-th image target point is f_j^D. Computing the inner product of f_i^D and f_j^D gives the score S_{i,j}, which is the element in row i, column j of the score matrix S with m rows and n columns.
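The score matrix is then a plain inner product between the two feature sets, e.g.:

```python
import numpy as np

def score_matrix(radar_feats, image_feats):
    """Inner product of every radar-point feature (m x D) with every
    image-target feature (n x D), giving the m x n score matrix S
    with S[i, j] = <f_i, f_j>."""
    return radar_feats @ image_feats.T
```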
Sinkhorn optimal matching is then performed on the m×n score matrix S through the optimized matching layer to obtain the final matching matrix.
And analyzing the iterated matching matrix to obtain a matching result of the image target and the millimeter wave radar target.
Referring to fig. 2, in one embodiment, S2, the constructing a pseudo image based on the target region and the information of the millimeter wave radar target includes,
S21, selecting an RGB color image at any moment, marking the target center position, width, height and target category of each image target in the RGB color image, and recording the camera parameters at that moment;
S22, selecting the radar target information output by the millimeter wave radar at the same moment, the radar target information comprising a radar target Id, a target point probability, a first position (Rx, Ry) of the radar target in the radar coordinate system, and a speed (Vx, Vy) of the radar target in the radar coordinate system;
s23, converting the first position (Rx, Ry) of the radar target into a second position of the radar target on the image through a coordinate system transformation using the camera parameters;
s24, creating a 7-channel first matrix with the same size as the RGB color image and all element values zero, and filling the radar target Id, the target point possibility probability, the first position (Rx, Ry) of the radar target in the radar coordinate system, the velocity (Vx, Vy) of the radar target in the radar coordinate system, and the characteristic value 1 of the radar target heat map into the 7 channels of the first matrix at the second position of the radar target on the image;
s25, creating a 4-channel second matrix with the same size as the RGB color image and all zero element values, and filling the width, the height and the target category of the image target and the characteristic value 1 of the image target heat map into 4 channels of the second matrix at the central position of the image target;
and S26, splicing the 3-channel matrix of the RGB color image, the 7-channel first matrix and the 4-channel second matrix to form a 14-channel pseudo image, namely a pseudo-image matrix, so that the characteristic information is richer. The C-dimensional features of the pseudo image are then divided into two parts: C/2-dimensional features are assigned to the image targets and the remaining C/2-dimensional features to the millimeter wave radar targets.
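The construction of steps S21–S26 can be sketched as follows. The dictionary field names ('pos2d', 'prob', etc.) and the per-pixel array layout are illustrative assumptions, since the patent fixes only the channel contents, not a data format:

```python
import numpy as np

def build_pseudo_image(rgb, radar_targets, image_targets):
    """Assemble the 14-channel pseudo image: 3 RGB channels, a
    7-channel first matrix for radar targets and a 4-channel second
    matrix for image targets, stacked along the channel axis."""
    h, w, _ = rgb.shape
    first = np.zeros((h, w, 7), dtype=np.float32)
    for r in radar_targets:
        u, v = r['pos2d']                       # second position on the image
        first[v, u] = [r['id'], r['prob'], r['rx'], r['ry'],
                       r['vx'], r['vy'], 1.0]   # 1.0 = radar heat-map value
    second = np.zeros((h, w, 4), dtype=np.float32)
    for t in image_targets:
        u, v = t['center']                      # image target centre position
        second[v, u] = [t['w'], t['h'], t['cls'], 1.0]  # 1.0 = image heat map
    return np.concatenate([rgb.astype(np.float32), first, second], axis=2)

pseudo = build_pseudo_image(
    np.zeros((4, 4, 3)),
    [{'pos2d': (1, 2), 'id': 3, 'prob': 0.9, 'rx': 5.0, 'ry': 1.0,
      'vx': 0.5, 'vy': 0.0}],
    [{'center': (1, 2), 'w': 10, 'h': 20, 'cls': 1}])
print(pseudo.shape)  # (4, 4, 14)
```

Only the pixels at the projected radar positions and the image target centres carry non-zero values in the extra 11 channels; everywhere else they stay zero, as in steps S24 and S25.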
Referring to fig. 3, in one embodiment, the method further comprises the steps of,
s61, the features to be matched comprise target point features and radar point features, and inner product calculation is carried out on the target point features and the radar point features to obtain a score matrix;
performing optimal matching on the scoring matrix through an optimized matching layer by adopting a matching model to obtain a target expansion matrix;
and obtaining a matching matrix according to the target expansion matrix, initializing regression parameters, putting the regression parameters into the optimized matching layer for iteration, and updating the regression parameters through supervised training.
In one embodiment, the step of using the matching model to optimally match the scoring matrix through an optimized matching layer to obtain the target expansion matrix includes,
s62, initializing target constants for the sum of each row and the sum of each column after the optimal matching;
s63, respectively adding a row and a column to the last row and the last column of the scoring matrix, and filling regression parameters in the added row and column to obtain a first expansion matrix;
s64, starting the first iteration, calculating the sum of each row of the first expansion matrix, and correspondingly obtaining a first constant for each row;

s65, dividing each row of the first expansion matrix by its first constant and multiplying by the target constant to obtain a second expansion matrix;

s66, calculating the sum of each column of the second expansion matrix, and correspondingly obtaining a second constant for each column;

s67, dividing each column of the second expansion matrix by its second constant and multiplying by the target constant to obtain a third expansion matrix, and ending the first iteration;

and S68, continuing to iterate the rows and columns of the third expansion matrix in the manner of the first iteration until a preset number of iterations is reached, obtaining the target expansion matrix.
S69, judging whether the target expansion matrix meets a preset condition or not;
s70, if the target expansion matrix meets the preset condition, obtaining a matching matrix according to a judgment result;
specifically, m radar points and point features of n image targets are subjected to inner product calculation to form an m × n matrix, namely a partial matrix S, the sum of each row and the target constant of the sum of each column after optimal matching are initialized, and the default is set to 1, such as a and b.
And expanding the scoring matrix: 1 row and 1 column are added to the feature matrix to be matched, generating an (m + 1) × (n + 1) expansion matrix S', namely the first expansion matrix. The extra row and column are added because some image target points have no matching relationship with any radar point; that is, when no radar point matches an image target, the image target should match the added row instead.
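A minimal sketch of this expansion step, assuming the regression parameter is a plain scalar (in the patent it is a parameter updated by supervised training):

```python
import numpy as np

def expand_scores(S, alpha=1.0):
    """Append one extra row and one extra column to the m x n scoring
    matrix S, filled with the regression parameter alpha, giving the
    (m + 1) x (n + 1) first expansion matrix S'. The extra row/column
    absorbs image targets or radar points that have no match."""
    m, n = S.shape
    Sp = np.full((m + 1, n + 1), alpha, dtype=float)
    Sp[:m, :n] = S   # original scores stay in the top-left block
    return Sp

Sp = expand_scores(np.ones((2, 3)), alpha=0.5)
print(Sp.shape)  # (3, 4)
```

The top-left m × n block is the original scoring matrix; only the appended last row and last column hold the regression parameter.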
The following processing is then performed on the extended matrix S', starting the first iteration:
s1', calculating the sum of each row of the expansion matrix S';
s2', dividing each row of the expansion matrix S' by the sum of that row (the first constant) and multiplying by the target row sum a to obtain a new expansion matrix, namely the second expansion matrix;

s3', calculating the sum of each column of the second expansion matrix S';

s4', dividing each column of the second expansion matrix S' by the sum of that column (the second constant) and multiplying by the target column sum b to obtain a new expansion matrix, namely the third expansion matrix, and ending the first iteration;
and S5', repeatedly executing steps S1'-S4' on the rows and columns of the third expansion matrix in the manner of the first iteration. After k iterations, the row sums of the new expansion matrix agree with the target row sum a and the column sums agree with the target column sum b, and the new expansion matrix is the target expansion matrix. In the present embodiment, a = 1, b = 1, and k = 100.
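The iteration S1'-S4' can be sketched as follows. Exponentiating the raw scores first is an assumption to keep all entries positive; note also that for a non-square expansion matrix the row and column targets can only both be met exactly when (m + 1) · a = (n + 1) · b:

```python
import numpy as np

def sinkhorn_iterate(S_exp, a=1.0, b=1.0, k=100):
    """Alternating normalisation of a non-negative expansion matrix:
    each iteration rescales every row to sum to the target row sum a
    (steps S1'-S2'), then every column to sum to the target column
    sum b (steps S3'-S4'), repeated k times (step S5')."""
    M = S_exp.astype(float).copy()
    for _ in range(k):
        M = M / M.sum(axis=1, keepdims=True) * a   # row step
        M = M / M.sum(axis=0, keepdims=True) * b   # column step
    return M

# toy 4 x 4 expansion matrix of positive scores
M = sinkhorn_iterate(np.exp(np.arange(16.0).reshape(4, 4)))
print(np.allclose(M.sum(axis=0), 1.0), np.allclose(M.sum(axis=1), 1.0))
```

With a = b = 1 the result approaches a doubly stochastic matrix, which is exactly the condition checked against the preset iteration count k in the embodiment.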
And judging whether the obtained target expansion matrix meets the preset condition, namely whether it has been iterated the preset number of times k.

And if the target expansion matrix meets the preset condition, namely it has been iterated the preset number of times k, the matching matrix is obtained according to the target expansion matrix, and the matching result is obtained according to the matching matrix.
The matching matrix is the first m rows and first n columns of the target expansion matrix: if the target expansion matrix has size (m + 1) × (n + 1), the matching matrix is the first n columns of its first m rows, with size m × n.
Further, solving the row number i of the maximum score value of each column of the matching matrix, and judging whether the maximum score value is greater than a preset threshold value;

if the maximum score value of the column is greater than the preset threshold value, matching the image target located at the current column number with the i-th radar target of the millimeter wave radar targets, located at row number i;

and if the maximum score value of the column is smaller than the preset threshold value, the image target corresponding to that column has no matching radar target.
Specifically, a target expansion matrix with the size of (m + 1) × (n + 1) is obtained after the sinkhorn algorithm optimization of the optimized matching layer.
And extracting the front n columns of the front m rows of the target expansion matrix to obtain a matching matrix with the size of m multiplied by n.
Solving the ith row where the maximum score value is located for each column of the matching matrix, and matching the image target with the radar target of the ith row when the maximum score value is greater than a certain preset threshold value; otherwise, when the maximum score value of a certain column of the matrix is smaller than a certain preset threshold value, a certain image target corresponding to the column does not have a radar target matched with the certain image target.
The preset threshold ranges from 0 to 1; in this embodiment, the preset threshold may be 0.2. For example, if the maximum score value in the 1st column of the matching matrix is 0.8 and lies in the 10th row, it is greater than the threshold 0.2, so the image target corresponding to the 1st column matches the 10th radar target. For another example, if the maximum score value of some column j lies in row i but is less than the threshold 0.2, the image target corresponding to column j has no matching radar target.
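A sketch of this parsing step: the column-wise argmax and threshold test follow the description above, with -1 marking an image target that has no matching radar target (a convention chosen here for illustration):

```python
import numpy as np

def parse_matches(match_matrix, threshold=0.2):
    """For each column j (image target), find the row i of the maximum
    score; match image target j to radar target i if the score is
    greater than the threshold, otherwise mark it unmatched (-1)."""
    best_rows = match_matrix.argmax(axis=0)
    best_scores = match_matrix.max(axis=0)
    return [int(i) if s > threshold else -1
            for i, s in zip(best_rows, best_scores)]

# toy 2 x 2 matching matrix: column 0 matches radar 0, column 1 fails
M = np.array([[0.8, 0.10],
              [0.1, 0.15]])
print(parse_matches(M))  # [0, -1]
```

In the full pipeline, `match_matrix` would be the m × n top-left slice of the (m + 1) × (n + 1) target expansion matrix described above.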
Fig. 4 is a schematic diagram illustrating the effect of a matching method for extracting an image of a pseudo-image feature and a millimeter wave radar target.
In summary, in the matching method of an image and a millimeter wave radar target that extracts pseudo-image features, information such as the position and velocity of the millimeter wave radar target is obtained and, combined with the RGB image information, filled into the initialized pseudo-image matrix, yielding a 14-channel pseudo image containing both image target and radar point information. The image targets and radar targets are manually annotated to obtain the matching relationship, which serves as the label for supervised training of the model. A deep-learning CNN model extracts image features of the pseudo image to form a feature pyramid, and the pyramid features are fused so that image features under different receptive fields are extracted, making the extracted features more comprehensive. The C-dimensional image features of the pseudo image are divided into two parts, and C/2-dimensional point features are taken at the positions of the image target centre points and the radar target points. The two kinds of point features are reprocessed with the SuperGlue graph neural network model to further extract point-to-point similarity and difference features, making the extracted point features more detailed. Optimal matching with the sinkhorn matching algorithm yields the target expansion matrix after the iterations are completed, from which the matching matrix is obtained; for each column of the matching matrix, the row i of the maximum score value is solved, and when the maximum score value is greater than the set threshold, the image target is matched with the radar target of row i; otherwise, when the maximum score value of a column is smaller than the set threshold, the image target corresponding to that column has no matching
radar target. The matching result of the image targets and radar targets is thus obtained. By matching more comprehensive and more detailed features of the image targets and millimeter wave radar targets through the matching model of the optimized matching layer, the matching accuracy of the image targets and millimeter wave radar targets is improved, mitigating the problems that millimeter wave radar has low resolution when identifying 3D targets and can hardly provide accurate shape and contour information of the target. At the same time, matching and fusing image targets with millimeter wave radar targets can effectively compensate for the weaknesses of each individual sensor in perception and improve perception accuracy.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Referring to fig. 5, an embodiment of the present application further provides a matching device for extracting an image with a pseudo-image feature and a millimeter wave radar target, where the matching device for extracting an image with a pseudo-image feature and a millimeter wave radar target is in one-to-one correspondence with one of the matching methods for extracting an image with a pseudo-image feature and a millimeter wave radar target in the foregoing embodiments. The matching device for extracting the image of the pseudo-image characteristic and the millimeter wave radar target comprises,
the mapping module is used for projecting the millimeter wave radar target onto the image based on the image under time synchronization and the millimeter wave radar target under the time to obtain the radar point coordinates of the millimeter wave radar target on the image;
the pseudo-image module is used for marking a target area of the image, corresponding the target area of the image of the same target object belonging to the real world to the millimeter wave radar target one by one to obtain a matching relation, and constructing a pseudo-image according to the target area of the image and the information of the millimeter wave radar target;
the first feature extraction module is used for obtaining a first feature extraction model according to the matching relation supervision training, extracting the image features of the pseudo image based on the first feature extraction model, and fusing the image features to obtain global features;
the second feature extraction module is used for extracting target point features of the target area of the image and radar point features of the millimeter wave radar target from the global features; based on the target point features and the radar point features, a second feature extraction model is adopted for feature reprocessing, the second feature extraction model is supervised and trained in combination with the matching relation, and features to be matched are output after a target feature extraction model is obtained;
the optimization matching module is used for carrying out optimal matching on the features to be matched by adopting a matching model to obtain a matching matrix;
and the matching matrix analysis module is used for analyzing the matching result of the image and the millimeter wave radar target based on the matching matrix.
For specific limitation of the matching device for extracting the image with the pseudo-image feature and the millimeter wave radar target, reference may be made to the above limitation on the matching method for extracting the image with the pseudo-image feature and the millimeter wave radar target, and details are not repeated here. The modules in the matching device for extracting the image with the pseudo-image characteristics and the millimeter wave radar target can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement any one of the above-described matching methods of extracting an image of a pseudo-image feature and a millimeter wave radar target.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, and the computer program, when executed by a processor, implements the following steps:
s1, projecting a millimeter wave radar target onto an image based on the image under time synchronization and the millimeter wave radar target under the time to obtain a radar point coordinate of the millimeter wave radar target on the image;
s2, marking a target area of the image, corresponding the target area of the image of the same target object belonging to the real world to the millimeter wave radar target one by one to obtain a matching relation, and constructing a pseudo image according to the target area of the image and the information of the millimeter wave radar target;
s3, carrying out supervision training according to the matching relation to obtain a first feature extraction model, extracting the image features of the pseudo image based on the first feature extraction model, and fusing the image features to obtain global features;
s4, extracting target point characteristics of the target area of the image and radar point characteristics of the millimeter wave radar target in the global characteristics;
s5, based on the target point characteristics and the radar point characteristics, carrying out characteristic reprocessing by adopting a second characteristic extraction model, and carrying out supervised training on the second characteristic extraction model by combining the matching relation, so as to obtain a target characteristic extraction model and output characteristics to be matched;
s6, carrying out optimal matching on the features to be matched by adopting a matching model to obtain a matching matrix;
and S7, analyzing the matching result of the image and the millimeter wave radar target based on the matching matrix.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructed by a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the system is divided into different functional units or modules to perform all or part of the above-mentioned functions.

Claims (10)

1. A matching method of an image for extracting a pseudo-image feature and a millimeter wave radar target is characterized by comprising the following steps,
projecting the millimeter wave radar target onto the image based on the image under time synchronization and the millimeter wave radar target under the time to obtain a radar point coordinate of the millimeter wave radar target on the image;
marking a target area of the image, corresponding the target area of the image of the same target object belonging to the real world with the millimeter wave radar target one by one to obtain a matching relation, and constructing a pseudo image according to the information of the target area of the image and the millimeter wave radar target;
performing supervision training according to the matching relationship to obtain a first feature extraction model, extracting the image features of the pseudo image based on the first feature extraction model, and fusing the image features to obtain a global feature;
extracting target point characteristics of the target area of the image and radar point characteristics of the millimeter wave radar target in the global characteristics;
based on the target point features and the radar point features, a second feature extraction model is adopted for feature reprocessing, the second feature extraction model is supervised and trained in combination with the matching relation, and features to be matched are output after a target feature extraction model is obtained;
performing optimal matching on the features to be matched by adopting a matching model to obtain a matching matrix;
and analyzing the matching result of the image and the millimeter wave radar target based on the matching matrix.
2. The matching method of an image extracted with pseudo-image features and a millimeter wave radar target according to claim 1, further comprising the step of,
the features to be matched comprise image target point features and radar point features, and inner product calculation is carried out on the image target point features and the radar point features to obtain a scoring matrix;
carrying out optimal matching on the scoring matrix through an optimized matching layer by adopting a matching model to obtain a target expansion matrix;
and obtaining a matching matrix according to the target expansion matrix, initializing regression parameters, putting the regression parameters into an optimized matching layer for iteration, and updating the regression parameters through supervised training.
3. The method for matching an image with a millimeter wave radar target according to claim 2, wherein the step of obtaining a target expansion matrix by using a matching model and performing optimal matching on the score matrix through an optimal matching layer comprises,
initializing target constants for the sum of each row and the sum of each column after the optimal matching;
respectively adding a row and a column to the last row and the last column of the scoring matrix, and filling regression parameters in the added row and column to obtain a first expansion matrix;
starting first iteration, calculating the sum of each row of the first expansion matrix, and correspondingly obtaining a first constant;
dividing each row of the first expansion matrix by a first constant, and multiplying by a target constant to obtain a second expansion matrix;
calculating the sum of each column of the second expansion matrix, and correspondingly obtaining a second constant;
dividing each column of the second expanded matrix by a second constant, multiplying the second constant by a target constant to obtain a third expanded matrix, and ending the first iteration;
and continuously iterating each row and each column of the third expansion matrix according to the first iteration mode until the preset times are reached, and obtaining the target expansion matrix.
4. The matching method of an image extracted with pseudo-image features and a millimeter wave radar target according to claim 3, further comprising the step of,
solving the row number i of the maximum fraction value of each column of the matching matrix, and judging whether the maximum fraction value is greater than a preset threshold value;
if the maximum score value of the matching matrix is larger than a preset threshold value, matching the image target located in the current column number with the ith radar point of the millimeter wave radar target located in the line number i;
and if the maximum fraction value of the matching matrix is smaller than a preset threshold value, a certain image target corresponding to the row does not have a radar point matched with the image target.
5. The matching method of an image extracted with pseudo-image features and a millimeter wave radar target according to any one of claims 1 to 4, wherein the step of constructing a pseudo-image based on the information of the target region and the millimeter wave radar target includes,
selecting an RGB color image at any moment, marking the target center position, the width, the height and the target category of an image target in the RGB color image, and recording camera parameters at the moment;
selecting radar target information output by a millimeter wave radar at the same moment, wherein the radar target information comprises a radar target Id, a target point probability, a first position of the radar target in a radar coordinate system and a speed of the radar target in the radar coordinate system;
converting the first position of the radar target through the camera parameters by using a coordinate system to obtain a second position of the radar target on an image;
creating a first matrix of 7 channels with the same size as the RGB color image, and filling a radar target Id, a target point possibility probability, a first position of a radar target in a radar coordinate system, a speed of the radar target in the radar coordinate system and characteristic values of a radar target heat map of the radar target into 7 channels of the first matrix at a second position of the radar target on an image;
creating a second matrix of 4 channels of the same size as the RGB color image, and filling the width, height, object class of image objects and feature values of image object heat map into the 4 channels of the second matrix at the center of the image objects;
and splicing the 3-channel matrix of the RGB color image, the 7-channel matrix of the first matrix and the 4-channel matrix of the second matrix to form a 14-channel pseudo image.
6. The method for matching an image for extracting a pseudo-image feature with a millimeter wave radar target according to any one of claims 1 to 4, wherein the first feature extraction model is any one of ResNet50/34/18 or VGG 16/19.
7. The method for matching an image with a millimeter wave radar target for extracting pseudo-image features according to any one of claims 1 to 4, wherein the second feature extraction model is a SuperGlue model.
8. The method for matching an image with a millimeter wave radar target for extracting pseudo-image features according to any one of claims 1 to 4, wherein the matching model is a sinkhorn model.
9. An apparatus for matching an image from which a pseudo-image feature is extracted with a millimeter wave radar target, comprising,
the mapping module is used for projecting the millimeter wave radar target onto the image based on the image under time synchronization and the millimeter wave radar target under the time to obtain the radar point coordinates of the millimeter wave radar target on the image;
the pseudo-image module is used for marking a target area of the image, corresponding the target area of the image of the same target object belonging to the real world to the millimeter wave radar target one by one to obtain a matching relation, and constructing a pseudo-image according to the target area of the image and the information of the millimeter wave radar target;
the first feature extraction module is used for obtaining a first feature extraction model according to the matching relation supervision training, extracting the image features of the pseudo image based on the first feature extraction model, and fusing the image features to obtain a global feature;
the second feature extraction module is used for extracting target point features of the target area of the image and radar point features of the millimeter wave radar target from the global features; based on the target point features and the radar point features, a second feature extraction model is adopted for feature reprocessing, the second feature extraction model is supervised and trained in combination with the matching relation, and features to be matched are output after a target feature extraction model is obtained;
the optimization matching module is used for carrying out optimal matching on the features to be matched by adopting a matching model to obtain a matching matrix;
and the matching matrix analysis module is used for analyzing the matching result of the image and the millimeter wave radar target based on the matching matrix.
10. A computer device comprising a memory, a processor and a computer program stored on the memory, the processor executing the computer program to perform the steps of the method of any one of claims 1 to 8.
CN202211545815.9A 2022-12-05 2022-12-05 Matching method of image and millimeter wave radar target for extracting pseudo-image features Active CN115546519B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211545815.9A CN115546519B (en) 2022-12-05 2022-12-05 Matching method of image and millimeter wave radar target for extracting pseudo-image features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211545815.9A CN115546519B (en) 2022-12-05 2022-12-05 Matching method of image and millimeter wave radar target for extracting pseudo-image features

Publications (2)

Publication Number Publication Date
CN115546519A CN115546519A (en) 2022-12-30
CN115546519B true CN115546519B (en) 2023-03-24

Family

ID=84722304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211545815.9A Active CN115546519B (en) 2022-12-05 2022-12-05 Matching method of image and millimeter wave radar target for extracting pseudo-image features

Country Status (1)

Country Link
CN (1) CN115546519B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797665B (en) * 2023-02-02 2023-06-02 深圳佑驾创新科技有限公司 Image feature-based image and single-frame millimeter wave radar target matching method
CN115810115B (en) * 2023-02-08 2023-06-02 深圳佑驾创新科技有限公司 Fusion method of image and multi-frame millimeter wave radar target based on image characteristics

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006177858A (en) * 2004-12-24 2006-07-06 Mitsubishi Electric Corp Determination method of pseudo target by multipath of radar device, and radar monitoring device using determination method
CN113111974A (en) * 2021-05-10 2021-07-13 清华大学 Vision-laser radar fusion method and system based on depth canonical correlation analysis
CN113610044A (en) * 2021-08-19 2021-11-05 清华大学 4D millimeter wave three-dimensional target detection method and system based on self-attention mechanism
CN115082924A (en) * 2022-04-26 2022-09-20 电子科技大学 Three-dimensional target detection method based on monocular vision and radar pseudo-image fusion
CN115372958A (en) * 2022-08-17 2022-11-22 苏州广目汽车科技有限公司 Target detection and tracking method based on millimeter wave radar and monocular vision fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Vehicle detection in traffic environments based on fusion of laser point cloud and image information; Zheng Shaowu et al.; Chinese Journal of Scientific Instrument; 2019-12-15 (Issue 12); pp. 146-154 *

Also Published As

Publication number Publication date
CN115546519A (en) 2022-12-30

Similar Documents

Publication Publication Date Title
CN115546519B (en) Matching method of image and millimeter wave radar target for extracting pseudo-image features
US11398097B2 (en) Target detection method based on fusion of prior positioning of millimeter-wave radar and visual feature
CN110136154B (en) Remote sensing image semantic segmentation method based on full convolution network and morphological processing
CN111666921B (en) Vehicle control method, apparatus, computer device, and computer-readable storage medium
CN111353512B (en) Obstacle classification method, obstacle classification device, storage medium and computer equipment
CN111161349B (en) Object posture estimation method, device and equipment
CN113033520B (en) Tree nematode disease wood identification method and system based on deep learning
US20230076266A1 (en) Data processing system, object detection method, and apparatus thereof
US10943352B2 (en) Object shape regression using wasserstein distance
CN110942012A (en) Image feature extraction method, pedestrian re-identification method, device and computer equipment
CN113383283B (en) Perceptual information processing method, apparatus, computer device, and storage medium
CN112634369A (en) Space and or graph model generation method and device, electronic equipment and storage medium
CN112883991A (en) Object classification method, object classification circuit and motor vehicle
CN114998856B (en) 3D target detection method, device, equipment and medium for multi-camera image
CN112348116A (en) Target detection method and device using spatial context and computer equipment
CN115457492A (en) Target detection method and device, computer equipment and storage medium
CN116964588A (en) Target detection method, target detection model training method and device
CN115661767A (en) Image front vehicle target identification method based on convolutional neural network
CN112364974A (en) Improved YOLOv3 algorithm based on activation function
CN115761393A (en) Anchor-free target tracking method based on template online learning
CN115810115B (en) Fusion method of image and multi-frame millimeter wave radar target based on image characteristics
CN115797665B (en) Image feature-based image and single-frame millimeter wave radar target matching method
CN111696147A (en) Depth estimation method based on improved YOLOv3 model
CN115880654A (en) Vehicle lane change risk assessment method and device, computer equipment and storage medium
CN116310368A (en) Laser radar 3D target detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Floor 25, Block A, Zhongzhou Binhai Commercial Center Phase II, No. 9285, Binhe Boulevard, Shangsha Community, Shatou Street, Futian District, Shenzhen, Guangdong 518000

Patentee after: Shenzhen Youjia Innovation Technology Co.,Ltd.

Address before: 518051 401, building 1, Shenzhen new generation industrial park, No. 136, Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen, Guangdong Province

Patentee before: SHENZHEN MINIEYE INNOVATION TECHNOLOGY Co.,Ltd.
