CN113313041A - Front vehicle identification method and system based on information fusion - Google Patents
- Publication number
- CN113313041A (application CN202110635324.2A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- image
- millimeter wave
- roi
- wave radar
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/86—Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
- G01S13/867—Combination of radar systems with cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/89—Radar or analogous systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/93—Radar or analogous systems specially adapted for specific applications for anti-collision purposes
- G01S13/931—Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Electromagnetism (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Computational Linguistics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
- Radar Systems Or Details Thereof (AREA)
Abstract
The invention discloses a front vehicle identification method based on information fusion, which comprises the following steps: the millimeter wave radar inputs the detected front vehicle information into a trained BP neural network, and the BP neural network outputs the height of the vehicle in the image; the vehicle coordinates detected by the millimeter wave radar are converted into pixel coordinates, and a vehicle identification area based on the height of the vehicle in the image is formed with those coordinates as its center; the vehicle identification area is expanded to form an initial region of interest (ROI) of the millimeter wave radar, and a roof fitting straight line is extracted in the initial ROI; taking the initial ROI as a sliding window, the window is controlled to slide leftwards and rightwards by a set step length to form a series of candidate ROI areas; the candidate ROI area whose center point is closest to the middle point of the roof fitting straight line is acquired, its center is taken as the center of the vehicle identification area, and the sliding window is reduced to the size of the vehicle identification area. The millimeter wave radar information is thereby accurately matched with the collected image information, and the fusion precision of the multiple sensors is improved.
Description
Technical Field
The invention belongs to the technical field of multi-sensor fusion, and particularly relates to a front vehicle identification method and system based on information fusion.
Background
With recent economic growth and technological progress, automobile safety has received increasing attention, and intelligent driving technology, which can improve driving safety, has attracted wide interest. An intelligent driving system can be divided into three parts: a perception layer, a decision layer, and an execution layer. Perception is an important means for an intelligent vehicle to obtain information about its surroundings; in real environments, vehicle identification is one of the most common perception tasks, and accurate real-time identification of the vehicle ahead is key to intelligent driving.
In current mass-production intelligent driving systems, multi-source sensor information fusion provides perception redundancy. The commonly used sensors are the millimeter wave radar and the camera: the millimeter wave radar can detect the position of the vehicle ahead fairly accurately, while the camera captures rich environmental information. Fusing the two sensors therefore makes their data complementary and improves identification capability.
In the prior art, the position of a vehicle detected by the millimeter wave radar is projected onto camera image pixels, and the region of the vehicle in the image is obtained from the projected point. In actual working scenarios, however, the reflection point of the radar beam is not necessarily at the center of the vehicle, and the vehicle is affected by road and operating conditions while driving, so the radar target deviates.
Disclosure of Invention
The invention provides a method for identifying a front vehicle based on information fusion, aiming at addressing the above problems.
The invention is realized in such a way that a method for identifying a front vehicle based on information fusion specifically comprises the following steps:
S1, the millimeter wave radar inputs the detected front vehicle information into the trained BP neural network, and the BP neural network outputs the height of the vehicle in the image;
S2, converting the vehicle coordinates detected by the millimeter wave radar into pixel coordinates in a pixel coordinate system, and forming a vehicle identification area based on the height and width of the vehicle in the image by taking the pixel coordinates as the center;
S3, expanding the vehicle identification area to form an initial ROI of the millimeter wave radar, and extracting a roof fitting straight line in the initial ROI;
S4, taking the initial ROI as a sliding window, and controlling the sliding window to slide leftwards and rightwards by a set step length to form a series of candidate ROI areas;
S5, acquiring the candidate ROI area whose center point is closest to the middle point of the roof fitting straight line, taking the center of that candidate ROI area as the center of the vehicle identification area, and reducing the sliding window to the size of the vehicle identification area, thereby realizing the positioning of the vehicle identification area in the image.
Further, the method for extracting the car roof fitting straight line specifically comprises the following steps:
S31, converting the image in the initial ROI into a gray image;
S32, detecting edge pixel points in the gray image, the set of which is called the image edge;
S33, calculating a global threshold of the edge image, and converting the gray image into a binary image containing background and foreground based on the global threshold;
S34, performing straight-line fitting on the binary image by probabilistic Hough transform to obtain the roof fitting straight line at the top of the vehicle.
Further, the method for acquiring the center of the vehicle identification area specifically comprises the following steps:
S41, determining the coordinates U_left and U_right of the two end points of the roof fitting straight line, and calculating the coordinate U_mid of the middle point of the roof fitting straight line;
S42, calculating the center coordinate U_Q of each candidate ROI region;
S43, finding the center of the candidate ROI region with the smallest sym, which is the center of the vehicle identification region, where sym = |U_Q − U_mid|.
Further, the vehicle information includes: distance and relative angle of the vehicle in front.
Further, the training method of the BP neural network specifically comprises the following steps:
S11, constructing training samples and test samples: acquiring front vehicle information through the millimeter wave radar, and acquiring the heights of the front vehicles in the images at different distances and different relative angles;
S12, training the BP neural network on the training samples, and updating the weight parameters in the network until the prediction errors of all the test samples are smaller than a set threshold, whereupon the training of the BP neural network is finished.
The invention also provides a front vehicle identification system based on information fusion, which comprises:
the camera is positioned above the millimeter wave radar, the radar axis and the camera optical axis lie in the same vertical plane perpendicular to the road surface, and the data processing unit is connected with the millimeter wave radar and the camera;
the data processing unit is integrated with a BP neural network, and the data processing unit locates the vehicle identification area in the image captured by the camera using the above front vehicle identification method based on information fusion.
Drawings
FIG. 1 is a flow chart of a method for identifying a preceding vehicle based on information fusion according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of symmetry detection based on a sliding window according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a front vehicle identification system based on information fusion according to an embodiment of the present invention.
Detailed Description
The following detailed description of the embodiments of the present invention will be given in order to provide those skilled in the art with a more complete, accurate and thorough understanding of the inventive concept and technical solutions of the present invention.
In the invention, the height of the vehicle region in the image is predicted by a neural network, the symmetry of candidate vehicle regions in the image is detected by a sliding window, and the center of the region with the highest symmetry is taken as the center of the vehicle region in the image, thereby positioning the vehicle region in the image.
Fig. 1 is a flowchart of a method for identifying a preceding vehicle based on information fusion according to an embodiment of the present invention, where the method specifically includes:
step 1: and calibrating internal parameters of the camera, calibrating external parameters according to the position relation of the millimeter wave radar and the camera, carrying out combined calibration on the millimeter wave radar and the camera, and obtaining the conversion relation between a millimeter wave radar coordinate system and a pixel coordinate system according to the following formula.
Z_c [u, v, 1]^T = K [R | T] [X_r, Y_r, Z_r, 1]^T,  where K = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]]

wherein (X_r, Y_r, Z_r) are the coordinates of a space point in the millimeter wave radar coordinate system, (X_c, Y_c, Z_c) are its coordinates in the camera coordinate system, (u, v) are its coordinates in the pixel coordinate system, R and T are respectively the rotation matrix and the translation matrix from the millimeter wave radar coordinate system to the camera coordinate system, determined according to the position relationship between the millimeter wave radar and the camera, (u_0, v_0) is the pixel coordinate of the optical center, f_x and f_y are respectively the normalized focal lengths in the horizontal and vertical directions of the camera, in pixels, determined from the camera intrinsic calibration result, and Z_c is the depth of the point in the camera coordinate system.
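The radar-to-pixel projection of Step 1 can be sketched as follows. The intrinsic values (f_x = f_y = 800 pixels, principal point (320, 240)) and the identity extrinsics are illustrative placeholders rather than calibration results, and the radar point is assumed to be already expressed in the camera-axis convention (Z forward):

```python
import numpy as np

def radar_to_pixel(p_radar, K, R, T):
    """Project a 3-D point from the radar frame into pixel coordinates."""
    p_cam = R @ p_radar + T   # radar frame -> camera frame
    u, v, w = K @ p_cam       # perspective projection
    return u / w, v / w       # divide by the depth Z_c

# Illustrative intrinsics: fx = fy = 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)    # placeholder: axes assumed already aligned
T = np.zeros(3)  # placeholder: origins assumed coincident

u, v = radar_to_pixel(np.array([0.0, 0.0, 10.0]), K, R, T)
```

In the real system, R and T come from the joint extrinsic calibration of the radar and camera, and K from the intrinsic calibration.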
Step 2: construct the BP neural network. A three-layer BP back-propagation network is adopted: the input layer contains two neurons, corresponding to the distance and relative angle of the front vehicle identified by the millimeter wave radar; the hidden layer contains 8 neurons; and the output layer contains one neuron, corresponding to the size of the vehicle region in the image.
The distance of the front vehicle is the distance, measured by the millimeter wave radar, between the ego vehicle and the vehicle ahead; the relative angle is the angle between them, i.e. the detection angle of the millimeter wave radar. In the embodiment of the invention, the BP neural network prediction process comprises the following specific steps:
Step 21: parameter initialization. The weights w are initialized from a Gaussian distribution with mean 0 and variance 0.01, the biases b are initialized to 1, and the initial learning rate is η = 0.1. The ReLU activation function is ReLU(x) = max(0, x), where x represents the output of the linear transformation.
step 22: collecting training samples and test samples: the distance and angle information of the front vehicle and the camera image at the current moment are identified by collecting n millimeter wave radars, and the distance x of the front vehicle is collected by the millimeter wave radars1To a relative angle x2Wherein X ═ X1,x2) The millimeter wave radar acquires the front vehicle information, wherein the front vehicle information comprises a short-distance vehicle, a middle-distance vehicle, a long-distance vehicle and a vehicle under different opposite angles, and the height y of the vehicle in the camera acquired image is correspondingly marked, wherein y is (y is)1,y2,y3,...,yn)。
Step 23: network training. Input the collected millimeter wave radar data into the BP neural network, and obtain the hidden layer output a_h and the output layer output ŷ_j by the following formulas:
a_h = ReLU(Σ_i w_ih x_i + b_h),   ŷ_j = Σ_h w_hj a_h + b_j

wherein w_ih is the weight between the input layer and the hidden layer, with bias b_h; w_hj is the weight between the hidden layer and the output layer, with bias b_j; i is the index of input-layer neurons, h the index of hidden-layer neurons, and j the index of output-layer neurons.

The error E = ½ Σ_j (y_j − ŷ_j)² is minimized by gradient descent, and the weights and biases are updated as:

w ← w − η ∂E/∂w,   b ← b − η ∂E/∂b
judging whether the sample error E is smaller than an error threshold value err or not, and if all the sample errors E are smaller than the error threshold value err, ending the training of the BP neural network; otherwise, continuing the training process of the BP neural network and updating the parameters.
Step 24: BP neural network prediction. Input the distance and relative angle of the front vehicle detected by the millimeter wave radar; the output of the BP neural network trained in step 23 is the height of the front vehicle in the image, i.e. the height of the image vehicle region.
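Steps 21 through 24 can be sketched end to end as below. The data are synthetic stand-ins for radar measurements (the true mapping from distance and angle to image height depends on the camera), while the initialization follows the values stated above: weights drawn from N(0, variance 0.01), biases 1, learning rate η = 0.1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: (distance m, relative angle deg) -> image height px.
X = rng.uniform([5.0, -30.0], [80.0, 30.0], size=(200, 2))
y = (900.0 / X[:, 0]) * np.cos(np.radians(X[:, 1]))

# Standardize inputs and target for stable full-batch gradient descent.
Xn = (X - X.mean(0)) / X.std(0)
yn = ((y - y.mean()) / y.std()).reshape(-1, 1)

# 2-8-1 network: Gaussian weights (mean 0, variance 0.01 => std 0.1), biases 1.
W1 = rng.normal(0.0, 0.1, (2, 8)); b1 = np.ones(8)
W2 = rng.normal(0.0, 0.1, (8, 1)); b2 = np.ones(1)
eta = 0.1

losses = []
for _ in range(500):
    h = np.maximum(0.0, Xn @ W1 + b1)   # hidden layer, ReLU activation
    out = h @ W2 + b2                   # linear output: predicted height
    err = out - yn
    losses.append(float((err ** 2).mean()))
    g_out = 2.0 * err / len(Xn)         # dE/d(out) for mean squared error
    g_W2 = h.T @ g_out;  g_b2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (h > 0)      # backpropagate through ReLU
    g_W1 = Xn.T @ g_h;   g_b1 = g_h.sum(0)
    W1 -= eta * g_W1; b1 -= eta * g_b1  # gradient-descent updates
    W2 -= eta * g_W2; b2 -= eta * g_b2
```

Prediction (step 24) is then a single forward pass of new standardized radar measurements through the trained weights.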
Step 3: according to road traffic regulations, the width of a vehicle is about 1.3 times its height, so the height-to-width ratio of the vehicle identification area is taken as 1:1.3, and the size of the rectangular vehicle identification area is determined from this proportion. Convert the vehicle coordinates of the front vehicle detected by the millimeter wave radar into pixel coordinates in the pixel coordinate system, and take these pixel coordinates as the center point of the rectangular vehicle identification area to preliminarily determine its position in the camera image: if the height of the vehicle in the image is H, the width of the vehicle is 1.3H, and the vehicle identification area is a rectangle of height H and width 1.3H centered on the pixel coordinates of the millimeter wave radar detection point. Expand the height and width of the vehicle identification area outward by a factor of k in equal proportion to obtain the initial ROI of the millimeter wave radar, where k is set so that the initial ROI contains the image of the front vehicle.
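The sizing in step 3 is simple arithmetic; a sketch follows, where the expansion factor k = 1.6 is an illustrative choice, since the method only requires k large enough that the initial ROI contains the vehicle:

```python
def recognition_rect(center_uv, H, aspect=1.3, k=1.6):
    """Return the vehicle identification rectangle (height H, width 1.3*H)
    centred on the radar pixel, and the k-times expanded initial ROI.
    Rectangles are (x, y, width, height) with (x, y) the top-left corner."""
    u, v = center_uv
    w = aspect * H
    rect = (u - w / 2.0, v - H / 2.0, w, H)
    roi = (u - k * w / 2.0, v - k * H / 2.0, k * w, k * H)
    return rect, roi
```

For example, a predicted height of 100 px at radar pixel (400, 300) yields a 130 × 100 rectangle and, with k = 1.6, a 208 × 160 initial ROI around the same center.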
Step 4: perform gray processing, Sobel edge detection, binary segmentation, and probabilistic Hough line fitting on the image in the initial ROI to fit the roof line. The method for extracting the roof straight line specifically comprises the following steps:
step 41: and converting the image in the initial ROI area into a gray image.
Step 42: perform the Sobel operation on the gray image. The Sobel kernels are:

G_x kernel = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],   G_y kernel = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

for the horizontal and vertical directions respectively. Convolving each kernel with the image gives the approximate gray-level gradients G_X and G_Y in the two directions. The pixel gradient magnitude is calculated as G = sqrt(G_X² + G_Y²), and the gradient direction as θ = arctan(G_Y / G_X).
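A minimal numpy sketch of step 42 follows: cross-correlation with the two 3×3 Sobel kernels over the valid region only (a full implementation would also pad the image borders):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def sobel_gradients(img):
    """Gradient magnitude G and direction theta from the two Sobel responses."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * SOBEL_X).sum()   # horizontal gray-level change
            gy[i, j] = (patch * SOBEL_Y).sum()   # vertical gray-level change
    return np.hypot(gx, gy), np.arctan2(gy, gx)  # G, theta
```

On an image with a vertical step edge, G is zero in the flat regions and large where a window straddles the edge, with theta pointing across the edge.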
after the gradient magnitude G and the gradient direction theta are obtained, determining pixel points at the edge of the image according to a set threshold value.
Step 43: and solving the global threshold of the image after the edge detection by utilizing the maximum between-class variance of the image after the edge detection, and converting the gray level image into a binary image containing a background and a foreground.
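The maximum between-class variance criterion of step 43 is Otsu's method; a plain-numpy sketch for 8-bit gray images:

```python
import numpy as np

def otsu_threshold(gray):
    """Global threshold maximizing the between-class variance
    w0 * w1 * (mu0 - mu1)^2 over all candidate thresholds t."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    bins = np.arange(256, dtype=float)
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue  # one class empty: split undefined
        mu0 = (bins[:t] * p[:t]).sum() / w0
        mu1 = (bins[t:] * p[t:]).sum() / w1
        var_b = w0 * w1 * (mu0 - mu1) ** 2
        if var_b > best_var:
            best_var, best_t = var_b, t
    return best_t
```

Pixels at or above the returned threshold become foreground and the rest background, yielding the binary image used for line fitting.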
Step 44: fit the top contour straight line of the vehicle in the binary image using the probabilistic Hough transform, setting the accumulator threshold to 20, the minimum line length to 10 pixels, and the maximum line-segment gap to 2 pixels. The fitted top contour line is called the roof fitting straight line, and the coordinates of the fitted pixels on it are obtained.
And 5: taking the initial ROI region as a sliding window, the sliding window respectively moves m pixel points left and right along the horizontal direction, and the sliding step length of the sliding window is set to n pixel points, so as to generate (2 × m)/n +1 candidate ROI regions, as shown in fig. 2;
step 6: obtaining the coordinates U of the left end point of the fitted line of the car roofleftAnd right side endpoint coordinates UrightAs shown in fig. 2, the coordinates U of the middle point of the fitted line of the roof are calculatedmidComprises the following steps:
Finally, the distance between the middle point of the roof fitting straight line and the center Q of each candidate ROI region can be obtained; this distance is used to measure the symmetry sym of the vehicle position in the candidate ROI region, as follows:

sym = |U_Q − U_mid|

wherein U_Q is the coordinate of the center point Q of the candidate ROI region. The symmetry problem expressed by the above formula is thus converted into solving for the minimum value sym_min; the candidate ROI region at the minimum contains the best vehicle contour symmetry, and the optimal rectangular vehicle identification region is then created at that position according to the height and width of the vehicle identification region output by the BP neural network. This position realizes the optimal matching of the millimeter wave radar identification information with the image information.
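Steps 5 and 6 reduce to picking, among the (2 × m)/n + 1 window centres, the one nearest the roof-line midpoint; a sketch in plain Python:

```python
def candidate_centers(u0, m, n):
    """Centres of the candidate ROIs: the initial ROI centre u0 shifted
    left/right by up to m pixels in steps of n -> (2*m)/n + 1 candidates."""
    return [u0 + off for off in range(-m, m + 1, n)]

def best_center(u0, m, n, u_left, u_right):
    """Centre of the candidate ROI minimizing sym = |U_Q - U_mid|,
    where U_mid is the midpoint of the fitted roof line."""
    u_mid = (u_left + u_right) / 2.0
    return min(candidate_centers(u0, m, n), key=lambda u_q: abs(u_q - u_mid))
```

For example, with the initial centre at u0 = 300, m = 20, n = 5, and roof-line endpoints at 290 and 330 (midpoint 310), the candidate at 310 wins.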
Fig. 3 is a schematic structural diagram of a front vehicle identification system based on information fusion according to an embodiment of the present invention; for convenience of description, only the parts relevant to the embodiment are shown. The system includes:
the system comprises a millimeter wave radar and a camera, wherein the millimeter wave radar is installed at a bumper in front of a vehicle, and the camera is positioned above the millimeter wave radar;
and a data processing unit connected with the millimeter wave radar and the camera; a BP neural network is integrated on the data processing unit, and the data processing unit locates the vehicle identification area in the image captured by the camera using the above front vehicle identification method based on information fusion.
The beneficial effects of the invention are as follows:
the method has the advantages that the heights of vehicles with different angles and distances in predicted images are accurately predicted through a BP neural network, the straight line on the top of the vehicle can be well fitted in different scenes by using an image processing method for an ROI (region of interest) of the millimeter wave radar, a vehicle identification region with the best vehicle symmetry is obtained by using a sliding window technology, the identification error of the millimeter wave radar is reduced by fusing image information, the information of the millimeter wave radar is accurately matched with collected image information, the accurate positioning of the vehicle in the images is realized, and the fusion precision of multiple sensors is improved.
The invention has been described above with reference to the accompanying drawings. The invention is obviously not limited to the specific implementation described above; applying the inventive concept and technical solution to other applications without substantial modification remains within the scope of the invention.
Claims (6)
1. A method for identifying a front vehicle based on information fusion is characterized by specifically comprising the following steps:
S1, the millimeter wave radar inputs the detected front vehicle information into the trained BP neural network, and the BP neural network outputs the height of the vehicle in the image;
S2, converting the vehicle coordinates detected by the millimeter wave radar into pixel coordinates in a pixel coordinate system, and forming a vehicle identification area based on the height and width of the vehicle in the image by taking the pixel coordinates as the center;
S3, expanding the vehicle identification area to form an initial ROI of the millimeter wave radar, and extracting a roof fitting straight line in the initial ROI;
S4, taking the initial ROI as a sliding window, and controlling the sliding window to slide leftwards and rightwards by a set step length to form a series of candidate ROI areas;
S5, acquiring the candidate ROI area whose center point is closest to the middle point of the roof fitting straight line, taking the center of that candidate ROI area as the center of the vehicle identification area, and reducing the sliding window to the size of the vehicle identification area, thereby realizing the positioning of the vehicle identification area in the image.
2. The method for recognizing the front vehicle based on the information fusion as claimed in claim 1, wherein the method for extracting the roof fitting straight line is as follows:
S31, converting the image in the initial ROI into a gray image;
S32, detecting edge pixel points in the gray image, the set of which is called the image edge;
S33, calculating a global threshold of the edge image, and converting the gray image into a binary image containing background and foreground based on the global threshold;
S34, performing straight-line fitting on the binary image by probabilistic Hough transform to obtain the roof fitting straight line at the top of the vehicle.
3. The method for recognizing the vehicle ahead based on the information fusion as claimed in claim 2, wherein the method for acquiring the center of the vehicle recognition area is as follows:
S41, determining the coordinates U_left and U_right of the two end points of the roof fitting straight line, and calculating the coordinate U_mid of the middle point of the roof fitting straight line;
S42, calculating the center coordinate U_Q of each candidate ROI region;
S43, finding the center of the candidate ROI region with the smallest sym, which is the center of the vehicle identification region, where sym = |U_Q − U_mid|.
4. The information fusion-based preceding vehicle identification method according to claim 1, wherein the vehicle information includes: distance and relative angle of the vehicle in front.
5. The method for recognizing the preceding vehicle based on the information fusion as claimed in claim 4, wherein the training method of the BP neural network is as follows:
S11, constructing training samples and test samples: acquiring front vehicle information through the millimeter wave radar, and acquiring the heights of the front vehicles in the images at different distances and different relative angles;
S12, training the BP neural network on the training samples, and updating the weight parameters in the network until the prediction errors of all the test samples are smaller than a set threshold, whereupon the training of the BP neural network is finished.
6. A preceding vehicle recognition system based on information fusion, the system comprising:
the camera is positioned above the millimeter wave radar and is connected with the data processing unit;
the data processing unit is integrated with a BP neural network, and the data processing unit positions the vehicle identification area into the image shot by the camera based on the information fusion-based front vehicle identification method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110635324.2A CN113313041B (en) | 2021-06-08 | 2021-06-08 | Information fusion-based front vehicle identification method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110635324.2A CN113313041B (en) | 2021-06-08 | 2021-06-08 | Information fusion-based front vehicle identification method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113313041A true CN113313041A (en) | 2021-08-27 |
CN113313041B CN113313041B (en) | 2022-11-15 |
Family
ID=77378026
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110635324.2A Active CN113313041B (en) | 2021-06-08 | 2021-06-08 | Information fusion-based front vehicle identification method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113313041B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114624683A (en) * | 2022-04-07 | 2022-06-14 | 苏州知至科技有限公司 | Calibration method for external rotating shaft of laser radar |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103226833A (en) * | 2013-05-08 | 2013-07-31 | Tsinghua University | Point cloud data partitioning method based on three-dimensional lidar |
CN103324936A (en) * | 2013-05-24 | 2013-09-25 | Beijing Institute of Technology | Vehicle lower boundary detection method based on multi-sensor fusion |
CN104392212A (en) * | 2014-11-14 | 2015-03-04 | Beijing University of Technology | Method for detecting road information and identifying forward vehicles based on vision |
CN104637059A (en) * | 2015-02-09 | 2015-05-20 | Jilin University | Night preceding-vehicle detection method based on millimeter-wave radar and machine vision |
CN105223583A (en) * | 2015-09-10 | 2016-01-06 | Tsinghua University | Target vehicle heading angle calculation method based on three-dimensional lidar |
CN105574542A (en) * | 2015-12-15 | 2016-05-11 | China North Vehicle Research Institute | Multi-visual-feature vehicle detection method based on multi-sensor fusion |
CN106951879A (en) * | 2017-03-29 | 2017-07-14 | Chongqing University | Multi-feature-fusion vehicle detection method based on camera and millimeter-wave radar |
CN107609522A (en) * | 2017-09-19 | 2018-01-19 | Donghua University | Information-fusion vehicle detection system based on lidar and machine vision |
CN108037505A (en) * | 2017-12-08 | 2018-05-15 | Jilin University | Night front-vehicle detection method and system |
CN108764108A (en) * | 2018-05-22 | 2018-11-06 | Hubei Special Purpose Vehicle Research Institute | Front vehicle detection method based on Bayesian inference |
KR102069843B1 (en) * | 2018-08-31 | 2020-01-23 | Sogang University Industry-Academic Cooperation Foundation | Apparatus and method for tracking a vehicle |
CN111368706A (en) * | 2020-03-02 | 2020-07-03 | Nanjing University of Aeronautics and Astronautics | Dynamic vehicle detection method based on data fusion of millimeter-wave radar and machine vision |
Non-Patent Citations (3)
Title |
---|
HEONG-TAE KIM et al.: "Vehicle recognition based on radar and vision sensor fusion for automatic emergency braking", 2013 13th International Conference on Control, Automation and Systems (ICCAS 2013) * |
ZHANG Hui et al.: "Research on precise vehicle positioning methods in cooperative vehicle-infrastructure ***", Journal of Highway and Transportation Research and Development * |
WANG Nan: "Research on rear-vehicle detection technology based on multi-visual-feature fusion", China Doctoral Dissertations Full-text Database, Information Science and Technology * |
Similar Documents
Publication | Title |
---|---|
CN110175576B (en) | Driving-vehicle visual detection method combining laser point cloud data |
Nieto et al. | Road environment modeling using robust perspective analysis and recursive Bayesian segmentation |
Kim et al. | Automatic car license plate extraction using modified generalized symmetry transform and image warping |
CN109085570A (en) | Vehicle detection and tracking algorithm based on data fusion |
CN111461134A (en) | Low-resolution license plate recognition method based on a generative adversarial network |
CN108596058A (en) | Driving obstacle distance measurement method based on computer vision |
CN112861748B (en) | Traffic light detection system and method for automatic driving |
CN113111707B (en) | Front-vehicle detection and ranging method based on a convolutional neural network |
CN107796373A (en) | Monocular-vision distance measurement method for front vehicles driven by a lane-plane geometric model |
Schwarzinger et al. | Vision-based car-following: detection, tracking, and identification |
CN114495064A (en) | Vehicle-surrounding obstacle early-warning method based on monocular depth estimation |
CN112862858A (en) | Multi-target tracking method based on scene motion information |
CN110733039A (en) | Autonomous robot driving method based on VFH+ and vision-aided decision-making |
CN113313041B (en) | Information fusion-based front vehicle identification method and system |
Hussain et al. | Multiple objects tracking using radar for autonomous driving |
Rasib et al. | Pixel-level-segmentation-based drivable road region detection and steering angle estimation method for autonomous driving on unstructured roads |
CN117111055A (en) | Vehicle state sensing method based on radar-vision fusion |
Nath et al. | On-road vehicle/object detection and tracking using template |
Kumar et al. | An efficient approach for highway lane detection based on the Hough transform and Kalman filter |
CN113221739A (en) | Vehicle distance measurement method based on monocular vision |
Clady et al. | Cars detection and tracking with a vision sensor |
CN116311136A (en) | Lane-line parameter calculation method for driving assistance |
CN111160231A (en) | Road extraction method for automatic-driving environments based on Mask R-CNN |
CN116643291A (en) | SLAM method combining vision and lidar to remove dynamic targets |
CN115497073A (en) | Real-time obstacle detection method based on fusion of a vehicle-mounted camera and lidar |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |