CN108460323B - Rearview blind area vehicle detection method fusing vehicle-mounted navigation information - Google Patents

Rearview blind area vehicle detection method fusing vehicle-mounted navigation information

Info

Publication number
CN108460323B
CN108460323B
Authority
CN
China
Prior art keywords
detection
vehicle
model
navigation information
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711478171.5A
Other languages
Chinese (zh)
Other versions
CN108460323A (en)
Inventor
王小刚
倪如金
卢金波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou Desay SV Automotive Co Ltd
Original Assignee
Huizhou Desay SV Automotive Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou Desay SV Automotive Co Ltd filed Critical Huizhou Desay SV Automotive Co Ltd
Priority to CN201711478171.5A priority Critical patent/CN108460323B/en
Publication of CN108460323A publication Critical patent/CN108460323A/en
Application granted granted Critical
Publication of CN108460323B publication Critical patent/CN108460323B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The rearview blind area vehicle detection method fusing vehicle-mounted navigation information provides a solution that fuses vehicle-mounted navigation with a rearview blind area vehicle detection algorithm. Drawing on the scene information and weather conditions provided by navigation, the detection algorithm adaptively selects different model combinations and parameters for different environments, so that it adapts better to complex and variable conditions, achieves higher detection precision and efficiency, and can be more readily applied in the automotive electronics industry.

Description

Rearview blind area vehicle detection method fusing vehicle-mounted navigation information
Technical Field
The application relates to a blind area vehicle detection method, in particular to a rearview blind area vehicle detection method fusing vehicle navigation information.
Background
With the rapid growth in car ownership, driving safety has become a general concern, and people increasingly look to technology for safety and convenience. Vehicle ADAS systems have therefore been studied in depth, are widely applied in the automotive electronics industry, and have developed into a core automotive electronics technology. Because machine vision can clearly capture information around the vehicle body, resolves object properties such as color and texture well, and can effectively identify surrounding vehicles, pedestrians, traffic police, and so on, applying an intelligent vision module to the automobile is a highly competitive solution for current driver assistance systems and has great market prospects. However, complex scenes and changing weather increase the difficulty of the visual algorithm's processing and degrade its overall performance.
Disclosure of Invention
The invention provides a rearview blind area vehicle detection method fusing vehicle-mounted navigation information to overcome at least one defect in the prior art.
The present invention aims to solve the above technical problem at least to some extent.
The invention aims to improve the detection precision and efficiency.
In order to solve the technical problems, the technical scheme of the invention is as follows:
A rearview blind area vehicle detection method fusing vehicle-mounted navigation information comprises the following steps:
S1, starting detection;
S2, receiving the navigation information and the preprocessed rear-view image and initializing the models;
S3, selecting and combining models;
S4, performing primary-level pixel detection using combined model I and judging whether a target exists; if no target exists, the detection ends, and if a target exists, step S5 is performed, wherein model I is a model trained on pixel-level features;
S5, performing middle-level edge detection using combined model II and judging whether a target exists; if no target exists, the detection ends, and if a target exists, step S6 is performed, wherein model II is a model trained on edge features;
S6, performing high-level structured detection using combined model III and judging whether a target exists; if no target exists, the detection ends, and if a target exists, step S7 is performed, wherein model III is a model trained on combined edge features;
s7, carrying out data fusion on the detected information and the navigation information;
and S8, finishing detection.
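The cascade structure of steps S3-S8 can be sketched as follows. This is a minimal illustration, not the patented implementation: `model_select` and the three model callables are hypothetical stand-ins for the trained models the patent describes, and an empty candidate list plays the role of the "no target, end detection" branch.

```python
def cascade_detect(image, nav_info, model_select):
    """Coarse-to-fine cascade with early exit (steps S3-S8).

    model_select maps navigation environment info to a combination of
    three models (hypothetical callables standing in for models I-III);
    each returns candidate regions, and an empty list means "no target",
    which ends detection early.
    """
    model_i, model_ii, model_iii = model_select(nav_info)  # S3: select/combine
    candidates = model_i(image)                  # S4: pixel-level screening
    if not candidates:
        return []                                # no target: end detection
    candidates = model_ii(image, candidates)     # S5: edge-level localization
    if not candidates:
        return []
    targets = model_iii(image, candidates)       # S6: structured confirmation
    # S7: fuse detections with navigation info (here: tag each box with scene)
    return [{"box": t, "scene": nav_info.get("scene")} for t in targets]
```

Passing dummy callables through `model_select` shows the early-exit behavior: an empty output at any stage ends detection with no result.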
Further, the preprocessing of the rear-view image in step S2 includes the following steps:
S21, inputting original rear-view image data, wherein the original rear-view image data comprises left-side and right-side data;
S22, generating a region of interest in the image according to the calibration parameters and the actual blind-area requirement specification, and calculating the distance and direction between each image data point and the camera;
S23, executing an adaptive distortion correction algorithm: the calibration parameters, the image data points, and each point's distance and direction from the camera are combined to complete the mapping of every point within the region of interest;
S24, executing a view transformation algorithm: through the image transformation matrix, the observed region-of-interest data is brought into the optimal state for detection;
and S25, obtaining a final image to be detected, and sending the final image to be detected to a detection module for detection.
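Steps S23 and S24 amount to undistorting points using the calibration parameters and then applying a view (perspective) transformation matrix. The sketch below illustrates both operations under strong simplifying assumptions: a one-parameter radial distortion model rather than a full fisheye calibration, and an externally supplied 3x3 homography. All parameter names are illustrative, not from the patent.

```python
import numpy as np

def undistort_point(pt, k1, fx, fy, cx, cy):
    """Approximately invert one-parameter radial distortion for a pixel (S23).
    k1, fx, fy, cx, cy are assumed calibration parameters; a real fisheye
    model would use a fuller coefficient set and an iterative inverse."""
    x = (pt[0] - cx) / fx
    y = (pt[1] - cy) / fy
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2          # forward model evaluated at distorted radius
    xu, yu = x / scale, y / scale  # approximate inverse: divide by that scale
    return (xu * fx + cx, yu * fy + cy)

def warp_points(points, H):
    """Apply a 3x3 view-transformation (homography) matrix to points (S24)."""
    pts = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    out = pts @ H.T
    return out[:, :2] / out[:, 2:3]  # divide out the homogeneous coordinate
```

With `k1 = 0` or an identity homography, points pass through unchanged, which makes the sketch easy to sanity-check.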
Further, in step S4, the primary-level pixel detection screens out candidate areas where vehicles may be located; the selected feature is the luminance information of the Y channel. It specifically includes the following steps:
S41, taking the rear-view sub-image block of interest in the preprocessed rear-view image as the input of the model;
S42, establishing a pyramid hierarchy: detection of targets of different sizes is completed by searching possible target positions from top to bottom and from left to right;
S43, comparing each pixel position (x, y, width, height) in the i-th pyramid layer against the model data and obtaining a corresponding score;
S44, combining the scores at each position on each pyramid level and normalizing them to the range 0-255 to obtain a probability distribution map;
S45, performing filtering and connected-region segmentation on the probability distribution map to smooth the image and remove noise points, obtaining the individual sub-regions;
and S46, appropriately dilating the candidate sub-regions, sorting them by score, and deciding the detection priority of the candidate regions.
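Steps S42-S44, namely a scale pyramid, per-position scoring against model data, and normalization of the scores to 0-255, can be sketched as follows. The block-mean pyramid and the mean-absolute-difference score against a fixed luminance template are placeholders for the trained pixel-level model the patent assumes.

```python
import numpy as np

def pyramid(img, levels=3, factor=2):
    """Simple image pyramid by block-mean downsampling (S42)."""
    out = [np.asarray(img, float)]
    for _ in range(levels - 1):
        h, w = out[-1].shape
        h2, w2 = h // factor * factor, w // factor * factor  # crop to multiple
        small = out[-1][:h2, :w2].reshape(
            h2 // factor, factor, w2 // factor, factor).mean(axis=(1, 3))
        out.append(small)
    return out

def score_map(level, template):
    """Slide the template over one pyramid level; a higher score means a
    closer Y-channel luminance match (S43), normalized to 0-255 (S44)."""
    th, tw = template.shape
    H, W = level.shape
    scores = np.zeros((H - th + 1, W - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            # negative mean absolute difference: 0 is a perfect match
            scores[y, x] = -np.abs(level[y:y+th, x:x+tw] - template).mean()
    smin, smax = scores.min(), scores.max()
    if smax == smin:
        return np.zeros_like(scores)
    return (scores - smin) / (smax - smin) * 255.0
```

Thresholding the normalized map and segmenting connected regions (S45) would then yield the candidate sub-regions to be dilated and prioritized (S46).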
Further, in step S5, the data processed by the middle-level edge detection is the candidate region frames output by the primary-level pixel detection; the selected feature is edge gradient information, and a decision is made to obtain the sub-block where each target is located.
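As a hedged illustration of this middle-level decision, the sketch below scores each candidate box by its mean gradient magnitude and keeps only boxes that clear a threshold; the real method would apply model II, trained on edge features, rather than a bare threshold, and the box format `(x, y, w, h)` is an assumption.

```python
import numpy as np

def edge_energy(img, box):
    """Mean gradient magnitude inside a candidate box (x, y, w, h) — a stand-in
    for the edge-gradient feature the middle level decides on (step S5)."""
    x, y, w, h = box
    patch = np.asarray(img, float)[y:y+h, x:x+w]
    gy, gx = np.gradient(patch)            # finite-difference gradients
    return float(np.hypot(gx, gy).mean())  # mean gradient magnitude

def middle_level(img, boxes, threshold):
    """Keep only candidate boxes whose edge energy clears the threshold."""
    return [b for b in boxes if edge_energy(img, b) >= threshold]
```

A flat region scores zero while a box straddling an intensity edge scores high, which is the intuition behind filtering candidates on edge gradients.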
Further, in step S6, the high-level structured detection confirms each sub-block based on the output of the middle-level edge detection, removing false alarms while ensuring that true targets remain effectively detected; the selected feature is a structured feature, specifically a weighted combination of the middle-level edge features.
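The weighted combination of middle-level edge features into a structured score, and the resulting false-alarm removal, can be sketched as follows; the weights and threshold are assumed to come from offline training, which the patent does not detail.

```python
import numpy as np

def structured_score(edge_features, weights):
    """High-level structured feature: a weighted combination of the middle
    level's edge features (step S6). Weights are assumed learned offline."""
    f = np.asarray(edge_features, float)
    w = np.asarray(weights, float)
    return float(f @ w)  # dot product = weighted sum of features

def confirm(blocks, weights, threshold):
    """Keep sub-blocks whose structured score clears the threshold,
    removing false alarms while retaining true targets."""
    return [b for b, feats in blocks
            if structured_score(feats, weights) >= threshold]
```

For example, a block with strong, consistent edge responses scores above the threshold and is confirmed, while a weak-featured false alarm is dropped.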
Further, in step S3, the model selection is to select an optimal model and matching parameters according to the environment information in the navigation information.
Further, the navigation information includes road information, scene information, and weather information.
Further, the navigation information includes data on conditions such as expressways, urban roads, front lighting, backlighting, and tunnels.
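One plausible (but entirely assumed) realization of the model selection in step S3 is a lookup keyed on the road type, lighting, and weather reported by navigation; the key scheme, field names, and table contents below are illustrative only, as the patent does not specify the lookup structure.

```python
def select_models(nav_info, model_table, default_key="highway_day_clear"):
    """Pick a model combination and parameters from the navigation-reported
    environment (road type, lighting, weather), falling back to a default
    combination when the exact environment is not in the table."""
    key = "_".join([nav_info.get("road", "highway"),
                    nav_info.get("lighting", "day"),
                    nav_info.get("weather", "clear")])
    return model_table.get(key, model_table[default_key])
```

A tunnel entry reported by navigation would then switch the detector to a model combination trained for low-light, backlit scenes.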
Further, the application provides a detection system adopting the vehicle detection method for the rearview blind area fusing the vehicle-mounted navigation information, and the detection system comprises a navigation system, a model selection module, a model combination module, a detection module and a fusion module.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects: the invention provides a fused solution of vehicle-mounted navigation and rearview blind area vehicle detection that selects different models and parameters for different environmental factors and performs model combination at each level, which is more conducive to target detection, improves the detection rate, and reduces false alarms. At the detection end, the invention carries out a level-by-level target detection process: the primary level performs pixel-level feature detection; the resulting candidate areas are used for the middle level's edge feature detection; those candidate areas are in turn applied to the high-level structural feature detection; and the output information is fused to obtain the final result, detecting targets more effectively.
Drawings
FIG. 1 is a schematic diagram of a detection system.
Fig. 2 is a schematic flow chart of a rearview blind area vehicle detection method fusing vehicle-mounted navigation information.
Fig. 3 is a schematic diagram of a rear-view image preprocessing flow.
FIG. 4 is a diagram illustrating a primary level pixel detection process.
The drawings are for illustrative purposes only and are not to be construed as limiting the patent; for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted; the same or similar reference numerals correspond to the same or similar parts; the terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent.
Detailed Description
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
Referring to the attached drawings, the method for detecting a vehicle in the rearview blind area fusing vehicle-mounted navigation information comprises the following steps:
S1, starting detection;
S2, receiving the navigation information and the preprocessed rear-view image and initializing the models;
S3, selecting and combining models;
S4, performing primary-level pixel detection using combined model I and judging whether a target exists; if no target exists, the detection ends, and if a target exists, step S5 is performed, wherein model I is a model trained on pixel-level features;
S5, performing middle-level edge detection using combined model II and judging whether a target exists; if no target exists, the detection ends, and if a target exists, step S6 is performed, wherein model II is a model trained on edge features;
S6, performing high-level structured detection using combined model III and judging whether a target exists; if no target exists, the detection ends, and if a target exists, step S7 is performed, wherein model III is a model trained on combined edge features;
s7, carrying out data fusion on the detected information and the navigation information;
and S8, finishing detection.
Example 2
Similar to embodiment 1. Because the lens used to acquire the rear-view image is a fisheye lens, this embodiment has the advantage of a large viewing angle and therefore more acquired data; its disadvantage, however, is obvious: there is large distortion, especially in the far rear-view blind area, where the vehicle image is severely distorted, affecting target detection and recognition, so a correction and transformation process is required. Therefore, further, the preprocessing of the rear-view image in step S2 includes the following steps:
S21, inputting original rear-view image data, wherein the original rear-view image data comprises left-side and right-side data;
S22, generating a region of interest in the image according to the calibration parameters and the actual blind-area requirement specification, and calculating the distance and direction between each image data point and the camera;
S23, executing an adaptive distortion correction algorithm: the calibration parameters, the image data points, and each point's distance and direction from the camera are combined to complete the mapping of every point within the region of interest;
S24, executing a view transformation algorithm: through the image transformation matrix, the observed region-of-interest data is brought into the optimal state for detection;
and S25, obtaining the final image to be detected and sending it to the detection module for detection.
Example 3
Similar to embodiments 1 and 2. Further, in step S4, the primary-level pixel detection screens out candidate areas where vehicles may be located; the selected feature is the luminance information of the Y channel. It specifically includes the following steps:
S41, taking the rear-view sub-image block of interest in the preprocessed rear-view image as the input of the model;
S42, establishing a pyramid hierarchy: detection of targets of different sizes is completed by searching possible target positions from top to bottom and from left to right;
S43, comparing each pixel position (x, y, width, height) in the i-th pyramid layer against the model data and obtaining a corresponding score;
S44, combining the scores at each position on each pyramid level and normalizing them to the range 0-255 to obtain a probability distribution map;
S45, performing filtering and connected-region segmentation on the probability distribution map to smooth the image and remove noise points, obtaining the individual sub-regions;
and S46, appropriately dilating the candidate sub-regions, sorting them by score, and deciding the detection priority of the candidate regions.
In step S5, the data processed by the middle-level edge detection is the candidate region frames output by the primary-level pixel detection; the selected feature is edge gradient information, and a decision is made to obtain the sub-block where each target is located.
In step S6, the high-level structured detection confirms each sub-block based on the output of the middle-level edge detection, removing false alarms while ensuring that true targets remain effectively detected; the selected feature is a structured feature, specifically a weighted combination of the middle-level edge features.
The primary-level pixel detection is designed to obtain candidate regions; its feature is the plain pixel value, which allows non-target regions to be filtered out quickly while valid targets are retained. The middle-level edge detection aims to locate the target using edge gradient features. The high-level structured detection is designed to remove false targets, using a weighted combination of the features extracted by the middle-level edge detection.
Example 4
This embodiment is similar to embodiments 1-3, and further, in step S3, the model selection is to select an optimal model and matching parameters according to the environmental information in the navigation information.
The navigation information includes road information, scene information, and weather information.
The navigation information includes data on conditions such as expressways, urban roads, front lighting, backlighting, and tunnels.
Example 5
The application provides a detection system adopting the vehicle detection method for the rearview blind area fusing the vehicle-mounted navigation information, which comprises a navigation system, a model selection module, a model combination module, a detection module and a fusion module.
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments exhaustively here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims.

Claims (7)

1. A rearview blind area vehicle detection method fusing vehicle-mounted navigation information is characterized by comprising the following steps: the method comprises the following steps:
s1, starting detection;
s2, receiving navigation information and the preprocessed rear view image and initializing a model;
s3, selecting and combining models, wherein the model selection is to select the optimal model and the matched parameters according to the environmental information in the navigation information;
s4, carrying out pixel detection by using the combined model I, judging whether a target exists, ending the detection if no target exists, and carrying out the step S5 if the target exists, wherein the model I is a model trained by using pixel-level characteristics;
s5, performing edge detection by using the combined model II, judging whether a target exists, ending the detection if no target exists, and performing S6 if a target exists, wherein the model II is a model trained by using edge characteristics;
s6, carrying out structured detection by using the combined model III and judging whether a target exists, if no target exists, finishing the detection, and if yes, carrying out the step S7, wherein the model III is a model trained by using combined edge characteristics;
s7, carrying out data fusion on the detected information and the navigation information;
and S8, finishing detection.
2. The method for detecting a vehicle in the rearview blind area fusing vehicle-mounted navigation information according to claim 1, characterized in that: in step S2, the preprocessing of the rear-view image includes the following steps:
S21, inputting original rear-view image data, wherein the original rear-view image data comprises left and right rear-view image data;
S22, generating a region of interest in the image according to the calibration parameters and the actual blind-area requirement specification, and calculating the distance and direction between each image data point and the camera;
S23, executing an adaptive distortion correction algorithm: the calibration parameters, the image data points, and each point's distance and direction from the camera are combined to complete the mapping of every point within the region of interest;
S24, executing a view transformation algorithm: through the image transformation matrix, the observed region-of-interest data is brought into the optimal state for detection;
and S25, obtaining a final image to be detected, and sending the final image to be detected to a detection module for detection.
3. The method for detecting a vehicle in the rearview blind area fusing vehicle-mounted navigation information according to claim 1 or 2, characterized in that: in step S4, the primary-level pixel detection screens out candidate areas where vehicles may be located, and the selected feature is the luminance information of the Y channel, specifically including the following steps:
S41, taking the rear-view sub-image block of interest in the preprocessed rear-view image as the input of the model;
S42, establishing a pyramid hierarchy: detection of targets of different sizes is completed by searching possible target positions from top to bottom and from left to right;
S43, comparing each pixel position (x, y, width, height) in the i-th pyramid layer against the model data and obtaining a corresponding score, wherein the parameters x and y are the coordinates of the upper-left corner of the pixel block, the parameter width is the width of the pixel block, and the parameter height is the height of the pixel block;
S44, combining the scores at each position on each pyramid level and normalizing them to the range 0-255 to obtain a probability distribution map;
S45, performing filtering and connected-region segmentation on the probability distribution map to smooth the image and remove noise points, obtaining the individual sub-regions;
and S46, appropriately dilating the candidate sub-regions, sorting them by score, and deciding the detection priority of the candidate regions.
4. The method for detecting the vehicle in the rearview blind area fused with the vehicle-mounted navigation information as claimed in claim 3, characterized in that: in step S5, the data processed by the middle-level edge detection is a candidate region frame output by the primary-level pixel detection, the selected feature is edge gradient information, and the middle-level edge detection performs decision-making to obtain a sub-block where each target is located.
5. The method for detecting the vehicle in the rearview blind area fused with the vehicle-mounted navigation information as claimed in claim 4, wherein the method comprises the following steps: in step S6, the high-level hierarchical structured detection is to confirm each sub-block on the basis of the output of the medium-level hierarchical edge detection, remove false alarms on the basis of ensuring effective detection of the target, and select the feature as a structured feature, specifically, to perform weighted combination on the features of the medium-level hierarchical edge detection.
6. The method for detecting the vehicle in the rearview blind area fused with the vehicle-mounted navigation information as claimed in claim 1, characterized in that: the navigation information includes road information, scene information, and weather information.
7. The method for detecting a vehicle in the rearview blind area fusing vehicle-mounted navigation information according to claim 6, characterized in that: the navigation information includes data on conditions such as expressways, urban roads, front lighting, backlighting, and tunnels.
CN201711478171.5A 2017-12-29 2017-12-29 Rearview blind area vehicle detection method fusing vehicle-mounted navigation information Active CN108460323B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711478171.5A CN108460323B (en) 2017-12-29 2017-12-29 Rearview blind area vehicle detection method fusing vehicle-mounted navigation information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711478171.5A CN108460323B (en) 2017-12-29 2017-12-29 Rearview blind area vehicle detection method fusing vehicle-mounted navigation information

Publications (2)

Publication Number Publication Date
CN108460323A CN108460323A (en) 2018-08-28
CN108460323B (en) 2022-05-20

Family

ID=63221219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711478171.5A Active CN108460323B (en) 2017-12-29 2017-12-29 Rearview blind area vehicle detection method fusing vehicle-mounted navigation information

Country Status (1)

Country Link
CN (1) CN108460323B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512115A (en) * 2014-09-22 2016-04-20 惠州市德赛西威汽车电子股份有限公司 Vehicle navigation picture processing method
CN106529530A (en) * 2016-10-28 2017-03-22 上海大学 Monocular vision-based ahead vehicle detection method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100312386A1 (en) * 2009-06-04 2010-12-09 Microsoft Corporation Topological-based localization and navigation
US20110081087A1 (en) * 2009-10-02 2011-04-07 Moore Darnell J Fast Hysteresis Thresholding in Canny Edge Detection

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512115A (en) * 2014-09-22 2016-04-20 惠州市德赛西威汽车电子股份有限公司 Vehicle navigation picture processing method
CN106529530A (en) * 2016-10-28 2017-03-22 上海大学 Monocular vision-based ahead vehicle detection method

Also Published As

Publication number Publication date
CN108460323A (en) 2018-08-28

Similar Documents

Publication Publication Date Title
CN101950350B (en) Clear path detection using a hierachical approach
Wu et al. Lane-mark extraction for automobiles under complex conditions
Khammari et al. Vehicle detection combining gradient analysis and AdaBoost classification
US8699754B2 (en) Clear path detection through road modeling
US8611585B2 (en) Clear path detection using patch approach
JP6197291B2 (en) Compound eye camera device and vehicle equipped with the same
US8670592B2 (en) Clear path detection using segmentation-based method
US8890951B2 (en) Clear path detection with patch smoothing approach
US8634593B2 (en) Pixel-based texture-less clear path detection
JP4930046B2 (en) Road surface discrimination method and road surface discrimination device
CN108021856B (en) Vehicle tail lamp identification method and device and vehicle
US20100097458A1 (en) Clear path detection using an example-based approach
US20150367781A1 (en) Lane boundary estimation device and lane boundary estimation method
EP1403615B1 (en) Apparatus and method for processing stereoscopic images
JP5180126B2 (en) Road recognition device
JP2006018751A (en) Image processor for vehicle
US20140002655A1 (en) Lane departure warning system and lane departure warning method
Feniche et al. Lane detection and tracking for intelligent vehicles: A survey
Cheng et al. A vehicle detection approach based on multi-features fusion in the fisheye images
US10108866B2 (en) Method and system for robust curb and bump detection from front or rear monocular cameras
CN105069411B (en) Roads recognition method and device
US9558410B2 (en) Road environment recognizing apparatus
JP6847709B2 (en) Camera devices, detectors, detection systems and mobiles
Kim et al. An intelligent and integrated driver assistance system for increased safety and convenience based on all-around sensing
CN108460323B (en) Rearview blind area vehicle detection method fusing vehicle-mounted navigation information

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant