CN112686209A - Vehicle rear blind area monitoring method based on wheel identification - Google Patents

Vehicle rear blind area monitoring method based on wheel identification Download PDF

Info

Publication number
CN112686209A
Authority
CN
China
Prior art keywords
detection
vehicle
target object
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110095907.0A
Other languages
Chinese (zh)
Inventor
陈一君
徐洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Iwaysense Intelligent Co ltd
Original Assignee
Shenzhen Iwaysense Intelligent Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Iwaysense Intelligent Co ltd filed Critical Shenzhen Iwaysense Intelligent Co ltd
Priority to CN202110095907.0A priority Critical patent/CN112686209A/en
Publication of CN112686209A publication Critical patent/CN112686209A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a blind area monitoring method based on wheel detection, belonging to image identification and detection methods. Images are collected by a camera mounted at the rear of a vehicle; based on an algorithm operating on the image sequence collected by the camera (an image sequence being image data collected at a plurality of consecutive time points), the method judges whether targets (only wheeled vehicles, such as cars, trucks and motorcycles) exist on the left and right sides behind the vehicle, and whether a rear target is approaching the vehicle. With the camera mounted at the rear of the vehicle, intelligent image analysis software receives the image signals transmitted by the image processor; through intelligent analysis by the algorithm, a vehicle in the blind area of the own vehicle can be detected, and when a vehicle is present in the blind area an alarm prompts the driver, thereby reducing driving risk.

Description

Vehicle rear blind area monitoring method based on wheel identification
Technical Field
The invention relates to an image identification and detection method, and in particular to a vehicle rear blind area monitoring method based on wheel identification.
Background
Rear-end collision is one of the most common types of motor-vehicle accident: a rear vehicle travelling too fast, with no time to brake, collides with the vehicle in front. If the front vehicle could accelerate or move aside in time during this process, rear-end collisions would be reduced. However, because blind areas exist on both sides behind a vehicle, the driver of the front vehicle may not realise in time that a rear vehicle is approaching quickly, and a rear-end collision may then occur because the response comes too late. The inventors therefore consider that research on, and improvement of, an intelligent analysis method for monitoring vehicles in the blind areas behind a vehicle is necessary to reduce the occurrence of rear-end collisions.
Disclosure of Invention
One of the objectives of the present invention is to provide a vehicle rear blind area monitoring method based on wheel identification, so as to solve the technical problem in the prior art that a camera mounted at the rear of a vehicle cannot judge and monitor a rapidly approaching vehicle, and therefore cannot reduce the frequency of rear-end collisions.
In order to solve the technical problems, the invention adopts the following technical scheme:
the invention provides a vehicle rear blind area monitoring method based on wheel identification, which comprises the following steps: a, acquiring images of the left side and the right side behind a vehicle through a camera, and performing target identification in the current image through a target detection algorithm learned by an Adaboost machine; the target is a wheel;
step B, when a target object appears in the current image, matching the intersection areas of the detection frames of each frame; when the intersection area of the detection frames of two consecutive frames is larger than a threshold value, determining that the target objects identified by the two matched detection frames are the same target object;
and step C, when the detection results of multiple frames of images are the same target object and the object exists continuously in the current image for a period of time, matching the accumulated result with a threshold value, and if the accumulated result is larger than the threshold value, determining that the target object exists behind the vehicle.
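For illustration only, the following is a minimal sketch of steps B and C in Python. The box representation, the normalisation of the intersection area by the smaller box, and the threshold values (0.5 and 5) are assumptions for demonstration; the patent does not specify them.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float  # left edge of the detection frame, in pixels
    y: float  # top edge
    w: float  # width
    h: float  # height

def intersection_area(a: Box, b: Box) -> float:
    """Overlap area of two axis-aligned detection frames (step B)."""
    ix = max(0.0, min(a.x + a.w, b.x + b.w) - max(a.x, b.x))
    iy = max(0.0, min(a.y + a.h, b.y + b.h) - max(a.y, b.y))
    return ix * iy

def same_target(prev: Box, curr: Box, area_thresh: float = 0.5) -> bool:
    """Step B: frames in two consecutive images identify the same target when
    their intersection area exceeds a threshold (normalised here, by
    assumption, to the smaller of the two frame areas)."""
    smaller = min(prev.w * prev.h, curr.w * curr.h)
    return smaller > 0 and intersection_area(prev, curr) / smaller > area_thresh

def confirm_presence(frame_matches: list, count_thresh: int = 5) -> bool:
    """Step C: accumulate consecutive same-target matches and confirm that the
    target object exists behind the vehicle once the count exceeds a threshold."""
    count = 0
    for matched in frame_matches:
        count = count + 1 if matched else 0
        if count > count_thresh:
            return True
    return False
```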
The further technical scheme is as follows: step D, outputting an alarm prompt to the cab of the vehicle after the target object is determined to exist behind the vehicle.
The further technical scheme is as follows: when the intersection areas of the detection frames of each frame of the current image are matched in step B, it is also judged whether the target objects in the images acquired by the cameras are the same target object.
The further technical scheme is as follows: the detection frames in step B are rectangular frames arranged by Adaboost machine learning; in the Adaboost machine learning, detection strips are arranged at equal intervals in the detection area of the image collected by the camera, thereby generating the detection frames.
The further technical scheme is as follows: the detection frames are arranged by generating, at initialization, a rectification table from the original image to the virtual viewing angle; the point coordinates of the original image corresponding to the virtual viewing angle are then obtained from the data in the rectification table, and the point coordinates of the virtual viewing angle corresponding to point coordinates of the original image can likewise be solved in reverse.
The further technical scheme is as follows: the detection area is a range of 3 m × 3 m on each of the left and right sides behind the vehicle.
The further technical scheme is as follows: the rectangular frame circumscribing all detection frames of the Adaboost machine-learning arrangement is the ROI layout of the Adaboost machine learning.
The further technical scheme is as follows: the Adaboost machine-learning target detection algorithm in step A comprises first scaling the detection strips arranged in the detection frame to the image size supported by the current model, then performing an integral operation on the scaled image and solving a 90-degree integral image and a 45-degree integral image respectively; the solved integral images are classified in windows of 20 × 20, performing Adaboost machine-learning classification on each detection strip while moving one pixel at a time; in the Adaboost machine-learning classification, the integral values at up to twelve point coordinates on the integral image are multiplied by the corresponding coefficients in the model and summed; when the sum is larger than the threshold value of the current weak classifier, the weight of the current weak classifier is accumulated; all weak classifiers of the current strong classifier are calculated in sequence, and finally the accumulated weights of all the weak classifiers are summed; the summed result is compared with the strong-classifier threshold, and a value greater than the strong-classifier threshold indicates that the target object exists in the current 20 × 20 detection region, otherwise the target object does not exist.
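A minimal sketch of this weak/strong classifier evaluation follows. The model representation (tuples of coordinates, coefficients, threshold and weight per weak classifier) is an assumption for illustration, not the patent's model format.

```python
import numpy as np

def eval_weak(integral: np.ndarray, points, coeffs, weak_thresh: float,
              weight: float) -> float:
    """Multiply the integral values at up to twelve (x, y) coordinates by the
    model coefficients and sum; the weak classifier contributes its weight
    when the sum exceeds its threshold."""
    s = sum(c * integral[y, x] for (x, y), c in zip(points, coeffs))
    return weight if s > weak_thresh else 0.0

def eval_strong(integral: np.ndarray, weak_classifiers, strong_thresh: float) -> bool:
    """Sum the accumulated weights of all weak classifiers; the 20 x 20 region
    contains the target when the total exceeds the strong-classifier threshold."""
    total = sum(eval_weak(integral, p, c, t, w)
                for (p, c, t, w) in weak_classifiers)
    return total > strong_thresh
```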
The further technical scheme is as follows: in step A, the images of the left and right sides behind the vehicle are acquired simultaneously through cameras.
The further technical scheme is as follows: in step A, a single camera collects images behind the vehicle, and virtual images of the left and right sides are generated through a virtual camera technology.
Compared with the prior art, the invention has the following beneficial effects: using wheels as the target objects for identification in the images ensures the accuracy of identifying vehicles behind the own vehicle and avoids identification errors; comparing the target objects in the images frame by frame through the detection frames ensures the validity of target identification behind the vehicle; the alarm prompt coincides with the precursor of a rear-end collision by the rear vehicle, and the time interval between confirmation of target identification and occurrence of a rear-end collision is sufficient for the driver of the front vehicle to take avoiding action, thereby reducing rear-end collisions to a certain extent.
Drawings
FIG. 1 is a logic flow diagram of the method of one embodiment of the present invention.
Detailed Description
The invention is further elucidated with reference to the drawing.
Referring to fig. 1, an embodiment of the present invention is a vehicle rear blind area monitoring method based on wheel identification, including the steps of:
s1, acquiring images of the two sides of the rear of the vehicle through a camera, and performing target identification in the current image through a target detection algorithm learned by an Adaboost machine; the target is a wheel; the images on the two sides of the rear of the vehicle can be acquired by the cameras respectively, or the images on the left side and the right side of the rear of the vehicle can be acquired by a single camera and virtual images on the left side and the right side can be generated by a virtual camera technology.
Step S2, when a target object appears in the current image, matching the intersection areas of the detection frames of each frame; when the intersection area of the detection frames of two consecutive frames is larger than a threshold value, the target objects identified by the two matched detection frames are determined to be the same target object.
In this step, the detection frames are rectangular frames arranged by Adaboost machine learning; based on the implementation principle of the present invention, the rectangular frame circumscribing all the detection frames of the aforementioned Adaboost machine-learning layout is the ROI layout of the Adaboost machine learning (the ROI layout is detailed below). In the Adaboost machine learning, detection strips are arranged at equal intervals in the detection area of the image collected by the camera, thereby generating the detection frames; in the design of this embodiment, the detection area is a range of 3 m × 3 m on each of the left and right sides behind the vehicle. Meanwhile, the detection frames are arranged by generating, at initialization, a rectification table from the original image to the virtual viewing angle; the point coordinates of the original image corresponding to the virtual viewing angle are then obtained from the data in the rectification table, and the point coordinates of the virtual viewing angle corresponding to point coordinates of the original image can likewise be solved in reverse.
The virtual viewing angles include two viewing angles: one is the rear-left viewing angle of the vehicle, and the other is the rear-right viewing angle. The main purpose of calculating two different virtual viewing angles is to detect vehicles in the two blind areas on the left and right sides behind the vehicle at the same time. The virtual viewing angle is calculated mainly from the internal parameters of the camera, the external parameters of the camera (which require manual calibration) and the focal length parameter of the virtual camera: the camera image is rectified, and different virtual viewing angles are determined by adjusting the virtual focal length. Enlarging the virtual focal length narrows the field of view and makes the image of a distant virtual viewing angle clearer, so that a target object existing far away can be detected; reducing the virtual focal length widens the field of view and makes the image of a near virtual viewing angle clear, so that a nearby target object can be detected. The method calculates a suitable focal length to detect wheels within 0.5 m-3.5 m.
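A minimal sketch of the virtual-camera idea, assuming a pure-rotation model in which the virtual view shares the real camera's optical centre and differs only by a rotation and a virtual focal length; the homography H = K_v · R · K_real⁻¹ then maps real-image pixels to the virtual view. All parameter values below are placeholders, not calibration data from the patent.

```python
import numpy as np
import cv2

def rotation_yaw(deg: float) -> np.ndarray:
    """Rotation about the vertical axis, turning the view left or right."""
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

def virtual_view(image: np.ndarray, K_real: np.ndarray, yaw_deg: float,
                 f_virtual: float, out_size=(640, 480)) -> np.ndarray:
    """Warp the rear-camera image into a rear-left or rear-right virtual
    viewing angle. Enlarging f_virtual narrows and magnifies the view
    (distant targets become clearer); reducing it widens the view (near
    targets)."""
    w, h = out_size
    K_v = np.array([[f_virtual, 0.0, w / 2.0],
                    [0.0, f_virtual, h / 2.0],
                    [0.0, 0.0, 1.0]])
    H = K_v @ rotation_yaw(yaw_deg) @ np.linalg.inv(K_real)
    return cv2.warpPerspective(image, H, out_size)

# Usage (illustrative values): two virtual angles from one rear camera.
# K_real = np.array([[800.0, 0, 960], [0, 800.0, 540], [0, 0, 1.0]])
# left_view = virtual_view(frame, K_real, yaw_deg=-35, f_virtual=500)
# right_view = virtual_view(frame, K_real, yaw_deg=+35, f_virtual=500)
```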
Step S3, when the detection results of multiple frames of images are the same target object and the object exists continuously in the current image for a period of time, matching the accumulated result with a threshold value, and if the accumulated result is larger than the threshold value, determining that the target object exists behind the vehicle;
after judging that the target object exists behind the vehicle, continuing to execute step S4;
and step S4, when the target object is determined to exist behind the vehicle, outputting an alarm prompt to a cab in the vehicle.
In this embodiment, since cameras are adopted to capture images simultaneously (in practice, cameras are installed on the left and right sides of the rear of the vehicle to capture images of the blind areas on both sides at the same time), in step S2, when the intersection-area matching is performed on the detection frames of each frame of the current image, it is also determined whether the target objects in the images captured by the two cameras are the same target object.
In this embodiment, using wheels as the target objects for identification in the images ensures the accuracy of identifying vehicles behind the own vehicle and avoids identification errors; comparing the target objects in the images frame by frame through the detection frames ensures the validity of target identification behind the vehicle; the alarm prompt coincides with the precursor of a rear-end collision by the rear vehicle, and the time interval between confirmation of target identification and occurrence of a rear-end collision is sufficient for the driver of the front vehicle to take avoiding action, thereby reducing rear-end collisions to a certain extent.
In the Adaboost machine-learning target detection algorithm of step S1 in the above embodiment, the Adaboost machine learning judges whether a target object exists in the current detection frame according to the virtual-viewing-angle image and the trained model. The classifier model needs to be read before the Adaboost machine-learning calculation is performed. Detection requires scaling the detection frame to the size corresponding to the model and calculating the integral image from the scaled image; the integral-image data are then evaluated against the read model data. If the calculation result is greater than the threshold, the target object exists; if it is less than the threshold, the target object does not exist.
Specifically, the arranged detection strips are scaled to the image size supported by the current model (the size supported by the model is 20 × 20, in pixels): each strip is scaled to a height of 20, with its length scaled at the corresponding ratio (for example, a detection strip of 50 × 250 is scaled to 20 × 100). An integral operation is then performed on the scaled image, solving a 90-degree integral image (the value at the current point coordinate is the sum of the pixel values of all points to the upper left of that coordinate) and a 45-degree integral image (the value at the current point coordinate is the sum of all pixel values in the region from 45 degrees to 135 degrees above that coordinate).
The solved integral images are classified in windows of 20 × 20, performing Adaboost machine-learning classification on each detection strip while moving one pixel at a time (for example, a 20 × 100 detection strip can be traversed from left to right in 20 × 20 windows, moving 1 pixel each time, yielding 80 classification results within the current strip). In the Adaboost machine-learning classification, the integral values at up to twelve point coordinates on the integral image (the number of coordinate points depends on the feature type) are multiplied by the corresponding coefficients in the model and summed. When the sum is larger than the threshold value of the current weak classifier, the weight of the current weak classifier is accumulated; all weak classifiers of the current strong classifier are calculated in sequence, and the accumulated weights of all the weak classifiers are summed. The summed result is compared with the strong-classifier threshold (the thresholds of the strong classifiers at each level differ): if it is greater than the strong-classifier threshold, the current 20 × 20 detection region is judged to contain the target object; if it is less, the region is judged not to contain it. The model of the algorithm is arranged in levels (a cascade): only if the classification result at every level of strong classifier is greater than that level's threshold is the current detection region judged to contain the target object; if the classification result at any level is less than its strong-classifier threshold, the subsequent classification calculation is terminated, and the current detection region is output as containing no target object. The other detection strips are detected in the same way, completing the Adaboost machine-learning classification for the left and right virtual viewing angles. If, for example, the classification result identifies a vehicle target in the blind area on the left side behind the own vehicle, the detected result is drawn on the image. Detection of the current frame is then complete.
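The following sketch ties the per-strip procedure together: scale the strip to height 20, compute the upright and 45-degree integral images, and slide a 20 × 20 window one pixel at a time through a cascade of strong-classifier stages. Each stage mirrors the eval_strong sketch above, extended, by assumption, with a flag selecting the 90-degree or 45-degree integral image per feature; the stage data structure is hypothetical.

```python
import numpy as np
import cv2

def stage_passes(win90: np.ndarray, win45: np.ndarray, stage: dict) -> bool:
    """One strong-classifier stage over a 20 x 20 window: accumulate the
    weights of weak classifiers whose coefficient-weighted integral sums pass
    their thresholds, then compare against this stage's strong threshold."""
    total = 0.0
    for points, coeffs, use45, weak_thresh, weight in stage["weak"]:
        integral = win45 if use45 else win90
        s = sum(c * integral[y, x] for (x, y), c in zip(points, coeffs))
        if s > weak_thresh:
            total += weight
    return total > stage["threshold"]

def detect_strip(strip_gray: np.ndarray, cascade_stages: list) -> list:
    """Return window offsets within one detection strip judged to contain a wheel."""
    h, w = strip_gray.shape
    scale = 20.0 / h  # scale the strip to the model-supported height of 20
    scaled = cv2.resize(strip_gray, (max(20, int(round(w * scale))), 20))
    # OpenCV's integral3 returns the upright (90-degree) integral image, the
    # squared integral, and the tilted (45-degree) integral image.
    integral90, _sq, integral45 = cv2.integral3(scaled)
    hits = []
    for x in range(scaled.shape[1] - 20 + 1):  # move one pixel each time
        win90 = integral90[:, x:x + 21]
        win45 = integral45[:, x:x + 21]
        # Cascade: every stage must pass; any failing stage rejects the window
        # early, terminating the subsequent classification calculation.
        if all(stage_passes(win90, win45, s) for s in cascade_stages):
            hits.append(x)
    return hits
```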
Further, based on the description of the above embodiment of the present invention, the algorithm for detecting the target object in the image in step S1 is described in detail as follows:
the target detection mainly comprises two parts of target object detection, wherein one part is the target detection of Adaboost machine learning, and the other part is the target detection for judging whether the detected targets are the same target object.
The Adaboost machine-learning target detection mainly detects whether a target object exists in the blind areas on the left and right sides behind the own vehicle, and whether an existing target object is the same target object. Whether a target object exists is judged mainly from the result of the Adaboost machine-learning detection. For the presence alarm, continuous tracking detection is performed on the multi-frame image sequence: intersection-area matching is performed on the target detection frames detected in each frame, and only when the intersection area of the detection frames of two consecutive frames is larger than a threshold value are the target objects detected by the two frames matched as the same target object. When the multi-frame detection results under continuous tracking are the same detection target and have existed for a period of time, the accumulated result is matched with a threshold value; when it is larger than the threshold value, the target object exists behind the vehicle.
The detection result of the Adaboost machine-learning target detection is matched with the judgment result of the same target object; only if both matching results indicate the same target object is a target object considered to exist in the left and right blind areas behind the own vehicle.
When the target object is detected, the target object exists in the blind areas on the left and right sides behind the own vehicle.
The ROI layout mentioned in step S2 includes two parts: one is the Adaboost machine-learning detection-frame layout in the rear-left viewing-angle detection area of the vehicle, and the other is the detection-frame layout in the rear-right viewing-angle detection area.
The Adaboost machine-learning detection-frame layout mainly determines positions on the virtual viewing angle according to the actual detection area. The size and proportion of the detection frames are determined by the type of the detection target (the current detection type is a wheel, so the detection-frame sizes are distributed according to actual wheel sizes), and different detection frames are initialized on different detection viewing angles according to the actual detection area and target type. The position and size of the actual detection area corresponding to each virtual viewing angle are calculated from the internal and external parameters of the camera and the focal length of the virtual camera, and different detection frames are arranged on the different virtual viewing angles. On the ROI layout, the positions of the detection frames, and the sizes and proportions of the frames for the target type, are arranged according to distance from the vehicle. After the arrangement of the detection frames is finished, the rectangle circumscribing all the detection frames is the ROI layout of the Adaboost machine learning. Further, the specific arrangement of the detection frames is calculated as follows:
firstly, a rectification table from the original image to the virtual visual angle is generated during initialization (data stored in the rectification table is coordinate values of the pixel value of the current position on the original view, for example, if the coordinate point (x, y) on the rectification table corresponds to the coordinate on the rectification table being (u, v), then the data stored at the (x, y) position of the rectification table is u x w-v (w is the width of the original image)), and the subsequent calculation is to obtain the point coordinate of the original image corresponding to the point coordinate on the virtual visual angle according to the data of the rectification table, and similarly, the point coordinate on the virtual visual angle corresponding to the point coordinate on the original image can be reversely solved. The detection range of the algorithm is a detection range of 3M × 3M from the left rear of the vehicle and a detection range of 3M × 3M from the right rear of the vehicle, the left width 3M is a range from 0.5M on the left side to 3.5M on the left side, and the right width 3M is a range from 0.5M on the right side to 3.5M on the right side. The detection area is 3M long from the vehicle tail 0M to the rear 3M of the vehicle. When calculating the detection frame of Adaboost machine learning, arranging one detection strip every 1M from the left and right sides of the vehicle by 0.5M. The positions of the detection strips are arranged according to the rule that a detection strip is arranged every 1M in the real world in a detection area, the positions of the detection strips are converted into an original image, the positions of coordinate points of the original image corresponding to world coordinates can be obtained, the positions of detection frames arranged on the world coordinates corresponding to the coordinate positions of virtual visual angle images can be obtained by converting the positions of the detection strips into the virtual visual angles according to the rectification table, the height of the detection frames is determined according to the height of an actual detection target in the same way, the heights of the detection frames arranged in a world coordinate system are converted into the heights corresponding to the virtual visual angles, and the ROI layout for learning of an Adaboost machine can be completed.
Meanwhile, the alarm logic in the above step S4 is further explained as follows:
the alarm logic processing is that the alarm is given when target objects exist in the blind areas at the left side and the right side of the rear of the vehicle.
The alarm logic for a target object in the blind area performs statistics over the multi-frame image sequence according to the Adaboost machine-learning detection, and judges whether the target object exists according to the statistical result. Adaboost machine learning performs target detection on each frame of the image sequence. Whether a target object exists in the current frame is judged from the detection result; when it does, the detection result of the previous frame is compared in turn, and when the previous frame's result also contains the target object, intersection-area matching is performed on the two detection results. When the matching result is greater than the threshold value, the target objects existing in the two consecutive frames are considered the same target object. Only when the detection results of two consecutive frames are the same target object is the presence count accumulated; when the count result is larger than the threshold value, the target object exists in the blind areas on the left and right sides behind the own vehicle.
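A compact sketch of this counting logic, as a per-blind-area state machine: the current detection is matched against the previous frame by intersection area, the presence count accumulates only across consecutive-frame matches, and the alarm is raised once the count passes a threshold. The threshold values (an un-normalised intersection area in pixels and a count of 5) are illustrative assumptions.

```python
def _intersection(a, b) -> float:
    """Overlap area of two (x, y, w, h) detection frames, in pixels."""
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    return ix * iy

class BlindSpotAlarm:
    def __init__(self, area_thresh: float = 100.0, count_thresh: int = 5):
        self.prev_box = None
        self.count = 0
        self.area_thresh = area_thresh
        self.count_thresh = count_thresh

    def update(self, box) -> bool:
        """box: current-frame detection as (x, y, w, h), or None when nothing
        was detected. Returns True when the blind-area alarm should be raised."""
        matched = (box is not None and self.prev_box is not None
                   and _intersection(self.prev_box, box) > self.area_thresh)
        self.count = self.count + 1 if matched else 0  # reset when the track breaks
        self.prev_box = box
        return self.count > self.count_thresh
```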
In conclusion, whether a target object exists in the blind areas on the left and right sides behind the own vehicle is prompted according to the statistical result. After the alarm prompt, the driver of the own vehicle can take corresponding avoiding measures in time to reduce the occurrence of rear-end collisions.
In addition to the foregoing, it should be further appreciated that reference throughout this specification to "one embodiment," "another embodiment," "an embodiment," or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment described generally herein. The appearances of the same phrase in various places in the specification are not necessarily all referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with any embodiment, it is submitted that it is within the scope of the invention to effect such feature, structure, or characteristic in connection with other embodiments.
Although the invention has been described herein with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More specifically, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, other uses will also be apparent to those skilled in the art.

Claims (10)

1. A vehicle rear blind area monitoring method based on wheel identification is characterized by comprising the following steps:
step A, acquiring images of the left and right sides behind the vehicle through a camera, and performing target identification in the current image through an Adaboost machine-learning target detection algorithm; the target is a wheel;
step B, when a target object appears in the current image, matching the intersection areas of the detection frames of each frame; when the intersection area of the detection frames of two consecutive frames is larger than a threshold value, determining that the target objects identified by the two matched detection frames are the same target object;
and step C, when the detection results of multiple frames of images are the same target object and the object exists continuously in the current image for a period of time, matching the accumulated result with a threshold value, and if the accumulated result is larger than the threshold value, determining that the target object exists behind the vehicle.
2. The vehicle rear blind area monitoring method based on wheel identification as claimed in claim 1, further comprising step D: outputting an alarm prompt to the cab of the vehicle after the target object is determined to exist behind the vehicle.
3. The vehicle rear blind area monitoring method based on wheel identification as claimed in claim 1, wherein: when the intersection areas of the detection frames of each frame of the current image are matched in step B, it is also judged whether the target objects in the images acquired by the cameras are the same target object.
4. The vehicle rear blind area monitoring method based on wheel identification as claimed in claim 1, wherein: the detection frames in step B are rectangular frames arranged by Adaboost machine learning; in the Adaboost machine learning, detection strips are arranged at equal intervals in the detection area of the image collected by the camera, thereby generating the detection frames.
5. The vehicle rear blind area monitoring method based on wheel identification as claimed in claim 4, wherein: the detection frames are arranged by generating, at initialization, a rectification table from the original image to the virtual viewing angle; the point coordinates of the original image corresponding to the virtual viewing angle are then obtained from the data in the rectification table, and the point coordinates of the virtual viewing angle corresponding to point coordinates of the original image can likewise be solved in reverse.
6. The vehicle rear blind area monitoring method based on wheel identification according to claim 4 or 5, wherein: the detection area is a range of 3 m × 3 m on each of the left and right sides behind the vehicle.
7. The vehicle rear blind area monitoring method based on wheel identification as claimed in claim 4, wherein: the rectangular frame circumscribing all detection frames of the Adaboost machine-learning arrangement is the ROI layout of the Adaboost machine learning.
8. The vehicle rear blind area monitoring method based on wheel identification as claimed in claim 1, wherein: the Adaboost machine-learning target detection algorithm in step A comprises first scaling the detection strips arranged in the detection frame to the image size supported by the current model, then performing an integral operation on the scaled image and solving a 90-degree integral image and a 45-degree integral image respectively;
the solved integral images are classified in windows of 20 × 20, performing Adaboost machine-learning classification on each detection strip while moving one pixel at a time;
in the Adaboost machine-learning classification, the integral values at up to twelve point coordinates on the integral image are multiplied by the corresponding coefficients in the model and summed; when the sum is larger than the threshold value of the current weak classifier, the weight of the current weak classifier is accumulated; all weak classifiers of the current strong classifier are calculated in sequence, and finally the accumulated weights of all the weak classifiers are summed;
the summed result is compared with the strong-classifier threshold, and a value greater than the strong-classifier threshold indicates that the target object exists in the current 20 × 20 detection region, otherwise the target object does not exist.
9. The vehicle rear blind area monitoring method based on wheel identification as claimed in claim 1, wherein: in step A, the images of the left and right sides behind the vehicle are acquired simultaneously through cameras.
10. The vehicle rear blind area monitoring method based on wheel identification according to claim 1, wherein: in step A, a single camera collects images behind the vehicle, and virtual images of the left and right sides are generated through a virtual camera technology.
CN202110095907.0A 2021-01-25 2021-01-25 Vehicle rear blind area monitoring method based on wheel identification Pending CN112686209A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110095907.0A CN112686209A (en) 2021-01-25 2021-01-25 Vehicle rear blind area monitoring method based on wheel identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110095907.0A CN112686209A (en) 2021-01-25 2021-01-25 Vehicle rear blind area monitoring method based on wheel identification

Publications (1)

Publication Number Publication Date
CN112686209A true CN112686209A (en) 2021-04-20

Family

ID=75459111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110095907.0A Pending CN112686209A (en) 2021-01-25 2021-01-25 Vehicle rear blind area monitoring method based on wheel identification

Country Status (1)

Country Link
CN (1) CN112686209A (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010198361A (en) * 2009-02-25 2010-09-09 Denso Corp Detection target determination device and integral image generating device
WO2011092865A1 (en) * 2010-02-01 2011-08-04 株式会社モルフォ Object detection device and object detection method
JP2014145666A (en) * 2013-01-29 2014-08-14 Aisin Aw Co Ltd Travel guide system, travel guide method, and computer program
US9092695B1 (en) * 2013-03-13 2015-07-28 Google Inc. High-accuracy real-time road sign detection from images
CN105984407A (en) * 2015-05-15 2016-10-05 李俊 Monitoring and warning device for condition behind vehicle
CN107633199A (en) * 2017-08-07 2018-01-26 浙江工业大学 A kind of apple picking robot fruit object detection method based on deep learning
CN107796373A (en) * 2017-10-09 2018-03-13 长安大学 A kind of distance-finding method of the front vehicles monocular vision based on track plane geometry model-driven
CN108647638A (en) * 2018-05-09 2018-10-12 东软集团股份有限公司 A kind of vehicle location detection method and device
EP3663978A1 (en) * 2018-12-07 2020-06-10 Thinkware Corporation Method for detecting vehicle and device for executing the same
CN109727273A (en) * 2018-12-29 2019-05-07 北京茵沃汽车科技有限公司 A kind of Detection of Moving Objects based on vehicle-mounted fisheye camera
CN110210363A (en) * 2019-05-27 2019-09-06 中国科学技术大学 A kind of target vehicle crimping detection method based on vehicle-mounted image
CN110909781A (en) * 2019-11-14 2020-03-24 长安大学 Vehicle detection method based on vehicle rearview mirror
CN111507196A (en) * 2020-03-21 2020-08-07 杭州电子科技大学 Vehicle type identification method based on machine vision and deep learning
CN111638711A (en) * 2020-05-22 2020-09-08 北京百度网讯科技有限公司 Driving track planning method, device, equipment and medium for automatic driving
CN212229699U (en) * 2020-06-24 2020-12-25 深圳市艾为智能有限公司 Rear vehicle fast approaching video recording device based on camera

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
R. Lienhart et al.: "An Extended Set of Haar-like Features for Rapid Object Detection", Proceedings. International Conference on Image Processing, 30 September 2002 (2002-09-30) *
Zhu Shanwei et al.: "Vehicle face detection based on Haar-like and AdaBoost" (in Chinese), Electronic Science and Technology, vol. 31, no. 08, 31 July 2018 (2018-07-31), pages 66-68 *
Li Yue: "Research on road and vehicle detection technology based on machine vision" (in Chinese), China Masters' Theses Full-text Database, Engineering Science and Technology II, no. 2019, 15 May 2019 (2019-05-15), pages 034-488 *
Li Yang et al.: "Research on fast license plate location under complex backgrounds" (in Chinese), Journal of Liaoning University of Technology (Natural Science Edition), vol. 36, no. 02, 30 April 2016 (2016-04-30), pages 81-86 *

Similar Documents

Publication Publication Date Title
CN109829403B (en) Vehicle anti-collision early warning method and system based on deep learning
US5554983A (en) Object recognition system and abnormality detection system using image processing
US8379928B2 (en) Obstacle detection procedure for motor vehicle
WO2019116958A1 (en) Onboard environment recognition device
JP5297078B2 (en) Method for detecting moving object in blind spot of vehicle, and blind spot detection device
CN110287905B (en) Deep learning-based real-time traffic jam area detection method
KR100459476B1 (en) Apparatus and method for queue length of vehicle to measure
EP1796043A2 (en) Object detection
US8406472B2 (en) Method and system for processing image data
CN112349144B (en) Monocular vision-based vehicle collision early warning method and system
CN113255481A (en) Crowd state detection method based on unmanned patrol car
CN109727273B (en) Moving target detection method based on vehicle-mounted fisheye camera
CN110069990B (en) Height limiting rod detection method and device and automatic driving system
CN112258668A (en) Method for detecting roadside vehicle parking behavior based on high-position camera
CN113850872A (en) Service area parking line pressing detection method based on high-level video
CN110992693A (en) Deep learning-based traffic congestion degree multi-dimensional analysis method
CN113370977A (en) Intelligent vehicle forward collision early warning method and system based on vision
EP2741234B1 (en) Object localization using vertical symmetry
CN107516423B (en) Video-based vehicle driving direction detection method
CN114582132B (en) Vehicle collision detection early warning system and method based on machine vision
CN112070039B (en) Hash code-based vehicle collision detection method and system
CN106570487A (en) Method and device for predicting collision between objects
CN111497741A (en) Collision early warning method and device
CN113657265B (en) Vehicle distance detection method, system, equipment and medium
US20230237809A1 (en) Image processing device of person detection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination