CN110378202B - Omnidirectional pedestrian collision early warning method based on fisheye lens - Google Patents

Omnidirectional pedestrian collision early warning method based on fisheye lens

Info

Publication number
CN110378202B
CN110378202B (granted publication of application CN201910487362.0A)
Authority
CN
China
Prior art keywords
vehicle
human body
axis
coordinate system
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910487362.0A
Other languages
Chinese (zh)
Other versions
CN110378202A (en)
Inventor
刘鹭
Current Assignee
Motovis Technology Shanghai Co ltd
Original Assignee
Motovis Technology Shanghai Co ltd
Priority date
Filing date
Publication date
Application filed by Motovis Technology Shanghai Co ltd filed Critical Motovis Technology Shanghai Co ltd
Priority to CN201910487362.0A priority Critical patent/CN110378202B/en
Publication of CN110378202A publication Critical patent/CN110378202A/en
Application granted granted Critical
Publication of CN110378202B publication Critical patent/CN110378202B/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60QARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q9/00Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling
    • B60Q9/008Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling for anti-collision purposes
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/12Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Mechanical Engineering (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides an omnidirectional pedestrian collision early warning method based on fisheye lenses, which comprises the following steps. Step a: a plurality of fisheye lenses at specified angles are arranged around the vehicle, with the viewing-angle ranges of adjacent fisheye lenses at least partially overlapping. Step b: images of pedestrians around the vehicle are acquired in real time through the plurality of fisheye lenses. Step c: human-body semantic segmentation is computed on the real-time images using a trained classifier to obtain the positions, in the image coordinate system, of the segmented regions of different parts of the same human body. Step d: the position information obtained in step c is converted from the image coordinate system to the world coordinate system through a projection transformation matrix, yielding the corresponding position of the target human body, or target body part, on the XY plane. Step e: during tracking, the Euclidean distance between the target contour coordinates and/or 2D-frame coordinates and the center point of the vehicle, along with related quantities, is calculated in the world coordinate system. Step f: the early warning grade is evaluated from the data calculated in step e.

Description

Omnidirectional pedestrian collision early warning method based on fisheye lens
Technical Field
The invention relates to the field of machine vision, in particular to an omnidirectional pedestrian collision early warning method based on a fisheye lens.
Background
Pedestrian detection is an important link in the field of safe driving. Pedestrians are highly unpredictable: they come in every height and build and in all age groups, wear clothing of varied fabrics and patterns, and adopt many postures; their walking speed can change at any moment, and unlike other target types their trajectories do not follow a relatively definite direction. Driving therefore demands great care in dense crowds, and about one third of traffic-accident casualties in China are pedestrians. To address this, a high-end driver assistance system or an automatic driving system must be able to give advance warning of, or directly intervene in, dangerous situations and erroneous human driving behavior; this corresponds to the forward pedestrian collision warning function.
Laser and infrared sensors can detect pedestrians passively, without illuminating the surroundings, and are little affected by lighting; given their price, however, they are unsuitable for large-scale mass production and are increasingly being replaced by vision sensors. Millimeter-wave radar is also used for pedestrian detection; although sensitive and accurate, it is strongly affected by weather and may produce false detections precisely under the more dangerous conditions of rain and fog.
At present, pedestrian detection is generally based on a set of cameras: one with a longer focal length but a smaller viewing angle that can see within 150 meters (for long-range pedestrian detection and warning), one with a shorter focal length but a larger viewing angle (for close-range pedestrian detection and warning within 60 meters), and a rear camera with a small viewing angle and short range for monitoring the road surface when reversing. Accordingly, most patents and research papers focus on pedestrian detection outside the driver's blind area, and their algorithms compute an approximate distance from detection of the whole human body or the head. Some vehicles also carry a ring of fisheye cameras around the body, but these are used mostly for automatic parking rather than for pedestrian detection in all directions. The main reason is that images collected by fisheye lenses are heavily distorted: distortion correction can recover straight contours for targets with simple lines (such as lane lines and parking-slot lines), but targets with complex textures, lines and contours, such as pedestrians, are not suited to distortion correction. The fisheye lens thus satisfies the need for a wide viewing angle while limiting the types of target that can be detected.
Therefore, in view of the visual blind area of conventional lenses and the pedestrian-recognition difficulty created by the fisheye camera's unsuitability for distortion-correcting complex textures, a novel pedestrian early warning method based on full-vehicle surround-view fisheye lenses is proposed.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an omnidirectional pedestrian collision early warning method based on fisheye lenses. Fisheye cameras arranged around the vehicle solve the problem of monitoring the road conditions around the whole vehicle; at the same time, a human-body semantic segmentation method based on a deep neural network handles cases that are otherwise hard to identify, such as pedestrians occluding one another or only part of a body appearing in the image. On this basis a more accurate pedestrian distance is obtained, making the early warning function of the system more comprehensive and accurate.
The invention discloses a pedestrian early warning method based on a full-vehicle all-around fisheye lens, which comprises the following steps of:
step a: a plurality of fisheye lenses with specified angles are arranged around the vehicle, and the visual angle ranges of the adjacent fisheye lenses are at least partially overlapped;
step b: acquiring pedestrian images around the vehicle in real time through the plurality of fisheye lenses;
step c: calculating human body semantic segmentation on the image acquired in real time by using a trained classifier to obtain the positions of segmentation areas of different parts of the same human body in an image coordinate system;
step d: c, converting the position information obtained in the step c from an image coordinate system to a world coordinate system through a projection transformation matrix to obtain the corresponding position of the target human body or the target human body part on an XY plane;
step e: continuously and dynamically tracking the target human body or target body part by a pedestrian tracking method until it leaves the monitoring range, and during tracking calculating, in the world coordinate system, the Euclidean distance between the target contour coordinates and/or 2D-frame coordinates and the center point of the vehicle, the vertical distance in the X-axis direction, the vertical distance in the Y-axis direction, and the relative speed and acceleration in the X-axis/Y-axis directions; and
step f: and e, evaluating the early warning grade according to the data obtained by calculation in the step e.
Preferably, in step a, four fisheye lenses are respectively mounted in the middle of the front and rear bumpers and below the left and right rear-view mirrors of the vehicle; preferably, each fisheye lens can observe the ground immediately against the vehicle body.
Preferably, in step c, the human body may be divided into upper and lower halves, or further into head, upper body, upper limbs, lower body and lower limbs, or further still into head, hands, feet, upper limbs, lower limbs, upper body and lower body, according to actual needs.
Preferably, in step c, the classifier is trained as follows:
using a vehicle provided with a plurality of fisheye lenses to collect a large number of images around the vehicle as a training data set, wherein the images around the vehicle comprise high-density pedestrian crowd images, ultra-close-range human body images, special human body posture images and incomplete human body images;
and (3) performing human semantic learning on the images in the training data set by using the deep neural network to obtain a classifier with optimal semantic segmentation robustness.
Preferably, in step c, the position includes: human body shape outline, human body 2D frame, human body 3D frame, human body different part shape outline, human body different part 2D frame.
Preferably, in step e, the first step,
the Euclidean distance is calculated by the following method:
step e1: searching the coordinate pI of a point on the contour or the 2D frame closest to the X axis under the image coordinate system;
step e2: converting the point coordinate into pW under a world coordinate system through projection transformation;
step e3: calculating the absolute value of the difference value between the x and y values of the pW point and the x and y values of the central coordinate point of the vehicle;
step e4: the square root of the sum of the squares of the absolute x and y values is calculated,
the X-axis direction vertical distance is calculated by the following method: starting from the Euclidean distance step e3, calculating the absolute value of the difference value from the X value of the pW point to the X axis of the world coordinate system,
the Y-axis direction vertical distance is calculated by the following method: starting from the Euclidean distance step e3, calculating the absolute value of the difference value between the Y value of the pW point and the Y axis of the world coordinate system,
the relative speed and the acceleration in the X-axis/Y-axis direction are calculated by the following method: vx is the ratio of the vertical distance in the X-axis direction to the time t used for specifying the frame, vy is the ratio of the vertical distance in the Y-axis direction to the time t used for specifying the frame, ax is the ratio of Vx to the time t used for specifying the frame, and Ay is the ratio of Vy to the time t used for specifying the frame.
Preferably, in step d, the world coordinate system is established as follows: the vertical projection of the vehicle center point onto the ground is taken as the origin; the X axis passes through the origin parallel to the vehicle's longitudinal axis, its positive direction being the direction of forward travel; the Y axis passes through the origin parallel to the vehicle's transverse axis, its positive direction pointing right; and the Z axis passes through the origin vertically upward. The intrinsic and extrinsic coefficients of each fisheye lens are calibrated.
Preferably, the projection transformation matrix from the image coordinate system to the world coordinate system is calculated from the intrinsic and extrinsic coefficients of the fisheye lens.
Preferably, in step f, the highest-level warning is given to every human target detected within one meter around the body of the host vehicle, regardless of the current vehicle speed.
Preferably, in step f, when the vehicle speed is greater than zero, for detected human targets beyond one meter from the vehicle body, the collision times Tx and Ty in the X and Y directions are calculated from the values obtained in step e:
evaluating an early warning grade according to the minimum value of Tx and Ty, wherein the shorter the time is, the higher the early warning grade is;
when the distance from the target to the X axis or the Y axis remains unchanged, or changes only very slightly, over the designated frames, no warning is given and the collision time is infinite;
when the distance from the target to the X axis or the Y axis shows a trend of gradually increasing over the designated frames, no warning is given and the collision time increases.
The invention has the following beneficial effects: the method greatly improves both the precision of pedestrian distance measurement and the robustness of human-body target detection, while extending the monitored viewing angle to the entire surroundings of the vehicle and the monitored range inward to a ring hugging the vehicle body.
Drawings
Fig. 1 is a flowchart of an omnidirectional pedestrian collision warning method based on a fisheye lens according to an embodiment of the invention.
FIG. 2 is a side view of a viewable area of an embodiment of the present invention.
FIG. 3 is a schematic view of a front viewing area of one embodiment of the present invention.
FIG. 4 is a top view of a viewable area of one embodiment of the present invention.
Detailed Description
The invention is further illustrated by the following examples, which are intended only to aid understanding of the invention and not to limit its scope.
Fig. 1 shows a pedestrian early warning method based on full-vehicle surround-view fisheye lenses according to an embodiment of the invention, which comprises the following steps a to f. The steps are explained in detail below.
Step a: a plurality of fisheye lenses at specified angles are arranged around the vehicle, with the viewing-angle ranges of adjacent fisheye lenses at least partially overlapping. In this embodiment there are four fisheye lenses, respectively installed in the middle of the front and rear bumpers and below the left and right rear-view mirrors of the vehicle; preferably, each lens can observe the ground immediately against the vehicle body. In other embodiments, the number and mounting positions of the fisheye lenses can be changed according to actual needs.
Step b: and acquiring pedestrian images around the vehicle in real time through the plurality of fisheye lenses.
Step c: human-body semantic segmentation is computed on the real-time images using the trained classifier, yielding the positions, in the image coordinate system, of the segmented regions of different parts of the same human body. In this step the human body may be divided into upper and lower halves, or further into head, upper body, upper limbs, lower body and lower limbs, or further still into head, hands, feet, upper limbs, lower limbs, upper body and lower body, according to actual needs.
In step c, the classifier is trained as follows:
using a vehicle provided with a plurality of fisheye lenses to collect a large number of images around the vehicle as a training data set, wherein the images around the vehicle comprise high-density pedestrian crowd images, ultra-close range human body images, special human body posture images and incomplete human body images;
and human-body semantic learning is performed on the images in the training data set using a deep neural network to obtain a classifier with optimal semantic-segmentation robustness. The classifier trained by the deep neural network can classify the images accurately.
In addition, in step c, the position includes: human body shape outline, human body 2D frame, human body 3D frame, human body different part shape outline, human body different part 2D frame.
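As a concrete illustration of how a 2D frame can be derived from a segmentation result, the sketch below computes a per-part bounding box from a part-label mask with NumPy. The label scheme (0 = background, 1 = head, 2 = upper body) and the function name `part_boxes` are hypothetical, not taken from the patent.

```python
import numpy as np

def part_boxes(label_mask):
    """Return a 2D frame (x_min, y_min, x_max, y_max), in image
    coordinates, for every non-zero part label in a segmentation mask.

    label_mask is an H x W integer array such as a semantic-segmentation
    model might output: 0 = background, 1 = head, 2 = upper body, ...
    (this label scheme is a hypothetical example).
    """
    boxes = {}
    for label in np.unique(label_mask):
        if label == 0:                       # skip background
            continue
        ys, xs = np.nonzero(label_mask == label)
        boxes[int(label)] = (int(xs.min()), int(ys.min()),
                             int(xs.max()), int(ys.max()))
    return boxes

# Toy 5x6 mask: label 1 occupies rows 0-1 / cols 2-3, label 2 rows 2-4 / cols 1-4.
mask = np.zeros((5, 6), dtype=int)
mask[0:2, 2:4] = 1
mask[2:5, 1:5] = 2
print(part_boxes(mask))  # {1: (2, 0, 3, 1), 2: (1, 2, 4, 4)}
```

A shape contour could be obtained the same way from the boundary pixels of each labeled region.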
Step d: the position information obtained in step c is converted from the image coordinate system to the world coordinate system through a projection transformation matrix, yielding the corresponding position of the target human body, or target body part, on the XY plane. In this step, the world coordinate system is established as follows: the vertical projection of the vehicle center point onto the ground is taken as the origin; the X axis passes through the origin parallel to the vehicle's longitudinal axis, its positive direction being the direction of forward travel; the Y axis passes through the origin parallel to the vehicle's transverse axis, its positive direction pointing right; and the Z axis passes through the origin vertically upward. The intrinsic and extrinsic coefficients of each fisheye lens are calibrated, and the projection transformation matrix from the image coordinate system to the world coordinate system is calculated from them.
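The conversion in step d can be illustrated as follows: once calibration yields a 3x3 projective transformation (homography) from the image plane to the Z = 0 ground plane, each pixel maps to a ground-plane point by a matrix product and a homogeneous division. This is a minimal sketch; the matrix `H` below is a made-up example, not real fisheye calibration data.

```python
import numpy as np

def image_to_world(p_image, H):
    """Map a pixel (u, v) to ground-plane world coordinates (X, Y) using a
    3x3 projective transformation H from the image plane to the Z = 0
    ground plane. In practice H would be derived from the calibrated
    intrinsic and extrinsic coefficients of the fisheye lens; the matrix
    used below is a made-up example, not real calibration data."""
    u, v = p_image
    X, Y, w = H @ np.array([u, v, 1.0])
    return X / w, Y / w                      # homogeneous -> Cartesian

# Hypothetical H: a pure pixel-to-meter scaling, for illustration only.
H = np.array([[0.01, 0.0, 0.0],
              [0.0, 0.01, 0.0],
              [0.0, 0.0, 1.0]])
print(image_to_world((200, 350), H))         # approximately (2.0, 3.5)
```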
Step e: the target human body, or target body part, is continuously and dynamically tracked by a pedestrian tracking method until it leaves the monitoring range; during tracking, the Euclidean distance between the target contour coordinates and/or 2D-frame coordinates and the center point of the vehicle, the vertical distance in the X-axis direction, the vertical distance in the Y-axis direction, and the relative speed and acceleration in the X-axis/Y-axis directions are calculated in the world coordinate system.
In step e, the euclidean distance is calculated as follows:
step e1: searching a coordinate pI of a point on the contour or the 2D frame closest to the X axis under the image coordinate system;
step e2: converting the point coordinate into pW under a world coordinate system through projection transformation;
step e3: calculating the absolute value of the difference value between the x and y values of the pW point and the x and y values of the central coordinate point of the vehicle;
step e4: the square root of the sum of the squares of the absolute x and y values is calculated.
In addition, the X-axis direction vertical distance is calculated by: starting from the Euclidean distance step e3, calculating the absolute value of the difference value from the X value of the pW point to the X axis of the world coordinate system,
the Y-axis direction vertical distance is calculated by the following method: starting from the Euclidean distance step e3, calculating the absolute value of the difference value from the Y value of the pW point to the Y axis of the world coordinate system,
the relative speed and acceleration in the X-axis/Y-axis directions are calculated as follows: Vx is the ratio of the change in the X-axis vertical distance over the designated frames to the elapsed time t, Vy is the ratio of the change in the Y-axis vertical distance over the designated frames to t, Ax is the ratio of the change in Vx to t, and Ay is the ratio of the change in Vy to t.
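A minimal sketch of the step-e quantities, assuming the world origin is the vehicle center (as established in step d) and that the target's ground-plane position is available for two designated frames; the function name and the interpretation of the per-axis speeds as closing speeds are assumptions of this sketch.

```python
import math

def motion_metrics(pw_prev, pw_curr, dt):
    """Compute the step-e quantities for a tracked target, given its
    ground-plane position pW in the previous and current designated
    frames (world coordinates, vehicle center at the origin) and the
    elapsed time dt between them:
      - Euclidean distance to the vehicle center (steps e3-e4),
      - per-axis vertical distances |x| and |y|,
      - per-axis closing speeds (positive means approaching).
    Acceleration would follow the same pattern from two successive speeds."""
    x, y = pw_curr
    dist = math.hypot(x, y)                  # sqrt(x^2 + y^2)
    dx, dy = abs(x), abs(y)
    vx = (abs(pw_prev[0]) - dx) / dt
    vy = (abs(pw_prev[1]) - dy) / dt
    return dist, dx, dy, vx, vy

# Target moved from (4.0, 3.0) m to (3.0, 1.5) m in 0.5 s:
print(motion_metrics((4.0, 3.0), (3.0, 1.5), 0.5))
# (3.3541..., 3.0, 1.5, 2.0, 3.0)
```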
Finally, step f: the early warning grade is evaluated from the data calculated in step e, in two cases. First, the highest-level warning is given to every human target detected within one meter around the body of the host vehicle, regardless of the current vehicle speed. Second, when the vehicle speed is greater than zero, for detected human targets beyond one meter from the vehicle body, the collision times Tx and Ty in the X and Y directions are calculated from the values obtained in step e:
evaluating an early warning grade according to the minimum value of Tx and Ty, wherein the shorter the time is, the higher the early warning grade is; where the minimum value refers to a non-zero value.
When the distance from the target to the X axis or the Y axis remains unchanged, or changes only very slightly, over the designated frames, no warning is given and the collision time is infinite; this situation indicates that the pedestrian is moving in a plane parallel to the vehicle body, so there is no risk of collision with the host vehicle.
When the distance from the target to the X axis or the Y axis shows a trend of gradually increasing (whether linearly or non-linearly) over the designated frames, the collision time increases; this situation indicates that the pedestrian is moving away from the vehicle, so no collision risk arises.
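The two-case logic of step f can be sketched as follows. The one-meter highest-level ring comes from the description; the time-to-collision bands (1/2/3 seconds) and the numeric grades are illustrative assumptions, not values specified by the patent.

```python
def warning_level(dist, dx, dy, vx, vy, vehicle_speed, eps=1e-3):
    """Coarse early-warning grade (0 = none, 3 = highest) from the step-e
    quantities. The 1 m highest-level ring follows the description; the
    1/2/3-second time-to-collision bands are illustrative assumptions."""
    if dist <= 1.0:                          # within 1 m: always highest level
        return 3
    if vehicle_speed <= 0:
        return 0
    # Per-axis collision times Tx, Ty; a non-positive closing speed means
    # the target keeps its distance or recedes -> collision time infinite.
    times = [d / v for d, v in ((dx, vx), (dy, vy)) if v > eps]
    if not times:
        return 0                             # parallel or receding target
    ttc = min(times)                         # grade from the smaller of Tx, Ty
    if ttc < 1.0:
        return 3
    if ttc < 2.0:
        return 2
    if ttc < 3.0:
        return 1
    return 0

print(warning_level(dist=5.0, dx=4.0, dy=3.0, vx=2.0, vy=0.5, vehicle_speed=10.0))
# prints 1: min(Tx, Ty) = min(4/2, 3/0.5) = 2.0 s, which falls in the 2-3 s band
```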
As shown in figs. 2 and 3, the area bounded by the dotted line is the driver's visual range and the area bounded by the two-dot chain line is the camera's visual range. A target (such as a dog, a child, or a crouching person) directly below a window, or within about six meters in front of the vehicle, may go unnoticed by the driver; the surround-view fisheye cameras of the invention solve this blind-area problem well.
Fig. 4 is a top view of the visible area of an embodiment of the invention. When only part of a pedestrian's body (for example, the feet) enters the monitoring range, the pedestrian is already identified and tracked, and the warning is adjusted according to the warning grade until the pedestrian completely leaves the monitoring range of the vehicle.
The equipment used by the invention is inexpensive and suited to mass production. At low cost it covers an omnidirectional viewing angle around the vehicle body and accurately calculates the straight-line distance, relative speed and relative acceleration from pedestrians of various forms (including every part of the target human body) to the vehicle body, thereby solving the difficulty of identifying and tracking heavily distorted pedestrians and human bodies under a fisheye lens.
It will be apparent to those skilled in the art that the above embodiments are merely illustrative of the present invention and are not to be construed as limiting the present invention, and that changes and modifications to the above described embodiments may be made within the spirit and scope of the present invention as defined in the appended claims.

Claims (10)

1. An omnibearing pedestrian collision early warning method based on a fisheye lens is characterized by comprising the following steps:
step a: a plurality of fisheye lenses with specified angles are arranged around the vehicle, and the visual angle ranges of the adjacent fisheye lenses are at least partially overlapped;
step b: acquiring pedestrian images around the vehicle in real time through the plurality of fisheye lenses;
step c: calculating human body semantic segmentation on the image acquired in real time by using a trained classifier to obtain the positions of segmentation areas of different parts of the same human body in an image coordinate system;
step d: c, converting the position information obtained in the step c from an image coordinate system to a world coordinate system through a projection transformation matrix to obtain the corresponding position of the target human body or the target human body part on an XY plane;
step e: continuously and dynamically tracking the target human body or the target human body part by a pedestrian tracking method until the target human body or the target human body part leaves a monitoring range, and calculating Euclidean distance between target contour coordinates and/or 2D frame coordinates and the center point of the vehicle in a world coordinate system during tracking, vertical distance in the X-axis direction, vertical distance in the Y-axis direction and relative speed and acceleration in the X-axis/Y-axis direction; and
step f: and e, evaluating the early warning grade according to the data obtained by calculation in the step e.
2. The method of claim 1, wherein in step a, four fisheye lenses are installed at the middle of the front and rear bumpers and under the left and right reflectors of the vehicle, preferably, the fisheye lenses are capable of observing the ground closely attached to the vehicle body.
3. The method according to claim 1, wherein in step c, the human body is divided into an upper body or a lower body, or further divided into a head, an upper body, upper limbs, a lower body and lower limbs, or further divided into a head, hands, feet, upper limbs, lower limbs, an upper body and a lower body according to actual needs.
4. The method of claim 1, wherein in step c, the classifier is trained as follows:
using a vehicle provided with a plurality of fisheye lenses to collect a large number of images around the vehicle as a training data set, wherein the images around the vehicle comprise high-density pedestrian crowd images, ultra-close-range human body images, special human body posture images and incomplete human body images;
and performing human semantic learning on the images in the training data set by using the deep neural network to obtain a classifier with optimal semantic segmentation robustness.
5. The method of claim 1, wherein in step c, the location comprises: human body shape outline, human body 2D frame, human body 3D frame, human body different part shape outline, human body different part 2D frame.
6. The method according to claim 4, wherein, in step e,
the euclidean distance is calculated as follows:
step e1: searching a coordinate pI of a point on the contour or the 2D frame closest to the X axis under the image coordinate system;
step e2: converting the point coordinate into pW under a world coordinate system through projection transformation;
step e3: calculating the absolute value of the difference value between the x and y values of the pW point and the x and y values of the central coordinate point of the vehicle;
step e4: the square root of the sum of the squares of the absolute x and y values is calculated,
the X-axis direction vertical distance is calculated by the following method: starting from the Euclidean distance step e3, calculating the absolute value of the difference value between the X value of the pW point and the X axis of the world coordinate system,
the Y-axis direction vertical distance is calculated by the following method: starting from the Euclidean distance step e3, calculating the absolute value of the difference value between the Y value of the pW point and the Y axis of the world coordinate system,
the relative speed and the acceleration in the X-axis/Y-axis direction are calculated by the following method: vx is the ratio of the vertical distance in the X-axis direction to the time t used for specifying the frame, vy is the ratio of the vertical distance in the Y-axis direction to the time t used for specifying the frame, ax is the ratio of Vx to the time t used for specifying the frame, and Ay is the ratio of Vy to the time t used for specifying the frame.
7. The method according to claim 1, wherein in step d the world coordinate system is established as follows: the vertical projection of the vehicle center point onto the ground is taken as the origin; the X axis passes through the origin parallel to the vehicle's longitudinal axis, its positive direction being the direction of forward travel; the Y axis passes through the origin parallel to the vehicle's transverse axis, its positive direction pointing right; and the Z axis passes through the origin vertically upward; and the intrinsic and extrinsic coefficients of each fisheye lens are calibrated.
8. The method of claim 1, wherein the projective transformation matrix from the image coordinate system to the world coordinate system is calculated from the internal and external reference coefficients of the fisheye lens.
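Once such a projective transformation matrix is available, applying it is a homogeneous matrix–vector product followed by normalization. A minimal sketch, assuming a 3x3 matrix H already obtained from the lens's calibrated internal and external parameters (the identity matrix used in the example is a placeholder, not a calibrated value):

```python
import numpy as np

def image_to_world(H, u, v):
    """Map an image point (u, v) onto the ground plane of the world
    coordinate system via a 3x3 projective transformation H.

    H is assumed to come from the fisheye lens's internal/external
    calibration; here it is an illustrative placeholder.
    """
    p = H @ np.array([u, v, 1.0])  # lift to homogeneous coordinates
    return p[:2] / p[2]            # normalize back to (X, Y) on the ground
```

With H set to the identity, the point maps to itself; a calibrated H would instead yield metric ground-plane coordinates relative to the vehicle origin.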
9. The method of claim 8, wherein in step f, the highest-level warning is given for every human target detected within one meter of the host vehicle's body, regardless of the host vehicle's current speed.
10. The method according to claim 1, wherein in step f, when the vehicle speed is greater than zero, for each detected human target more than one meter from the vehicle body, the collision times Tx and Ty in the X and Y directions are calculated from the values obtained in step e:
the early warning grade is evaluated according to the minimum of Tx and Ty, the shorter the collision time, the higher the warning grade;
when the distance from the target to the X axis or the Y axis remains essentially unchanged over the designated frames, the target is not warned and the collision time is 0;
when the distance from the target to the X axis or the Y axis shows a trend of gradually increasing over the designated frames, the target is not warned and the collision time increases.
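The grading logic of claims 9 and 10 can be sketched as below. The concrete threshold values, the level numbering, and the function signature are illustrative assumptions; the claims fix only the ordering (shorter collision time means a higher warning grade) and the one-meter highest-level rule.

```python
def warning_level(dist_x, dist_y, Vx, Vy, vehicle_speed,
                  ttc_thresholds=(1.0, 2.0, 3.0)):
    """Sketch of the step-f grading. Returns 0 (no warning) to 3 (highest).

    ttc_thresholds (in seconds) and the 0-3 scale are assumptions made
    for illustration, not values taken from the claims.
    """
    # Claim 9: any human target within one meter of the vehicle body
    # receives the highest-level warning regardless of vehicle speed.
    if min(dist_x, dist_y) <= 1.0:
        return 3
    # Claim 10 applies only when the vehicle speed is greater than zero.
    if vehicle_speed <= 0:
        return 0
    # Collision times in the X and Y directions; a target that is not
    # approaching (near-zero closing speed) is treated as non-warned.
    Tx = dist_x / Vx if Vx > 1e-6 else float("inf")
    Ty = dist_y / Vy if Vy > 1e-6 else float("inf")
    ttc = min(Tx, Ty)
    # Shorter collision time -> higher warning grade.
    for level, limit in enumerate(ttc_thresholds):
        if ttc <= limit:
            return len(ttc_thresholds) - level
    return 0
```

For example, a target 3 m away in X closing at 2 m/s (Tx = 1.5 s) falls in the middle band, while any target inside one meter is graded highest even when the vehicle is stationary.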
CN201910487362.0A 2019-06-05 2019-06-05 Omnidirectional pedestrian collision early warning method based on fisheye lens Active CN110378202B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910487362.0A CN110378202B (en) 2019-06-05 2019-06-05 Omnidirectional pedestrian collision early warning method based on fisheye lens


Publications (2)

Publication Number Publication Date
CN110378202A CN110378202A (en) 2019-10-25
CN110378202B true CN110378202B (en) 2022-12-09

Family

ID=68249835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910487362.0A Active CN110378202B (en) 2019-06-05 2019-06-05 Omnidirectional pedestrian collision early warning method based on fisheye lens

Country Status (1)

Country Link
CN (1) CN110378202B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582080B (en) * 2020-04-24 2023-08-08 杭州鸿泉物联网技术股份有限公司 Method and device for realizing 360-degree looking-around monitoring of vehicle
CN112001229B (en) * 2020-07-09 2021-07-20 浙江大华技术股份有限公司 Method, device and system for identifying video behaviors and computer equipment
CN112249021B (en) * 2020-10-14 2022-06-24 广州汽车集团股份有限公司 Road pedestrian collision risk prediction method and system
CN112257692B (en) * 2020-12-22 2021-03-12 湖北亿咖通科技有限公司 Pedestrian target detection method, electronic device and storage medium
CN113052009A (en) * 2021-03-02 2021-06-29 北京经纬恒润科技股份有限公司 Vehicle anti-collision method, model training method and device
CN113033441B (en) * 2021-03-31 2024-05-10 广州敏视数码科技有限公司 Pedestrian collision early warning method based on wide-angle imaging
CN113705403B (en) * 2021-08-18 2023-08-08 广州敏视数码科技有限公司 Front target vehicle collision early warning method fused with panoramic imaging system
CN114648639B (en) * 2022-05-19 2022-08-23 魔视智能科技(武汉)有限公司 Target vehicle detection method, system and device
CN117275102A (en) * 2023-10-13 2023-12-22 佛山佳立电气科技有限公司 Stereo garage offline charging system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017000115A1 (en) * 2015-06-29 2017-01-05 北京旷视科技有限公司 Person re-identification method and device
CN108638999A (en) * 2018-05-16 2018-10-12 浙江零跑科技有限公司 Collision early warning system and method based on 360-degree surround-view input


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
3D coordinate measurement method based on a panoramic camera; 王启超 et al.; Electronic Measurement Technology (电子测量技术); 2017-12-31 (No. 12); full text *


Similar Documents

Publication Publication Date Title
CN110378202B (en) Omnidirectional pedestrian collision early warning method based on fisheye lens
CN110531376B (en) Obstacle detection and tracking method for port unmanned vehicle
CN110244322B (en) Multi-source sensor-based environmental perception system and method for pavement construction robot
JP7157054B2 (en) Vehicle navigation based on aligned images and LIDAR information
US10696227B2 (en) Determining a road surface characteristic
CN106228110B Obstacle and drivable region detection method based on vehicle-mounted binocular camera
CN110065494B (en) Vehicle anti-collision method based on wheel detection
CN103176185B (en) Method and system for detecting road barrier
US10984557B2 (en) Camera calibration using traffic sign recognition
CN109084724A Deep learning obstacle distance measurement method based on binocular vision
CN108596058A Distance measurement method for moving obstacles based on computer vision
CN102194239B Method and system for processing image data
EP2958054A2 (en) Hazard detection in a scene with moving shadows
CN111712731A (en) Target detection method and system and movable platform
CN106054174A (en) Fusion method for cross traffic application using radars and camera
JP4956453B2 (en) Object detection device
CN107796373B (en) Distance measurement method based on monocular vision of front vehicle driven by lane plane geometric model
JP6758160B2 (en) Vehicle position detection device, vehicle position detection method and computer program for vehicle position detection
US10554951B2 (en) Method and apparatus for the autocalibration of a vehicle camera system
CN109827516B (en) Method for measuring distance through wheel
CN112597839B (en) Road boundary detection method based on vehicle-mounted millimeter wave radar
Raguraman et al. Intelligent drivable area detection system using camera and LiDAR sensor for autonomous vehicle
JP2023184572A (en) Electronic apparatus, movable body, imaging apparatus, and control method for electronic apparatus, program, and storage medium
CN111353481A (en) Road obstacle identification method based on laser point cloud and video image
Li et al. Composition and application of current advanced driving assistance system: A review

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant