CN108021849B - Pedestrian early warning method and device - Google Patents

Pedestrian early warning method and device

Info

Publication number
CN108021849B
Authority
CN
China
Prior art keywords
pedestrian, target, area, image, road image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610958798.XA
Other languages
Chinese (zh)
Other versions
CN108021849A (en)
Inventor
陈晓
童俊艳
任烨
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201610958798.XA priority Critical patent/CN108021849B/en
Publication of CN108021849A publication Critical patent/CN108021849A/en
Application granted
Publication of CN108021849B publication Critical patent/CN108021849B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application provide a pedestrian early warning method and device, applied to vehicles equipped with an image acquisition device, and relate to the technical field of intelligent traffic. The pedestrian early warning method comprises the following steps: acquiring a road image, collected by the image acquisition device, corresponding to the driving direction of the vehicle; obtaining target device parameters of the image acquisition device; determining, according to the target device parameters, a target virtual lane area in the road image that is fixedly connected with the vehicle; detecting a target pedestrian area in the road image; and performing early warning according to the positional relationship between the target pedestrian area and the target virtual lane area. By applying the technical solution provided by the embodiments of the present application, the accuracy of pedestrian early warning can be improved.

Description

Pedestrian early warning method and device
Technical Field
The application relates to the technical field of intelligent traffic, in particular to a pedestrian early warning method and device.
Background
As living standards improve, vehicles are used ever more frequently in daily life. Traffic accidents caused by collisions between vehicles and pedestrians occur frequently and pose a serious threat to people's lives and property. Pedestrian early warning is a key technology in driver assistance systems: it reminds the driver in time that a pedestrian is present in front of the vehicle, so that the driver can adopt a safe driving manner and avoid traffic accidents as much as possible.
In the prior art, a driver assistance system warns of dangerous targets ahead during driving by means of video processing. The specific process is as follows: while the vehicle is moving, the driver assistance system acquires a road image in front of the vehicle captured by a camera, detects pedestrians and lane lines in the road image, and issues a warning according to the positional relationship between the pedestrians and the lane lines.
In actual use, however, a vehicle does not always travel within the range and direction strictly defined by the lane lines, and may deviate from the lane area. In that case, the above pedestrian warning method cannot warn accurately.
Disclosure of Invention
The embodiment of the application aims to provide a pedestrian early warning method and device so as to improve accuracy of pedestrian early warning.
In order to achieve the above object, the present application discloses a pedestrian warning method applied to a vehicle equipped with an image capturing device, the method comprising:
acquiring a road image which is acquired by the image acquisition equipment and corresponds to the driving direction of the vehicle;
obtaining target equipment parameters of the image acquisition equipment;
determining a target virtual lane area fixedly connected with the vehicle in the road image according to the target equipment parameters;
detecting a target pedestrian region in the road image;
and performing early warning according to the positional relationship between the target pedestrian area and the target virtual lane area.
Optionally, the determining, according to the target device parameter, a target virtual lane area that is fixedly connected to the vehicle in the road image includes:
determining the longitudinal upper and lower boundary positions of the target virtual lane area according to the target equipment parameters and a preset alarm distance value and according to an imaging principle; the target virtual lane area is an area fixedly connected with the vehicle in the road image;
and determining the horizontal left and right boundary line positions of the target virtual lane area according to the target equipment parameters, the preset width and the alarm distance value and according to an imaging principle.
Optionally, the determining, according to the target device parameter and a preset alarm distance value and according to an imaging principle, the longitudinal upper and lower boundary positions of the target virtual lane area includes:
determining the longitudinal upper boundary position py_upper and lower boundary position py_lower of the target virtual lane area according to the following formulas:
py_upper = im_h*s/2 + f*tan[θ − arctan(H/y1)], py_lower = 0
where im_h is the total number of longitudinal pixels of the road image on the imaging plane, s is the pixel size of the image acquisition device, f is the focal length of the optical element in the image acquisition device, θ is the included angle between the optical axis of the optical element and the horizontal plane, H is the installation height of the image acquisition device, and y1 is the preset alarm distance value.
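For illustration only, the boundary formula above can be sketched in code. The function and parameter names, the unit conventions, and the sample values in the usage are our assumptions, not part of the patent:

```python
import math

def lane_vertical_bounds(im_h, s, f, theta, H, y1):
    """Longitudinal upper/lower boundary positions of the virtual lane area
    on the imaging plane, per the formula above.
    All lengths share one unit (e.g. metres); theta is in radians.
    im_h: total vertical pixels; s: pixel size; f: focal length;
    theta: optical-axis tilt from horizontal; H: mounting height;
    y1: preset alarm distance.
    """
    py_upper = im_h * s / 2 + f * math.tan(theta - math.atan(H / y1))
    py_lower = 0.0
    return py_upper, py_lower
```

With illustrative values such as a 720-pixel-tall sensor, 4.2 µm pixels, 4 mm focal length, θ = 0.05 rad, H = 1.3 m and y1 = 30 m, py_upper lands slightly above the image-plane midline because θ exceeds arctan(H/y1).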
Optionally, when the image capturing device is installed on a central plane of the vehicle in the left-right direction, determining the horizontal left-right boundary position of the target virtual lane area according to the imaging principle according to the target device parameter, the preset width and the alarm distance value includes:
determining points px_left1 and px_left2 on the lateral left boundary line of the target virtual lane area, and points px_right1 and px_right2 on the right boundary line, according to the following formulas:
px_left1 = (x1 − lane_w/2)/(2*x1), px_right1 = (x1 + lane_w/2)/(2*x1)
px_left2 = (x2 − lane_w/2)/(2*x2), px_right2 = (x2 + lane_w/2)/(2*x2)
where x1 = d1*im_w*s/(2*f), d1² = H² + y1², x2 = d2*im_w*s/(2*f), d2² = H² + y2², y2 = H/tan[arctan(im_h*s/(2*f)) + θ], im_w is the total number of transverse pixels of the road image on the imaging plane, im_h is the total number of longitudinal pixels of the road image on the imaging plane, s is the pixel size of the image acquisition device, f is the focal length of the optical element in the image acquisition device, θ is the included angle between the optical axis of the optical element and the horizontal plane, lane_w is the preset width, H is the installation height of the image acquisition device, and y1 is the alarm distance value.
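The lateral-boundary formulas can likewise be sketched in code; this is a non-authoritative illustration in which the function name, units, and sample values are assumptions:

```python
import math

def lane_horizontal_bounds(im_w, im_h, s, f, theta, lane_w, H, y1):
    """Normalized x-positions of two points on each lateral boundary line of
    the virtual lane area, per the formulas above (camera assumed on the
    vehicle's lateral centre plane)."""
    # Ground distance corresponding to the bottom edge of the image
    y2 = H / math.tan(math.atan(im_h * s / (2 * f)) + theta)
    d1 = math.hypot(H, y1)   # d1^2 = H^2 + y1^2
    d2 = math.hypot(H, y2)   # d2^2 = H^2 + y2^2
    x1 = d1 * im_w * s / (2 * f)
    x2 = d2 * im_w * s / (2 * f)
    px_left1, px_right1 = (x1 - lane_w / 2) / (2 * x1), (x1 + lane_w / 2) / (2 * x1)
    px_left2, px_right2 = (x2 - lane_w / 2) / (2 * x2), (x2 + lane_w / 2) / (2 * x2)
    return (px_left1, px_left2), (px_right1, px_right2)
```

Note that for a centre-mounted camera each left/right pair is symmetric about the image centre, i.e. px_left + px_right = 1.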
Optionally, the detecting a target pedestrian region in the road image includes:
detecting a suspected pedestrian area in the road image;
calculating an actual height value corresponding to the suspected pedestrian area according to the target equipment parameters and an imaging principle;
judging whether the actual height value lies within the range [h0/Th, h0*Th], where h0 is a preset height threshold value and Th is a preset value;
if yes, determining a target pedestrian area in the road image according to the suspected pedestrian area.
Optionally, the calculating an actual height value corresponding to the suspected pedestrian area according to the target device parameter and the imaging principle includes:
calculating the actual height value h_person corresponding to the suspected pedestrian area according to the following formula:
h_person = H − y_person*tan(θ − arctan[(ry2 − im_h*s)/f])
where y_person = H/tan(θ − arctan[(ry1 − im_h*s)/f]), im_h is the total number of longitudinal pixels of the road image on the imaging plane, s is the pixel size of the image acquisition device, f is the focal length of the optical element in the image acquisition device, θ is the included angle between the optical axis of the optical element and the horizontal plane, H is the installation height of the image acquisition device, and ry1 and ry2 are the longitudinal upper and lower boundary positions of the suspected pedestrian area, respectively.
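A minimal sketch of this height check, under assumed names, units and sample values (none of which come from the patent):

```python
import math

def pedestrian_height(ry1, ry2, im_h, s, f, theta, H):
    """Estimated real-world height of a candidate region, per the formula
    above. ry1/ry2 are the region's longitudinal boundary positions on the
    imaging plane, in the same length unit as s, f and H."""
    # Ground distance to the candidate, from its lower boundary ry1
    y_person = H / math.tan(theta - math.atan((ry1 - im_h * s) / f))
    # Height from the upper boundary ry2 at that distance
    return H - y_person * math.tan(theta - math.atan((ry2 - im_h * s) / f))

def plausible_height(h_person, h0, Th):
    # Keep the region only if its estimated height falls in [h0/Th, h0*Th]
    return h0 / Th <= h_person <= h0 * Th
```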
Optionally, when the road image is an infrared image, the detecting a target pedestrian area in the road image includes:
and inputting the road image into a pre-generated infrared image detection model to obtain a target pedestrian region in the road image, wherein the infrared image detection model is generated by training a pre-collected sample infrared image according to a preset machine learning algorithm.
Optionally, when the road image is an infrared image, the detecting a target pedestrian area in the road image includes:
detecting a suspected pedestrian area in the road image;
calculating the gray level distribution uniformity value of the suspected pedestrian area;
judging whether the gray distribution uniformity value is larger than a preset uniformity value or not;
and if so, determining the suspected pedestrian area as a target pedestrian area.
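The patent does not define how the "gray distribution uniformity value" is computed; the sketch below uses one plausible stand-in (the fraction of pixels near the region mean), so both the measure and its thresholds are our assumptions:

```python
def gray_uniformity(pixels, band=20):
    """Fraction of pixels whose gray value lies within +/-band of the region
    mean -- an assumed proxy for the 'gray distribution uniformity value'."""
    mean = sum(pixels) / len(pixels)
    return sum(1 for p in pixels if abs(p - mean) <= band) / len(pixels)

def is_pedestrian_by_uniformity(pixels, uniformity_threshold=0.7):
    # A warm pedestrian in a thermal image tends to have fairly even gray
    # values, so high uniformity supports the pedestrian hypothesis.
    return gray_uniformity(pixels) > uniformity_threshold
```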
Optionally, when the road image is an infrared image, the detecting a target pedestrian area in the road image includes:
detecting a suspected pedestrian area in the road image;
obtaining the gray average value of the suspected pedestrian area;
judging whether the gray average value is larger than a preset gray threshold value or not;
and if so, determining the suspected pedestrian area as a target pedestrian area.
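The mean-gray variant above can be sketched as follows; the threshold value is an assumption, since the patent only says it is preset:

```python
def is_pedestrian_by_gray_mean(pixels, gray_threshold=100):
    """Mean-gray plausibility test for an infrared (thermal) road image:
    a pedestrian is usually warmer, hence brighter, than the background.
    gray_threshold is an assumed preset value."""
    return sum(pixels) / len(pixels) > gray_threshold
```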
In order to achieve the above object, the present application discloses a pedestrian early warning device, applied to a vehicle equipped with an image acquisition device, the device comprising:
the image acquisition module is used for acquiring a road image which is acquired by the image acquisition equipment and corresponds to the driving direction of the vehicle;
the parameter obtaining module is used for obtaining target equipment parameters of the image acquisition equipment;
the region determining module is used for determining a target virtual lane region fixedly connected with the vehicle in the road image according to the target equipment parameter;
the pedestrian detection module is used for detecting a target pedestrian area in the road image;
and the pedestrian early warning module is used for early warning according to the position relation between the target pedestrian area and the target virtual lane area.
Optionally, the area determining module includes:
the upper and lower boundary determining submodule is used for determining the longitudinal upper and lower boundary positions of the target virtual lane area according to the target equipment parameters and a preset alarm distance value and according to an imaging principle; the target virtual lane area is an area fixedly connected with the vehicle in the road image;
and the left and right boundary determining submodule is used for determining the positions of the transverse left and right boundary lines of the target virtual lane area according to the target equipment parameters, the preset width and the alarm distance value and according to an imaging principle.
Optionally, the upper and lower boundary determining sub-module is specifically configured to:
determine the longitudinal upper boundary position py_upper and lower boundary position py_lower of the target virtual lane area according to the following formulas:
py_upper = im_h*s/2 + f*tan[θ − arctan(H/y1)], py_lower = 0
where im_h is the total number of longitudinal pixels of the road image on the imaging plane, s is the pixel size of the image acquisition device, f is the focal length of the optical element in the image acquisition device, θ is the included angle between the optical axis of the optical element and the horizontal plane, H is the installation height of the image acquisition device, and y1 is the preset alarm distance value.
Optionally, when the image capturing device is installed on a central plane of the vehicle in the left-right direction, the left-right boundary determining submodule is specifically configured to:
determine points px_left1 and px_left2 on the lateral left boundary line of the target virtual lane area, and points px_right1 and px_right2 on the right boundary line, according to the following formulas:
px_left1 = (x1 − lane_w/2)/(2*x1), px_right1 = (x1 + lane_w/2)/(2*x1)
px_left2 = (x2 − lane_w/2)/(2*x2), px_right2 = (x2 + lane_w/2)/(2*x2)
where x1 = d1*im_w*s/(2*f), d1² = H² + y1², x2 = d2*im_w*s/(2*f), d2² = H² + y2², y2 = H/tan[arctan(im_h*s/(2*f)) + θ], im_w is the total number of transverse pixels of the road image on the imaging plane, im_h is the total number of longitudinal pixels of the road image on the imaging plane, s is the pixel size of the image acquisition device, f is the focal length of the optical element in the image acquisition device, θ is the included angle between the optical axis of the optical element and the horizontal plane, lane_w is the preset width, H is the installation height of the image acquisition device, and y1 is the alarm distance value.
Optionally, the pedestrian detection module includes:
the first detection submodule is used for detecting a suspected pedestrian area in the road image;
the first calculation submodule is used for calculating an actual height value corresponding to the suspected pedestrian area according to the target equipment parameters and an imaging principle;
a first judgment submodule, used for judging whether the actual height value lies within the range [h0/Th, h0*Th], where h0 is a preset height threshold value and Th is a preset value;
a first determination submodule, used for determining a target pedestrian area in the road image according to the suspected pedestrian area when the actual height value lies within the range [h0/Th, h0*Th].
Optionally, the first calculation submodule is specifically configured to:
calculate the actual height value h_person corresponding to the suspected pedestrian area according to the following formula:
h_person = H − y_person*tan(θ − arctan[(ry2 − im_h*s)/f])
where y_person = H/tan(θ − arctan[(ry1 − im_h*s)/f]), im_h is the total number of longitudinal pixels of the road image on the imaging plane, s is the pixel size of the image acquisition device, f is the focal length of the optical element in the image acquisition device, θ is the included angle between the optical axis of the optical element and the horizontal plane, H is the installation height of the image acquisition device, and ry1 and ry2 are the longitudinal upper and lower boundary positions of the suspected pedestrian area, respectively.
Optionally, when the road image is an infrared image, the pedestrian detection module is specifically configured to:
and inputting the road image into a pre-generated infrared image detection model to obtain a target pedestrian region in the road image, wherein the infrared image detection model is generated by training a pre-collected sample infrared image according to a preset machine learning algorithm.
Optionally, when the road image is an infrared image, the pedestrian detection module includes:
the second detection submodule is used for detecting a suspected pedestrian area in the road image;
the second calculation submodule is used for calculating the gray level distribution uniformity value of the suspected pedestrian area;
the second judgment submodule is used for judging whether the gray distribution uniformity value is larger than a preset uniformity value or not;
and the second determining submodule is used for determining the suspected pedestrian area as a target pedestrian area when the gray distribution uniformity value is larger than a preset uniformity value.
Optionally, when the road image is an infrared image, the pedestrian detection module includes:
the third detection submodule is used for detecting a suspected pedestrian area in the road image;
an obtaining submodule for obtaining a gray average value of the suspected pedestrian area;
the third judgment submodule is used for judging whether the gray average value is larger than a preset gray threshold value or not;
and the third determining submodule is used for determining the suspected pedestrian area as a target pedestrian area when the gray average value is larger than a preset gray threshold value.
As can be seen from the above technical solutions, in the embodiments of the present application, first, a vehicle as an execution subject obtains a road image corresponding to a driving direction of the vehicle and target device parameters of an image capturing device installed in the vehicle. And then, according to the target equipment parameters, determining a target virtual lane area fixedly connected with the vehicle in the road image, and detecting a target pedestrian area in the road image. And finally, early warning is carried out according to the position relation between the target pedestrian area and the target virtual lane area.
That is to say, the embodiment of the application mainly determines the virtual lane area according to the device parameters of the image capturing device, and the virtual lane area is not the actual lane area in the road but a virtual lane area fixedly connected with the vehicle. In this way, even if the vehicle does not travel in the range and direction defined by the actual lane line in the road, the relative position of the pedestrian and the vehicle can be accurately known, so that the accuracy of pedestrian warning can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic flowchart of a pedestrian warning method according to an embodiment of the present disclosure;
FIG. 2 is a schematic view of a target virtual lane area in a road image;
FIG. 3 is a schematic flow chart of step S103 in FIG. 1;
FIG. 4a is a schematic diagram of image parameters and virtual lanes in a road image;
FIG. 4b is a schematic diagram illustrating a principle of determining upper and lower boundary lines of a target virtual lane;
FIG. 4c is a schematic diagram illustrating a principle of determining left and right boundary lines of a target virtual lane;
FIG. 4d is a schematic diagram of a principle of determining the actual height of a suspected pedestrian area;
Fig. 5a and 5b are schematic views of a target virtual lane area determined when the vehicle turns;
Fig. 6 is a schematic structural diagram of a pedestrian warning apparatus according to an embodiment of the present disclosure;
fig. 7 is another schematic structural diagram of a pedestrian warning device provided in the embodiment of the present application.
Detailed Description
The technical solution in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the described embodiments are merely a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a pedestrian early warning method and device, and accuracy of pedestrian early warning is improved.
The present application will be described in detail below with reference to specific examples.
Fig. 1 is a schematic flow chart of a pedestrian warning method provided in an embodiment of the present application, applied to a vehicle equipped with an image capturing device. Specifically, the pedestrian early warning method can be applied to a driver assistance system installed in the vehicle. The image acquisition device may be a vehicle event data recorder or another device. The image acquired by the image acquisition device may be an infrared image or a non-infrared image; for example, the non-infrared image may be an RGB (red, green, and blue channel) image.
Specifically, the method comprises the following steps:
step S101: a road image corresponding to a driving direction of the vehicle is obtained. Wherein the road image is acquired by an image acquisition device.
It should be noted that, when the image capturing device is installed, the shooting direction of the image capturing device may be oriented to the driving direction of the vehicle, so that the road image captured by the image capturing device is an image in the driving direction of the vehicle. The driving direction of the vehicle may be toward the front of the vehicle, may be toward the rear of the vehicle, or may be a direction turning to the left or right of the vehicle.
Step S102: and obtaining target equipment parameters of the image acquisition equipment.
The target device parameters of the image acquisition device may include: the pixel size s of the image acquisition device, the focal length f of the optical element in the image acquisition device, the included angle θ between the optical axis of the optical element and the horizontal plane (this angle may also be approximated by the included angle between the image acquisition device and the horizontal plane), the installation height H of the image acquisition device, and the like.
Step S103: and determining a target virtual lane area fixedly connected with the vehicle in the road image according to the target equipment parameters.
It will be appreciated that the imaging process of the road image is associated with target device parameters. The target virtual lane area fixedly connected with the vehicle in the road image is not the actual lane area but the lane area which is virtualized in the image. The target virtual lane area is fixed to the vehicle, which means that when the traveling direction of the vehicle changes, the target virtual lane area also changes with the traveling direction of the vehicle.
Fig. 2 is a schematic diagram of virtual lane areas in a road image, in which one or more of the areas 1 to 5 may be set as target virtual lane areas. Area 1, area 2, and area 3 are virtual lane areas of the central region in the traveling direction of the vehicle, with area 1 closest to the vehicle and area 3 farthest from it. Areas 4 and 5 are virtual lane areas on the two sides of the central region in the vehicle traveling direction.
Specifically, when the target virtual lane area fixedly connected with the vehicle in the road image is determined according to the target device parameter, the target virtual lane area fixedly connected with the vehicle in the road image can be determined according to the target device parameter, a preset alarm distance value and a preset width according to an imaging principle.
Step S104: detecting a target pedestrian region in the road image.
When the road image is shot by the image acquisition device under the condition of sufficient illumination, for example, when the road image is a non-infrared image, a target pedestrian area in the road image can be detected according to a pedestrian detection method in the prior art, and the specific process is not repeated.
When the road image is captured by the image capture device under the condition of insufficient illumination, for example, when the road image is an infrared image, and the target pedestrian area in the road image is detected, the method may include: inputting a road image into a pre-generated infrared image detection model to obtain a target pedestrian region in the road image, wherein the infrared image detection model is generated by training a pre-acquired sample infrared image according to a preset machine learning algorithm.
Specifically, when the infrared image detection model is trained, an infrared image containing a pedestrian sample may be collected in advance as a positive sample, an infrared image not containing a pedestrian sample may be collected as a negative sample, a pedestrian target in the positive sample is marked, and a non-pedestrian target in the negative sample is marked. And then, training a preset machine learning algorithm by adopting the positive sample and the negative sample to obtain an infrared image detection model. The posture of the pedestrian in the positive sample may include a forward posture, a backward posture, a sideways posture, a walking posture, a still posture, and the like.
It should be noted that the more the types of the acquired sample infrared images are, the larger the number of the samples is, and the better the robustness (or compatibility) of the trained model is.
When detecting a target pedestrian region in a road image based on a pre-generated infrared image detection model, the road image may be input as input information to the infrared image detection model, and the road image may be detected by the infrared image detection model. Wherein, the output result of the infrared image detection model may include: whether a pedestrian is included in the road image, and an area where the pedestrian is located when the pedestrian is included.
As a specific embodiment, the machine learning algorithm may be a Deformable Part Model (DPM) detection algorithm. Of course, other algorithms may also be used to train the model, which is not specifically limited in this embodiment.
Meanwhile, after a target pedestrian area with high confidence level is detected, a multi-target tracking algorithm can be used for tracking the detection result, and the continuity of target output is improved.
Step S105: and early warning is carried out according to the position relation between the target pedestrian area and the target virtual lane area.
Specifically, when performing early warning according to the position relationship between the target pedestrian area and the target virtual lane area, the method may include: and judging whether the target pedestrian area is in the range of the target virtual lane area, if so, early warning, and otherwise, not processing.
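As a minimal sketch of this containment judgment, the check below warns when the pedestrian box overlaps the virtual lane area; the (x1, y1, x2, y2) box representation is an assumption for illustration, not the patent's own data format:

```python
def should_warn(ped_box, lane_box):
    """Warn iff the target pedestrian box overlaps the target virtual lane
    area. Both boxes are axis-aligned (x1, y1, x2, y2) tuples in pixels."""
    px1, py1, px2, py2 = ped_box
    lx1, ly1, lx2, ly2 = lane_box
    # Standard interval-overlap test on both axes
    return px1 < lx2 and lx1 < px2 and py1 < ly2 and ly1 < py2
```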
As a specific implementation manner, when two or more target virtual lane areas exist, when performing early warning according to a position relationship between a target pedestrian area and the target virtual lane area, the early warning may include:
determining a target virtual lane area where the target pedestrian area is located, determining a first early warning level corresponding to the target virtual lane area where the target pedestrian area is located according to a pre-stored corresponding relation between the target virtual lane area and the early warning level, and then early warning according to the first early warning level.
When the target pedestrian area is not within the range of the target virtual lane area, that is, when the target pedestrian area is not within the range of any one of the target virtual lane areas, no processing is performed.
As an embodiment, when the early warning level is set, the early warning level may be set according to a position relationship between the target virtual lane area and the vehicle and a preset rule, where the specific preset rule may include: the closer the vehicle is to the target virtual lane area in front of the vehicle, the higher the corresponding early warning level.
For example, in the road image shown in fig. 2, each of the regions 1 to 5 is a target virtual lane region. Region 1 is closest to the vehicle, region 2 is next, and region 3 is farthest, so the early warning levels of these three regions can decrease in that order. Regions 4 and 5 are located on the two sides of the vehicle, so their warning levels can be set lower as appropriate. The warning levels of the regions may be set as follows:
Region 1 → warning level one (highest); Region 2 → warning level two;
Region 3 → warning level three; Region 4 → warning level four (lowest);
Region 5 → warning level four.
As can be seen from fig. 2, pedestrians are detected in both region 1 and region 2; therefore, warning levels one and two can be determined, respectively.
It should be noted that, when the target pedestrian area is located in at least two target virtual lane areas, the early warning can be performed according to the early warning level corresponding to the area with the highest early warning level, so as to ensure the driving safety of the vehicle as much as possible.
As is apparent from the above, in the present embodiment, first, the vehicle as the execution subject obtains the road image corresponding to the traveling direction of the vehicle, and the target device parameters of the image capturing device mounted in the vehicle. And then, according to the target equipment parameters, determining a target virtual lane area fixedly connected with the vehicle in the road image, and detecting a target pedestrian area in the road image. And finally, early warning is carried out according to the position relation between the target pedestrian area and the target virtual lane area.
That is to say, the present embodiment mainly determines a virtual lane area according to the device parameters of the image capturing device, where the virtual lane area is not a real lane area in a road but a virtual lane area fixedly connected to a vehicle. In this way, even if the vehicle does not travel in the range and direction defined by the actual lane line in the road, the relative position of the pedestrian and the vehicle can be accurately known, so that the accuracy of pedestrian warning can be improved.
In another implementation manner based on the embodiment shown in fig. 1, after step S105, according to the target virtual lane area and the early warning result, a target frame with a corresponding color is displayed at a corresponding position on a display screen of the driving assistance system, and a sound alarm with a corresponding level is issued through a buzzer, so that the driver can know the position of the pedestrian more clearly and take more accurate safety operation.
In another implementation manner based on the embodiment shown in fig. 1, step S103, namely determining a target virtual lane area fixedly connected to a vehicle in a road image according to a target device parameter, may be performed according to the flowchart shown in fig. 3, and specifically includes the following steps:
step S103A: determining the longitudinal upper and lower boundary positions of the target virtual lane area according to the target device parameters and a preset alarm distance value, in accordance with the imaging principle.
And the target virtual lane area is an area fixedly connected with the vehicle in the road image.
It should be noted that, in this embodiment, the virtual lane lines on both sides of the target virtual lane area, which is fixedly connected to the vehicle in the vehicle traveling direction, are generally arranged as parallel lines in real space. Such a target virtual lane area therefore corresponds to a trapezoidal area in the road image: once the positions of the four vertices of the trapezoid are determined, the target virtual lane area is determined.
When the road image is obtained, image parameters of the road image on an imaging plane of the image acquisition equipment can be obtained, wherein the image parameters comprise the longitudinal total pixel number im _ h and the transverse total pixel number im _ w. Fig. 4a is a schematic diagram of image parameters and a target virtual lane area in a road image, wherein the number of pixels in the horizontal direction and the vertical direction is marked, and the lower left corner of the image is a 0 position of the number of pixels. Lines 1, 2, 3 and 4 in the figure are the upper, lower, left and right boundaries of the target virtual lane area, respectively. For convenience of description below, the upper, lower, left, and right positions of the image, and two vertexes P1 and P2 of the target virtual lane area are also indicated in fig. 4a, and P0 is a point on the same abscissa as the P1 and located at the left edge of the image.
Specifically, in this embodiment, step S103A may include the following implementation manners:
The longitudinal upper boundary position py_upper and lower boundary position py_lower of the target virtual lane area are determined according to the following formula:

py_upper = im_h*s/2 + f*tan[θ - arctan(H/y1)], py_lower = 0

where im_h is the total number of longitudinal pixels of the road image on the imaging plane and s is the pixel size of the image acquisition device, so that im_h*s is the height of the road image on the imaging plane; f is the focal length of the optical element in the image acquisition device; θ is the included angle between the optical axis of the optical element and the horizontal plane; H is the installation height of the image acquisition device; and y1 is the preset alarm distance value, i.e., the horizontal distance from the alarm distance point in front of the vehicle to the image acquisition device. For example, y1 may take a value of 10 m or another value.
It should be noted that im _ h may also be referred to as a target surface height of the image capturing device.
The derivation of the above equation is as follows.
Referring to the schematic diagram of fig. 4b, the imaging plane, the horizontal plane and the horizon are marked, and the up and down directions of the road image on the imaging plane are indicated, and the driving direction of the vehicle is shown to the right along the y-axis. The image acquisition device is located at point O, the projected point of point O on the ground plane is point O', and the optical axes of the optical elements are shown in fig. 4 b. The points P1 and P2 correspond to points P1 and P2 in fig. 4a, respectively, the longitudinal distance from the point P1 to the center point of the road image is py1, and the optical axis of the optical element passes through the center point of the road image. P1 corresponds to point L1 on the actual ground, P2 corresponds to point L2 on the actual ground, and point L2 is the closest imaging position. The distance from the point L1 to the point O' in the y-axis direction is the alarm distance value y 1.
As can be seen from fig. 4b, the longitudinal position of point P2 is py_lower = 0, and the longitudinal position of point P1 is py_upper. py_upper is calculated as follows. From the triangular relationship in fig. 4b, py_upper = im_h*s/2 + py1, where py1 = f*tan α, α = θ - β, and β = arctan(H/y1); therefore py1 = f*tan[θ - arctan(H/y1)] and py_upper = im_h*s/2 + f*tan[θ - arctan(H/y1)].
If the height of the road image is normalized to 0-1, then py_upper = 0.5 + f*tan[θ - arctan(H/y1)]/(im_h*s).
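The boundary formula above can be checked numerically. Below is a hedged Python sketch (function names are illustrative; all lengths share one unit and θ is in radians) that returns both the imaging-plane and the normalized positions:

```python
import math

def lane_vertical_bounds(im_h, s, f, theta, H, y1):
    """py_upper and py_lower on the imaging plane, measured from the
    bottom edge of the road image (same formula as in the text)."""
    py_upper = im_h * s / 2 + f * math.tan(theta - math.atan(H / y1))
    return py_upper, 0.0

def lane_vertical_bounds_normalized(im_h, s, f, theta, H, y1):
    """Same boundaries normalized by the image height im_h*s (0..1)."""
    py_upper, py_lower = lane_vertical_bounds(im_h, s, f, theta, H, y1)
    return py_upper / (im_h * s), py_lower
```

As a sanity check, when θ = arctan(H/y1) the alarm-distance point projects onto the image center, so the normalized upper boundary is exactly 0.5.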
Step S103B: and determining the transverse left and right boundary line positions of the target virtual lane area according to the target equipment parameters, the preset width and the alarm distance value and according to the imaging principle.
The preset width represents a width of the virtual lane area in the real space, and may be a preset value, for example, a preset width value such as 2m or 2.1m may be set.
Fig. 4a is a target virtual lane area obtained when the image pickup apparatus is mounted on a center plane in the left-right direction of the vehicle. The central plane is the central plane between the planes of the two front wheels of the vehicle. When the image capturing device is not mounted on the center plane, but is mounted at a distance μ from the left or right side of the center plane, the target virtual lane area of the trapezoid in fig. 4a may be shifted to the left or right side by μ accordingly.
Specifically, for the sake of simplicity, an implementation of step S103B in the present embodiment is given below by taking as an example that the image pickup apparatus is mounted on a center plane in the left-right direction of the vehicle:
Points px_left1 and px_left2 on the lateral left boundary line of the target virtual lane area, and points px_right1 and px_right2 on the right boundary line, are determined according to the following formulas:

px_left1 = (x1 - lane_w/2)/(2*x1), px_right1 = (x1 + lane_w/2)/(2*x1)

px_left2 = (x2 - lane_w/2)/(2*x2), px_right2 = (x2 + lane_w/2)/(2*x2)

where x1 = d1*im_w*s/(2*f), d1² = H² + y1², x2 = d2*im_w*s/(2*f), d2² = H² + y2², and y2 = H/tan[arctan(im_h*s/(2*f)) + θ]; y2 represents the closest imaging distance (the actual distance on the ground); im_w*s represents the width of the road image on the imaging plane; im_w and im_h are the total numbers of transverse and longitudinal pixels of the road image on the imaging plane, respectively; s is the pixel size of the image acquisition device; f is the focal length of the optical element in the image acquisition device; θ is the included angle between the optical axis of the optical element and the horizontal plane; lane_w is the preset width; H is the installation height of the image acquisition device; and y1 is the alarm distance value.

Note that the formulas for px_left1, px_left2, px_right1 and px_right2 are normalized formulas. im_w may also be referred to as the target surface width of the image acquisition device.
The derivation of the above equation is as follows.
Referring to the schematic diagram of fig. 4c, the imaging plane is marked and the x and y axes form a world coordinate system located on the ground plane, the imaging plane and the x axis form a first coordinate system different from the world coordinate system, and the driving direction of the vehicle is to the right along the y axis. The image acquisition equipment is positioned at the point O, and the projection point of the point O on the ground plane is the point O'.
The calculation process is described below, taking as an example the x-coordinate, in the road image, of the imaging point P2 of point L2 at the closest imaging distance. As can be seen from fig. 4c, according to the triangular relationship, the distance between point L2 and point O′ in the world coordinate system can be calculated as

y2 = H/tan[arctan(im_h*s/(2*f)) + θ]

The perpendicular distance between point O and the x axis is d2, with d2² = H² + y2². Meanwhile, in the first coordinate system, it can be calculated that

x2 = d2*im_w*s/(2*f)
From the above calculation results, the normalized coordinates of the left and right end points of the virtual lane at the closest imaging distance in the world coordinate system are px_left2 = (x2 - lane_w/2)/(2*x2) and px_right2 = (x2 + lane_w/2)/(2*x2).
When the left end point and the right end point of the virtual lane at the alarm distance in the world coordinate system are calculated, y2 in the formula is replaced by y1, and the specific process is not described in detail.
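The left/right boundary computation at both distances can be sketched as follows (a non-authoritative Python rendering of the formulas above; parameter names are illustrative, all lengths share one unit, and θ is in radians):

```python
import math

def lane_lateral_points(im_w, im_h, s, f, theta, H, lane_w, y1):
    """Normalized x-coordinates (0..1) of the virtual lane boundary at the
    alarm distance y1 and at the closest imaging distance y2."""
    def half_ground_width(y):
        d = math.sqrt(H * H + y * y)           # slant range d, d^2 = H^2 + y^2
        return d * im_w * s / (2 * f)          # x at ground distance y
    def bounds(x):
        return ((x - lane_w / 2) / (2 * x),    # px_left
                (x + lane_w / 2) / (2 * x))    # px_right
    y2 = H / math.tan(math.atan(im_h * s / (2 * f)) + theta)
    return bounds(half_ground_width(y1)), bounds(half_ground_width(y2))
```

Because the visible ground is wider far away, the lane occupies a narrower normalized band at y1 than at y2, which is exactly the trapezoid of fig. 4a.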
When the vehicle turns, firstly, calculating x and y coordinates of each coordinate point at 100 distances according to a calculation method of a linear target virtual lane area; then, calculating the offset of the x coordinate of each coordinate point in the turning direction according to the turning angle of the vehicle; and finally, connecting 100 coordinate points to obtain a curve lane line. The angle at which the vehicle turns can be obtained from an angle measuring device mounted on the vehicle, which can be a gyroscope or the like.
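A hedged sketch of that curved-lane construction (the linear x-offset model below is an assumption for illustration; the text only states that the offset is derived from the measured turn angle):

```python
import math

def curved_lane_points(straight_points, turn_angle_rad):
    """straight_points: (x, y) ground-plane samples of a straight virtual
    lane line at increasing distances y. Each point's x coordinate is
    shifted toward the turn; here the offset grows with distance y."""
    return [(x + y * math.tan(turn_angle_rad), y) for (x, y) in straight_points]
```

Connecting the 100 shifted sample points then yields the curved lane line of figs. 5a and 5b.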
The target virtual lane areas when the vehicle turns right and left are given in fig. 5a and 5b, respectively. As can be seen from the figure, the target virtual lane region when the vehicle turns is not a trapezoidal region but an irregular shape in which the boundary line is curved.
In another implementation manner based on the embodiment shown in fig. 1, in order to improve the accuracy in detecting the target pedestrian region, the step S104 of detecting the target pedestrian region in the road image may include:
step 1: and detecting a suspected pedestrian area in the road image.
The detected suspected pedestrian area may be a rectangular area in the road image, or may be an area of another shape. When the detected suspected pedestrian area is a rectangular area, the positions of four vertices of the rectangular area may be determined.
Step 2: and calculating an actual height value corresponding to the suspected pedestrian area according to the target equipment parameters and an imaging principle.
Specifically, when calculating the actual height value corresponding to the suspected pedestrian area according to the target device parameters and the imaging principle, the following implementation may be included:
The actual height value h_person corresponding to the suspected pedestrian area is calculated according to the following formula:

h_person = H - y_person*tan(θ - arctan[(ry2 - im_h*s)/f])

where y_person = H/tan(θ - arctan[(ry1 - im_h*s)/f]); y_person is the horizontal distance between the actual position corresponding to the suspected pedestrian area and the image acquisition device; im_h is the total number of longitudinal pixels of the road image on the imaging plane; s is the pixel size of the image acquisition device; f is the focal length of the optical element in the image acquisition device; θ is the included angle between the optical axis of the optical element and the horizontal plane; H is the installation height of the image acquisition device; and ry1 and ry2 are the longitudinal upper and lower boundary positions of the suspected pedestrian area, respectively.
The derivation of the above formula can be understood with reference to the schematic diagram shown in fig. 4d, analogously to the calculation of the longitudinal upper boundary position py_upper and lower boundary position py_lower of the target virtual lane area. In fig. 4d, ry_person denotes the position, in the road image, of a person of height h_person, and y_person is the distance between the person and point O′ on the actual road.
In this embodiment, when the actual height value corresponding to the suspected pedestrian area is calculated according to the target device parameter and the imaging principle, the actual height value corresponding to the position of the suspected pedestrian area may also be matched according to the actual height corresponding to each longitudinal length in the pre-stored road image. The actual height corresponding to each longitudinal length in the pre-stored road image can be obtained according to the parameters of the target device and the imaging principle. The specific process can be seen in the schematic diagram shown in fig. 4 d.
Step 3: judge whether the actual height value is within the range [h0/Th, h0*Th]. If yes, the detection accuracy of the suspected pedestrian area is high, and step 4 can be executed; otherwise, the detection accuracy of the suspected pedestrian area is determined to be low, and the suspected pedestrian area is discarded. Here, h0 is a preset height value and Th is a preset value.
In addition, h0 may be the average height of an ordinary person; for example, h0 may be taken as 1.7 m, so that [h0/Th, h0*Th] covers the height range of an ordinary person. Th is a multiple and may take a value between 1.2 and 1.5.
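The height check of step 3 amounts to a one-line plausibility filter, sketched here in Python with the example values from the text (h0 = 1.7 m, Th between 1.2 and 1.5; names are illustrative):

```python
def plausible_pedestrian(height_m, h0=1.7, th=1.3):
    """Keep a suspected pedestrian area only if its estimated real-world
    height lies within [h0/Th, h0*Th]."""
    return h0 / th <= height_m <= h0 * th
```

With th = 1.3 this accepts heights between roughly 1.31 m and 2.21 m, discarding, for example, a 0.8 m bollard or a 2.5 m pole misdetected as a pedestrian.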
And 4, step 4: and determining a target pedestrian area in the road image according to the suspected pedestrian area.
Specifically, when the target pedestrian area in the road image is determined according to the suspected pedestrian area, the suspected pedestrian area may be directly determined as the target pedestrian area in the road image.
Of course, in order to further improve the accuracy of the detected target pedestrian region, in this step, a screening process may be further performed according to the texture features of the image.
As can be seen, in the present embodiment, the suspected pedestrian area with a higher possibility is screened out as the target area according to the comparison between the actual height corresponding to the suspected pedestrian area and the preset height range, so that the detection accuracy can be improved.
In addition, when the illumination on the road is not sufficient, for example, at night, the target pedestrian area cannot be detected according to the common road image, and the pedestrian early warning cannot be performed. In this case, the road image may be collected by an infrared image collecting device, and the pedestrian region therein may be detected based on the characteristics of the infrared image.
It should be noted that every object in nature whose temperature is above absolute zero continuously emits infrared radiation. As long as this radiant energy is collected and detected, a thermal image corresponding to the temperature distribution of the object can be formed. This thermal image reproduces the differences in temperature and emissivity of the various parts of the object and can therefore be used to characterize it; rendered as a visible picture, it is an infrared image.
In an infrared image, the higher the temperature of a region, the higher its gray value, and the temperature of the human body is generally higher than that of surrounding objects. Therefore, in an infrared image the gray level of the human body is obviously higher than that of other objects: when displayed, the human body appears brighter and other objects darker. Fig. 2 is an example of an infrared image, in which the human body is clearly brighter than the other parts.
Therefore, step S104 in the embodiment shown in fig. 1 can be improved based on the above-described characteristics of the infrared image.
In one implementation of the embodiment shown in fig. 1, in order to improve the accuracy of detecting the target pedestrian region when the road image is an infrared image, step S104, namely detecting the target pedestrian region in the road image, may include the following steps:
step 1: and detecting a suspected pedestrian area in the road image.
Step 2: and obtaining the gray average value of the suspected pedestrian area.
And step 3: and judging whether the gray average value is larger than a preset gray threshold value or not, and if so, executing the step 4.
The gray threshold can be obtained by the following steps: obtaining a plurality of infrared images, marking human body areas in the infrared images, obtaining the gray value of each human body area, counting the average value of all the gray values, and determining the average value as a gray threshold.
In the embodiment of the present application, an adaptive gray threshold may be used to distinguish a human body from a non-human body: the average gray value of high-confidence targets is counted and taken as the segmentation threshold, the threshold is updated in real time, and low-confidence targets whose gray average value is smaller than the threshold are deleted.
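One possible reading of that adaptive scheme, as a hedged Python sketch (the exponential-smoothing update and field names are assumptions, not specified by the text):

```python
def update_gray_threshold(prev_threshold, high_conf_means, alpha=0.9):
    """Refresh the segmentation threshold from the mean gray values of
    high-confidence detections; alpha is an assumed smoothing factor."""
    if not high_conf_means:
        return prev_threshold
    batch_mean = sum(high_conf_means) / len(high_conf_means)
    return alpha * prev_threshold + (1 - alpha) * batch_mean

def filter_by_gray(candidates, threshold):
    """Delete candidates whose mean gray value falls below the threshold."""
    return [c for c in candidates if c["mean_gray"] > threshold]
```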
And 4, step 4: and determining the suspected pedestrian area as a target pedestrian area.
Therefore, in the embodiment, the target pedestrian area is screened from the detected suspected pedestrian area according to the characteristic that the gray values of the human body and other surrounding objects are obviously different, so that the detection accuracy can be improved.
Meanwhile, because the temperature differs between parts of the human body, the gray values of the human body area in the infrared image vary considerably and the gray distribution is uneven. For objects other than the human body, such as light poles, anti-collision columns, trees, green belts and houses, the temperature differences between parts are small, so the gray values of the corresponding parts of the infrared image vary little and the gray distribution is uniform.
Therefore, according to the above-described characteristics of the infrared image, step S104 in the embodiment shown in fig. 1 may be modified to improve the accuracy in detecting the target pedestrian area.
In another implementation manner based on the embodiment shown in fig. 1, the step S104 of detecting the target pedestrian area in the road image may include the following steps:
step 1: and detecting a suspected pedestrian area in the road image.
Step 2: and calculating the gray distribution uniformity value of the suspected pedestrian area. The gray distribution uniformity is used for representing the uniformity degree of the gray distribution in the suspected pedestrian area.
Specifically, in order to improve the detection accuracy, when the gray distribution uniformity value of the suspected pedestrian area is calculated, the gray distribution uniformity of the lower 2/3 portion in the suspected pedestrian area may be calculated.
This is because the lower 2/3 of objects such as light poles and utility poles is not rich in texture features and has a uniform gray distribution, whereas the lower 2/3 of a human body has rich texture features and an uneven gray distribution. A human body can therefore be further distinguished from other objects by this feature.
It should be noted that the uniformity of the gray scale distribution in the present embodiment can be described by using two-dimensional gray scale entropy. For a plurality of suspected pedestrian areas, the two-dimensional gray entropy of the head and shoulder detection results with high confidence can be counted, real-time updating is carried out, the average value is used as a self-adaptive threshold, and the head and shoulder detection results with low confidence which are smaller than the threshold are deleted.
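As a rough illustration of the uniformity measure, the sketch below computes a one-dimensional gray-histogram entropy over the lower 2/3 of a candidate area; this is a simplification of the two-dimensional gray entropy mentioned above, and purely illustrative:

```python
import math

def lower_region_entropy(gray_rows):
    """Shannon entropy (bits) of the gray-level histogram of the lower
    2/3 of a suspected pedestrian area; higher entropy = less uniform.
    gray_rows: list of pixel rows, top row first, values 0..255."""
    start = len(gray_rows) // 3                 # skip the top 1/3 of rows
    pixels = [p for row in gray_rows[start:] for p in row]
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A perfectly uniform patch (a pole) scores 0, while a textured patch (legs, clothing) scores higher, matching the screening rule of step 3.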
Step 3: judge whether the gray distribution uniformity value is greater than a preset uniformity value. If yes, the suspected pedestrian area is highly likely to correspond to a human body, and step 4 is executed; if not, the suspected pedestrian area is highly likely to correspond to an object other than a human body.
And 4, step 4: and determining the suspected pedestrian area as a target pedestrian area.
Therefore, in the embodiment, the suspected pedestrian area is screened according to the characteristic that the gray distribution of the human body part is more uneven compared with that of other objects, the target pedestrian area is determined, and the detection accuracy can be improved.
Fig. 6 is a schematic structural diagram of a pedestrian warning apparatus provided in an embodiment of the present application, corresponding to the method embodiment shown in fig. 1, and applied to a vehicle equipped with an image capturing device, where the apparatus includes:
an image obtaining module 601 for obtaining a road image corresponding to a driving direction of the vehicle;
a parameter obtaining module 602, configured to obtain target device parameters of the image capturing device;
the area determining module 603 is configured to determine, according to the target device parameter, a target virtual lane area, which is fixedly connected to the vehicle, in the road image;
a pedestrian detection module 604 for detecting a target pedestrian region in the road image;
and a pedestrian early warning module 605, configured to perform early warning according to a position relationship between the target pedestrian region and the target virtual lane region.
In another embodiment of the present application, the embodiment shown in fig. 6 may be modified. In the embodiment shown in fig. 6, the area determining module 603 may specifically include the following sub-modules shown in fig. 7, which correspond to the embodiment shown in fig. 3.
An upper and lower boundary determining submodule 6031 for determining a longitudinal upper and lower boundary position of the target virtual lane area according to the target device parameter and a preset alarm distance value and according to an imaging principle; the target virtual lane area is an area fixedly connected with the vehicle in the road image;
and a left-right boundary determining submodule 6032, configured to determine, according to the target device parameter, a preset width, and the alarm distance value, a horizontal left-right boundary position of the target virtual lane area according to an imaging principle.
In another implementation manner based on the embodiment shown in fig. 7, the upper and lower boundary determining sub-module 6031 may be specifically configured to:
determine the longitudinal upper boundary position py_upper and lower boundary position py_lower of the target virtual lane area according to the following formula:

py_upper = im_h*s/2 + f*tan[θ - arctan(H/y1)], py_lower = 0

where im_h is the total number of longitudinal pixels of the road image on the imaging plane, s is the pixel size of the image acquisition device, f is the focal length of the optical element in the image acquisition device, θ is the included angle between the optical axis of the optical element and the horizontal plane, H is the installation height of the image acquisition device, and y1 is the preset alarm distance value.
In another implementation manner based on the embodiment shown in fig. 7, when the image capturing apparatus is installed on a central plane in the left-right direction of the vehicle, the left-right boundary determining sub-module 6032 may be specifically configured to:
determine points px_left1 and px_left2 on the lateral left boundary line of the target virtual lane area, and points px_right1 and px_right2 on the right boundary line, according to the following formulas:

px_left1 = (x1 - lane_w/2)/(2*x1), px_right1 = (x1 + lane_w/2)/(2*x1)

px_left2 = (x2 - lane_w/2)/(2*x2), px_right2 = (x2 + lane_w/2)/(2*x2)

where x1 = d1*im_w*s/(2*f), d1² = H² + y1², x2 = d2*im_w*s/(2*f), d2² = H² + y2², and y2 = H/tan[arctan(im_h*s/(2*f)) + θ]; im_w and im_h are the total numbers of transverse and longitudinal pixels of the road image on the imaging plane, respectively, s is the pixel size of the image acquisition device, f is the focal length of the optical element in the image acquisition device, θ is the included angle between the optical axis of the optical element and the horizontal plane, lane_w is the preset width, H is the installation height of the image acquisition device, and y1 is the alarm distance value.
In another implementation manner based on the embodiment shown in fig. 7, the pedestrian detection module 604 may include:
a first detection sub-module (not shown in the figure) for detecting a suspected pedestrian area in the road image;
a first calculating sub-module (not shown in the figure) for calculating an actual height value corresponding to the suspected pedestrian area according to the target device parameter and an imaging principle;
a first judgment sub-module (not shown in the figure) for judging whether the actual height value is within the range [h0/Th, h0*Th], where h0 is a preset height value and Th is a preset value;
a first determining sub-module (not shown in the figure) for determining a target pedestrian area in the road image according to the suspected pedestrian area when the actual height value is within the range [h0/Th, h0*Th].
In another implementation manner based on the embodiment shown in fig. 7, the first calculating sub-module is specifically configured to:
calculate the actual height value h_person corresponding to the suspected pedestrian area according to the following formula:

h_person = H - y_person*tan(θ - arctan[(ry2 - im_h*s)/f])

where y_person = H/tan(θ - arctan[(ry1 - im_h*s)/f]); im_h is the total number of longitudinal pixels of the road image on the imaging plane, s is the pixel size of the image acquisition device, f is the focal length of the optical element in the image acquisition device, θ is the included angle between the optical axis of the optical element and the horizontal plane, H is the installation height of the image acquisition device, and ry1 and ry2 are the longitudinal upper and lower boundary positions of the suspected pedestrian area, respectively.
In another implementation manner based on the embodiment shown in fig. 7, when the road image is an infrared image, the pedestrian detection module 604 may be specifically configured to:
and inputting the road image into a pre-generated infrared image detection model to obtain a target pedestrian region in the road image, wherein the infrared image detection model is generated by training a pre-collected sample infrared image according to a preset machine learning algorithm.
In another implementation manner based on the embodiment shown in fig. 7, when the road image is an infrared image, the pedestrian detection module 604 may include:
a second detection submodule (not shown in the figure) for detecting a suspected pedestrian area in the road image;
a second calculating submodule (not shown in the figure) for calculating a gray distribution uniformity value of the suspected pedestrian area;
a second determining submodule (not shown in the figure) for determining whether the gray distribution uniformity value is greater than a preset uniformity value;
and a second determining submodule (not shown in the figure) configured to determine the suspected pedestrian area as a target pedestrian area when the gray distribution uniformity value is greater than a preset uniformity value.
In another implementation manner based on the embodiment shown in fig. 7, when the road image is an infrared image, the pedestrian detection module 604 may include:
a third detection submodule (not shown in the figure) for detecting a suspected pedestrian area in the road image;
an obtaining sub-module (not shown in the figure) for obtaining a gray average value of the suspected pedestrian area;
a third determining sub-module (not shown in the figure) for determining whether the average value of the gray levels is greater than a preset gray level threshold;
and a third determining sub-module (not shown in the figure) for determining the suspected pedestrian area as the target pedestrian area when the gray average value is greater than a preset gray threshold value.
Since the device embodiment is obtained based on the method embodiment and has the same technical effect as the method, the technical effect of the device embodiment is not described herein again.
For the apparatus embodiment, since it is substantially similar to the method embodiment, it is described relatively simply, and reference may be made to some descriptions of the method embodiment for relevant points.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
It will be understood by those skilled in the art that all or part of the steps in the above embodiments can be implemented by hardware associated with program instructions, and the program can be stored in a computer readable storage medium. The storage medium referred to herein is a ROM/RAM, a magnetic disk, an optical disk, or the like.
The above description is only for the preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the protection scope of the present application.

Claims (16)

1. A pedestrian early warning method, applied to a vehicle provided with an image acquisition device, the method comprising the following steps:
acquiring a road image which is acquired by the image acquisition equipment and corresponds to the driving direction of the vehicle;
obtaining target equipment parameters of the image acquisition equipment;
determining a target virtual lane area fixedly connected with the vehicle in the road image according to the target equipment parameters;
detecting a target pedestrian region in the road image;
performing early warning according to the position relation between the target pedestrian area and the target virtual lane area;
the detecting a target pedestrian region in the road image includes:
detecting a suspected pedestrian area in the road image;
calculating an actual height value corresponding to the suspected pedestrian area according to the target equipment parameters and an imaging principle;
judging whether the actual height value is within the range [h0/Th, h0*Th], wherein h0 is a preset height threshold value and Th is a preset value;
if yes, determining a target pedestrian area in the road image according to the suspected pedestrian area.
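The height-based plausibility test in the last two steps of claim 1 can be sketched as follows. The concrete values h0 = 1.7 m and Th = 1.5 are illustrative assumptions only, since the claim leaves both presets open:

```python
def is_plausible_pedestrian_height(h_actual, h0=1.7, th=1.5):
    """Keep a suspected region only if its estimated real-world height
    lies within [h0/Th, h0*Th]; h0 (preset height threshold, metres)
    and Th (preset ratio) are tunable preset values."""
    return h0 / th <= h_actual <= h0 * th
```

With these presets the accepted band is roughly [1.13 m, 2.55 m], so a 1.6 m estimate is kept while a 0.5 m estimate (for example, a fire hydrant misdetected as a pedestrian) is rejected.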
2. The method of claim 1, wherein the determining a target virtual lane area in the road image that is fixedly connected with the vehicle according to the target device parameter comprises:
determining the longitudinal upper and lower boundary positions of the target virtual lane area according to the target equipment parameters and a preset alarm distance value and according to an imaging principle; the target virtual lane area is an area fixedly connected with the vehicle in the road image;
and determining the horizontal left and right boundary line positions of the target virtual lane area according to the target equipment parameters, the preset width and the alarm distance value and according to an imaging principle.
3. The method of claim 2, wherein the determining the longitudinal upper and lower boundary positions of the target virtual lane area according to the imaging principle according to the target device parameters and the preset alarm distance value comprises:
determining the longitudinal upper boundary line position py_upper and the lower boundary line position py_lower of the target virtual lane area according to the following formula:
py_upper = im_h*s/2 + f*tan[θ - arctan(H/y1)], py_lower = 0
wherein im_h is the total number of longitudinal pixels of the road image on the imaging plane, s is the pixel size of the image acquisition device, f is the focal length of the optical element in the image acquisition device, θ is the included angle between the optical axis of the optical element and the horizontal plane, H is the installation height of the image acquisition device, and y1 is the preset alarm distance value.
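As a minimal sketch, the longitudinal-boundary formula of claim 3 can be evaluated as below; the numeric camera parameters used in the test are invented for illustration and are not taken from the patent:

```python
import math

def longitudinal_boundaries(im_h, s, f, theta, H, y1):
    """Longitudinal boundaries of the virtual lane area on the imaging
    plane (claim 3):
        py_upper = im_h*s/2 + f*tan(theta - atan(H/y1)),  py_lower = 0
    im_h: total longitudinal pixels; s: pixel size (m); f: focal length (m);
    theta: tilt of the optical axis below horizontal (rad);
    H: camera mounting height (m); y1: preset alarm distance (m)."""
    py_upper = im_h * s / 2 + f * math.tan(theta - math.atan(H / y1))
    py_lower = 0.0
    return py_upper, py_lower
```

Note the expected behaviour: increasing the alarm distance y1 shrinks arctan(H/y1) and therefore raises py_upper, i.e. a farther alarm distance pushes the upper boundary of the lane area higher in the image.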
4. The method according to claim 2, wherein, when the image acquisition device is installed on the central plane of the vehicle in the left-right direction, the determining the lateral left and right boundary line positions of the target virtual lane area according to the target device parameters, the preset width, and the alarm distance value and according to an imaging principle comprises:
determining points px_left1 and px_left2 on the lateral left boundary line of the target virtual lane area and points px_right1 and px_right2 on the lateral right boundary line according to the following formulas:
px_left1 = (x1 - lane_w/2)/(2*x1), px_right1 = (x1 + lane_w/2)/(2*x1)
px_left2 = (x2 - lane_w/2)/(2*x2), px_right2 = (x2 + lane_w/2)/(2*x2)
wherein x1 = d1*im_w*s/(2*f), d1^2 = H^2 + y1^2, x2 = d2*im_w*s/(2*f), d2^2 = H^2 + y2^2, y2 = H/tan[arctan(im_h*s/(2*f)) + θ], im_w is the total number of lateral pixels of the road image on the imaging plane, im_h is the total number of longitudinal pixels of the road image on the imaging plane, s is the pixel size of the image acquisition device, f is the focal length of the optical element in the image acquisition device, θ is the included angle between the optical axis of the optical element and the horizontal plane, lane_w is the preset width, H is the installation height of the image acquisition device, and y1 is the alarm distance value.
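The lateral-boundary relations of claim 4 can be transcribed as follows; the camera parameters in the test are illustrative assumptions, not values from the patent:

```python
import math

def lateral_boundaries(im_w, im_h, s, f, theta, H, lane_w, y1):
    """Normalised lateral boundary points of the virtual lane area
    (claim 4), for a camera mounted on the vehicle's left-right centre
    plane. Returns ((px_left1, px_left2), (px_right1, px_right2)) as
    fractions of the image width."""
    # y2: ground distance seen at the bottom edge of the image.
    y2 = H / math.tan(math.atan(im_h * s / (2 * f)) + theta)
    d1 = math.hypot(H, y1)  # d1^2 = H^2 + y1^2
    d2 = math.hypot(H, y2)  # d2^2 = H^2 + y2^2
    x1 = d1 * im_w * s / (2 * f)
    x2 = d2 * im_w * s / (2 * f)
    left = ((x1 - lane_w / 2) / (2 * x1), (x2 - lane_w / 2) / (2 * x2))
    right = ((x1 + lane_w / 2) / (2 * x1), (x2 + lane_w / 2) / (2 * x2))
    return left, right
```

Because the formulas are symmetric about the image centre, each left/right pair sums to 1, and the near boundary points (at distance y2) lie farther from the centre than the far ones (at y1), giving the expected trapezoidal lane region.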
5. The method according to claim 1, wherein the calculating an actual height value corresponding to the suspected pedestrian area according to the target device parameters and an imaging principle comprises:
calculating the actual height value h_person corresponding to the suspected pedestrian area according to the following formula:
h_person = H - y_person*tan(θ - arctan[(ry2 - im_h*s)/f])
wherein y_person = H/tan(θ - arctan[(ry1 - im_h*s)/f]), im_h is the total number of longitudinal pixels of the road image on the imaging plane, s is the pixel size of the image acquisition device, f is the focal length of the optical element in the image acquisition device, θ is the included angle between the optical axis of the optical element and the horizontal plane, H is the installation height of the image acquisition device, and ry1 and ry2 are respectively the longitudinal upper and lower boundary positions of the suspected pedestrian area.
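A direct transcription of the claim-5 height estimate; the parameter values in the test are invented for illustration only:

```python
import math

def estimate_actual_height(im_h, s, f, theta, H, ry1, ry2):
    """Actual height of a suspected pedestrian region (claim 5):
        y_person = H / tan(theta - atan((ry1 - im_h*s)/f))
        h_person = H - y_person * tan(theta - atan((ry2 - im_h*s)/f))
    ry1, ry2: longitudinal upper/lower boundary positions of the region
    on the imaging plane (metres); other symbols as in claim 3."""
    y_person = H / math.tan(theta - math.atan((ry1 - im_h * s) / f))
    h_person = H - y_person * math.tan(theta - math.atan((ry2 - im_h * s) / f))
    return h_person
```

The estimate must land between the ground (0) and the camera height H for a region whose boundaries lie below the camera's optical axis, which is a cheap sanity check when wiring this into the [h0/Th, h0*Th] filter of claim 1.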
6. The method of claim 1, wherein when the road image is an infrared image, the detecting a target pedestrian region in the road image comprises:
inputting the road image into a pre-generated infrared image detection model to obtain the target pedestrian area in the road image, wherein the infrared image detection model is generated by training on pre-collected sample infrared images according to a preset machine learning algorithm.
7. The method of claim 1, wherein when the road image is an infrared image, the detecting a target pedestrian region in the road image comprises:
detecting a suspected pedestrian area in the road image;
calculating the gray level distribution uniformity value of the suspected pedestrian area;
judging whether the gray distribution uniformity value is larger than a preset uniformity value or not;
and if so, determining the suspected pedestrian area as a target pedestrian area.
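Claim 7 does not prescribe how the gray-level distribution uniformity value is computed, so the measure below is one plausible proxy chosen for illustration (1/(1 + std/mean)), and the preset 0.8 is likewise an assumed value:

```python
def gray_uniformity(pixels):
    """Illustrative uniformity measure for claim 7 (the claim itself does
    not fix a formula): 1/(1 + std/mean). A perfectly flat region scores
    1.0; a high-contrast region scores lower."""
    n = len(pixels)
    mean = sum(pixels) / n
    std = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5
    return 1.0 / (1.0 + (std / mean if mean else 0.0))

def passes_uniformity_filter(pixels, preset=0.8):
    """Claim 7 decision: keep the suspected region iff its uniformity
    value exceeds the preset uniformity value."""
    return gray_uniformity(pixels) > preset
```

The intuition in an infrared frame is that a pedestrian's body radiates at a fairly even temperature, so a genuine pedestrian region tends to have a more uniform gray distribution than clutter such as headlights against dark road.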
8. The method of claim 1, wherein when the road image is an infrared image, the detecting a target pedestrian region in the road image comprises:
detecting a suspected pedestrian area in the road image;
obtaining the gray average value of the suspected pedestrian area;
judging whether the gray average value is larger than a preset gray threshold value or not;
and if so, determining the suspected pedestrian area as a target pedestrian area.
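The mean-gray filter of claim 8 is straightforward to sketch; the threshold 120 below is an illustrative assumption, since the claim leaves the preset gray threshold open. The underlying idea is that in an infrared image a pedestrian is warmer, and therefore brighter, than the background:

```python
def passes_gray_mean_filter(pixels, gray_threshold=120.0):
    """Claim 8 decision: confirm the suspected region as a target
    pedestrian area iff its mean gray level exceeds a preset threshold."""
    return sum(pixels) / len(pixels) > gray_threshold
```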
9. A pedestrian early warning device, applied to a vehicle provided with an image acquisition device, the pedestrian early warning device comprising:
the image acquisition module is used for acquiring a road image which is acquired by the image acquisition equipment and corresponds to the driving direction of the vehicle;
the parameter obtaining module is used for obtaining target equipment parameters of the image acquisition equipment;
the region determining module is used for determining a target virtual lane region fixedly connected with the vehicle in the road image according to the target equipment parameter;
the pedestrian detection module is used for detecting a target pedestrian area in the road image;
the pedestrian early warning module is used for early warning according to the position relation between the target pedestrian area and the target virtual lane area;
the pedestrian detection module includes:
the first detection submodule is used for detecting a suspected pedestrian area in the road image;
the first calculation submodule is used for calculating an actual height value corresponding to the suspected pedestrian area according to the target equipment parameters and an imaging principle;
a first judgment submodule for judging whether the actual height value is within the range [h0/Th, h0*Th], wherein h0 is a preset height threshold value and Th is a preset value;
a first determination submodule for determining a target pedestrian area in the road image according to the suspected pedestrian area when the actual height value is within the range [h0/Th, h0*Th].
10. The apparatus of claim 9, wherein the region determining module comprises:
the upper and lower boundary determining submodule is used for determining the longitudinal upper and lower boundary positions of the target virtual lane area according to the target equipment parameters and a preset alarm distance value and according to an imaging principle; the target virtual lane area is an area fixedly connected with the vehicle in the road image;
and the left and right boundary determining submodule is used for determining the positions of the transverse left and right boundary lines of the target virtual lane area according to the target equipment parameters, the preset width and the alarm distance value and according to an imaging principle.
11. The apparatus of claim 10, wherein the upper and lower boundary determination submodule is specifically configured to:
determine the longitudinal upper boundary line position py_upper and the lower boundary line position py_lower of the target virtual lane area according to the following formula:
py_upper = im_h*s/2 + f*tan[θ - arctan(H/y1)], py_lower = 0
wherein im_h is the total number of longitudinal pixels of the road image on the imaging plane, s is the pixel size of the image acquisition device, f is the focal length of the optical element in the image acquisition device, θ is the included angle between the optical axis of the optical element and the horizontal plane, H is the installation height of the image acquisition device, and y1 is the preset alarm distance value.
12. The apparatus of claim 11, wherein the left-right boundary determination submodule, when the image capture device is mounted on a center plane of the vehicle in a left-right direction, is configured to:
determine points px_left1 and px_left2 on the lateral left boundary line of the target virtual lane area and points px_right1 and px_right2 on the lateral right boundary line according to the following formulas:
px_left1 = (x1 - lane_w/2)/(2*x1), px_right1 = (x1 + lane_w/2)/(2*x1)
px_left2 = (x2 - lane_w/2)/(2*x2), px_right2 = (x2 + lane_w/2)/(2*x2)
wherein x1 = d1*im_w*s/(2*f), d1^2 = H^2 + y1^2, x2 = d2*im_w*s/(2*f), d2^2 = H^2 + y2^2, y2 = H/tan[arctan(im_h*s/(2*f)) + θ], im_w is the total number of lateral pixels of the road image on the imaging plane, im_h is the total number of longitudinal pixels of the road image on the imaging plane, s is the pixel size of the image acquisition device, f is the focal length of the optical element in the image acquisition device, θ is the included angle between the optical axis of the optical element and the horizontal plane, lane_w is the preset width, H is the installation height of the image acquisition device, and y1 is the alarm distance value.
13. The apparatus according to claim 9, wherein the first calculation submodule is specifically configured to:
calculate the actual height value h_person corresponding to the suspected pedestrian area according to the following formula:
h_person = H - y_person*tan(θ - arctan[(ry2 - im_h*s)/f])
wherein y_person = H/tan(θ - arctan[(ry1 - im_h*s)/f]), im_h is the total number of longitudinal pixels of the road image on the imaging plane, s is the pixel size of the image acquisition device, f is the focal length of the optical element in the image acquisition device, θ is the included angle between the optical axis of the optical element and the horizontal plane, H is the installation height of the image acquisition device, and ry1 and ry2 are respectively the longitudinal upper and lower boundary positions of the suspected pedestrian area.
14. The apparatus according to claim 9, wherein when the road image is an infrared image, the pedestrian detection module is specifically configured to:
input the road image into a pre-generated infrared image detection model to obtain the target pedestrian area in the road image, wherein the infrared image detection model is generated by training on pre-collected sample infrared images according to a preset machine learning algorithm.
15. The apparatus of claim 9, wherein when the road image is an infrared image, the pedestrian detection module comprises:
the second detection submodule is used for detecting a suspected pedestrian area in the road image;
the second calculation submodule is used for calculating the gray level distribution uniformity value of the suspected pedestrian area;
the second judgment submodule is used for judging whether the gray distribution uniformity value is larger than a preset uniformity value or not;
and the second determining submodule is used for determining the suspected pedestrian area as a target pedestrian area when the gray distribution uniformity value is larger than a preset uniformity value.
16. The apparatus of claim 9, wherein when the road image is an infrared image, the pedestrian detection module comprises:
the third detection submodule is used for detecting a suspected pedestrian area in the road image;
an obtaining submodule for obtaining a gray average value of the suspected pedestrian area;
the third judgment submodule is used for judging whether the gray average value is larger than a preset gray threshold value or not;
and the third determining submodule is used for determining the suspected pedestrian area as a target pedestrian area when the gray average value is larger than a preset gray threshold value.
CN201610958798.XA 2016-11-03 2016-11-03 Pedestrian early warning method and device Active CN108021849B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610958798.XA CN108021849B (en) 2016-11-03 2016-11-03 Pedestrian early warning method and device

Publications (2)

Publication Number Publication Date
CN108021849A CN108021849A (en) 2018-05-11
CN108021849B (en) 2022-04-05

Family

ID=62083978

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102765365A (en) * 2011-05-06 2012-11-07 香港生产力促进局 Pedestrian detection method based on machine vision and pedestrian anti-collision warning system based on machine vision
CN103770780A (en) * 2014-01-15 2014-05-07 中国人民解放军国防科学技术大学 Vehicle active safety system alarm shielding device

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP5863607B2 (en) * 2012-09-07 2016-02-16 オートリブ ディベロップメント エービー Pedestrian warning device
JP6188471B2 (en) * 2013-07-26 2017-08-30 アルパイン株式会社 Vehicle rear side warning device, vehicle rear side warning method, and three-dimensional object detection device


Non-Patent Citations (1)

Title
Research on pedestrian recognition algorithm for urban road vehicle anti-collision safety ***; Liu Qianfei; China Master's Theses Full-text Database, Engineering Science and Technology II; 2016-09-15 (No. 9); Chapter 2 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant