CN117274939B - Safety area detection method and safety area detection device - Google Patents

Info

Publication number
CN117274939B
CN117274939B
Authority
CN
China
Prior art keywords
vertex
area
rectangular
detected
coordinate system
Prior art date
Legal status
Active
Application number
CN202311292462.0A
Other languages
Chinese (zh)
Other versions
CN117274939A (en)
Inventor
谢意
那崇宁
蒋先尧
刘志勇
赵磊
高志成
Current Assignee
Beijing Lukaizhixing Technology Co., Ltd.
Original Assignee
Beijing Lukaizhixing Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Beijing Lukaizhixing Technology Co., Ltd.
Priority to CN202311292462.0A
Publication of CN117274939A
Application granted
Publication of CN117274939B
Status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a safety area detection method and device in the technical field of unmanned mining, mainly applied to mining operations, and solves the problem that existing safety area detection cannot provide quantitative analysis. The safety area detection method is suitable for a mining unmanned vehicle and comprises the following steps: collecting road image data through a camera; performing road surface segmentation on the collected road image data to separate the road surface area from the non-road surface area and obtain a Mask image of the road surface segmentation; determining an area to be detected based on parameters of the mining unmanned vehicle; dividing the area to be detected into a first number of sub-areas to be detected, and mapping the information of each sub-area to be detected into the Mask image; performing perspective transformation on the Mask image mapped with the information of the area to be detected to obtain a perspective rectangular area, the perspective rectangular area comprising a perspective rectangular sub-area corresponding to each sub-area to be detected; and binarizing the perspective rectangular area and performing safety detection based on the binarized perspective rectangular sub-areas.

Description

Safety area detection method and safety area detection device
Technical Field
The invention relates to the technical field of unmanned mining, in particular to a safety area detection method and a safety area detection device.
Background
In unmanned mining scenarios, the running safety of mining unmanned vehicles (mine trucks) is critical. A mine truck needs to detect the road surface ahead so as to avoid vehicles, pedestrians and other obstacles in front of it, thereby ensuring that it can run safely.
In the related art, there are three detection approaches: camera-based front safety area detection, lidar-based front safety area detection, and front safety area detection combining lidar and a camera. Lidar-based detection is relatively expensive, while camera-based detection is relatively cheap.
Disclosure of Invention
In some examples, camera-based safety area detection can only report whether the detected area is a safe area (i.e. it performs only a qualitative analysis), but cannot determine how far ahead the area is safe (i.e. it cannot give the result of a quantitative analysis). Because the detection result gives no specific range for the safety area, it cannot properly guide the driving of the mining unmanned vehicle, which increases potential safety hazards.
In order to solve at least one of the above problems and disadvantages of the prior art, the present invention provides a safety area detection method and a safety area detection device that realize quantitative analysis of the safety area.
According to an aspect of the present invention, there is provided a safety zone detection method suitable for a mining unmanned vehicle, the safety zone detection method comprising:
collecting road image data through a camera positioned on the mining unmanned vehicle;
performing road surface segmentation on the collected road image data to separate a road surface area and a non-road surface area and obtain a Mask image of the road surface segmentation;
determining an area to be detected located in front of the mining unmanned vehicle based on parameters of the mining unmanned vehicle, the area to be detected being quadrilateral in shape;
dividing the area to be detected into a first number of sub-areas to be detected, and mapping the information of each sub-area to be detected into the Mask image;
performing perspective transformation on the Mask image mapped with the information of the area to be detected to obtain a perspective rectangular area, the perspective rectangular area comprising a perspective rectangular sub-area corresponding to each sub-area to be detected; and
binarizing the perspective rectangular area, and performing safety detection based on the binarized perspective rectangular sub-areas.
According to another aspect of the present invention, there is also provided a safety area detection apparatus including:
a camera configured to collect road image data;
a segmentation module communicatively connected to the camera and configured to perform road surface segmentation on the road image data from the camera so as to separate a road surface area and a non-road surface area and obtain a Mask image of the road surface segmentation;
a determining module configured to determine an area to be detected located in front of the mining unmanned vehicle based on parameters of the mining unmanned vehicle, the area to be detected being quadrilateral in shape; and
a detection module communicatively connected to the segmentation module and the determining module, respectively, and configured to divide the area to be detected into a first number of sub-areas to be detected, map the information of each sub-area to be detected into the Mask image, perform perspective transformation on the Mask image mapped with the information of the area to be detected to obtain a perspective rectangular area comprising a perspective rectangular sub-area corresponding to each sub-area to be detected, binarize the perspective rectangular area, and detect the safety area based on the binarized perspective rectangular sub-areas.
The safety area detection method and the safety area detection device according to the present invention have at least one of the following advantages:
(1) the safety area detection method and device detect the safety area in front of the vehicle based on the combination of road surface segmentation and multi-layer perspective transformation, and can report that the area within a certain number of meters in front of the vehicle is safe;
(2) the safety area detection method and device can adjust the area to be detected based on the steering state of the vehicle, thereby providing a more accurate safety area detection result while the vehicle is steering;
(3) the safety area detection method and device can analyze and process in real time, improving safety detection efficiency;
(4) the safety area detection method and device can provide accurate detection results to the vehicle control and planning module, making it convenient for that module to adjust the route and speed of the mine truck and improving operation safety in the mining area.
Drawings
These and/or other aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the preferred embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of a safety area detection method according to an embodiment of the present invention;
FIG. 2 is a schematic illustration of the area to be detected determined when the vehicle is not steering, according to an embodiment of the invention;
FIG. 3 is a schematic illustration of the area to be detected determined when the vehicle is steering, according to an embodiment of the present invention;
FIG. 4 is a flowchart of an example of a safety area detection method according to an embodiment of the present invention;
FIG. 5 is an effect diagram of mapping the area to be detected into the Mask image in the safety area detection method according to the embodiment of FIG. 4;
FIG. 6 is an alarm diagram that can be provided by the safety area detection method according to the embodiment of FIG. 4;
FIG. 7 is a schematic structural view of a safety area detection apparatus according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further specifically described below through examples and with reference to the accompanying drawings. In the specification, the same or similar reference numerals denote the same or similar components. The following description of embodiments of the present invention with reference to the accompanying drawings is intended to illustrate the general inventive concept and should not be taken as limiting the invention.
In an embodiment of the invention, a safety area detection method is provided. The method is suitable for a mining unmanned vehicle. It analyzes and processes images collected by a camera to extract the drivable area in the images, maps an area planned in the vehicle coordinate system (the area to be detected) into the images, and analyzes the areas in the images layer by layer to obtain the safety area.
As shown in fig. 1, the method for detecting a safety area includes:
collecting road image data through a camera positioned on the mining unmanned vehicle;
performing road surface segmentation on the collected road image data to separate a road surface area and a non-road surface area and obtain a Mask image of the road surface segmentation;
determining an area to be detected located in front of the mining unmanned vehicle based on parameters of the mining unmanned vehicle, the area to be detected being quadrilateral in shape;
dividing the area to be detected into a first number of sub-areas to be detected, and mapping the information of each sub-area to be detected into the Mask image;
performing perspective transformation on the Mask image mapped with the information of the area to be detected to obtain a perspective rectangular area, the perspective rectangular area comprising a perspective rectangular sub-area corresponding to each sub-area to be detected; and
binarizing the perspective rectangular area, and performing safety detection based on the binarized perspective rectangular sub-areas.
Embodiments of the present invention process the images collected by the camera based on road surface segmentation to extract the drivable area (for example, the road surface area); determine the area to be detected based on the parameters of the mining unmanned vehicle and map it into the image; and analyze the areas in the image layer by layer with a multi-layer perspective transformation method, thereby detecting the safety area in front of the vehicle.
Road image data is collected by the camera. In one example, the camera may be positioned in the middle of the head of the mining unmanned vehicle to facilitate collecting road image data. The collection of road image data by the camera may be performed in real time, or previously stored pictures or videos may be used. Collection may be triggered manually or performed automatically. The image data may be in picture format or in the form of a video stream.
In one embodiment, the safety area detection method includes loading reference data for use in subsequent steps. The reference data comprise the camera's internal parameters, external parameters and distortion coefficients, and the road surface segmentation model. For example, the reference data may be stored on a cloud platform and loaded from it.
The camera's internal parameters, external parameters and distortion coefficients are used to map the information of the sub-areas to be detected (in the form of image data) into the Mask image. In an example, they can be obtained by calibrating the camera. In the example of the invention, the projection point of the center point of the front side of the head of the mining unmanned vehicle on the ground is taken as the coordinate origin O, the running direction of the vehicle as the positive y-axis direction, and the horizontal direction to the right, perpendicular to the running direction, as the positive x-axis direction; the z-axis is determined according to the right-hand rule, with its zero point on the horizontal ground. Specifically, the camera is calibrated using the opencv checkerboard method, and the internal parameters, external parameters and distortion coefficients are obtained after calibration. The specific calibration procedure can follow existing practice and is not described in detail here. Of course, a person skilled in the art may calibrate the camera in other ways, as long as the internal parameters, external parameters and distortion coefficients of the camera can be obtained.
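As an illustration of this calibration step, the following sketch uses OpenCV's checkerboard calibration. It is a minimal sketch: the board dimensions, square size and file paths are illustrative assumptions, not values from the patent.

    # Minimal camera calibration sketch (Python/OpenCV). Assumptions: a 9x6
    # inner-corner checkerboard with 25 mm squares, images under calib/.
    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)  # inner corners per row and column (assumption)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025

    obj_points, img_points = [], []
    for path in glob.glob("calib/*.jpg"):
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # K is the intrinsic matrix, dist the distortion coefficients; rvecs and
    # tvecs are the per-view rotations/translations (the extrinsics).
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)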
The road surface segmentation model is used to perform road surface segmentation on the collected road image data so as to extract the drivable area. In one example, the road surface segmentation model is obtained by: collecting a mining area road surface data set; labeling the road surface in the mining area road surface data; and training the road surface segmentation model by deep learning.
Training a deep learning segmentation model requires a large amount of data, so a camera on the mining unmanned vehicle is used to collect a large amount of mining area picture data, which is then organized to complete the collection of the mining area road surface data set.
The collected road surface data are labeled with the Labelme tool: the road surface is outlined with a closed curve and named "road" to complete the road surface labeling.
In one example, the deep learning training process may employ a DeeplabV3+ network model built on the pytorch platform. The parameters of the deep learning training process are as follows: the input data is an image with a resolution of 1920 x 1080, and the output data is an array with a resolution of 1920 x 1080; the number of channels is set to 1, and each value is True or False, where True represents road surface and False represents non-road surface. Of course, embodiments of the present invention are not limited to a specific deep learning training process, as long as the desired road surface segmentation model can be obtained.
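For concreteness, a minimal training sketch is shown below. The patent trains DeeplabV3+; torchvision's DeepLabV3 is used here only as a readily available stand-in, and the loss function and learning rate are illustrative assumptions.

    # Sketch of the binary road/non-road segmentation training setup.
    import torch
    import torchvision

    model = torchvision.models.segmentation.deeplabv3_resnet50(
        weights=None, num_classes=1)          # 1 channel: road vs. non-road
    criterion = torch.nn.BCEWithLogitsLoss()  # binary labels per pixel
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(images, masks):
        # images: (N, 3, 1080, 1920) floats; masks: (N, 1, 1080, 1920) in {0, 1}
        model.train()
        logits = model(images)["out"]         # (N, 1, 1080, 1920)
        loss = criterion(logits, masks)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()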
After the road image data is collected, road surface segmentation needs to be performed on it. The road surface segmentation model may be loaded from the cloud platform to segment the road image data from the camera. Specifically, the segmentation module receives road image data (e.g. road pictures) from the camera, loads the road surface segmentation model from the cloud platform, inputs the road pictures into the model, separates road surface areas and non-road surface areas through the model, and takes the road picture with road surface and non-road surface areas separated as the Mask image of the road surface segmentation. In one example, the Mask image is further refined using dilation and erosion operations.
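A minimal sketch of this inference step, assuming a single-channel sigmoid output and a 5x5 kernel for the dilation and erosion refinement (kernel size and probability threshold are assumptions; the patent does not fix them):

    # Run the segmentation model on one frame and clean up the Mask image.
    import cv2
    import numpy as np
    import torch

    def segment_road(model, frame_bgr):
        x = torch.from_numpy(frame_bgr[:, :, ::-1].copy()).permute(2, 0, 1)
        x = x.float().unsqueeze(0) / 255.0           # (1, 3, H, W)
        model.eval()
        with torch.no_grad():
            prob = torch.sigmoid(model(x)["out"])[0, 0].numpy()
        mask = (prob > 0.5).astype(np.uint8) * 255   # True (road) -> white

        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.dilate(mask, kernel)              # fill small holes
        mask = cv2.erode(mask, kernel)               # restore the boundary
        return mask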
In order to detect the safety area accurately, the analysis must take the actual state of the vehicle into account. The embodiment of the invention determines the area to be detected in front of the mining unmanned vehicle based on the parameters of the mining unmanned vehicle. The shape of the area to be detected is quadrilateral.
Parameters of the mining unmanned vehicle include the width of the vehicle, the running speed and the steering condition of the vehicle.
When the mining unmanned vehicle is not steering, a rectangular area to be detected in front of the vehicle is determined based on the width and the running speed of the vehicle. The rectangular area to be detected comprises first sides and second sides perpendicular to each other, with the length of the first sides smaller than or equal to that of the second sides; there are two first sides and two second sides. The projection line of the front side of the head of the mining unmanned vehicle on the ground lies on one first side, and the center point of the projection line coincides with the center point of that first side. The length of the first sides is determined based on the width of the vehicle: it is greater than or equal to the vehicle width, for example 0.05-0.5 meters longer. The length of the second sides is determined based on the running speed of the vehicle and must ensure that the vehicle will not collide with other vehicles in the event of sudden braking; it can be set as desired by a person skilled in the art.
In one example, for a mine truck 3.3 meters wide operating in a mining area with a speed limit of 40 km/h, the width of the rectangular area to be detected is 3.4 meters and its length is 20 meters.
When the vehicle steers, its running direction is no longer straight ahead but deflects with the steering angle; if the area to be detected were determined as for the non-steering case, the detection result would be inaccurate and safe running would be affected. Therefore, when the mining unmanned vehicle steers at a steering angle δ, a trapezoid area to be detected is determined based on the width of the vehicle, the running speed and the steering angle δ. The trapezoid area to be detected comprises a third side and a fourth side that are parallel to each other, and further comprises a fifth side and a sixth side connecting the ends of the third and fourth sides; the fifth and sixth sides are not parallel. The projection line of the front side of the head of the mining unmanned vehicle on the ground lies on one of the third and fourth sides.
In an example, a three-dimensional first rectangular coordinate system is constructed: the projection point V of the center point of the front side of the head of the mining unmanned vehicle on the ground is taken as the coordinate origin O, the running direction of the mining unmanned vehicle as the positive y-axis direction, and the horizontal direction to the right, perpendicular to the running direction, as the positive x-axis direction; the z-axis is determined according to the right-hand rule, with its zero point on the horizontal ground. The mining unmanned vehicle is symmetrical about the y-axis of the first rectangular coordinate system.
When the area to be detected is the rectangular area to be detected, the first side on which the projection line lies is on the x-axis of the first rectangular coordinate system, and the second sides of the rectangular area to be detected are parallel to the y-axis of the first rectangular coordinate system.
When the area to be detected is the trapezoid area to be detected, the side of the trapezoid on which the projection line lies is on the x-axis of the first rectangular coordinate system.
Fig. 2 shows the rectangular area to be detected R1R2R3R4 when the vehicle is not steering; the gray shaded portion represents the vehicle. The rectangular area to be detected is bounded by a first vertex R1, a second vertex R2, a third vertex R3 and a fourth vertex R4. The pairs R1-R2 and R3-R4 form the two first sides; the pairs R1-R4 and R2-R3 form the two second sides. The projection line of the front side of the vehicle head on the ground lies on the first side R3R4, which lies on the x-axis of the first rectangular coordinate system.
Fig. 3 shows the trapezoid area to be detected BMNS when the vehicle is steering. The trapezoid area to be detected is bounded by a first vertex B, a second vertex M, a third vertex N and a fourth vertex S. The third vertex N and the fourth vertex S form the third side NS, which lies on the x-axis. The second vertex M and the first vertex B form the fourth side MB. The second vertex M and the third vertex N form the fifth side MN, and the first vertex B and the fourth vertex S form the sixth side BS. The projection line of the front side of the head of the mining unmanned vehicle on the ground lies on the third side NS.
In order to map the information of the area to be detected into the Mask image later, the position information of the area to be detected needs to be determined, for example its position coordinates in the first rectangular coordinate system.
In the example of fig. 2, if the length of the first side of the rectangular area to be detected is l1 and the length of the second side is l2, then the position coordinates of the third vertex R3 in the first rectangular coordinate system are (-l1/2, 0, 0), those of the fourth vertex R4 are (l1/2, 0, 0), those of the first vertex R1 are (l1/2, l2, 0), and those of the second vertex R2 are (-l1/2, l2, 0); the position information of the rectangular area to be detected can thereby be determined.
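A small helper reflecting these coordinates (a sketch; the function name is illustrative):

    # Vertices of the rectangular area to be detected in the first
    # rectangular coordinate system (z = 0 on the ground plane).
    def rect_region(l1, l2):
        R1 = (l1 / 2, l2, 0.0)
        R2 = (-l1 / 2, l2, 0.0)
        R3 = (-l1 / 2, 0.0, 0.0)
        R4 = (l1 / 2, 0.0, 0.0)
        return R1, R2, R3, R4

    # Example from the text: a 3.3 m wide truck, l1 = 3.4 m, l2 = 20 m.
    print(rect_region(3.4, 20.0))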
In the example of fig. 3, the first vertex B is determined based on a first reference point P and a second reference point C. The first reference point P is determined based on the length of the mining unmanned vehicle, the projection point K of the lower left corner of the head on the ground, and the steering angle δ. The second reference point C is determined based on the projection point V of the center point of the front side of the head on the ground, the first reference point P, and the length of the second side of the rectangular area to be detected that would be determined if the mining unmanned vehicle were not steering. The first vertex B is then determined from the second reference point C, the steering angle δ and the width d of the mining unmanned vehicle. The third vertex N is determined based on the first vertex B, the first reference point P and the x-axis; the second vertex M is determined based on the first vertex B and the third vertex N; and the fourth vertex S is determined based on the first vertex B, the first reference point P and the x-axis.
How to determine the positions of the first to fourth vertices B to S in the first rectangular coordinate system will be described in detail with reference to fig. 3.
The length of the head of the mining unmanned vehicle (parallel to the running direction) is H. The axle length of the mining unmanned vehicle is L, where the axle length is the distance from a front wheel to the midpoint of the two rear wheels on the same side. The length of the mining unmanned vehicle is the sum of the head length H and the axle length L. The width of the mining unmanned vehicle is d. The projection line of the front side of the head on the ground lies on the x-axis, and the projection point V of the center point of the front side of the head on the ground coincides with the coordinate origin O of the first rectangular coordinate system.
The first reference point P is determined as follows: a right triangle is constructed in which the projection point K of the lower left corner of the head on the ground and the midpoint J of the left rear wheel of the mining unmanned vehicle form the right-angle side KJ, so that the length of KJ is the sum of the head length H and the axle length L, and ∠KPJ equals the steering angle δ; the remaining vertex P of this triangle is the first reference point.
The second reference point C is determined as follows: the first reference point P is connected with the projection point V of the center point of the front side of the head on the ground to form the line segment PV; a first circle is constructed with P as the center and the length of PV as the radius; and the second reference point C is determined on the first circle such that the arc length of the first arc VC between V and C equals the length (for example, l2) of the second side of the rectangular area to be detected.
The first vertex B is determined as follows: the first reference point P and the second reference point C are connected to form the straight line PC, and the first vertex B is determined on this line such that the length of the segment BC equals half the width of the mining unmanned vehicle, d/2. The second reference point C lies between the first reference point P and the first vertex B.
Furthermore, a position point D is determined on the straight line PC such that the length of the segment CD also equals d/2. The point D lies between the second reference point C and the first reference point P. That is, the embodiment of the present invention assumes that when the vehicle turns at the steering angle δ and travels the predetermined distance (the length of the second side), the projection point V of the center of the front side of the head moves to the second reference point C, the projection point K of the lower left corner of the head moves to the first vertex B, and the projection point W of the lower right corner of the head moves to the position point D.
The third vertex N is determined as follows: a second circle is constructed with the first reference point P as the center and the segment PB as the radius; it intersects the x-axis, and the intersection point farther from the vehicle head is the third vertex N. "Far from the head" means on the side away from the steering direction: as shown in fig. 3, the head turns to the right, so the third vertex N lies on the left side of the vehicle.
The second vertex M is determined as follows: the tangent to the second circle is drawn through the third vertex N, and a line parallel to the x-axis of the first rectangular coordinate system is drawn through the first vertex B; the intersection point of the tangent and this parallel line is the second vertex M.
The fourth vertex S is determined as follows: the intersection point of the line segment PB with the x-axis of the first rectangular coordinate system is the fourth vertex S.
After the vertices of the trapezoid area to be detected are determined, its position data in the first rectangular coordinate system need to be determined. In the embodiment of the invention, the position coordinates of each vertex are first determined in a second rectangular coordinate system and then transferred from the second rectangular coordinate system into the first rectangular coordinate system.
The second rectangular coordinate system is constructed as follows: the first reference point P is taken as the coordinate origin; the x-axis of the second rectangular coordinate system is parallel to, and points in the same direction as, the x-axis of the first rectangular coordinate system; the y-axis of the second rectangular coordinate system is parallel to, and points in the same direction as, the y-axis of the first rectangular coordinate system; and the midpoint J of the left rear wheel and the center point Q of the two rear wheels of the mining unmanned vehicle both lie on the x-axis of the second rectangular coordinate system.
In an example, the position coordinates of the first vertex B, the second vertex M, the third vertex N and the fourth vertex S in the second rectangular coordinate system are determined based on the midpoint J of the left rear wheel, the center point Q of the rear wheels, the first reference point P and the steering angle δ; the position coordinates of these vertices in the first rectangular coordinate system are then determined from their coordinates in the second rectangular coordinate system.
The position coordinates of each vertex in the second rectangular coordinate system are determined by the following processes:
Assume ∠KPJ = ∠IPJ = ∠NPJ = δ, where I is the center point of the two front wheels.
A third circle is drawn with P as the center and PD as the radius; the first circle, the second circle and the third circle are concentric.
Let the point A be the intersection of the line PO with the second circle.
The position coordinates of the third vertex N in the second rectangular coordinate system are therefore N(-NP×cos∠NPJ, NP×sin∠NPJ).
∠BPJ = ∠BPA + ∠OPJ, where ∠BPA equals the central angle ∠VPC subtended by the arc VC on the first circle (so ∠BPA = l2/PV in radians, l2 being the arc length), and ∠OPJ is the angle between the line PO and the x-axis of the second rectangular coordinate system.
The position coordinates of the first vertex B in the second rectangular coordinate system are therefore B(-BP×cos∠BPJ, BP×sin∠BPJ).
∠USP = ∠BPJ, where U is the foot of the perpendicular drawn from the point P to the x-axis of the first rectangular coordinate system, so that PU = H+L.
The position coordinates of the fourth vertex S in the second rectangular coordinate system are therefore S(-(H+L)/tan∠BPJ, H+L).
MT = BE = BF - (H+L) = BP×sin∠BPJ - (H+L), where T is the foot of the perpendicular from the point M to the x-axis of the first rectangular coordinate system, E is the foot of the perpendicular from the point B to the x-axis of the first rectangular coordinate system, F is the foot of the perpendicular from the point B to the x-axis of the second rectangular coordinate system, and the length of the segment EF is (H+L).
∠NMT = ∠TNP = ∠NPJ, and NT = tan∠NMT×MT = tan∠NPJ×MT.
The position coordinates of the second vertex M in the second rectangular coordinate system are therefore M(-NP×cos∠NPJ + tan∠NPJ×MT, BP×sin∠BPJ).
The position coordinates of each vertex in the second rectangular coordinate system are mapped into the first rectangular coordinate system as follows.
Since PJ = (H+L)/tanδ and J = (-d/2, -(H+L)) in the first rectangular coordinate system, the origin P of the second rectangular coordinate system lies at ((H+L)/tanδ - d/2, -(H+L)) in the first rectangular coordinate system. The coordinates of the third vertex N in the first rectangular coordinate system are therefore N(-NP×cos∠NPJ + (H+L)/tanδ - d/2, 0). That is, the coordinates of the N point move rightward on the x-axis by (H+L)/tanδ - d/2 and move down by (H+L) on the y-axis; since NP×sin∠NPJ = H+L, the coordinate value of the N point on the y-axis is 0. Correspondingly, the coordinates of the other vertices also move rightward by (H+L)/tanδ - d/2 on the x-axis and down by (H+L) on the y-axis.
Therefore, the coordinates of the first vertex B in the first rectangular coordinate system are B(-BP×cos∠BPJ + (H+L)/tanδ - d/2, BP×sin∠BPJ - (H+L)).
The coordinates of the fourth vertex S in the first rectangular coordinate system are S(-(H+L)/tan∠BPJ + (H+L)/tanδ - d/2, 0).
The coordinates of the second vertex M in the first rectangular coordinate system are M(-NP×cos∠NPJ + tan∠NPJ×MT + (H+L)/tanδ - d/2, BP×sin∠BPJ - (H+L)).
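Collecting the geometry above, the sketch below computes the four vertices in the first rectangular coordinate system. It assumes a right turn with δ in radians (0 < δ < π/2) and the simplification ∠NPJ = δ stated earlier; the function name is an illustrative choice.

    import math

    def trapezoid_region(delta, H, L, d, l2):
        hl = H + L
        x_p = hl / math.tan(delta) - d / 2     # P in the first coordinate system
        pv = math.hypot(x_p, hl)               # radius of the first circle (PV)
        bp = pv + d / 2                        # radius of the second circle (PB)

        ang_bpa = l2 / pv                      # central angle of arc VC, length l2
        ang_opj = math.atan2(hl, x_p)          # angle between PO and the x2-axis
        ang_bpj = ang_bpa + ang_opj

        B = (x_p - bp * math.cos(ang_bpj), bp * math.sin(ang_bpj) - hl)
        N = (x_p - hl / math.tan(delta), 0.0)  # simplifies to (-d/2, 0)
        S = (x_p - hl / math.tan(ang_bpj), 0.0)
        M = (N[0] + math.tan(delta) * B[1], B[1])  # MB parallel to the x-axis
        return B, M, N, S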
Strictly speaking, when the vehicle steers at the steering angle δ, the area to be detected is the region bounded by the arc BN, the straight line BS and the straight line SN. For ease of calculation, however, embodiments of the present invention take the trapezoid that contains this region as the area to be detected.
The embodiment of the invention determines the position information of the area to be detected while the vehicle is steering by constructing the above model. After the position information of the area to be detected is determined, the area to be detected is divided into a first number of sub-areas to be detected, and the information (for example, the position information) of each sub-area to be detected is mapped into the Mask image. The first number can be set as required: the larger the first number, the higher the accuracy of the safety area detection result. When the length of the second side of the rectangular area to be detected is 20 meters, the first number may be set in the range of 5-20, for example 10. The first number may be determined, for example, based on the length of the second side, such that the height of each sub-area to be detected (its extent parallel to the second side) is in the range of 1 meter to 5 meters.
Each divided sub-area to be detected has the same shape as the area to be detected.
For the rectangular area to be detected, each sub-area to be detected is rectangular. For example, the rectangular area to be detected is divided along the y-axis of the first rectangular coordinate system into the first number of sub-areas to be detected, each of which is a rectangle. The sub-areas are numbered in order of their distance from the mining unmanned vehicle: when divided into 10 sub-areas, the first sub-area to be detected is closest to the x-axis of the first rectangular coordinate system, the second sub-area is adjacent to the first but farther from the x-axis, and so on up to the tenth sub-area, which is farthest from the x-axis.
For the trapezoid area to be detected, each sub-area to be detected is trapezoidal. For example, the trapezoid area to be detected is divided along the y-axis of the first rectangular coordinate system into the first number of sub-areas to be detected, each of which is a trapezoid. The sub-areas are likewise numbered in order of their distance from the mining unmanned vehicle, from the first sub-area closest to the x-axis of the first rectangular coordinate system to the tenth sub-area farthest from it.
After the first number of sub-areas to be detected are divided, the position coordinates of the four vertices of each sub-area to be detected in the first rectangular coordinate system can be determined from the position coordinates of the four vertices of the area to be detected in that coordinate system and the manner of division.
For example, for the rectangular area to be detected divided equally into 10 sub-areas along the y-axis, the abscissas of the vertices of all sub-areas in the first rectangular coordinate system are the same as those of the corresponding vertices of the area to be detected, and only the ordinates change.
For the trapezoid area to be detected, the vertices of the sub-areas to be detected are determined in a similar way to the rectangular case and are not detailed here; one skilled in the art can determine them from known geometric relationships.
In one example, mapping the information of each sub-area to be detected into the Mask image comprises: mapping the position information of the four vertices of each sub-area to be detected into the Mask image based on the internal parameters, external parameters and distortion coefficients of the camera. Specifically, using the projectPoints function together with the camera's calibrated internal parameters, external parameters and distortion coefficients, the three-dimensional coordinate points of the four vertices of each sub-area to be detected in the first rectangular coordinate system can be mapped directly into the Mask image (that is, onto the pixel coordinate system of the image), obtaining the corresponding sub-area to be detected in the Mask image.
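A minimal sketch of this mapping step with cv2.projectPoints; the wrapper function and the rvec/tvec naming are assumptions (they come from the calibration, expressing the first rectangular coordinate system relative to the camera):

    import cv2
    import numpy as np

    def project_quad(quad_xyz, rvec, tvec, K, dist):
        # quad_xyz: four (x, y, z) ground vertices of one sub-area to be detected
        pts = np.asarray(quad_xyz, dtype=np.float64).reshape(-1, 1, 3)
        img_pts, _ = cv2.projectPoints(pts, rvec, tvec, K, dist)
        return img_pts.reshape(-1, 2)   # pixel coordinates in the Mask image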
After the area to be detected (or each sub-area to be detected) is mapped into the Mask image, the result is still image data, which is inconvenient for later statistical analysis. To facilitate statistical analysis, perspective transformation is performed on the Mask image mapped with the information of the area to be detected, obtaining a perspective rectangular area. In this way, the image data can be converted into a matrix, which facilitates later statistics.
The specific perspective transformation process comprises the following steps:
obtaining a perspective transformation matrix through the getPerspectiveTransform function, based on the Mask image mapped with the information of the area to be detected;
converting the corresponding sub-areas to be detected in the Mask image into perspective rectangular sub-areas in sequence through the warpPerspective function, based on the perspective transformation matrix.
The first number of corresponding sub-areas to be detected in the Mask image are perspective-transformed in sequence, yielding perspective rectangular area 1 up to the last perspective rectangular area.
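A sketch of one layer of this transformation using OpenCV's getPerspectiveTransform and warpPerspective; the output rectangle size is an illustrative assumption:

    import cv2
    import numpy as np

    def rectify_subarea(mask, quad_px, out_w=200, out_h=100):
        # quad_px: the sub-area's four image-plane vertices, ordered to match dst
        src = np.asarray(quad_px, dtype=np.float32)
        dst = np.array([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]],
                       dtype=np.float32)
        H = cv2.getPerspectiveTransform(src, dst)
        return cv2.warpPerspective(mask, H, (out_w, out_h))  # perspective rectangle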
Binarization processing is then carried out on the perspective rectangular area, and safety detection is performed based on the binarized perspective rectangular sub-areas.
The binarized perspective rectangular area includes a white pixel portion representing a road surface area and a black pixel portion representing a non-road surface area.
The safety area detection process judges the proportion of road surface in each perspective rectangular area: when the proportion is high, the detected area is considered safe and free of obstacles; when it is low, the detected area is considered unsafe. The process specifically comprises the following steps:
determining, for each perspective rectangular sub-area, the pixel ratio of its white pixel portion to the whole sub-area;
comparing the pixel ratio with a threshold, and judging whether the perspective rectangular sub-area is a safe area based on the comparison. When the pixel ratio is greater than or equal to the threshold, the perspective rectangular sub-area is considered safe; when it is less than the threshold, the sub-area is considered unsafe. The threshold lies in the range 0-1, and the larger the threshold, the higher the required proportion of road surface. In one example, the threshold may be 0.98; of course, embodiments of the present invention are not limited to this specific value, and those skilled in the art can set it as needed.
The judgment proceeds in order of distance from the mining unmanned vehicle: the pixel ratio is determined starting from the perspective rectangular sub-area nearest the vehicle.
When the pixel ratio is greater than or equal to the threshold, the perspective rectangular sub-area is judged to be a safe area, and judgment continues with the adjacent perspective rectangular sub-area farther from the mining unmanned vehicle. That is, if perspective rectangular sub-area 1 is currently judged safe, judgment continues with the adjacent perspective rectangular sub-area 2, and so on.
When the pixel ratio is less than the threshold, the perspective rectangular sub-area is judged to be an unsafe area, and the safe area is formed by the adjacent perspective rectangular sub-area nearer the mining unmanned vehicle together with all perspective rectangular sub-areas before it. That is, if perspective rectangular sub-area 9 (out of 10) is currently judged unsafe, the area formed by perspective rectangular sub-areas 1 to 8 is the safe area.
The judging process of the embodiment of the invention is carried out on the perspective rectangular sub-areas one by one, and stops once an unsafe area is determined or all sub-areas have been judged.
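A sketch of this layer-by-layer judgment; the function name and the 2-meter layer height (20 meters divided into 10 layers, as in the example) are illustrative assumptions:

    import numpy as np

    def safe_distance(rectified_layers, threshold=0.98, layer_height_m=2.0):
        # rectified_layers: binarized perspective rectangles, nearest layer first
        safe_layers = 0
        for layer in rectified_layers:
            white_ratio = np.count_nonzero(layer) / layer.size
            if white_ratio < threshold:        # road incomplete or obstructed
                break
            safe_layers += 1
        return safe_layers * layer_height_m    # e.g. 9 safe layers -> 18 m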
Therefore, the embodiment of the invention divides the area to be detected into a plurality of sub-areas to be detected, analyzes them one by one, and finally determines the extent of the safety area; that is, it can report that the area within a certain number of meters in front of the vehicle is safe.
Fig. 4 is a flowchart of a safety area detection method according to an embodiment of the invention. In this example, the safety area detection method includes an offline part and an online part.
The offline part comprises the following steps:
S101 Calibrating the camera. The camera is positioned directly in the middle of the mine truck head. This example takes the point directly below the center of the truck head (namely the projection point V of the center point of the front side of the head on the ground) as the coordinate origin O of the first rectangular coordinate system, the running direction of the vehicle as the positive y-axis direction, and the horizontal direction to the right, perpendicular to the running direction, as the positive x-axis direction; the z-axis direction is determined according to the right-hand rule, with the horizontal ground as its zero point. This example calibrates the camera with the opencv checkerboard method and obtains the camera's internal parameters, external parameters and distortion coefficients after calibration.
S102 Organizing the mining area road surface data set. For extracting the mining area road surface from the camera images, this example employs a deep learning segmentation method. Training the deep learning segmentation model requires a large amount of data, so this step uses a camera on a mine truck to collect a large amount of mining area picture data and organizes it. Organizing includes reviewing the data set and deleting problematic pictures, such as incomplete, overly blurred or low-resolution ones.
S103 Labeling the data. On the road surface data collected above, the road surface is outlined with a closed curve and named "road" using the Labelme tool; other categories are not of interest here.
S104 Training the road surface segmentation model. This example trains a DeeplabV3+-based road surface segmentation model on the pytorch platform. The parameters of the deep learning training process are as follows: the input data is an image with a resolution of 1920 x 1080, and the output data is an array with a resolution of 1920 x 1080; the number of channels is set to 1, and each value is True or False, where True represents road surface and False represents non-road surface.
S105 Adjusting the area to be detected according to the steering angle δ. The mine truck used in this example is 3.3 meters wide, and a rectangular area 3.4 meters wide and 20 meters long in front of the vehicle is used as the area to be detected, i.e. the area with x in [-1.7, 1.7] and y in [0, 20] in the first rectangular coordinate system. When the vehicle steers, this rectangular area needs to be adjusted according to the steering angle δ. Fig. 3 shows the adjustment of the area to be detected in a top view of the vehicle. The gray shaded area represents the vehicle, whose head length is H, axle length is L and width is d. The point P is the coordinate origin of the second rectangular coordinate system used for the derivation, the point O is the coordinate origin of the first rectangular coordinate system, and when the vehicle is not steering the projection point V of the center point of the front side of the head on the ground coincides with O. The positional relationships are first derived with the point P as the coordinate origin and then mapped back into the first rectangular coordinate system with the point O as the origin. When the vehicle turns at the angle δ, the originally set 20 meters straight ahead becomes 20 meters of arc length as shown in fig. 3, and the projection of the center point of the front side of the head moves from the point V to the point C. The three circles with P as the center and PB, PC and PD as radii are concentric, and the lengths BC and CD are both d/2. To simplify the derivation, the lower-left, lower-middle and lower-right points of the head when the vehicle turns are taken to be B, C and D on one straight line, as shown by the straight line PDCB in fig. 3. When the vehicle steers at the steering angle δ, the visible region of the vehicle is set to lie within the region bounded by the arc NB, the straight line BS and the straight line SN. Since the straight line NM is tangent to the second circle and MB is parallel to the x-axis of the first rectangular coordinate system, the visible region must lie within the trapezoid MBSN. The steering angle δ, the head length H, the axle length L and the vehicle width d are known. According to the above embodiment, the coordinates of the vertices in the first rectangular coordinate system are: the third vertex N(-NP×cos∠NPJ + (H+L)/tanδ - d/2, 0); the first vertex B(-BP×cos∠BPJ + (H+L)/tanδ - d/2, BP×sin∠BPJ - (H+L)); the fourth vertex S(-(H+L)/tan∠BPJ + (H+L)/tanδ - d/2, 0); and the second vertex M(-NP×cos∠NPJ + tan∠NPJ×MT + (H+L)/tanδ - d/2, BP×sin∠BPJ - (H+L)).
The online part comprises the following steps:
S201 Acquiring a camera picture. For example, the picture may be in jpg format.
S202 Loading the camera internal parameters, external parameters and distortion coefficients. This step loads the camera internal parameters, external parameters and distortion coefficients calibrated in step S101.
S203 Loading the road surface segmentation model. Here, the road surface segmentation model trained in step S104 is loaded.
S204 Road surface segmentation. The camera picture obtained in step S201 is fed into the road surface segmentation model of step S203 to obtain the road-surface-segmentation Mask image, which is further refined using dilation and erosion operations.
S205 Mapping the area to be detected into the Mask image. When the vehicle goes straight, the quadrilateral (rectangular) region in front of the vehicle with x from -1.7 meters to 1.7 meters, y from 0 meters to 20 meters and z = 0 in the first rectangular coordinate system is taken as the area to be detected. When the vehicle steers, the quadrilateral (trapezoid) BMNS is obtained from the steering angle δ according to the mapping relations given in step S105, yielding the area to be detected. The area to be detected is divided equally into ten parts along the y-axis, giving 10 quadrilateral regions (sub-areas to be detected) from near to far, and the coordinate points of the four vertices of each sub-area to be detected are mapped in sequence into the image using the projectPoints function in opencv together with the camera internal parameters, external parameters and distortion coefficients calibrated in S101. Fig. 5 shows the result of mapping the coordinate points of the four vertices of each sub-area to be detected into the image in sequence. The black quadrilateral frames represent sub-areas 4 to 10; sub-areas 1 to 3 cannot be displayed because of the camera's blind zone. The area outlined by the white line represents the segmented road surface.
S206 Performing perspective transformation on the multi-layer regions. For each of the 10 layers provided in S205, a perspective transformation matrix is obtained one by one using the getPerspectiveTransform function provided in opencv, perspective transformation is performed using the warpPerspective function, and the sub-areas to be detected are transformed into rectangles in sequence to obtain perspective rectangular areas 1 to 10, which makes it convenient for S207 to judge whether each perspective rectangular area is a safe area.
S207 Acquiring the safety area. Perspective rectangular areas 1 to 10 from S206 are binarized in sequence, with white pixels representing the road surface area and black pixels the non-road surface area, and the ratio of white pixels to the pixels of the whole area is counted. If the ratio of an area is greater than a set threshold (for example, 0.98), the current area is a safe area, and the iteration moves to the next area to calculate its ratio. If the ratio is smaller than the set threshold, the road surface of that area is incomplete or obstructed, the current area needs to be fed back as an unsafe area, the preceding areas are regarded as the safe area, and the iteration stops. If all sub-areas to be detected are safe, the entire set area in front of the vehicle is safe.
By the above method, the safety area in front of the mining unmanned vehicle can be detected, and the detection adjusts with the change of the steering wheel angle. Fig. 6 shows the perspective areas and the alarm information of frame 210. Perspective rectangular areas 9 and 10 are the binarization results obtained after perspective transformation of sub-areas 9 and 10 in fig. 5. White pixels in perspective areas 9 and 10 represent road surface and black pixels non-road surface; the proportion of white pixels in the whole area is counted to obtain the confidence. The alarm information shows that the confidence of area 10 is 0.488, below the threshold, so area 10 is not a safe area; returning to the previous level, area 9 is a safe area with a confidence of 0.989, the safe area corresponding to area 9 extends 18 meters, and the current steering wheel angle is 3 degrees to the right.
In an embodiment of the invention, a safety area detection device is also provided. The safety area detection device may implement the safety area detection method according to any one of the above embodiments.
As shown in fig. 7, the safety area detection apparatus 100 includes a camera 10, a segmentation module 20, a determination module 30, and a detection module 40.
The camera 10 is configured to collect road image data. The camera 10 may collect in real time, or previously stored pictures or videos may be used. Its collection process may be triggered manually or performed automatically, and the image data may be in picture format or in the form of a video stream.
The segmentation module 20 is communicatively connected to the camera 10 and is configured to perform road surface segmentation on the road image data from the camera 10 so as to separate road surface areas and non-road surface areas and obtain the Mask image of the road surface segmentation. For the specific segmentation process, refer to the above embodiments; it is not repeated here.
The determining module 30 is configured to determine the area to be detected located in front of the mining unmanned vehicle based on the parameters of the mining unmanned vehicle, the area to be detected being quadrilateral in shape. For the specific process of determining the area to be detected, refer to the above embodiments; it is not repeated here.
The detection module 40 is communicatively connected to the segmentation module 20 and the determining module 30, respectively, and is configured to divide the area to be detected into the first number of sub-areas to be detected, map the information of each sub-area to be detected into the Mask image, perform perspective transformation on the Mask image mapped with the information of the area to be detected to obtain the perspective rectangular area comprising a perspective rectangular sub-area corresponding to each sub-area to be detected, binarize the perspective rectangular area, and detect the safety area based on the binarized perspective rectangular sub-areas. For the specific process, refer to the above embodiments; it is not repeated here.
The safety area detection method and the safety area detection device according to the present invention have at least one of the following advantages:
(1) The safety area detection method and device detect the safe area in front of the vehicle by combining road surface segmentation with multilayer perspective transformation, and can report whether the road ahead of the vehicle is safe within a given distance in meters;
(2) The safety area detection method and device can adjust the region to be detected based on the steering condition of the vehicle, providing a more accurate safe-area detection result when the vehicle is turning;
(3) The safety area detection method and device can analyze and process in real time, improving the efficiency of safety detection;
(4) The safety area detection method and device can provide accurate detection results to the vehicle control and planning module, making it convenient for that module to adjust the route and speed of the mine truck and improving the operational safety of the mining area.
Although a few embodiments of the present general inventive concept have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the general inventive concept, the scope of which is defined in the claims and their equivalents.

Claims (12)

1. The safety area detection method is suitable for the mining unmanned vehicle and is characterized by comprising the following steps of:
collecting road image data through a camera positioned on a mining unmanned vehicle;
Road surface segmentation is carried out on the acquired road image data so as to segment a road surface area and a non-road surface area, and a Mask image of road surface segmentation is obtained;
determining a region to be detected positioned in front of the mining unmanned vehicle based on parameters of the mining unmanned vehicle, wherein the shape of the region to be detected is quadrilateral;
dividing the to-be-detected area into a first number of to-be-detected sub-areas, and mapping the information of each to-be-detected sub-area into Mask images;
Performing perspective transformation on the Mask image mapped with the information of the to-be-detected areas, and obtaining perspective rectangular areas, wherein the perspective rectangular areas comprise perspective rectangular sub-areas corresponding to each to-be-detected sub-area;
Binarizing the perspective rectangular area, performing safety detection based on the binarized perspective rectangular sub-area,
Parameters of the mining unmanned vehicle include the width of the vehicle, the running speed and the steering condition of the vehicle,
Determining the region to be detected located in front of the mining unmanned vehicle based on the steering condition of the mining unmanned vehicle comprises:
When the mining unmanned vehicle does not turn, determining a rectangular to-be-detected area in front of the mining unmanned vehicle based on the width and the running speed of the vehicle, wherein the rectangular to-be-detected area comprises a first side and a second side which are perpendicular to each other, the length of the first side is smaller than or equal to that of the second side, a projection line of the front side of the head of the mining unmanned vehicle on the ground is positioned on the first side, and the center point of the projection line coincides with the center point of the first side;
when the mining unmanned vehicle turns at a steering angle δ, determining a trapezoid region to be detected based on the width of the vehicle, the running speed and the steering angle δ, wherein the trapezoid region to be detected comprises a third side and a fourth side which are parallel to each other, and the projection line of the front side of the head of the mining unmanned vehicle on the ground is located on the third side.
2. The method for detecting a safe area according to claim 1, wherein,
Constructing a three-dimensional first right-angle coordinate system, taking the projection point V of the center point of the front side of the head of the mining unmanned vehicle on the ground as the coordinate origin O, taking the driving direction of the mining unmanned vehicle as the positive y-axis direction, taking the horizontal direction perpendicular to the driving direction and pointing to the right as the positive x-axis direction, and determining the z-axis according to the right-hand rule with the horizontal ground as its zero point, wherein the mining unmanned vehicle is symmetrical about the y-axis of the first right-angle coordinate system,
When the to-be-detected area is a rectangular to-be-detected area, the first side where the projection line is located is positioned on the x-axis of the first right-angle coordinate system, the second side of the rectangular to-be-detected area is parallel to the y-axis of the first right-angle coordinate system,
When the region to be detected is a trapezoid region to be detected, the side of the trapezoid region on which the projection line is located lies on the x-axis of the first right-angle coordinate system.
3. The method for detecting a safe area according to claim 2, wherein,
The trapezoid region to be detected is bounded by a first vertex B, a second vertex M, a third vertex N and a fourth vertex S; the third vertex N and the fourth vertex S form the third side, the second vertex M and the first vertex B form the fourth side, and the third side is located on the x-axis,
The first vertex B is determined based on a first reference point P, a second reference point C, a steering angle delta and the width of the mining unmanned vehicle, the first reference point P is determined based on the length of the mining unmanned vehicle, a projection point K of the left lower corner of the vehicle head on the ground and the steering angle delta, the second reference point C is determined based on a projection point V of the center point of the front side of the vehicle head on the ground, the first reference point P and the length of a second side of a rectangular to-be-detected area determined when the mining unmanned vehicle is not steering;
the third vertex N is determined based on the first vertex B, the first reference point P and the x-axis of the first right-angle coordinate system;
the second vertex M is determined based on the first vertex B and the third vertex N;
The fourth vertex S is determined based on the first vertex B, the first reference point P and the x-axis of the first rectangular coordinate system.
4. The method for detecting a safe area according to claim 3, wherein,
The length of the mining unmanned vehicle is the sum of the head length H and the axle length L, and the width of the mining unmanned vehicle is d,
The first reference point P is determined based on the following steps:
Constructing a right triangle in which the projection point K of the lower left corner of the head of the mining unmanned vehicle on the ground and the midpoint J of the left rear wheel of the mining unmanned vehicle form the right-angle side KJ, the length of the right-angle side KJ being the sum of the head length H and the axle length L, and the angle ∠KPJ at the first reference point P being equal to the steering angle δ, so that |PJ| = (H+L)/tan δ;
The second reference point C is determined based on the following steps:
Connecting a first reference point P with a projection point V of the front side of the vehicle head on the ground to form a line segment PV, constructing a first circle by taking the first reference point P as a circle center and taking the length of the line segment PV as a radius, and determining a second reference point C on the first circle, so that the arc length of a first arc VC formed by the projection point V and the second reference point C is equal to the length of a second side of a to-be-detected area of a rectangle determined when the mining unmanned vehicle is not turned;
The first vertex B is determined based on the following steps:
Connecting the first reference point P and the second reference point C to form a straight line PC, determining a first vertex B on the straight line PC such that the length of the line segment BC is equal to half the width of the mining unmanned vehicle, and the second reference point C is located between the first reference point P and the first vertex B;
the third vertex N is determined based on the following steps:
Constructing a second circle by taking the first reference point P as the center and the length of the line segment PB as the radius, so that the second circle intersects the x-axis of the first right-angle coordinate system, the intersection point farther from the vehicle head being the third vertex N;
the fourth vertex S is determined based on the following steps:
intersecting the line segment PB with the x-axis of the first right-angle coordinate system, wherein the obtained intersection point is a fourth vertex S;
The second vertex M is determined based on the following steps:
drawing the tangent line to the second circle at the third vertex N, and drawing a line through the first vertex B parallel to the x-axis of the first right-angle coordinate system, so that the tangent line intersects the line parallel to the x-axis; the resulting intersection point is the second vertex M.
5. The method for detecting a safe area according to claim 4, wherein,
The projection point V, the third vertex N and the fourth vertex S are all located on the x-axis of the first rectangular coordinate system,
Constructing a second rectangular coordinate system, wherein the second rectangular coordinate system takes a first reference point P as a coordinate origin, the x-axis of the second rectangular coordinate system is parallel to the x-axis of the first rectangular coordinate system, the y-axis of the second rectangular coordinate system is parallel to the y-axis of the first rectangular coordinate system, the midpoint J of the left rear wheel and the center point Q of the rear wheels on the two sides of the mining unmanned vehicle are both positioned on the x-axis of the second rectangular coordinate system,
The position coordinates of the first vertex B, the second vertex M, the third vertex N and the fourth vertex S in the second rectangular coordinate system are respectively determined based on the midpoint J of the left rear wheel, the center point Q of the rear wheels on the two sides, the first reference point P and the steering angle δ;
The position coordinates of the first vertex B, the second vertex M, the third vertex N and the fourth vertex S in the first rectangular coordinate system are determined from their position coordinates in the second rectangular coordinate system.
6. The method for detecting a safe area according to claim 5, wherein,
The coordinates of the first vertex B in the second rectangular coordinate system are [formula];
the coordinates of the second vertex M in the second rectangular coordinate system are [formula];
the coordinates of the third vertex N in the second rectangular coordinate system are [formula];
the coordinates of the fourth vertex S in the second rectangular coordinate system are [formula],
wherein [formula];
the coordinates of the first vertex B in the first right-angle coordinate system are [formula];
the coordinates of the second vertex M in the first right-angle coordinate system are [formula];
the coordinates of the third vertex N in the first right-angle coordinate system are [formula];
the coordinates of the fourth vertex S in the first right-angle coordinate system are [formula].
7. The method for detecting a safe area according to any one of claims 1 to 6, wherein,
The shape of each to-be-detected sub-area is the same as that of the to-be-detected area;
mapping the information of each sub-area to be detected into the Mask image comprises the following steps: and mapping the position information of the four vertexes of each subarea to be detected into a Mask image based on the internal parameters, the external parameters and the distortion coefficients of the camera.
8. The method for detecting a safe area according to claim 7, wherein,
Dividing the rectangular region to be detected into the first number of sub-regions to be detected along the y-axis of the first right-angle coordinate system, each sub-region to be detected being rectangular;
dividing the trapezoid region to be detected into the first number of sub-regions to be detected along the y-axis of the first right-angle coordinate system, each sub-region to be detected being trapezoid;
The first number of sub-areas to be detected are sequentially arranged according to the distance between the sub-areas and the mining unmanned vehicle;
And mapping the position information of the four vertexes of each sub-region to be detected into the Mask image through the projectPoints function, based on the internal parameters, the external parameters and the distortion coefficients of the camera, to obtain the corresponding sub-region to be detected in the Mask image.
9. The method for detecting a safe area according to claim 8, wherein,
Performing perspective transformation on the Mask image mapped with the information of the region to be detected to obtain a perspective rectangular region comprises the following steps:
obtaining a perspective transformation matrix through the getPerspectiveTransform function based on the Mask image mapped with the information of the region to be detected;
and sequentially converting the corresponding sub-regions to be detected in the Mask image into perspective rectangular sub-regions through the warpPerspective function based on the perspective transformation matrix.
10. The method of claim 9, wherein,
The binarized perspective rectangular area includes a white pixel portion representing a road surface area and a black pixel portion representing a non-road surface area,
The safety detection based on the binarized perspective rectangular subarea comprises the following steps:
determining, in each perspective rectangular sub-region, the pixel ratio of the white pixel portion to all pixels of the sub-region;
and comparing the pixel ratio with a threshold value, and judging whether the perspective rectangular sub-region is a safe area based on the comparison.
11. The method of claim 10, wherein,
The pixel ratio is determined starting from the nearest perspective rectangular subregion to the mining unmanned vehicle,
When the pixel ratio is greater than or equal to the threshold value, judging the perspective rectangular subarea as a safety area, and continuously judging the perspective rectangular subarea which is adjacent to the perspective rectangular subarea and far away from the mining unmanned vehicle;
and when the pixel ratio is smaller than the threshold value, judging the perspective rectangular sub-region to be an unsafe area, and taking the adjacent perspective rectangular sub-region closer to the mining unmanned vehicle, together with the perspective rectangular sub-regions before it, as the safe area.
12. The method of claim 11, wherein,
The safety area detection method further comprises the following steps:
And loading reference data, wherein the reference data comprises the internal parameters, the external parameters and the distortion coefficients of the camera and a road surface segmentation model.
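The reference-point construction in claims 3 and 4 can be made concrete with a short numeric sketch. The fragment below implements only the unambiguous steps (the first reference point P, the second reference point C and the first vertex B) for an assumed left turn, with the turning center placed on the rear-axle line; the remaining vertices N, S and M follow from the intersection steps of claim 4 and are omitted here. All names and the side convention are illustrative assumptions, not the patent's notation.

```python
import numpy as np

def far_edge_reference_points(H, L, d, W, delta_rad):
    """Sketch of the P / C / B construction in claims 3-4 (left turn assumed).

    H: head length, L: axle length, d: vehicle width, W: length of the
    second side of the straight-driving rectangle (all in meters);
    delta_rad: steering angle delta in radians.
    First coordinate system: origin at the projection point V of the
    head's front center, x to the right, y along the driving direction.
    """
    V = np.array([0.0, 0.0])
    # J: ground projection of the midpoint of the left rear wheel.
    J = np.array([-d / 2.0, -(H + L)])
    # First reference point P on the rear-axle line: in the right triangle
    # K-J-P with |KJ| = H + L, angle KPJ = delta gives |PJ| = (H+L)/tan(delta).
    P = J - np.array([(H + L) / np.tan(delta_rad), 0.0])
    r1 = np.linalg.norm(V - P)                  # radius of the first circle
    # Second reference point C: rotate V about P so that arc VC has length W.
    alpha = W / r1
    rot = np.array([[np.cos(alpha), -np.sin(alpha)],
                    [np.sin(alpha),  np.cos(alpha)]])   # counterclockwise
    C = P + rot @ (V - P)
    # First vertex B: extend PC beyond C by half the vehicle width.
    B = P + (r1 + d / 2.0) / r1 * (C - P)
    return P, C, B
```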
CN202311292462.0A 2023-10-08 2023-10-08 Safety area detection method and safety area detection device Active CN117274939B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311292462.0A CN117274939B (en) 2023-10-08 2023-10-08 Safety area detection method and safety area detection device

Publications (2)

Publication Number Publication Date
CN117274939A CN117274939A (en) 2023-12-22
CN117274939B true CN117274939B (en) 2024-05-28

Family

ID=89212003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311292462.0A Active CN117274939B (en) 2023-10-08 2023-10-08 Safety area detection method and safety area detection device

Country Status (1)

Country Link
CN (1) CN117274939B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0996507A (en) * 1995-09-29 1997-04-08 Aisin Seiki Co Ltd Detection apparatus for on-road line in front of vehicle
JP2009271766A (en) * 2008-05-08 2009-11-19 Hitachi Ltd Obstacle detection device for automobile
JP2018007037A (en) * 2016-07-01 2018-01-11 株式会社ニコン Imaging apparatus and automobile
CN107577996A (en) * 2017-08-16 2018-01-12 中国地质大学(武汉) A kind of recognition methods of vehicle drive path offset and system
CN107862287A (en) * 2017-11-08 2018-03-30 吉林大学 A kind of front zonule object identification and vehicle early warning method
WO2018058356A1 (en) * 2016-09-28 2018-04-05 驭势科技(北京)有限公司 Method and system for vehicle anti-collision pre-warning based on binocular stereo vision
JP2018136890A (en) * 2017-02-24 2018-08-30 株式会社トヨタマップマスター Road network change detection apparatus, road network change detection method, computer program, and recording medium recording the computer program
CN112115889A (en) * 2020-09-23 2020-12-22 成都信息工程大学 Intelligent vehicle moving target detection method based on vision
WO2021000800A1 (en) * 2019-06-29 2021-01-07 华为技术有限公司 Reasoning method for road drivable region and device
CN112731925A (en) * 2020-12-21 2021-04-30 浙江科技学院 Conical barrel identification and path planning and control method for unmanned formula racing car
WO2022170633A1 (en) * 2021-02-15 2022-08-18 苏州优它科技有限公司 Rail transit vehicle collision avoidance detection method based on vision and laser ranging
CN115797640A (en) * 2023-02-13 2023-03-14 北京路凯智行科技有限公司 Road boundary extraction method for strip mine area

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4874280B2 (en) * 2008-03-19 2012-02-15 三洋電機株式会社 Image processing apparatus and method, driving support system, and vehicle
SG11202108455PA (en) * 2019-02-28 2021-09-29 Shenzhen Sensetime Technology Co Ltd Vehicle intelligent driving control method and apparatus, electronic device and storage medium
CN110667576B (en) * 2019-10-18 2021-04-20 北京百度网讯科技有限公司 Method, apparatus, device and medium for controlling passage of curve in automatically driven vehicle
JP7478570B2 (en) * 2020-03-30 2024-05-07 本田技研工業株式会社 Vehicle control device
CN116368540A (en) * 2020-07-16 2023-06-30 御眼视觉技术有限公司 System and method for dynamic road geometry modeling and navigation
CN115100622B (en) * 2021-12-29 2023-09-22 中国矿业大学 Method for detecting driving area of unmanned transportation equipment in deep limited space and automatically avoiding obstacle

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Analysis and implementation of an algorithm for improving the accuracy of video-based vehicle speed detection; Sun Ning et al.; Journal of Hefei University of Technology (Natural Science); 2014-12-28 (No. 12); 1462-1467+1527 *
Detection of driver rear-view-mirror checking behavior based on vehicle-mounted vision; Huang Bo et al.; Journal of Graphics; 2018-06-15 (No. 03); 477-484 *
Research on drivable-area modeling and driving-assistance information display technology for intelligent vehicles; Wu Xinyu; China Master's Theses Full-text Database (Engineering Science and Technology II); 2019-12-31; C035-204 *
Research on drivable-area detection technology for urban roads for autonomous driving; Li Boyi; China Master's Theses Full-text Database (Engineering Science and Technology II); 2023-03-31; C035-250 *

Also Published As

Publication number Publication date
CN117274939A (en) 2023-12-22

Similar Documents

Publication Publication Date Title
WO2022141910A1 (en) Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
DE102020112314A1 (en) VERIFICATION OF VEHICLE IMAGES
CN108596058A (en) Running disorder object distance measuring method based on computer vision
WO2018020954A1 (en) Database construction system for machine-learning
CN102222236A (en) Image processing system and position measurement system
CN110197173B (en) Road edge detection method based on binocular vision
JP7481810B2 (en) Method for reconstructing 3D vehicle images
CN107796373A (en) A kind of distance-finding method of the front vehicles monocular vision based on track plane geometry model-driven
CN111967360A (en) Target vehicle attitude detection method based on wheels
CN107284455A (en) A kind of ADAS systems based on image procossing
DE102021101270A1 (en) TRAINING A NEURAL NETWORK OF A VEHICLE
WO2018149539A1 (en) A method and apparatus for estimating a range of a moving object
CN113432615B (en) Detection method and system based on multi-sensor fusion drivable area and vehicle
DE102021125592A1 (en) TRAFFIC CAMERA CALIBRATION
CN114067287A (en) Foreign matter identification and early warning system based on vehicle side road side data perception fusion
CN117274939B (en) Safety area detection method and safety area detection device
CN110415299B (en) Vehicle position estimation method based on set guideboard under motion constraint
CN112634354B (en) Road side sensor-based networking automatic driving risk assessment method and device
CN113064415A (en) Method and device for planning track, controller and intelligent vehicle
CN116486351A (en) Driving early warning method, device, equipment and storage medium
Lin et al. Adaptive inverse perspective mapping transformation method for ballasted railway based on differential edge detection and improved perspective mapping model
WO2022133986A1 (en) Accuracy estimation method and system
CN114926729A (en) High-risk road section identification system and method based on driving video
Huang et al. Rear obstacle warning for reverse driving using stereo vision techniques
CN116057578A (en) Modeling vehicle environment using cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant