CN113963330A - Obstacle detection method, obstacle detection device, electronic device, and storage medium - Google Patents

Obstacle detection method, obstacle detection device, electronic device, and storage medium

Info

Publication number
CN113963330A
CN113963330A
Authority
CN
China
Prior art keywords
target
dimensional
position information
image
detection frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111227555.6A
Other languages
Chinese (zh)
Inventor
李�浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Kunpeng Jiangsu Technology Co Ltd
Original Assignee
Jingdong Kunpeng Jiangsu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Kunpeng Jiangsu Technology Co Ltd filed Critical Jingdong Kunpeng Jiangsu Technology Co Ltd
Priority to CN202111227555.6A priority Critical patent/CN113963330A/en
Publication of CN113963330A publication Critical patent/CN113963330A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the invention discloses an obstacle detection method and device, an electronic device, and a storage medium, wherein the method comprises the following steps: segmenting a received image to be processed based on an image semantic segmentation algorithm to obtain an image to be detected; determining at least one two-dimensional target detection frame in the image to be detected and the to-be-converted position information of each two-dimensional target detection frame; obtaining the three-dimensional detection frame corresponding to each two-dimensional target detection frame and the target position information of each three-dimensional detection frame according to the to-be-converted position information of each two-dimensional target detection frame and a transformation matrix; and determining a target obstacle based on each piece of target position information. This technical solution solves the problem that low obstacles cannot be effectively detected and therefore cannot be effectively avoided; it achieves effective detection of low obstacles, enables obstacle avoidance based on the detected obstacles, and improves the driving safety of unmanned vehicles.

Description

Obstacle detection method, obstacle detection device, electronic device, and storage medium
Technical Field
The embodiment of the invention relates to the technical field of unmanned vehicles, in particular to a method and a device for detecting obstacles, electronic equipment and a storage medium.
Background
The unmanned vehicle is provided with a laser radar and a camera device. Based on the camera device and the laser radar, the three-dimensional point cloud can be determined, and whether the obstacle exists or not can be determined according to the three-dimensional point cloud.
In the course of implementing the present invention, the inventors found the following problem:
An unmanned delivery vehicle has a low chassis, a high center of gravity, and poor obstacle-crossing capability, and its point cloud data is sparse. As a result, obstacles that are close to the road surface, low in height, and small in size cannot be detected, and the unmanned vehicle cannot avoid them.
Disclosure of Invention
The invention provides an obstacle detection method, an obstacle detection device, electronic equipment and a storage medium, and aims to achieve the technical effects of comprehensiveness and convenience in detection of obstacles on a road.
In a first aspect, an embodiment of the present invention provides an obstacle detection method, which is applied to an unmanned vehicle, and includes:
carrying out segmentation processing on the received image to be processed based on an image semantic segmentation algorithm to obtain an image to be detected;
determining at least one two-dimensional target detection frame in the image to be detected and information of positions to be converted of the two-dimensional target detection frames;
obtaining three-dimensional detection frames corresponding to the two-dimensional target detection frames and target position information of the three-dimensional detection frames according to the position information to be converted of the two-dimensional target detection frames and the transformation matrix;
based on each target position information, a target obstacle is determined.
In a second aspect, an embodiment of the present invention further provides an obstacle detection apparatus, where the apparatus is configured on an unmanned vehicle, and the apparatus includes:
the to-be-detected image determining module, configured to segment the received to-be-processed image based on an image semantic segmentation algorithm to obtain the to-be-detected image;
the to-be-converted position information determining module, configured to determine at least one two-dimensional target detection frame in the to-be-detected image and the to-be-converted position information of each two-dimensional target detection frame;
the target position information determining module, configured to obtain the three-dimensional detection frame corresponding to each two-dimensional target detection frame and the target position information of each three-dimensional detection frame according to the to-be-converted position information of each two-dimensional target detection frame and the transformation matrix;
and the obstacle determining module is used for determining the target obstacle based on the target position information.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the obstacle detection method according to any one of the embodiments of the present invention.
In a fourth aspect, embodiments of the present invention further provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform the obstacle detection method according to any one of the embodiments of the present invention.
According to the technical solution of the embodiment of the invention, a received image to be processed is segmented by a semantic segmentation algorithm to obtain an image to be detected in which the road area and non-road areas are distinguished, and a two-dimensional target detection frame located in the road area, which may correspond to a low obstacle, is then determined. To determine the specific position of the low obstacle, the to-be-converted position information of the two-dimensional target detection frame is processed to obtain the corresponding three-dimensional detection frame. This solves the technical problem in the prior art that low obstacles cannot be effectively detected and therefore cannot be effectively avoided: low obstacles in the road are detected, path planning is performed according to the detected obstacles, effective obstacle avoidance is achieved, and the driving safety of the unmanned vehicle is improved.
Drawings
In order to more clearly illustrate the technical solutions of the exemplary embodiments of the present invention, a brief description of the drawings used in describing the embodiments is given below. It should be clear that the described drawings show only some of the embodiments of the invention, not all of them, and that a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a method for detecting an obstacle according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of an obstacle detection method according to a second embodiment of the present invention;
fig. 3 is a schematic flow chart of a method for detecting an obstacle according to a third embodiment of the present invention;
fig. 4 is a schematic coordinate diagram according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of an obstacle detection device according to a fourth embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a schematic flow chart of an obstacle detection method according to an embodiment of the present invention, where the present embodiment is applicable to a situation where an obstacle on an unmanned vehicle driving road is detected, and the method may be executed by an obstacle detection apparatus, and the apparatus may be implemented in the form of software and/or hardware, where the hardware may be an electronic device, and the electronic device may be a mobile terminal, a PC terminal, or the like.
As shown in fig. 1, the method includes:
s110, carrying out segmentation processing on the received image to be processed based on an image semantic segmentation algorithm to obtain the image to be detected.
Wherein the camera device may be provided on an unmanned vehicle. An image directly captured by the imaging device may be used as the image to be processed. For example, during the driving of the unmanned vehicle, the driving environment information may be recorded based on the camera device, and each video frame recorded may be used as an image to be processed. The recorded video frames mainly include driving road information. Correspondingly, the image to be processed may include pedestrians, vehicles, roads, sky, buildings, and the like. The image semantic segmentation algorithm can be understood as an algorithm for classifying and marking each pixel point in an image. And taking the image to be processed which is processed based on the image semantic segmentation algorithm as the image to be detected. That is to say, each pixel point in the image to be detected is classified and marked. The size of the image to be detected is the same as that of the image to be processed.
Specifically, during the driving of the unmanned vehicle, the driving road information may be captured based on the image capturing device, and the captured driving road information may be used as the image to be processed. Meanwhile, each pixel point in the image to be processed can be processed based on the image semantic segmentation algorithm, and the image to be detected for classifying and marking each pixel point can be obtained.
In this embodiment, the segmenting the received image to be processed based on the image semantic segmentation algorithm to obtain the image to be detected includes: classifying and marking each pixel point in the image to be processed based on the image semantic segmentation algorithm to obtain the image to be detected; and the to-be-detected image comprises the category mark of each pixel point.
Each element in the unmanned vehicle driving road or the environment to which the road belongs can be classified in advance, and optionally, the elements include sky, roads, buildings, pedestrians and the like. The category of each element may be set in advance. The image semantic segmentation algorithm can determine the category labels corresponding to all pixel points in the image to be processed according to the element categories marked in advance. Namely, based on the image semantic segmentation algorithm, category marking can be performed on each pixel point in the image to be processed, and the image after category marking is used as the image to be detected.
In this embodiment, the image to be processed is segmented based on the image semantic segmentation algorithm, so that convenience in determining the road region and the non-road region can be improved.
S120, determining at least one two-dimensional target detection frame in the image to be detected and position information to be converted of each two-dimensional target detection frame.
The two-dimensional target detection frame may be a detection frame located in the road area of the image to be detected. Connected-domain clustering may be performed on the image to be detected to obtain a plurality of connected domains, and the minimum rectangular frame surrounding each connected domain may be used as a two-dimensional detection frame; that is, the image to be detected includes a plurality of two-dimensional detection frames. A two-dimensional detection frame may be located in the road area or in a non-road area, and a two-dimensional detection frame located in the road area is used as a two-dimensional target detection frame. The two-dimensional target detection frame may correspond to an obstacle in the road area. In order to achieve effective obstacle avoidance while the unmanned vehicle is driving, the three-dimensional detection frame corresponding to the two-dimensional target detection frame and its coordinate information may be further determined. The coordinate information of the two-dimensional target detection frame is used as the position information to be converted.
Specifically, a coordinate system may be established in advance with the top left vertex of the image to be processed as the origin. According to the road area coordinates in the image to be detected and the detection frame coordinates of the two-dimensional detection frame, the two-dimensional detection frame located in the road area can be determined and used as the two-dimensional target detection frame, and meanwhile, the coordinate information of the two-dimensional target detection frame can be used as the position information to be converted.
In this embodiment, the determining at least one two-dimensional target detection frame in the image to be detected and the information of the position to be converted of each two-dimensional target detection frame includes: determining a road region and a non-road region in the image to be detected based on a preset category mapping relation to obtain a binary image; the category mapping relationship comprises a corresponding relationship between a category mark of a pixel point and a pixel value; and denoising and connected domain processing the binary image to obtain at least one two-dimensional target detection frame and position information to be converted of the two-dimensional target detection frame.
Optionally, when the category label of the pixel point is 1, the corresponding binarization pixel value may be 255, and when the category label of the pixel point is not 1, the corresponding binarization pixel value may be 0.
In this embodiment, the category mark may be a road mark, a sky mark, or another mark; the road mark may be represented by 0, and a non-road mark by a non-zero value, for example 1. The denoising process may be understood as removing noise points from the binarized image. A connected domain may be understood as a closed region obtained by clustering connected pixels in the binarized image. A bounding frame corresponding to each connected domain is determined and may be used as a detection frame. A connected domain may be located in the road area or in a non-road area; whether a detection frame lies in the road area is determined from the coordinates of the bounding frame and the coordinates of the road area, and a detection frame located in the road area is used as a two-dimensional target detection frame. Accordingly, the coordinate information of the two-dimensional target detection frame may be used as the position information to be converted.
Specifically, after the image to be detected is obtained, the pixel value corresponding to the category mark of each pixel point may be determined according to a preset category mapping relationship, yielding a binarized image. The category mapping mainly maps pixel points located in the road area to black and pixel points located in non-road areas to white, producing a black-and-white image in which the road area and non-road areas are distinguished. In order to further determine whether the road area includes a low obstacle, the binarized image may first be denoised so that the road area is cleaner. After denoising, the connected domains in the binarized image may be clustered to obtain a plurality of connected domains. For each connected domain, the maximum and minimum horizontal and vertical coordinates of the points on its edge line are determined, and an axis-aligned bounding frame is constructed from these extremes; a road-area bounding frame can likewise be constructed from the extremes of the points on the edge line of the road area. The bounding frames located in the road area are determined from the road-area bounding frame and the connected-domain bounding frames, and each bounding frame located in the road area is used as a two-dimensional target detection frame. The center point of the two-dimensional target detection frame and the length and width of the detection frame may be used as its position information to be converted.
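The category-mapping step described above can be sketched as follows (an illustrative sketch: the road = 0 label convention follows this embodiment, but the function name and toy label map are our assumptions, not from the patent):

```python
import numpy as np

# Assumed label convention per this embodiment: 0 = road, non-zero = non-road.
# Road pixels are mapped to black (0), non-road pixels to white (255).
ROAD_LABEL = 0

def binarize(label_map: np.ndarray) -> np.ndarray:
    """Map a per-pixel class-label map to a black-and-white (binarized) image."""
    return np.where(label_map == ROAD_LABEL, 0, 255).astype(np.uint8)

# toy label map: 0 = road, 1 and 2 = non-road categories
labels = np.array([[0, 0, 1],
                   [0, 2, 1]])
binary = binarize(labels)
```

The binarized image has the same size as the label map, with the road area rendered black and everything else white.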
In this embodiment, the obtaining at least one two-dimensional target detection frame and position information to be converted of the two-dimensional target detection frame by denoising and connected domain processing the binarized image includes: denoising the binarized image to obtain a target binarized image; clustering each connected domain in the target binary image to obtain at least one connected domain to be processed; determining a two-dimensional detection frame of each connected domain to be processed and an area detection frame of a road area in the target binary image; determining a two-dimensional target detection frame and corresponding position information to be converted according to the area detection frame and each two-dimensional detection frame; and the position information to be converted comprises coordinate information of each vertex in the two-dimensional detection frame.
Here, the opening operation (morphological opening) may be understood as the denoising process, and the binarized image after the opening operation is used as the target binarized image. The number of connected domains to be processed may be one or more. The two-dimensional detection frame corresponding to each connected domain to be processed may be a bounding frame containing that connected domain. The coordinate information of each vertex of the two-dimensional detection frame may form part of the position information to be converted.
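The denoising step described above might be sketched as a morphological opening, i.e. erosion followed by dilation (the 3x3 structuring element and all names here are our assumptions; the patent does not specify the kernel):

```python
import numpy as np

def _shift_combine(img, combine, fill):
    # combine the 3x3 neighborhood of every pixel via shifted views
    h, w = img.shape
    padded = np.pad(img, 1, constant_values=fill)
    out = np.full((h, w), fill, dtype=bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out = combine(out, padded[dy:dy + h, dx:dx + w])
    return out

def erode(img):
    # a pixel survives only if its entire 3x3 neighborhood is foreground
    return _shift_combine(img.astype(bool), np.logical_and, True)

def dilate(img):
    # a pixel turns on if any pixel in its 3x3 neighborhood is foreground
    return _shift_combine(img.astype(bool), np.logical_or, False)

def denoise(binary):
    """Opening: removes isolated noise pixels while keeping larger regions."""
    return dilate(erode(binary))

img = np.zeros((5, 5), dtype=bool)
img[1:4, 1:4] = True   # a genuine connected region (possible obstacle)
img[0, 4] = True       # an isolated noise pixel
clean = denoise(img)
```

The isolated pixel is removed while the 3x3 region survives, which is the effect the denoising step relies on before connected-domain clustering.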
In this embodiment, determining the two-dimensional detection frame and the position information to be converted corresponding to each connected domain to be processed may be: and determining the pixel coordinate of each pixel point in each connected domain to be processed, and determining a two-dimensional detection frame corresponding to each connected domain to be processed and the position information to be converted of each two-dimensional detection frame according to the maximum value and the minimum value of the pixel coordinate.
It can be understood that the UV coordinate system is established with the upper left vertex of the image to be detected as the origin. The uv value of each point on the edge line of the connected domain can be determined, four vertexes can be determined based on the maximum value and the minimum value of uv, and the connecting line of the four vertexes is used as a two-dimensional detection frame corresponding to a connected domain to be processed, that is, the two-dimensional detection frame is a surrounding frame surrounding the connected domain. The coordinate information of the four vertices may be used as the position information to be converted, or the center point, the length, and the width of the two-dimensional detection frame may be used as the information in the position information to be converted.
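The construction of a two-dimensional detection frame from a connected domain's uv extremes can be sketched as follows (function names are illustrative, not from the patent; the UV origin is assumed at the image's top-left vertex as described above):

```python
# Build an axis-aligned bounding frame from the (u, v) coordinates of one
# connected domain (e.g. the points of its edge line).
def detection_frame(points):
    """Returns (u_min, v_min, u_max, v_max) for the given (u, v) points."""
    us = [u for u, _ in points]
    vs = [v for _, v in points]
    return min(us), min(vs), max(us), max(vs)

def center_and_size(frame):
    """Alternative position information: center point, length, and width."""
    u_min, v_min, u_max, v_max = frame
    return ((u_min + u_max) / 2, (v_min + v_max) / 2,
            u_max - u_min, v_max - v_min)
```

Either the four vertex coordinates or the center/length/width form can serve as the position information to be converted.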
It should be noted that, if a connected domain exists in the road area, it is indicated that the connected domain may correspond to an obstacle, and therefore, three-dimensional coordinate information corresponding to the connected domain may be determined and output, so as to determine a specific position of the obstacle according to the three-dimensional coordinate information, thereby achieving an obstacle avoidance effect.
In this embodiment, a detection frame surrounding a road area may be used as the area detection frame, and a detection frame including a connected component may be used as the two-dimensional detection frame. Whether each two-dimensional detection frame is located in the area detection frame or not can be determined, if yes, the two-dimensional detection frame is indicated to be possibly an obstacle on a driving road, otherwise, the two-dimensional detection frame is indicated not to be located in the road area, and the obstacle can be ignored.
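The road-area check described above can be sketched as follows (boxes as (u_min, v_min, u_max, v_max) tuples; names are our own, illustrative choices):

```python
# Keep only the two-dimensional detection frames that lie within the
# road-area detection frame; frames outside it are ignored.
def inside_road_area(frame, road_frame):
    return (frame[0] >= road_frame[0] and frame[1] >= road_frame[1]
            and frame[2] <= road_frame[2] and frame[3] <= road_frame[3])

def target_detection_frames(frames, road_frame):
    """Frames inside the road area are possible obstacles on the driving road."""
    return [f for f in frames if inside_road_area(f, road_frame)]
```

A frame that crosses the road boundary is treated here as outside; the patent does not specify how partial overlaps are handled, so that choice is an assumption.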
In this embodiment, the determining the two-dimensional detection frame of each connected domain to be processed includes: determining pixel point coordinates of edge lines of each connected domain to be processed; determining at least one vertex to be processed according to the maximum value and the minimum value of the pixel point coordinates; and taking the vertex coordinates of the at least one vertex to be processed as position information to be converted, and taking an area formed by the connecting lines of the at least one vertex to be processed as a two-dimensional detection frame.
The advantage of determining the two-dimensional target detection frame in this way is that low obstacles in the road area can be identified promptly, thereby achieving the technical effect of obstacle avoidance.
S130, obtaining three-dimensional detection frames corresponding to the two-dimensional target detection frames and target position information of the three-dimensional detection frames according to the position information to be converted of the two-dimensional target detection frames and the transformation matrix.
The unmanned vehicle runs in a three-dimensional scene, so that after the two-dimensional detection frame is determined, the two-dimensional detection frame can be converted into the three-dimensional detection frame to determine the specific position of the low obstacle in the actual scene, and further the obstacle avoidance effect is achieved. The transformation matrix may be understood as a transformation formula in which two-dimensional coordinates are converted into three-dimensional coordinates. The target position information may be coordinate information of a three-dimensional coordinate system of the three-dimensional detection frame. Each two-dimensional detection frame has a corresponding three-dimensional detection frame.
In this embodiment, the position information to be converted of each two-dimensional target detection frame may be processed based on the transformation matrix, so as to obtain a three-dimensional detection frame corresponding to the two-dimensional target detection frame and target position information of the three-dimensional detection frame.
Optionally, the obtaining, according to the position information to be converted of each two-dimensional target detection frame and the transformation matrix, a three-dimensional detection frame corresponding to each two-dimensional target detection frame and target position information of each three-dimensional detection frame includes: aiming at each vertex to be processed of each two-dimensional target detection frame, determining target coordinates of the vertex to be processed according to the position information to be converted of the vertex to be processed and the transformation matrix; the transformation matrix is determined based on an internal reference calibration matrix and an external reference calibration matrix of the camera device, the vertical distance between the camera device and a horizontal plane and the distance of a coordinate system; the camera device is used for shooting to obtain the image to be processed; and determining the corresponding three-dimensional detection frame and the target position information of the three-dimensional detection frame according to each target coordinate of each two-dimensional target detection frame.
Wherein the transformation matrix is determined based on an internal reference calibration matrix and an external reference calibration matrix of a camera of the camera corresponding to the image to be processed, a vertical distance between the camera and a horizontal plane, and a coordinate system distance. The coordinate system distance can be the distance between the camera coordinate system and the world coordinate system, and the distance value can be a fixed value or a variable value and can be determined in real time.
It should be noted that the installation position of the camera device on the unmanned vehicle is fixed, and accordingly the vertical distance between the camera device and the horizontal plane is also fixed. The internal reference calibration matrix and the external reference calibration matrix are predetermined and may optionally be factory parameters of the camera device.
It should be noted that, for clarity, the determination of the three-dimensional detection frame corresponding to a two-dimensional target detection frame is described by taking the processing of one vertex of one detection frame as an example; the same operation is repeated for the remaining vertices.
Illustratively, consider a vertex P_i = (u, v) of a known two-dimensional target detection frame, for which the coordinate P_w = (x_w, y_w, z_w) of the point in the three-dimensional coordinate system is to be determined, where z_w is known as the camera height coordinate in the three-dimensional coordinate system, i.e., the vertical distance of the camera device from the horizontal plane. P_i and P_w satisfy the following equation:
Z_c · [u, v, 1]^T = M_I · M_E · [x_w, y_w, z_w, 1]^T
where M_I is the camera internal reference calibration matrix (known), M_E is the external reference calibration matrix between the camera and the three-dimensional coordinate system (known), and Z_c is the distance between the camera coordinate system and the three-dimensional coordinate system. Substituting the known parameters into the above equation and solving the resulting linear system for x_w, y_w, and Z_c yields the world coordinates of the vertex.
Through the above calculation, the vertex coordinates corresponding to each low-obstacle vertex in the three-dimensional coordinate system, that is, the target coordinates, can be obtained.
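The calculation above can be sketched numerically as follows (a minimal illustration: the calibration matrices M_I and M_E, the world point, and the function name `pixel_to_world` are made-up toy values and assumptions, not values from the patent):

```python
import numpy as np

def pixel_to_world(u, v, P, z_w):
    """Solve Z_c * [u, v, 1]^T = P @ [x_w, y_w, z_w, 1]^T for x_w, y_w, Z_c,
    where P = M_I @ M_E is the combined 3x4 projection matrix and z_w is known
    (the camera's vertical distance from the horizontal plane)."""
    A = np.array([[P[0, 0], P[0, 1], -u],
                  [P[1, 0], P[1, 1], -v],
                  [P[2, 0], P[2, 1], -1.0]])
    b = -np.array([P[0, 2] * z_w + P[0, 3],
                   P[1, 2] * z_w + P[1, 3],
                   P[2, 2] * z_w + P[2, 3]])
    x_w, y_w, Z_c = np.linalg.solve(A, b)
    return x_w, y_w, Z_c

# toy calibration: made-up intrinsics M_I and extrinsics M_E
M_I = np.array([[500., 0., 320.],
                [0., 500., 240.],
                [0., 0., 1.]])
M_E = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [2.0]])])
P = M_I @ M_E

# round-trip check: project a known world point, then recover it from (u, v)
x, y, z = 1.0, 0.5, -1.5
uvw = P @ np.array([x, y, z, 1.0])
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
xw, yw, Zc = pixel_to_world(u, v, P, z)
```

With z_w fixed, the three equations (two pixel coordinates plus the homogeneous scale) determine the three unknowns x_w, y_w, and Z_c, which is why a single camera suffices for points at a known height.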
And S140, determining the target obstacle based on the target position information.
In this embodiment, the target obstacle may be determined based on each target position information and the history target position information of the history to-be-processed image.
The image to be processed may be the first frame captured by the camera device, or the Nth frame. If it is the Nth frame, the frames preceding it may be used as historical images to be processed. If it is the first frame, the target position information may be recorded and output, so that path planning is performed according to it and the technical effect of obstacle avoidance is achieved. If it is not the first frame, after the target position information is determined, whether an obstacle needs to be avoided may be determined in combination with the historical target position information associated with the historical images to be processed.
Specifically, after the target position information is determined, the target position information can be used as a planning parameter in path planning, and the path planning is performed based on the planning parameter, so that the technical effect of obstacle avoidance is achieved.
According to the technical solution of the embodiment of the invention, a received image to be processed is segmented by a semantic segmentation algorithm to obtain an image to be detected in which the road area and non-road areas are distinguished, and a two-dimensional target detection frame located in the road area, which may correspond to a low obstacle, is then determined. To determine the specific position of the low obstacle, the to-be-converted position information of the two-dimensional target detection frame is processed to obtain the corresponding three-dimensional detection frame. This solves the technical problem in the prior art that low obstacles cannot be effectively detected and therefore cannot be effectively avoided: low obstacles in the road are detected, path planning is performed according to the detected obstacles, effective obstacle avoidance is achieved, and the driving safety of the unmanned vehicle is improved.
Example two
Fig. 2 is a schematic flow chart of an obstacle detection method according to a second embodiment of the present invention, and based on the foregoing embodiments, the determination of a target obstacle according to target position information and historical target position information associated with a historical image to be processed may be further refined. The specific implementation mode can be seen in the detailed description of the technical scheme. The technical terms that are the same as or corresponding to the above embodiments are not repeated herein.
As shown in fig. 2, the method includes:
s210, carrying out segmentation processing on the received image to be processed based on an image semantic segmentation algorithm to obtain the image to be detected.
S220, determining at least one two-dimensional target detection frame in the image to be detected and position information to be converted of each two-dimensional target detection frame.
And S230, obtaining three-dimensional detection frames corresponding to the two-dimensional target detection frames and target position information of the three-dimensional detection frames according to the position information to be converted of the two-dimensional target detection frames and the transformation matrix.
And S240, determining the target barrier according to the target position information and the historical target position information of the historical to-be-processed image.
In this embodiment, if the historical target position information associated with the historical to-be-processed image does not include the target position information, the target position information is recorded in a detection frame list; when the detection frame list includes the target position information a preset number of times, the object corresponding to the target position information is determined to be a target obstacle.
It is understood that if the target position information is not included in the detection frame list, the target position information may be added to the detection frame list and output, so that the algorithm avoids the obstacle according to the updated target position information. Alternatively, if the target position information has been detected a preset number of times, it may be determined that the target position information corresponds to the target obstacle.
In the actual application process, if the historical target position information associated with the historical to-be-processed image includes the target position information, the historical target position information in the detection frame list is updated based on the target position information; when the detection frame list is detected to include a preset number of instances of the target position information, the target position information is determined to correspond to the target obstacle.
It can be understood that if the historical target position information associated with the historical to-be-processed image includes the target position information, the number of times that the target position information appears continuously can be determined, and if the number of times that the target position information appears continuously is greater than a preset number threshold, it is indicated that the target position information corresponds to the target obstacle.
In practical application, the target position information may fail to appear in multiple consecutive frames. At this time, it may be determined that the obstacle corresponding to the target position information has entered a blind area of the camera device, or that the unmanned vehicle has already avoided the obstacle. Alternatively, whether the obstacle has been avoided may be determined according to the actual traveling position of the unmanned vehicle and the target position information.
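The multi-frame confirmation logic described above can be sketched as a simple consecutive-detection counter. This is an illustrative sketch, not the patented implementation: the position-matching tolerance and the confirmation threshold are assumed parameters.

```python
def update_confirmation(counts, detections, match_threshold=0.5, confirm_frames=3):
    """Update per-position consecutive-detection counts and return confirmed positions.

    counts: dict mapping an (x, y) position tuple to its consecutive-frame count.
    detections: list of (x, y) positions detected in the current frame.
    A detection "matches" a tracked position when each coordinate differs by at
    most match_threshold (an assumed distance test).
    """
    new_counts = {}
    for det in detections:
        matched = None
        for pos in counts:
            if (abs(pos[0] - det[0]) <= match_threshold
                    and abs(pos[1] - det[1]) <= match_threshold):
                matched = pos
                break
        if matched is not None:
            new_counts[det] = counts[matched] + 1  # seen again: extend the streak
        else:
            new_counts[det] = 1                    # first sighting: pending obstacle
    # positions whose streak reaches confirm_frames are treated as target obstacles;
    # positions absent from this frame are dropped, so their streak resets
    confirmed = [pos for pos, c in new_counts.items() if c >= confirm_frames]
    return new_counts, confirmed
```

A position confirmed after three consecutive frames drops out of tracking as soon as it stops appearing, matching the blind-area/avoided case described above.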
In other words, if the historical target position information associated with the historical to-be-processed image does not include the target position information, a detection frame list entry is created based on the target position information and marked as a pending obstacle; when the detection frame list includes the target position information a preset number of times, the object corresponding to the target position information is determined to be the target obstacle.
In the technical scheme of the embodiment of the invention, the received image to be processed is segmented by a semantic segmentation algorithm to obtain the image to be detected, a road area and a non-road area in the image to be detected are distinguished, and a two-dimensional target detection frame located in the road area is further determined, where the two-dimensional target detection frame may correspond to a low obstacle. In order to determine the specific position of the low obstacle, the position information to be converted of the two-dimensional target detection frame is processed to obtain the three-dimensional detection frame corresponding to the two-dimensional target detection frame. This solves the technical problem in the prior art that low obstacles cannot be effectively detected and effective obstacle avoidance therefore cannot be realized; it realizes the detection of low obstacles in the road and enables path planning according to the detected low obstacles, thereby achieving effective obstacle avoidance and also improving the driving safety of the unmanned vehicle.
EXAMPLE III
As an alternative embodiment of the foregoing embodiment, fig. 3 is a schematic flow chart of an obstacle detection method according to a third embodiment of the present invention. The technical scheme can be divided into three processes, wherein the first process is an image semantic segmentation process, the second process is a detection frame determination process, namely a detection process, and the third process is a tracking process, namely a process of determining whether a detection frame is an obstacle.
The first process, image semantic segmentation, may be as follows: various sub-scenes that may appear in the target scene may be preset; for example, the target scene may include a road region where unmanned vehicles travel and a non-road region. The road area and the non-road area may be classified and labeled in advance. Optionally, the to-be-processed image shot by the camera device may be segmented based on an image semantic segmentation algorithm to obtain a category mark for each pixel point in the to-be-processed image.
The second process is to determine a detection frame in the image to be processed. An image coordinate system may be established first, for example, with the top left corner of the image to be processed as the origin, the horizontal direction as the U-axis and the vertical direction as the V-axis, with positive directions as shown in fig. 4. After the to-be-processed image with a category mark on each pixel point is obtained, the category to which each pixel point belongs can be determined according to its category mark, and the actual semantics of each pixel point can be determined through the category mapping relationship. Binarization classifies the semantic segmentation result into two pixel values, 0 and 255. It should be noted that the category mapping relationship may be a correspondence between category identifiers and pixel values, and that the size of the binarized image is consistent with the size of the image to be processed. For example, in the category mapping relationship, if category identifier 1 represents a road, its corresponding pixel value may be 255, while pixels whose category identifier is not 1 may be assigned the value 0. Based on the category mapping relationship, the pixel value of each pixel point in the image to be processed can be processed to obtain the binary image. At this point, road and non-road semantics are already distinguished in the binary image.
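The binarization step under the category mapping described above can be sketched in a few lines of NumPy. The road category identifier (1) follows the example in the text; the array shapes are illustrative:

```python
import numpy as np

ROAD_CLASS_ID = 1  # category identifier representing road semantics (per the example above)

def binarize_segmentation(label_image):
    """Map a per-pixel class-label image to a binary image of the same size:
    pixels whose category identifier is ROAD_CLASS_ID become 255, all others 0."""
    label_image = np.asarray(label_image)
    return np.where(label_image == ROAD_CLASS_ID, 255, 0).astype(np.uint8)
```

The output has the same dimensions as the input label image, as the text requires, with road and non-road semantics distinguished by the two pixel values 255 and 0.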
It should be noted that the image semantic segmentation algorithm has limited segmentation accuracy, which can produce misclassified points; for example, pixels with road semantics may be classified into other categories, resulting in sporadic misclassified pixels inside the road region of the binarized image. At this point, an opening operation can be applied to the binary image; the opening operation eliminates sporadic noise points in the binary image without eliminating correct detection results, improving the accuracy of determining the road area.
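The opening operation (erosion followed by dilation) can be sketched in plain NumPy with an assumed 3x3 structuring element; a production system would typically use an image-processing library's morphology routines instead:

```python
import numpy as np

def _windows_3x3(img):
    """Stack the nine 3x3-shifted views of img (zero-padded at the border)."""
    padded = np.pad(img, 1, constant_values=0)
    h, w = img.shape
    return np.stack([padded[i:i + h, j:j + w] for i in range(3) for j in range(3)])

def binary_opening(img):
    """Morphological opening: erosion (per-pixel window minimum) followed by
    dilation (per-pixel window maximum).  Removes isolated noise pixels while
    leaving solid regions of at least 3x3 intact."""
    eroded = _windows_3x3(np.asarray(img)).min(axis=0)
    return _windows_3x3(eroded).max(axis=0)
```

An isolated misclassified pixel is erased by the erosion and never restored by the dilation, while a solid road block survives both steps.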
In this embodiment, obstacle avoidance is mainly performed for low obstacles inside the road area. After the opening operation, the pixel value of road semantics is still 255 and the pixel value of non-road semantics is 0. Connected domain clustering can then be performed on the binary image to obtain a plurality of connected domains. For example, if N connected domains are clustered, the n connected domains belonging to the road region may be determined and retained, and the remaining N-n connected domains not belonging to the road may be ignored. Whether a connected domain belongs to the road area can be determined from the coordinates of each connected domain and the coordinates of the road area.
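Connected domain clustering on the binary image can be sketched as a breadth-first flood fill over 4-connected foreground pixels. The connectivity choice and the plain-Python implementation are illustrative assumptions; libraries offer optimized equivalents:

```python
from collections import deque

def connected_domains(binary, foreground=255):
    """Cluster 4-connected foreground pixels of a 2-D grid into connected
    domains; returns a list of pixel-coordinate lists, one list per domain."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    domains = []
    for r in range(h):
        for c in range(w):
            if binary[r][c] == foreground and not seen[r][c]:
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:                      # BFS flood fill of one domain
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] == foreground
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                domains.append(pixels)
    return domains
```

Each returned pixel list corresponds to one candidate connected domain, which can then be tested against the road-area coordinates as described above.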
Optionally, to determine whether a connected domain belongs to the road region, the maximum and minimum values of the connected domain in the U direction and the V direction may be calculated respectively, and a detection frame containing the connected domain may be determined from these maximum and minimum values. If the road area contains the detection frame, the connected domain is in the road area and can be retained. Meanwhile, each detection frame located in the road area may be taken as a two-dimensional target detection frame.
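The detection frame of a connected domain is the axis-aligned box spanned by its extreme U and V coordinates, and the road-membership test checks that the road's detection frame fully contains it. A minimal sketch, with full containment as the assumed criterion:

```python
def bounding_box(pixels):
    """Detection frame of a connected domain as (u_min, v_min, u_max, v_max),
    computed from the extreme coordinates; pixels is a list of (u, v) pairs."""
    us = [u for u, _ in pixels]
    vs = [v for _, v in pixels]
    return (min(us), min(vs), max(us), max(vs))

def inside(road_box, box):
    """True when the road-area detection frame fully contains box."""
    ru0, rv0, ru1, rv1 = road_box
    u0, v0, u1, v1 = box
    return ru0 <= u0 and rv0 <= v0 and u1 <= ru1 and v1 <= rv1
```

Detection frames passing the containment test become the two-dimensional target detection frames; their four vertex coordinates form the position information to be converted.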
The third process is to convert the two-dimensional target detection frame into a three-dimensional detection frame and further determine the obstacle. The environment in which the unmanned vehicle operates is three-dimensional, and correspondingly, obstacles are three-dimensional; therefore, after the two-dimensional target detection frame is obtained, it can be processed to accurately determine the position of the obstacle, yielding the three-dimensional detection frame corresponding to the two-dimensional target detection frame. That is, obstacle information can be output only after the two-dimensional detection frame is converted into a three-dimensional detection frame. The technical scheme mainly targets low obstacles in the road area, whose coordinate in the height dimension is known, namely determined by the installation height of the camera above the ground. With this dimension known, the low-obstacle detection frame in the two-dimensional coordinate system can be converted into a three-dimensional space coordinate system; optionally, the three-dimensional space coordinate system is the unmanned vehicle coordinate system.
In this embodiment, the step of converting the two-dimensional target detection frame into a three-dimensional detection frame in a three-dimensional coordinate system may be described by taking one vertex as an example. Assume that the two-dimensional coordinates of one vertex of the two-dimensional target detection frame A are denoted P_i = (u, v), and that the same point in three-dimensional coordinates is denoted P_w = (x_w, y_w, z_w), where z_w is the camera height coordinate in the three-dimensional coordinate system. The three-dimensional coordinates of the point can be determined using the following function:
Z_c · [u, v, 1]^T = M_I · M_E · [x_w, y_w, z_w, 1]^T    (1)
wherein M_I is the camera internal reference calibration matrix, M_E is the external reference calibration matrix between the camera and the three-dimensional coordinate system, and Z_c is the distance of the point relative to the camera coordinate system, i.e. the scale factor of the projection.
By expanding formula (1) with z_w known and eliminating the scale factor Z_c, the following calculation formula can be obtained:
Let M = M_I · M_E be the combined 3 × 4 projection matrix with elements m_ij. Then:

(m_11 - u·m_31)·x_w + (m_12 - u·m_32)·y_w = (u·m_33 - m_13)·z_w + u·m_34 - m_14
(m_21 - v·m_31)·x_w + (m_22 - v·m_32)·y_w = (v·m_33 - m_23)·z_w + v·m_34 - m_24    (2)
Based on the above formula, x_w and y_w in the three-dimensional coordinates can be obtained. That is, through the above calculation, the vertices of the two-dimensional detection frame of each low obstacle can be converted; in other words, by performing the above inverse perspective transformation, the vertex coordinates of each low obstacle in the three-dimensional coordinate system can be acquired.
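The per-vertex conversion of formula (1) can be sketched as follows, eliminating the scale factor Z_c and solving the resulting 2 × 2 linear system for (x_w, y_w) given the known height z_w. The concrete calibration matrices in the usage note are illustrative:

```python
import numpy as np

def pixel_to_ground(u, v, M_I, M_E, z_w):
    """Recover (x_w, y_w) for pixel (u, v) whose height coordinate z_w is known,
    from Z_c * [u, v, 1]^T = M_I @ M_E @ [x_w, y_w, z_w, 1]^T by eliminating Z_c.

    M_I: 3x3 camera intrinsic calibration matrix.
    M_E: 3x4 extrinsic calibration matrix between camera and world frame.
    """
    M = M_I @ M_E  # combined 3x4 projection matrix
    # Substituting the third row (Z_c) into the first two rows yields a 2x2 system.
    A = np.array([
        [M[0, 0] - u * M[2, 0], M[0, 1] - u * M[2, 1]],
        [M[1, 0] - v * M[2, 0], M[1, 1] - v * M[2, 1]],
    ])
    b = np.array([
        (u * M[2, 2] - M[0, 2]) * z_w + u * M[2, 3] - M[0, 3],
        (v * M[2, 2] - M[1, 2]) * z_w + v * M[2, 3] - M[1, 3],
    ])
    x_w, y_w = np.linalg.solve(A, b)
    return x_w, y_w
```

For example, with the hypothetical intrinsics M_I = [[100, 0, 50], [0, 100, 50], [0, 0, 1]] and an identity extrinsic M_E = [I | 0], the world point (1, 2, 5) projects to pixel (70, 90), and `pixel_to_ground(70, 90, M_I, M_E, 5.0)` recovers (1, 2).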
After the coordinate values of the low obstacle in the three-dimensional coordinate system are obtained, as the unmanned delivery vehicle travels, low-obstacle detection requires not only a single-frame detection result but also an analysis of the detection results of multiple consecutive frames, so as to output a more accurate and more stable low-obstacle frame.
In order to analyze the results of multiple consecutive detections, the low obstacle can be transformed into a local coordinate system. For example, a three-dimensional coordinate system can be established with the starting point of the driving path of the unmanned vehicle as the coordinate origin; the position of a static obstacle in this coordinate system does not change as the vehicle moves, and the conversion from the vehicle-body three-dimensional coordinate system to the local coordinate system is determined by a single transformation matrix.
If the historical detection frame list is empty, a historical detection frame list is established based on the three-dimensional detection frames of the current frame, and the information of the three-dimensional detection frames in the list is updated thereafter. If the position of a three-dimensional detection frame a in the current frame is close to that of a detection frame A in the historical detection frame list, the position of A is updated with a; if a detection frame in the historical detection frame list has been continuously updated for m frames, it is determined to be an obstacle. If no detection frame close to a is found in the historical detection frame list, the three-dimensional position information of a is added to the list as a new entry. If a detection frame in the historical detection frame list has no close match in the current frame, it can be marked, and removed from the list when this unmatched state persists for a preset number of frames. Through the above operations, the low-obstacle detection frame noise caused by semantic segmentation misdetection, including suddenly appearing noise frames and temporal discontinuity of detection frames, can be eliminated; that is, low obstacles in the road area can be detected, and the technical effect of effective obstacle avoidance according to the low obstacles can be achieved.
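The history-detection-frame-list update described above can be sketched as a small tracker class. The distance threshold, the confirmation count m, and the removal count are all assumed parameters, and the per-axis distance test stands in for whatever proximity criterion a real system would use:

```python
class HistoryFrameList:
    """Sketch of the historical detection frame list: entries are confirmed as
    obstacles after being updated for several consecutive frames, and removed
    after going unmatched for several consecutive frames."""

    def __init__(self, match_dist=0.5, confirm_after=3, drop_after=2):
        self.match_dist = match_dist
        self.confirm_after = confirm_after  # frames of updates before an entry is an obstacle
        self.drop_after = drop_after        # consecutive missed frames before removal
        self.entries = []  # each: {"pos": (x, y), "hits": int, "misses": int}

    def update(self, detections):
        """Match current-frame detections against the list; return confirmed obstacle positions."""
        for e in self.entries:
            e["matched"] = False
        for det in detections:
            best = None
            for e in self.entries:
                if (abs(e["pos"][0] - det[0]) <= self.match_dist
                        and abs(e["pos"][1] - det[1]) <= self.match_dist):
                    best = e
                    break
            if best is not None:  # close to an existing entry A: update A with a
                best["pos"], best["hits"], best["misses"] = det, best["hits"] + 1, 0
                best["matched"] = True
            else:                 # no close entry: add a as a new entry
                self.entries.append({"pos": det, "hits": 1, "misses": 0, "matched": True})
        for e in self.entries:    # unmatched entries accumulate misses
            if not e["matched"]:
                e["misses"] += 1
        self.entries = [e for e in self.entries if e["misses"] < self.drop_after]
        return [e["pos"] for e in self.entries if e["hits"] >= self.confirm_after]
```

A suddenly appearing noise frame never reaches the confirmation count, and an entry that stops matching is dropped after the removal count, which is exactly the two noise modes the text describes.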
According to the technical scheme of the embodiment of the invention, the semantic segmentation result of the two-dimensional image is used as input, the position of a low obstacle in the road area in three-dimensional space is determined, and that position can be output, thereby achieving the technical effect of effective obstacle avoidance according to the output obstacle position.
Example four
Fig. 5 is a schematic structural diagram of an obstacle detection apparatus according to a fourth embodiment of the present invention. As shown in fig. 5, the apparatus includes: an image to be detected determining module 410, a position information to be converted determining module 420, a target position information determining module 430, and an obstacle determining module 440.
The to-be-detected image determining module 410 is configured to perform segmentation processing on the received to-be-processed image based on an image semantic segmentation algorithm to obtain an to-be-detected image; a to-be-converted position information determining module 420, configured to determine at least one two-dimensional target detection frame in the to-be-detected image and to-be-converted position information of each two-dimensional target detection frame; a target position information determining module 430, configured to obtain, according to the to-be-converted position information of each two-dimensional target detection frame and the transformation matrix, a three-dimensional detection frame corresponding to each two-dimensional target detection frame and target position information of each three-dimensional detection frame; and an obstacle determining module 440, configured to determine a target obstacle based on the target position information.
In the technical scheme of the embodiment of the invention, the received image to be processed is segmented by a semantic segmentation algorithm to obtain the image to be detected, a road area and a non-road area in the image to be detected are distinguished, and a two-dimensional target detection frame located in the road area is further determined, where the two-dimensional target detection frame may correspond to a low obstacle. In order to determine the specific position of the low obstacle, the position information to be converted of the two-dimensional target detection frame is processed to obtain the three-dimensional detection frame corresponding to the two-dimensional target detection frame. This solves the technical problem in the prior art that low obstacles cannot be effectively detected and effective obstacle avoidance therefore cannot be realized; it realizes the detection of low obstacles in the road and enables path planning according to the detected low obstacles, thereby achieving effective obstacle avoidance and also improving the driving safety of the unmanned vehicle.
On the basis of the technical scheme, the to-be-detected image determining module is used for performing classification marking on each pixel point in the to-be-processed image based on the image semantic segmentation algorithm to obtain the to-be-detected image; and the to-be-detected image comprises the category mark of each pixel point.
On the basis of the technical scheme, the to-be-converted position information determining module is used for determining a road area and a non-road area in the to-be-detected image based on a preset category mapping relation to obtain a binary image; the category mapping relationship comprises a corresponding relationship between a category mark of a pixel point and a pixel value; and denoising and connected domain processing the binary image to obtain at least one two-dimensional target detection frame and position information to be converted of the two-dimensional target detection frame.
On the basis of the technical scheme, the to-be-converted position information determining module is used for denoising the binarized image to obtain a target binarized image; clustering each connected domain in the target binary image to obtain at least one connected domain to be processed; determining a two-dimensional detection frame of each connected domain to be processed and an area detection frame of a road area in the target binary image; determining a two-dimensional target detection frame and corresponding position information to be converted according to the area detection frame and each two-dimensional detection frame; and the position information to be converted comprises coordinate information of each vertex in the two-dimensional detection frame.
On the basis of the technical scheme, the to-be-converted position information determining module is used for determining the pixel point coordinates of the edge lines of the to-be-processed connected domains; determining at least one vertex to be processed according to the maximum value and the minimum value of the pixel point coordinates; and taking the vertex coordinates of the at least one vertex to be processed as position information to be converted, and taking an area formed by the connecting lines of the at least one vertex to be processed as a two-dimensional detection frame.
On the basis of the above technical solution, the target location information determining module is configured to:
aiming at each vertex to be processed of each two-dimensional target detection frame, determining target coordinates of the vertex to be processed according to the position information to be converted of the vertex to be processed and the transformation matrix; the transformation matrix is determined based on an internal reference calibration matrix and an external reference calibration matrix of the camera device, the vertical distance between the camera device and a horizontal plane and the distance of a coordinate system; the camera device is used for shooting to obtain the image to be processed; and determining the corresponding three-dimensional detection frame and the target position information of the three-dimensional detection frame according to each target coordinate of each two-dimensional target detection frame.
On the basis of the technical scheme, the target obstacle determining module is used for determining the target obstacle according to the target position information and the historical target position information of the historical image to be processed.
On the basis of the above technical solution, the target obstacle determining module is configured to:
if the historical target position information associated with the historical to-be-processed image does not include the target position information, recording the target position information in a detection frame list, and determining that the target position information corresponds to a target obstacle if the detection frame list includes a preset number of target position information.
On the basis of the above technical solution, the target obstacle determining module is configured to:
if the historical target position information associated with the historical to-be-processed image comprises the target position information, updating the historical target position information in a detection frame list based on the target position information;
and when the detection frame list is detected to comprise a preset number of target position information, determining that the target position information corresponds to the target obstacle.
The obstacle detection device provided by the embodiment of the invention can execute the obstacle detection method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
It should be noted that, the units and modules included in the apparatus are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the embodiment of the invention.
EXAMPLE five
Fig. 6 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention. FIG. 6 illustrates a block diagram of an exemplary electronic device 50 suitable for use in implementing embodiments of the present invention. The electronic device 50 shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 6, electronic device 50 is embodied in the form of a general purpose computing device. The components of the electronic device 50 may include, but are not limited to: one or more processors or processing units 501, a system memory 502, and a bus 503 that couples the various system components (including the system memory 502 and the processing unit 501).
Bus 503 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 50 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 50 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 502 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)504 and/or cache memory 505. The electronic device 50 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 506 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, commonly referred to as a "hard drive"). Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 503 by one or more data media interfaces. Memory 502 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 508 having a set (at least one) of program modules 507 may be stored, for instance, in memory 502, such program modules 507 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 507 generally perform the functions and/or methodologies of embodiments of the invention as described herein.
The electronic device 50 may also communicate with one or more external devices 509 (e.g., keyboard, pointing device, display 510, etc.), with one or more devices that enable a user to interact with the electronic device 50, and/or with any devices (e.g., network card, modem, etc.) that enable the electronic device 50 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 511. Also, the electronic device 50 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 512. As shown, the network adapter 512 communicates with the other modules of the electronic device 50 over the bus 503. It should be appreciated that although not shown in FIG. 6, other hardware and/or software modules may be used in conjunction with electronic device 50, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 501 executes various functional applications and data processing, for example, implementing the obstacle detection method provided by the embodiment of the present invention, by executing a program stored in the system memory 502.
EXAMPLE six
An embodiment of the present invention also provides a storage medium containing computer-executable instructions for performing an obstacle detection method when executed by a computer processor.
The method comprises the following steps:
when an image to be processed is received, performing segmentation processing on the image to be processed based on an image semantic segmentation algorithm to obtain an image to be detected;
determining at least one two-dimensional target detection frame in the image to be detected and position information to be converted of the at least one two-dimensional target detection frame;
obtaining three-dimensional detection frames corresponding to the two-dimensional target detection frames and target position information of the three-dimensional detection frames according to the position information to be converted of the two-dimensional target detection frames and the transformation matrix;
and determining a target obstacle based on the target position information and historical target position information associated with historical to-be-processed images.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for embodiments of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (12)

1. An obstacle detection method, applied to an unmanned vehicle, comprising:
segmenting a received to-be-processed image based on an image semantic segmentation algorithm to obtain a to-be-detected image;
determining at least one two-dimensional target detection frame in the to-be-detected image and to-be-converted position information of each two-dimensional target detection frame;
obtaining a three-dimensional detection frame corresponding to each two-dimensional target detection frame and target position information of each three-dimensional detection frame according to the to-be-converted position information of each two-dimensional target detection frame and a transformation matrix; and
determining a target obstacle based on each piece of the target position information.
2. The method according to claim 1, wherein segmenting the received to-be-processed image based on the image semantic segmentation algorithm to obtain the to-be-detected image comprises:
classifying and marking each pixel point in the to-be-processed image based on the image semantic segmentation algorithm to obtain the to-be-detected image,
wherein the to-be-detected image comprises a category mark of each pixel point.
3. The method according to claim 1, wherein determining the at least one two-dimensional target detection frame in the to-be-detected image and the to-be-converted position information of each two-dimensional target detection frame comprises:
determining a road region and a non-road region in the to-be-detected image based on a preset category mapping relationship to obtain a binarized image, wherein the category mapping relationship comprises a correspondence between the category mark of a pixel point and a pixel value; and
performing denoising and connected-domain processing on the binarized image to obtain the at least one two-dimensional target detection frame and the to-be-converted position information of each two-dimensional target detection frame.
4. The method according to claim 3, wherein performing the denoising and connected-domain processing on the binarized image to obtain the at least one two-dimensional target detection frame and the to-be-converted position information comprises:
denoising the binarized image to obtain a target binarized image;
clustering connected domains in the target binarized image to obtain at least one to-be-processed connected domain;
determining a two-dimensional detection frame of each to-be-processed connected domain and a region detection frame of the road region in the target binarized image; and
determining the two-dimensional target detection frame and the corresponding to-be-converted position information according to the region detection frame and each two-dimensional detection frame,
wherein the to-be-converted position information comprises coordinate information of each vertex of the two-dimensional detection frame.
5. The method according to claim 4, wherein determining the two-dimensional detection frame of each to-be-processed connected domain comprises:
determining pixel point coordinates of an edge line of each to-be-processed connected domain;
determining at least one to-be-processed vertex according to maximum and minimum values of the pixel point coordinates; and
taking vertex coordinates of the at least one to-be-processed vertex as the to-be-converted position information, and taking a region enclosed by lines connecting the at least one to-be-processed vertex as the two-dimensional detection frame.
6. The method according to claim 5, wherein obtaining the three-dimensional detection frame corresponding to each two-dimensional target detection frame and the target position information of each three-dimensional detection frame according to the to-be-converted position information of each two-dimensional target detection frame and the transformation matrix comprises:
for each to-be-processed vertex of each two-dimensional target detection frame, determining target coordinates of the to-be-processed vertex according to the to-be-converted position information of the to-be-processed vertex and the transformation matrix, wherein the transformation matrix is determined based on an intrinsic calibration matrix and an extrinsic calibration matrix of a camera device, a vertical distance between the camera device and a horizontal plane, and a coordinate-system distance, the camera device being used to capture the to-be-processed image; and
determining the corresponding three-dimensional detection frame and the target position information of the three-dimensional detection frame according to the target coordinates of each two-dimensional target detection frame.
7. The method according to claim 1, wherein determining the target obstacle based on each piece of the target position information comprises:
determining the target obstacle according to the target position information and historical target position information of a historical to-be-processed image.
8. The method according to claim 7, wherein determining the target obstacle according to the target position information and the historical target position information of the historical to-be-processed image comprises:
if the historical target position information associated with the historical to-be-processed image does not include the target position information, recording the target position information in a detection frame list; and when the detection frame list includes a preset number of pieces of the target position information, determining that the target position information corresponds to the target obstacle.
9. The method according to claim 7, wherein determining the target obstacle according to the target position information and the historical target position information of the historical to-be-processed image comprises:
if the historical target position information associated with the historical to-be-processed image includes the target position information, updating the historical target position information in a detection frame list based on the target position information, and recording the number of consecutive occurrences of the target position information; and
when the number of consecutive occurrences reaches a preset threshold, determining that the target position information corresponds to the target obstacle.
10. An obstacle detection device, configured in an unmanned vehicle, comprising:
a to-be-detected image determining module, configured to segment a received to-be-processed image based on an image semantic segmentation algorithm to obtain a to-be-detected image;
a to-be-converted position information determining module, configured to determine at least one two-dimensional target detection frame in the to-be-detected image and to-be-converted position information of each two-dimensional target detection frame;
a target position information determining module, configured to obtain a three-dimensional detection frame corresponding to each two-dimensional target detection frame and target position information of each three-dimensional detection frame according to the to-be-converted position information of each two-dimensional target detection frame and a transformation matrix; and
an obstacle determining module, configured to determine a target obstacle based on each piece of the target position information.
11. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the obstacle detection method of any one of claims 1-9.
12. A storage medium containing computer-executable instructions which, when executed by a computer processor, perform the obstacle detection method of any one of claims 1-9.
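Claims 4 to 9 describe a pipeline that can be followed step by step: extract connected domains from the binarized road mask, take each domain's min/max pixel coordinates as a two-dimensional detection frame, project its vertices to the ground plane with a transformation matrix, and confirm an obstacle only after it persists across frames. The sketch below is not the patented implementation; it is a minimal illustration under stated assumptions: the mask is a nested list of 0/1 values, the transformation matrix is reduced to an assumed 3x3 ground-plane homography `H`, and a simple consecutive-occurrence counter stands in for the detection frame list of claims 8-9.

```python
from collections import deque

def connected_components(mask):
    """4-connected domains of a binary mask (list of lists of 0/1),
    found with breadth-first search in scan order."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                q, comp = deque([(y, x)]), []
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                comps.append(comp)
    return comps

def bounding_box(comp):
    """Axis-aligned frame from the min/max pixel coordinates of a
    connected domain, as in claim 5: (x_min, y_min, x_max, y_max)."""
    ys = [p[0] for p in comp]
    xs = [p[1] for p in comp]
    return (min(xs), min(ys), max(xs), max(ys))

def pixel_to_ground(H, u, v):
    """Map an image point (u, v) to ground-plane coordinates with an
    assumed 3x3 homography H, a stand-in for the claim-6 transformation
    matrix built from the camera calibration."""
    x = H[0][0]*u + H[0][1]*v + H[0][2]
    y = H[1][0]*u + H[1][1]*v + H[1][2]
    w = H[2][0]*u + H[2][1]*v + H[2][2]
    return (x / w, y / w)

def confirm_obstacle(counts, key, threshold=3):
    """Toy version of the claim-8/9 detection frame list: count consecutive
    occurrences of a detection and confirm once a preset threshold is hit."""
    counts[key] = counts.get(key, 0) + 1
    return counts[key] >= threshold
```

A frame with two blobs yields two frames, e.g. the mask `[[0,1,1,0,0],[0,1,1,0,0],[0,0,0,0,1],[0,0,0,1,1]]` gives boxes (1, 0, 2, 1) and (3, 2, 4, 3); feeding each box's key to `confirm_obstacle` over successive frames reproduces the persistence check. A real system would use calibrated camera intrinsics/extrinsics for `H` and morphological denoising before the component pass.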
CN202111227555.6A 2021-10-21 2021-10-21 Obstacle detection method, obstacle detection device, electronic device, and storage medium Pending CN113963330A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111227555.6A CN113963330A (en) 2021-10-21 2021-10-21 Obstacle detection method, obstacle detection device, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111227555.6A CN113963330A (en) 2021-10-21 2021-10-21 Obstacle detection method, obstacle detection device, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN113963330A

Family

ID=79465405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111227555.6A Pending CN113963330A (en) 2021-10-21 2021-10-21 Obstacle detection method, obstacle detection device, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN113963330A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114092822A (en) * 2022-01-24 2022-02-25 广东皓行科技有限公司 Image processing method, movement control method, and movement control system
CN114581831A (en) * 2022-03-04 2022-06-03 广东工业大学 Unmanned aerial vehicle obstacle detection and obstacle avoidance method and system based on image and point cloud
CN114723640A (en) * 2022-05-23 2022-07-08 禾多科技(北京)有限公司 Obstacle information generation method and device, electronic equipment and computer readable medium
CN114723640B (en) * 2022-05-23 2022-09-27 禾多科技(北京)有限公司 Obstacle information generation method and device, electronic equipment and computer readable medium
CN115468578A (en) * 2022-11-03 2022-12-13 广汽埃安新能源汽车股份有限公司 Path planning method and device, electronic equipment and computer readable medium
CN116563818A (en) * 2023-04-14 2023-08-08 禾多科技(北京)有限公司 Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium
CN116563818B (en) * 2023-04-14 2024-02-06 禾多科技(北京)有限公司 Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium
CN117877001A (en) * 2024-01-19 2024-04-12 元橡科技(北京)有限公司 Obstacle recognition method, obstacle recognition device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN113963330A (en) Obstacle detection method, obstacle detection device, electronic device, and storage medium
CN109635685B (en) Target object 3D detection method, device, medium and equipment
US11042762B2 (en) Sensor calibration method and device, computer device, medium, and vehicle
CN113486797B (en) Unmanned vehicle position detection method, unmanned vehicle position detection device, unmanned vehicle position detection equipment, storage medium and vehicle
US11506769B2 (en) Method and device for detecting precision of internal parameter of laser radar
CN109188438B (en) Yaw angle determination method, device, equipment and medium
CN112967283B (en) Target identification method, system, equipment and storage medium based on binocular camera
KR20180056685A (en) System and method for non-obstacle area detection
CN112669344A (en) Method and device for positioning moving object, electronic equipment and storage medium
CN112037521B (en) Vehicle type identification method and hazardous chemical substance vehicle identification method
CN112258519B (en) Automatic extraction method and device for way-giving line of road in high-precision map making
CN111950394A (en) Method and device for predicting lane change of vehicle and computer storage medium
CN116310679A (en) Multi-sensor fusion target detection method, system, medium, equipment and terminal
CN115187941A (en) Target detection positioning method, system, equipment and storage medium
CN112712036A (en) Traffic sign recognition method and device, electronic equipment and computer storage medium
CN112434657A (en) Drift carrier detection method, device, program, and computer-readable medium
Arulmozhi et al. Image refinement using skew angle detection and correction for Indian license plates
CN115273039A (en) Small obstacle detection method based on camera
CN109300322B (en) Guideline drawing method, apparatus, device, and medium
CN113887481A (en) Image processing method and device, electronic equipment and medium
CN113762027B (en) Abnormal behavior identification method, device, equipment and storage medium
CN114565906A (en) Obstacle detection method, obstacle detection device, electronic device, and storage medium
CN112511725B (en) Automatic identification method and device for endoscope ring, storage medium and terminal
CN114943836A (en) Trailer angle detection method and device and electronic equipment
CN114724119A (en) Lane line extraction method, lane line detection apparatus, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination