CN111890358B - Binocular obstacle avoidance method and device, storage medium and electronic device - Google Patents

Binocular obstacle avoidance method and device, storage medium and electronic device

Info

Publication number
CN111890358B
CN111890358B
Authority
CN
China
Prior art keywords
map
obstacle
point cloud
determining
disparity map
Prior art date
Legal status
Active
Application number
CN202010621929.1A
Other languages
Chinese (zh)
Other versions
CN111890358A (en)
Inventor
胡鲲
马子昂
卢维
殷俊
林辉
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202010621929.1A
Publication of CN111890358A
Application granted
Publication of CN111890358B
Legal status: Active
Anticipated expiration

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666 Avoiding collision or forbidden zones
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 Obstacle

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The embodiments of the invention provide a binocular obstacle avoidance method and device, a storage medium, and an electronic device. The method includes: determining a disparity map of an image acquired at a first time by a binocular sensor provided on a robot; calculating a V-disparity map based on the disparity map and converting the disparity map into a 3D point cloud map; determining an obstacle map by using the V-disparity map and the 3D point cloud map, wherein the obstacle map includes orientation information of obstacles; and controlling the robot to perform obstacle avoidance processing based on the orientation information of the obstacles included in the obstacle map. The method and device solve the problem in the related art that obstacles cannot be accurately avoided, achieve accurate obstacle avoidance, and improve obstacle avoidance accuracy.

Description

Binocular obstacle avoidance method and device, storage medium and electronic device
Technical Field
Embodiments of the present invention relate to the field of communications, and in particular to a binocular obstacle avoidance method and device, a storage medium, and an electronic device.
Background
Robot technology is developing toward integration, intelligence, and autonomy. Obstacle avoidance is a key link in enabling a ground mobile robot to complete various tasks autonomously, and a prerequisite for the robot to correctly perceive its surroundings and plan a path in an unknown environment.
In the related art, there are two main classes of obstacle avoidance methods: active and passive. Active methods include infrared, ultrasonic, and laser obstacle avoidance; passive methods include monocular and binocular obstacle avoidance. Active obstacle avoidance is easily disturbed by environmental factors: infrared light is absorbed by black objects, passes through transparent objects, and is disturbed by other infrared sources; ultrasonic waves are absorbed by materials such as sponge and are easily disturbed by strong airflow; lasers offer relatively high precision but suffer from noise, and the sensors are generally expensive, bulky, and power-hungry. Passive obstacle avoidance needs no separate emission source and acquires information directly from the environment, so information acquisition is more efficient; binocular obstacle avoidance is more accurate than monocular obstacle avoidance and can recover depth information, giving it better cost-effectiveness and applicability on a ground mobile robot platform. However, the passive methods adopted in the related art depend on the camera calibration result for coordinate-system transformation, and a single transformation method carries the calibration error into the ground segmentation stage, making the segmentation of the environment, obstacles, and ground inaccurate.
Therefore, the related art suffers from the problem that obstacles cannot be accurately avoided.
In view of this problem, no effective solution has yet been proposed.
Disclosure of Invention
Embodiments of the invention provide a binocular obstacle avoidance method and device, a storage medium, and an electronic device, so as at least to solve the problem in the related art that obstacles cannot be accurately avoided.
According to an embodiment of the present invention, a binocular obstacle avoidance method is provided, including: determining a disparity map of an image acquired at a first time by a binocular sensor provided on a robot; calculating a V-disparity map based on the disparity map and converting the disparity map into a 3D point cloud map; determining an obstacle map by using the V-disparity map and the 3D point cloud map, wherein the obstacle map includes orientation information of obstacles; and controlling the robot to perform obstacle avoidance processing based on the orientation information of the obstacles included in the obstacle map.
According to another embodiment of the present invention, a binocular obstacle avoidance apparatus is provided, including: a first determination module for determining a disparity map of an image acquired at a first time by a binocular sensor provided on a robot; a conversion module for calculating a V-disparity map based on the disparity map and converting the disparity map into a 3D point cloud map; a second determination module for determining an obstacle map by using the V-disparity map and the 3D point cloud map, wherein the obstacle map includes orientation information of obstacles; and a control module for controlling the robot to perform obstacle avoidance processing based on the orientation information of the obstacles included in the obstacle map.
According to a further embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, after the binocular sensor provided on the robot acquires an image at the first time, a disparity map is determined from the image, a V-disparity map is calculated from the disparity map, the disparity map is converted into a 3D point cloud map, an obstacle map is determined using the V-disparity map and the 3D point cloud map, and the robot is controlled to perform obstacle avoidance processing according to the orientation information of obstacles in the obstacle map. Because the images obtained by the binocular sensor are accurately converted into a depth map, the problem in the related art that obstacles cannot be accurately avoided is solved, accurate obstacle avoidance is achieved, and obstacle avoidance accuracy is improved.
Drawings
Fig. 1 is a block diagram of a hardware structure of a mobile terminal of a binocular obstacle avoidance method according to an embodiment of the present invention;
fig. 2 is a flowchart of a binocular obstacle avoidance method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of disparity calculation according to an exemplary embodiment of the present invention;
fig. 4 is a schematic diagram of a V-disparity map coordinate transformation according to an exemplary embodiment of the present invention;
fig. 5 is a flowchart of a binocular obstacle avoidance method according to an embodiment of the present invention;
fig. 6 is a block diagram of a binocular obstacle avoidance apparatus according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking the operation on a mobile terminal as an example, fig. 1 is a hardware structure block diagram of the mobile terminal of the binocular obstacle avoidance method according to the embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more processors 102 (only one is shown in fig. 1) (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA, etc.) and a memory 104 for storing data, wherein the mobile terminal may further include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program and a module of an application software, such as a computer program corresponding to the binocular obstacle avoidance method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In this embodiment, a binocular obstacle avoidance method is provided, and fig. 2 is a flowchart of the binocular obstacle avoidance method according to the embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, determining a disparity map of an image acquired by a binocular sensor arranged on the robot at a first moment;
step S204, calculating a V disparity map based on the disparity map and converting the disparity map into a 3D point cloud map;
step S206, determining an obstacle map by using the V disparity map and the 3D point cloud map, wherein the obstacle map comprises orientation information of obstacles;
and step S208, controlling the robot to perform obstacle avoidance processing based on the orientation information of the obstacles included in the obstacle map.
In the above embodiment, the left- and right-eye images from the binocular sensor are received in real time, and the disparity map of the image is determined from them. The disparity map of a frame can be obtained through the SGBM/BM stereo matching algorithm, although other methods may of course be used. After the disparity map is determined, the V-disparity map can be calculated from it. The V-disparity map may be represented as a matrix V(m, n). When the disparity map is likewise represented as a matrix, say D(M, N), the V-disparity map is computed as follows: m = M, n = maximum disparity value + 1, and the value of V(v_i, d) is the number of elements in row v_i of the original disparity map D(M, N) whose disparity equals d.
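By way of illustration, this counting rule can be sketched in a few lines of NumPy (an illustrative sketch only; the function name and the skipping of invalid disparity values are assumptions of the sketch, not taken from the patent):

```python
import numpy as np

def v_disparity(disp: np.ndarray, max_disp: int) -> np.ndarray:
    """Build V(m, n) from an integer disparity map D(M, N):
    m = M rows, n = max_disp + 1 columns, and V[v_i, d] counts the
    pixels in row v_i of D whose disparity equals d."""
    rows = disp.shape[0]
    v_map = np.zeros((rows, max_disp + 1), dtype=np.int32)
    for v_i in range(rows):
        row = disp[v_i]
        # keep only valid disparities in [0, max_disp]
        valid = row[(row >= 0) & (row <= max_disp)].astype(np.int64)
        v_map[v_i] = np.bincount(valid, minlength=max_disp + 1)
    return v_map
```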
In the above embodiment, the disparity map D(M, N) may be converted into a 3D point cloud map by using the principles of pinhole imaging and parallax calculation together with the relevant parameters of the binocular sensor. Pinhole imaging mainly uses the camera intrinsic parameters and the similar-triangle principle of imaging; as this is elementary, it is not repeated here. The parallax calculation is illustrated in fig. 3:
the disparity of the same point of the left and right images obtained by stereo matching can be expressed as: d ═ xl-xr. Also according to the similar principle of triangles, the following results can be obtained:
Figure BDA0002565431380000051
where Z is the depth, f is the camera focal length, and b is the camera baseline distance. Combining this with the horizontal and vertical coordinates of point P in the camera coordinate system, obtained by the pinhole imaging calculation, yields the three-dimensional coordinate P(X, Y, Z), and finally the three-dimensional reconstructed point cloud map C of the scene (corresponding to the 3D point cloud map).
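The reprojection can be sketched as follows (an illustrative sketch; it assumes the focal length f, baseline b, and principal point (cx, cy) are known from calibration, and the names are not from the patent):

```python
import numpy as np

def disparity_to_point_cloud(disp, f, b, cx, cy):
    """Convert a disparity map to a 3D point cloud C using Z = f*b/d and
    the pinhole similar-triangle relations X = (u - cx) * Z / f,
    Y = (v - cy) * Z / f; pixels with non-positive disparity are dropped."""
    v, u = np.indices(disp.shape)          # pixel row/column grids
    valid = disp > 0
    z = f * b / disp[valid]                # depth from equation (1)
    x = (u[valid] - cx) * z / f
    y = (v[valid] - cy) * z / f
    return np.stack([x, y, z], axis=1)     # (K, 3) point cloud
```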
For example, the above steps may be performed by a robot, a processor, or a machine with similar processing capability.
According to the invention, after the binocular sensor provided on the robot acquires an image at the first time, a disparity map is determined from the image, a V-disparity map is calculated from the disparity map, the disparity map is converted into a 3D point cloud map, an obstacle map is determined using the V-disparity map and the 3D point cloud map, and the robot is controlled to perform obstacle avoidance processing according to the orientation information of obstacles in the obstacle map. Because the images obtained by the binocular sensor are accurately converted into a depth map, the problem in the related art that obstacles cannot be accurately avoided is solved, accurate obstacle avoidance is achieved, and obstacle avoidance accuracy is improved.
In one exemplary embodiment, determining an obstacle map using the V-disparity map and the 3D point cloud map includes: performing plane fitting on the 3D points in the 3D point cloud map whose height is lower than a first height threshold to obtain a fitting plane; determining, by straight-line fitting, the straight-line equation corresponding to the fitting plane in the V-disparity map; inversely transforming the straight-line equation into the 3D point cloud map to obtain ground point cloud data; and determining the obstacle map based on the ground point cloud data. In this embodiment, because the amount of point cloud data is large, after the 3D point cloud map is obtained, the plane fitting may be performed with the RANSAC method only on the 3D points below the first height threshold; this eliminates interference from point cloud data above a given height, improves the accuracy of ground fitting, and yields the fitting plane.
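A minimal sketch of this height-filtered RANSAC plane fit is given below (which coordinate axis encodes height, the iteration count, and the inlier tolerance are all assumptions of the sketch):

```python
import numpy as np

def ransac_ground_plane(points, height_thresh, iters=200, dist_tol=0.02):
    """Fit a plane a*x + b*y + c*z + d = 0 with RANSAC, using only points
    whose height coordinate (assumed here to be column 1) is below the
    first height threshold, as described above."""
    rng = np.random.default_rng(0)
    low = points[points[:, 1] < height_thresh]
    best_count, best_plane = 0, None
    for _ in range(iters):
        p1, p2, p3 = low[rng.choice(len(low), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        if np.linalg.norm(n) < 1e-9:       # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d0 = -n.dot(p1)
        count = int((np.abs(low @ n + d0) < dist_tol).sum())
        if count > best_count:
            best_count, best_plane = count, np.append(n, d0)
    return best_plane                      # (a, b, c, d)
```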
After the fitting plane is determined, the straight-line equation corresponding to the fitting plane can be determined in the V-disparity map by straight-line fitting. The ground is then filtered out by using the characteristics of the V-disparity map V(m, n) fused with the plane fitting result; the detailed process is as follows:
the V-disparity map can be physically understood as moving the original origin of the pixel coordinate system from the upper left corner of the image to the center point of the image, and this operation brings about a corresponding coordinate transformation. Therefore, the plane equation of the ground in the world coordinates is transformed into an inclined straight line under the V-disparity map coordinate system through the coordinate transformation:
$$\Delta = \frac{b}{h}\left[(v - v_0)\cos\theta + f\sin\theta\right] \tag{2}$$

where θ is the inclination angle of the optical axis of the binocular camera with respect to the horizontal direction, h is the height of the camera above the ground, $v_0$ is the image row of the principal point, and $\Delta = u_l - u_r$ is the pixel difference (disparity) of the same pixel point along the epipolar direction in the left and right images. The coordinate transformation of the V-disparity map is illustrated in fig. 4.
Likewise, for inclined ground (parallel to $X_w$ but not parallel to $Z_w$, with $Z_w = \alpha Y_w + \beta$), the plane equation is also a straight line in the V-disparity coordinate system:

$$\Delta = \frac{b}{\beta}\left[(v - v_0)(\alpha\cos\theta - \sin\theta) + f(\cos\theta + \alpha\sin\theta)\right] \tag{3}$$
and (3) adopting a straight line fitting method in the V parallax map according to the formula (2) and the formula (3), and taking a plane group in the world coordinate system corresponding to the straight line l obtained by fitting as a ground equation. Meanwhile, the plane fitting result is synthesized, the V disparity map result group is taken as the main part, the plane fitting result plane is taken as the additional reference, corresponding weight values lambda and mu are respectively given, and finally, an accurate ground segmentation result can be given: λ × group + μ × plane, (λ + μ ═ 1, λ > μ).
In this embodiment, plane fitting and V-disparity coordinate transformation are combined, and each method is assigned its own confidence weight, which improves the accuracy and stability of the ground segmentation. In addition, the V-disparity projection transformation maps horizontal ground and slopes to straight lines with different slopes, compressing three-dimensional information to two dimensions without loss of precision and with little susceptibility to noise points, which further improves obstacle avoidance accuracy.
In one exemplary embodiment, determining the obstacle map based on the ground point cloud data comprises: determining a ground passable area based on the ground point cloud data; acquiring a depth map of the region of the current scene other than the ground passable area; binarizing the depth map based on a preset distance threshold to obtain a binary map; and performing connectivity judgment and contour extraction on the binary map to obtain the obstacle map. In this embodiment, the ground passable area is determined from the ground point cloud data. After the ground is segmented and filtered out, the depth map depth of the current scene excluding the ground passable area is determined, and a distance threshold $l_{min}$ (corresponding to the preset distance threshold) is set according to the obstacle avoidance requirements; the depth map is then binarized. If the depth value depth(i, j) of the current pixel is less than $l_{min}$, the gray value B(i, j) of the pixel at the same position in a newly created binary image B of the same size is set to 255; conversely, if depth(i, j) ≥ $l_{min}$, B(i, j) is set to 0. Connectivity judgment and contour extraction are then performed on the binary image B. The connectivity judgment may successively erode and dilate the image (the so-called open operation) to reduce the influence of noise on obstacle contour extraction; the contour extraction indicates the specific position and orientation of each obstacle in the image, which facilitates the mobile robot's subsequent autonomous decisions. Together, image processing means such as binarization, connected-domain judgment, and contour extraction help the robot construct an accurate obstacle map and complete the obstacle avoidance task.
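An OpenCV sketch of this binarization, open operation, and contour extraction step (the kernel size and the treatment of invalid zero depths are assumptions of the sketch):

```python
import cv2
import numpy as np

def extract_obstacle_contours(depth, l_min, kernel_size=5):
    """B(i, j) = 255 where 0 < depth(i, j) < l_min, else 0; then erode and
    dilate (the open operation) to suppress noise, and extract contours."""
    binary = np.where((depth > 0) & (depth < l_min), 255, 0).astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT,
                                       (kernel_size, kernel_size))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return opened, contours
```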
In one exemplary embodiment, controlling the robot to perform obstacle avoidance processing based on the orientation information of the obstacle included in the obstacle map comprises: determining a target obstacle included in the obstacle map whose size is larger than a preset size; and controlling the robot to perform obstacle avoidance processing to avoid the target obstacle once the distance from the robot to the target obstacle is smaller than a preset distance. In this embodiment, the extracted contours in the map B are all potential obstacles. Considering the actual obstacle-crossing capability of the mobile robot (for example, how far an object must protrude above the ground before it affects the robot's travel), a minimum obstacle size A (corresponding to the preset size) is preset. When an extracted contour S is larger than A, the region is judged to be an obstacle region, and the algorithm sends a stop signal to the robot as the obstacle avoidance processing; otherwise, no obstacle avoidance processing is needed.
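The decision then reduces to a size test over the extracted contours, sketched below (contour area is used as the size measure, which is one plausible reading of "contour S larger than A"):

```python
import cv2

def should_stop(contours, min_obstacle_size_a):
    """Return True (send the stop signal) if any extracted contour is
    larger than the preset minimum obstacle size A."""
    return any(cv2.contourArea(c) > min_obstacle_size_a for c in contours)
```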
The following description of the binocular obstacle avoidance is made with reference to the embodiments:
fig. 5 is a flowchart of a binocular obstacle avoidance method according to an embodiment of the present invention, and as shown in fig. 5, the flowchart includes:
step S502, receiving the left- and right-eye images from the binocular sensor in real time and judging whether a data frame has arrived; if not, that is, no data frame is received, execute step S520; if so, the algorithm continues with step S504.
And step S504, obtaining the disparity map of the frame image through the SGBM/BM stereo matching algorithm (the specific choice depends on the performance of the application platform).
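A hedged sketch of step S504 using OpenCV's SGBM matcher follows (all parameter values are illustrative; the BM matcher could be substituted on weaker platforms, matching the SGBM/BM choice above):

```python
import cv2
import numpy as np

def compute_disparity(left_img, right_img, num_disp=64, block=9):
    """Disparity map from rectified grayscale left/right frames.
    numDisparities must be a multiple of 16; P1/P2 are smoothness
    penalties in OpenCV's usual parameterization."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disp,
        blockSize=block,
        P1=8 * block * block,
        P2=32 * block * block,
    )
    # SGBM returns fixed-point disparities scaled by 16
    return matcher.compute(left_img, right_img).astype(np.float32) / 16.0
```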
In step S506, a V-disparity map is calculated from the disparity map obtained in step S504 and represented as a matrix V(m, n). If the disparity map of step S504 is likewise represented as a matrix, say D(M, N), the calculation is: m = M, n = maximum disparity value + 1, and the value of V(v_i, d) is the number of elements in row v_i of the original disparity map D(M, N) whose disparity equals d.
And step S508, converting the disparity map D(M, N) obtained in step S504 into a 3D point cloud map C by using the pinhole imaging and parallax calculation principles together with the relevant parameters of the binocular sensor.
Step S510, fitting a plane to the ground point cloud.
Step S512, segmenting and filtering out the ground part.
Step S514, connectivity judgment and contour extraction: after the ground is segmented and filtered out, the depth map depth of the current scene excluding the ground passable area is obtained, and a distance threshold $l_{min}$ is set according to the obstacle avoidance requirements; the depth map is then binarized. If the depth value depth(i, j) of the current pixel is less than $l_{min}$, the gray value B(i, j) of the pixel at the same position in a created binary image B of the same size is set to 255; conversely, if depth(i, j) ≥ $l_{min}$, B(i, j) is set to 0. Connectivity judgment and contour extraction are then performed on the binary image B: the connectivity judgment mainly erodes and then dilates the image (the open operation) to reduce the influence of noise on obstacle contour extraction, and the contour extraction mainly indicates the specific position and orientation of each obstacle in the image, aiding the mobile robot's subsequent autonomous decisions.
Step S516, judging whether the size of the obstacle exceeds the obstacle avoidance threshold; if so, execute step S518, and if not, execute step S502. Considering the actual obstacle-crossing capability of the mobile robot (for example, how far an object must protrude above the ground before it affects the robot's travel), a minimum obstacle size A is preset. When an extracted contour S is larger than A, the region is judged to be an obstacle region, and the algorithm sends a stop signal to the robot; otherwise, the algorithm does not respond and returns to step S502 to wait for data input, or ends directly.
And step S518, the robot stops when encountering obstacles.
Step S520, the algorithm ends.
In this embodiment, the 3D point cloud of the scene (i.e., the resolved depth information) is obtained by re-projection transformation using the camera parameters and the pinhole imaging principle. Ground point cloud candidates are then preliminarily screened with a preset point cloud height threshold, which reduces the amount of point cloud data and improves the real-time performance of the algorithm; finally, a plane is fitted with the RANSAC method to obtain the plane equation of the ground. Using the coordinate transformation between the disparity map and the V-disparity map, the 3D point cloud data is compressed into 2D coordinate data, a straight-line equation is fitted in the V-disparity coordinate system, and the straight line is inversely transformed to the corresponding point cloud in 3D space to obtain the ground point cloud data. This simplifies a three-dimensional operation to a two-dimensional one while still selecting the corresponding ground point cloud data. In addition, the 3D point cloud plane fitting method and the inverse transformation of the line fitted in the V-disparity map are fused, so the ground point cloud data is accurately segmented and filtered out; binarization, connected-domain judgment, and similar means effectively remove image noise interference, so an accurate obstacle map of the robot's environment is established. Moreover, the obstacle avoidance method depends only on the parameters and structure of the binocular sensor, requires no prior training or learning, and, by solving for and constructing the obstacle map in real time, meets the real-time obstacle avoidance requirements of unknown scenes.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The present embodiment further provides a binocular obstacle avoidance device, which is used to implement the foregoing embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the following embodiments are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 6 is a block diagram of a binocular obstacle avoidance apparatus according to an embodiment of the present invention, and as shown in fig. 6, the apparatus includes:
a first determination module 62 for determining a disparity map of an image acquired at a first time by a binocular sensor provided on the robot;
a conversion module 64, configured to calculate a V disparity map based on the disparity map and convert the disparity map into a 3D point cloud map;
a second determining module 66, configured to determine an obstacle map by using the V disparity map and the 3D point cloud map, where the obstacle map includes orientation information of obstacles;
a control module 68 configured to control the robot to perform obstacle avoidance processing based on the orientation information of the obstacle included in the obstacle map.
In an exemplary embodiment, the second determining module 66 includes: a fitting unit for performing plane fitting on the 3D points in the 3D point cloud map whose height is lower than a first height threshold to obtain a fitting plane; a first determining unit for determining, by straight-line fitting, the straight-line equation corresponding to the fitting plane in the V-disparity map; a transformation unit for inversely transforming the straight-line equation into the 3D point cloud map to obtain ground point cloud data; and a second determining unit for determining the obstacle map based on the ground point cloud data.
In one exemplary embodiment, the second determination unit includes: a determining subunit, configured to determine a ground passable area based on the ground point cloud data; the acquisition subunit is used for acquiring a depth map of an area except the ground passable area in the current scene; the processing subunit is used for carrying out binarization processing on the depth map based on a preset distance threshold value so as to obtain a binary map; and the extracting subunit is used for performing connectivity judgment and contour extraction processing on the binary map to obtain the obstacle map.
In an exemplary embodiment, the control module 68 includes: a third determination unit configured to determine a target obstacle having a size larger than a preset size included in the obstacle map; and the control unit is used for controlling the robot to execute obstacle avoidance processing for avoiding the target obstacle after the distance from the robot to the target obstacle is less than a preset distance.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary embodiments, and details of this embodiment are not repeated herein.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented in a general purpose computing device, they may be centralized in a single computing device or distributed across a network of multiple computing devices, and they may be implemented in program code that is executable by a computing device, such that they may be stored in a memory device and executed by a computing device, and in some cases, the steps shown or described may be executed in an order different from that shown or described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps therein may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A binocular obstacle avoidance method is characterized by comprising the following steps:
determining a disparity map of an image acquired at a first time by a binocular sensor provided on the robot;
calculating a V disparity map based on the disparity map and converting the disparity map into a 3D point cloud map;
determining an obstacle map by using the V disparity map and the 3D point cloud map, wherein the obstacle map comprises orientation information of obstacles;
controlling the robot to perform obstacle avoidance processing based on the orientation information of the obstacle included in the obstacle map;
wherein converting the disparity map into the 3D point cloud map comprises:
determining the same-point disparity $d = x_l - x_r$ of the left and right images included in the disparity map, wherein $x_l$ is the abscissa of the left image and $x_r$ is the abscissa of the right image;
according to the triangle similarity principle, the following formula is obtained:
$$Z = \frac{f \cdot b}{d}$$
wherein Z is depth, f is camera focal length, and b is camera baseline distance;
and converting the disparity map into the 3D point cloud map by combining the formula and a pinhole imaging method.
2. The method of claim 1, wherein determining an obstacle map using the V disparity map and the 3D point cloud map comprises:
performing plane fitting on the 3D point cloud with the height lower than a first height threshold value in the 3D point cloud map to obtain a fitting plane;
determining a linear equation corresponding to the fitting plane in the V disparity map by adopting a linear fitting mode;
inversely transforming the linear equation into the 3D point cloud map to obtain ground point cloud data;
determining the obstacle map based on the ground point cloud data.
3. The method of claim 2, wherein determining the obstacle map based on the ground point cloud data comprises:
determining a ground passable area based on the ground point cloud data;
acquiring a depth map of a region except the ground passable region in the current scene;
carrying out binarization processing on the depth map based on a preset distance threshold value to obtain a binary map;
and performing connectivity judgment and contour extraction processing on the binary map to obtain the obstacle map.
4. The method of claim 3, wherein obtaining a depth map of an area of the current scene other than the ground accessible area comprises:
determining a plane group in a world coordinate system corresponding to the linear equation;
taking the group as a main part, taking the fitting plane as an additional reference, and respectively assigning corresponding weights λ and μ to determine a segmentation result obtained by segmenting the ground corresponding to the ground point cloud data, wherein λ + μ = 1 and λ > μ;
determining a depth map of an area other than the ground accessible area under the current scene based on the segmentation result.
5. The method according to claim 1, wherein controlling the robot to perform obstacle avoidance processing based on the orientation information of the obstacle included in the obstacle map includes:
determining a target obstacle with a size larger than a preset size included in the obstacle map;
and controlling the robot to execute obstacle avoidance processing for avoiding the target obstacle after the distance from the robot to the target obstacle is smaller than a preset distance.
6. A binocular obstacle avoidance device is characterized by comprising:
a first determination module for determining a disparity map of an image acquired at a first time by a binocular sensor provided on the robot;
the conversion module is used for calculating a V disparity map based on the disparity map and converting the disparity map into a 3D point cloud map;
the second determining module is used for determining an obstacle map by using the V disparity map and the 3D point cloud map, wherein the obstacle map comprises orientation information of obstacles;
a control module for controlling the robot to execute obstacle avoidance processing based on the orientation information of the obstacle included in the obstacle map;
the conversion module can convert the disparity map into the 3D point cloud map by the following method:
determining the same-point disparity $d = x_l - x_r$ of the left and right images included in the disparity map, wherein $x_l$ is the abscissa of the left image and $x_r$ is the abscissa of the right image;
according to the triangle similarity principle, the following formula is obtained:
$$Z = \frac{f \cdot b}{d}$$
wherein Z is depth, f is camera focal length, and b is camera baseline distance;
and converting the disparity map into the 3D point cloud map by combining the formula and a pinhole imaging method.
7. The apparatus of claim 6, wherein the second determining module comprises:
the fitting unit is used for performing plane fitting on the 3D point cloud with the height lower than a first height threshold value in the 3D point cloud map to obtain a fitting plane;
the first determining unit is used for determining a linear equation corresponding to the fitting plane in the V disparity map by adopting a linear fitting mode;
the transformation unit is used for inversely transforming the linear equation into the 3D point cloud map so as to obtain ground point cloud data;
a second determination unit for determining the obstacle map based on the ground point cloud data.
8. The apparatus according to claim 7, wherein the second determining unit comprises:
a determining subunit, configured to determine a ground passable area based on the ground point cloud data;
the acquisition subunit is used for acquiring a depth map of an area except the ground passable area in the current scene;
the processing subunit is used for carrying out binarization processing on the depth map based on a preset distance threshold value so as to obtain a binary map;
and the extracting subunit is used for performing connectivity judgment and contour extraction processing on the binary map to obtain the obstacle map.
9. The apparatus of claim 6, wherein the control module comprises:
a third determination unit configured to determine a target obstacle having a size larger than a preset size included in the obstacle map;
and the control unit is used for controlling the robot to execute obstacle avoidance processing for avoiding the target obstacle after the distance from the robot to the target obstacle is less than a preset distance.
10. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 5 when executed.
11. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 5.
CN202010621929.1A 2020-07-01 2020-07-01 Binocular obstacle avoidance method and device, storage medium and electronic device Active CN111890358B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010621929.1A CN111890358B (en) 2020-07-01 2020-07-01 Binocular obstacle avoidance method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN111890358A (en) 2020-11-06
CN111890358B (en) 2022-06-14

Family

ID=73191950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010621929.1A Active CN111890358B (en) 2020-07-01 2020-07-01 Binocular obstacle avoidance method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN111890358B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116901085B (en) * 2023-09-01 2023-12-22 苏州立构机器人有限公司 Intelligent robot obstacle avoidance method and device, intelligent robot and readable storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955920A (en) * 2014-04-14 2014-07-30 桂林电子科技大学 Binocular vision obstacle detection method based on three-dimensional point cloud segmentation
CN104899855A (en) * 2014-03-06 2015-09-09 株式会社日立制作所 Three-dimensional obstacle detection method and apparatus
CN107656545A (en) * 2017-09-12 2018-02-02 武汉大学 A kind of automatic obstacle avoiding searched and rescued towards unmanned plane field and air navigation aid
WO2018058356A1 (en) * 2016-09-28 2018-04-05 驭势科技(北京)有限公司 Method and system for vehicle anti-collision pre-warning based on binocular stereo vision
CN108269281A (en) * 2016-12-30 2018-07-10 无锡顶视科技有限公司 Avoidance technical method based on binocular vision
CN108648274A (en) * 2018-05-10 2018-10-12 华南理工大学 A kind of cognition point cloud map creation system of vision SLAM
CN110209184A (en) * 2019-06-21 2019-09-06 太原理工大学 A kind of unmanned plane barrier-avoiding method based on binocular vision system
CN110910498A (en) * 2019-11-21 2020-03-24 大连理工大学 Method for constructing grid map by using laser radar and binocular camera
CN111047636A (en) * 2019-10-29 2020-04-21 轻客智能科技(江苏)有限公司 Obstacle avoidance system and method based on active infrared binocular vision

Also Published As

Publication number Publication date
CN111890358A (en) 2020-11-06

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant