CN112308033B - Obstacle collision warning method based on depth data and visual chip - Google Patents

Obstacle collision warning method based on depth data and visual chip

Info

Publication number
CN112308033B
CN112308033B (application CN202011336291.3A)
Authority
CN
China
Prior art keywords
robot
target obstacle
rectangular frame
obstacle
virtual rectangular
Prior art date
Legal status
Active
Application number
CN202011336291.3A
Other languages
Chinese (zh)
Other versions
CN112308033A (en)
Inventor
戴剑锋
赖钦伟
肖刚军
Current Assignee
Zhuhai Amicro Semiconductor Co Ltd
Original Assignee
Zhuhai Amicro Semiconductor Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Amicro Semiconductor Co Ltd filed Critical Zhuhai Amicro Semiconductor Co Ltd
Priority to CN202011336291.3A
Publication of CN112308033A
Application granted
Publication of CN112308033B
Status: Active


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; scene-specific elements
    • G06V20/50 — Context or environment of the image
    • G06V20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 — Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/20 — Image preprocessing
    • G06V10/26 — Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/267 — Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/40 — Extraction of image or video features
    • G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components


Abstract

The invention discloses an obstacle collision warning method based on depth data, and a vision chip. The method comprises: calculating the actual physical size of a target obstacle from the depth image of the target obstacle's outline currently acquired by a TOF camera, the depth information of the target obstacle, and the internal and external parameters of the TOF camera, and on that basis setting a virtual rectangular frame that surrounds the target obstacle, the virtual rectangular frame lying on the travel plane of the robot; when the robot walks into this virtual rectangular frame and its current walking direction is detected to tend toward collision with the target obstacle, the robot is controlled to trigger a collision warning signal. By setting a rectangular frame with collision early-warning meaning on the basis of the target obstacle's actual physical size, the robot avoids colliding with the obstacle in advance, within exactly the position area where avoidance is necessary, and the influence of the target obstacle on the robot's normal operation is reduced.

Description

Obstacle collision warning method based on depth data and visual chip
Technical Field
The invention relates to the technical field of intelligent-robot collision early warning, and in particular to an obstacle collision warning method based on depth data and a vision chip.
Background
At present, SLAM robots based on inertial navigation, vision and laser are increasingly popular, household floor-sweeping robots being a representative example: they combine data from vision, laser, a gyroscope, an accelerometer and a wheel odometer to localize and map the indoor environment in real time, and navigate on the resulting map. In complex environments, however, there are often movable obstacles on the ground, such as toys and wires; when the robot collides with such an obstacle it may push it around, or become entangled in a wire-type obstacle. A collision warning signal therefore needs to be given to the robot before the obstacle avoidance operation is performed. In the prior art, Chinese patent CN110622085A relies on feedback from a physical collision sensor to obtain a warning signal while decelerating toward the target obstacle, so the robot easily makes physical contact with the target obstacle, affecting its normal operation.
Disclosure of Invention
In order to solve the above technical problems, the obstacle collision warning method based on depth data disclosed by the invention warns the robot of an impending collision simply, effectively and in advance. The specific technical scheme is as follows:
a depth data-based obstacle collision warning method, comprising: calculating and obtaining the actual physical size of the target obstacle according to the depth image of the outline of the target obstacle, the depth information of the target obstacle and the internal and external parameters of the TOF camera, which are acquired currently by the TOF camera, and setting a virtual rectangular frame for surrounding the target obstacle on the basis, wherein the virtual rectangular frame is positioned on the advancing plane of the robot; when the robot walks to the inside of this virtual rectangular frame and it is detected that the current walking direction of the robot is toward the collision target obstacle, the robot is controlled to trigger a collision warning signal.
Compared with the prior art, this scheme sets a rectangular frame with collision early-warning significance on the basis of the target obstacle's actual physical size and triggers the robot's collision warning signal inside that frame, so the robot avoids the obstacle in advance, within the position area where avoidance is actually necessary, and the influence of the target obstacle on the robot's normal operation is reduced.
Further, the step of judging that the robot has walked into this virtual rectangular frame includes: judging whether the sum of the included angles formed by three different endpoints of the virtual rectangular frame relative to the robot's current walking direction is less than 90 degrees; if so, determining that the robot has not walked into the virtual rectangular frame, otherwise determining that it has. The actual physical size of the target obstacle includes the coordinate information of the four different endpoints of the virtual rectangular frame, i.e. their coordinates relative to the center of the robot body. The included angle formed by one endpoint of the virtual rectangular frame relative to the robot's current walking direction is the deflection angle between the line connecting that endpoint to the center of the robot body and the robot's current walking direction.
This scheme uses the relative angular positions between the endpoints of the virtual rectangular frame and the robot's real-time pose to judge that the robot has walked into the frame. Unlike the prior art, which triggers an obstacle-detected signal only once the robot is close enough to the target obstacle, here the frame surrounding the obstacle does the triggering, and no safety-distance threshold is imposed to restrict the robot's behavior.
Further, the step of judging, once the robot is inside the virtual rectangular frame, that its current walking direction tends toward collision with the target obstacle includes: judging whether the included angle between the line connecting the robot's body center to the center of the virtual rectangular frame and the robot's current walking direction is acute; if so, determining that the current walking direction tends to collide with the target obstacle, otherwise determining that it does not. This scheme uses the angular relation between the frame's center and the robot's real-time pose to judge whether the motion trend of a robot inside the frame will lead to collision, again without setting a safety threshold that restricts the robot's movement.
Further, a virtual rectangular frame surrounding the depth image of the target obstacle's outline is set based on the horizontal distance from the leftmost side of the target obstacle to the robot's body center and the horizontal distance from the rightmost side of the target obstacle to the robot's body center, and the coordinates of the frame's four different endpoints relative to the robot's body center are determined such that the center of the virtual rectangular frame is the center of the target obstacle.
Further, the depth image of the target obstacle's outline is obtained by filtering the depth image data acquired by the TOF camera and performing a connected-domain analysis, yielding the segmented image contour coordinate information; the depth image data acquired by the TOF camera are depth image data of a target obstacle within the TOF camera's effective ranging range and viewing-angle range. From this, the shape of the target obstacle and its horizontal ground coverage are analyzed.
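A minimal sketch of this filtering and connected-domain step, in Python with OpenCV, is given below; the function name, the ranging band d_min/d_max and the choice of a median filter are assumptions of this sketch, not details fixed by the patent.

```python
import cv2
import numpy as np

def extract_obstacle_contour(depth_frame, d_min=0.2, d_max=4.0):
    """Filter the raw TOF depth image, keep only pixels inside the camera's
    effective ranging band, run a connected-domain analysis and return the
    contour of the largest region (the candidate target obstacle)."""
    depth = cv2.medianBlur(depth_frame.astype(np.float32), 5)   # suppress TOF speckle noise
    mask = ((depth > d_min) & (depth < d_max)).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n < 2:                        # label 0 is background: no obstacle region found
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])   # biggest foreground component
    region = (labels == largest).astype(np.uint8)
    contours, _ = cv2.findContours(region, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return contours[0]               # pixel coordinates of the obstacle outline
```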
Further, on the basis of the acquired depth image of the target obstacle's outline, the depth information of the target obstacle and the internal and external parameters of the TOF camera, the image contour coordinate information of the target obstacle is converted from the imaging plane of the TOF camera into the world coordinate system using the triangle principle. The result of the conversion includes, within the overlap of the TOF camera's viewing-angle range and effective ranging range: the horizontal distance from the leftmost side of the target obstacle to the robot's body center, the horizontal distance from the rightmost side of the target obstacle to the robot's body center, and the longitudinal height information of the target obstacle. This scheme restores the three-dimensional contour features of the target obstacle and detects the 3D coordinate information around it, so that the obstacle situation in front of the robot can be located.
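Concretely, the triangle (similar-triangles) relation x/z = (u - cx)/fx back-projects each contour pixel through the pinhole model; the sketch below then maps the camera-frame points through an assumed extrinsic transform into a robot frame taken here as x lateral, y forward, z up. All names and the frame convention are illustrative assumptions.

```python
import numpy as np

def contour_to_world(contour, depth_frame, K, T_cam_to_robot):
    """Back-project segmented contour pixels into the robot frame.
    K: 3x3 intrinsic matrix; T_cam_to_robot: 4x4 extrinsic transform."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    pts = []
    for u, v in contour.reshape(-1, 2):
        z = float(depth_frame[v, u])          # range along the optical axis
        if z <= 0.0:                          # skip invalid TOF returns
            continue
        x = (u - cx) * z / fx                 # triangle principle: x / z = (u - cx) / fx
        y = (v - cy) * z / fy
        pts.append((x, y, z, 1.0))
    if not pts:
        raise ValueError("no valid depth on the contour")
    cam = np.asarray(pts).T                   # 4xN homogeneous camera-frame points
    world = (T_cam_to_robot @ cam)[:3].T      # Nx3 points in the robot frame
    l_l, r_l = world[:, 0].min(), world[:, 0].max()   # leftmost L_L / rightmost R_L offsets
    height = world[:, 2].max()                         # longitudinal height H
    return world, l_l, r_l, height
```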
Further, if the outline of the target obstacle does not lie entirely within the TOF camera's viewing-angle range and/or effective ranging range, the center of the virtual rectangular frame set by the robot is still the center of the target obstacle. If the target obstacle is too large, part of it falls outside the viewing-angle range and/or the effective ranging range of the TOF camera, and the depth information of that part is not collected; keeping the frame's center at the obstacle's center nonetheless improves the robustness of collision early warning for a robot located inside the virtual rectangular frame.
A vision chip stores a program corresponding to the above obstacle collision warning method based on depth data, and is used to control a robot to trigger a collision warning signal before touching a target obstacle located in the overlap of the TOF camera's viewing-angle range and effective ranging range. The chip analyzes the shape and extent of the obstacle from the contour depth information output by the TOF camera and triggers the collision warning signal when the robot approaches the obstacle.
Drawings
Fig. 1 is a schematic diagram of a robot, according to Embodiment One of the present invention, acquiring a depth image of the outline of a target obstacle (center point O) and setting a virtual rectangular frame abcd surrounding the target obstacle; the real-time position R of the robot in Fig. 1 is not within the virtual rectangular frame abcd.

Fig. 2 is a schematic diagram, per Embodiment Two, in which the real-time position R1 of the robot is within the virtual rectangular frame abcd and the robot's current traveling direction P1 tends to collide with the target obstacle.

Fig. 3 is a schematic diagram, per Embodiment Three, in which the real-time position R2 of the robot is within the virtual rectangular frame abcd and the robot's current traveling direction P2 does not tend to collide with the target obstacle.
Detailed Description
The following describes the technical solutions in the embodiments of the present invention in detail with reference to the drawings. It should be noted that the entire text of Chinese patent CN111624997A is incorporated by reference into the present application; its trigonometric-principle calculation method, together with the internal and external parameters of the TOF camera, is used to compute the relative coordinate position of the target obstacle, the longitudinal height of the space it occupies, and the horizontal span (contour width) between its leftmost and rightmost sides within the TOF camera's field of view.
The embodiment of the invention discloses an obstacle collision warning method based on depth data, comprising the following steps: calculating the actual physical size of the target obstacle from the depth image of the target obstacle's outline currently acquired by a TOF camera, the depth information of the target obstacle, and the internal and external parameters of the TOF camera, and on that basis setting a virtual rectangular frame for surrounding the target obstacle, the frame lying on the robot's travel plane; then, when the robot walks into this virtual rectangular frame and its current walking direction is detected to tend toward collision with the target obstacle, controlling the robot to trigger a collision warning signal. Compared with the prior art, a rectangular frame with collision early-warning significance is set on the basis of the target obstacle's actual physical size and the robot's collision warning signal is triggered inside that frame, so the robot avoids the obstacle in advance, within the position area where avoidance is actually necessary, the influence of the target obstacle on the robot's normal operation is reduced, and the robot is reminded to re-plan its working path.
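Read as a control loop, the method chains segmentation, size recovery, frame construction and the two angle tests once per depth frame. The Python sketch below is one illustrative way to wire those steps together; every function and parameter name is an assumption of this sketch, not code from the patent, and the helpers are fleshed out beside the steps they belong to, above and below.

```python
def collision_warning_step(depth_frame, K, T_cam_to_robot, robot_xy, heading):
    """One pass of the warning method per TOF frame (illustrative sketch)."""
    contour = extract_obstacle_contour(depth_frame)        # filter + connected domains
    if contour is None:
        return False                                       # no obstacle in the field of view
    # recover the obstacle's physical extent in the robot frame
    world_pts, l_l, r_l, height = contour_to_world(contour, depth_frame, K, T_cam_to_robot)
    corners, center = fit_virtual_rectangle(world_pts[:, :2])   # frame abcd on the travel plane
    if not robot_inside_rectangle(robot_xy, heading, corners):
        return False                                       # outside abcd: no warning yet
    if not heading_toward_obstacle(robot_xy, heading, center):
        return False                                       # inside abcd but moving away from O
    return True                                            # trigger the collision warning signal
```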
Specifically, the step of judging that the robot has walked into this virtual rectangular frame includes: judging whether the sum of the included angles formed by three different endpoints of the virtual rectangular frame relative to the robot's current walking direction is less than 90 degrees; if so, determining that the robot has not walked into the virtual rectangular frame; otherwise, i.e. when this angle sum is greater than or equal to 90 degrees, determining that the robot has entered the frame. Note that the actual physical size of the target obstacle includes the coordinate information of the four different endpoints of the virtual rectangular frame. The theoretical basis of this test is the inscribed angle theorem: the virtual rectangular frame has a circumscribed circle, and when the sum of the included angles formed by three different endpoints relative to the robot's current walking direction equals 90 degrees, the robot is just beginning to enter the virtual rectangular frame. The included angle formed by one endpoint relative to the current walking direction is the deflection angle between the line connecting that endpoint to the center of the robot body and the robot's current walking direction. This embodiment uses the relative angular positions of the frame's endpoints with respect to the robot's real-time pose to judge entry into the frame; unlike the prior art, which triggers an obstacle-detected signal only when the robot is close enough to the target obstacle, the surrounding frame does the triggering and no safety-distance threshold restricts the robot's behavior.
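A compact implementation of this entry test might look as follows; `corners` holds the endpoint coordinates of abcd on the travel plane relative to the robot's body center's frame, `heading` is the current walking direction as a vector, and all names are this sketch's assumptions.

```python
import math

def deflection_angle(robot_xy, heading, endpoint):
    """Deflection angle (degrees) between the current walking direction and
    the line from the robot's body center to one rectangle endpoint."""
    vx, vy = endpoint[0] - robot_xy[0], endpoint[1] - robot_xy[1]
    hx, hy = heading
    cosang = (vx * hx + vy * hy) / (math.hypot(vx, vy) * math.hypot(hx, hy))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))  # clamp for safety

def robot_inside_rectangle(robot_xy, heading, corners):
    """The patent's entry test: sum the deflection angles of three different
    endpoints of abcd; a sum >= 90 degrees means the robot has entered the
    frame (inscribed-angle reasoning on the rectangle's circumscribed circle)."""
    a, b, c = corners[:3]                     # three of the four endpoints
    total = sum(deflection_angle(robot_xy, heading, p) for p in (a, b, c))
    return total >= 90.0
```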
Specifically, the step of judging that the robot is inside the virtual rectangular frame (including standing on one of its sides) and that its current traveling direction heads toward collision with the target obstacle includes: judging whether the included angle between the line connecting the robot's body center to the center of the virtual rectangular frame and the robot's current walking direction is acute; if so, determining that the current walking direction tends to collide with the target obstacle, otherwise determining that it does not. This embodiment uses the angular relation between the frame's center and the robot's real-time pose to judge whether the motion trend of a robot inside the frame will lead to collision. Compared with the prior art, no safety threshold is set to restrict the robot's movement, and detection and early warning by a collision sensor at close range is avoided; moreover, once it is determined that the robot's motion trend will not collide with the target obstacle, it can further be inferred that the robot is moving away from it.
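The tendency test reduces to a sign check: the angle between the heading and the line to the frame center O is acute exactly when their dot product is positive. A sketch, under the same illustrative naming as above:

```python
def heading_toward_obstacle(robot_xy, heading, rect_center):
    """Inside the frame, the walking direction tends toward collision iff
    the angle between the heading and the line from the body center to the
    rectangle center O is acute, i.e. their dot product is positive; an
    obtuse angle means the robot is moving away from O."""
    ox, oy = rect_center[0] - robot_xy[0], rect_center[1] - robot_xy[1]
    return ox * heading[0] + oy * heading[1] > 0.0
```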
A depth image, also called a range image, is an image whose pixel values are the distances from each pixel to the actual measured point on the photographed obstacle; the offset angle between each pixel and its measurement point is determined by the camera's mounting parameters. The depth image directly reflects the geometric outline of the visible surfaces of the obstacles in the photographed scene, and can be converted into spatial point-cloud data through coordinate conversion. Each obstacle described by the depth data can then be processed in subsequent steps as an image of an obstacle to be identified. Here, "obstacle" should be understood broadly to include both objects temporarily placed on the travel plane and objects that are not easily moved. Depending on the application environment, the robot's travel plane includes, but is not limited to, cement floors, painted floors, composite or solid-wood flooring, carpeted floors, desktops, glass surfaces, and the like. Objects temporarily placed on the travel plane include thresholds (which may be crossed), toys (collision with which should be suppressed), electric wires (which should not be crossed), and the like; objects that are not easily moved include sofas (when the clearance under the sofa is lower than the machine's height, the machine must be kept from entering) and walls.
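The coordinate conversion mentioned here is, in the usual pinhole model, a per-pixel back-projection; a vectorized sketch with illustrative names is:

```python
import numpy as np

def depth_to_point_cloud(depth, K):
    """Convert a distance image into camera-frame 3D points: every valid
    pixel becomes one point (pinhole model; K is the intrinsic matrix)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel grids, shape HxW
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    cloud = np.stack((x, y, depth), axis=-1)         # HxWx3 camera-frame points
    return cloud[depth > 0]                          # keep only valid TOF returns, Nx3
```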
Embodiment One:
As shown in Fig. 1, the robot has walked to position R, and the target obstacle in Fig. 1 lies in the field-of-view area of the TOF camera mounted at the front of the robot, i.e. in the overlap of the camera's effective ranging range and viewing-angle range. The TOF camera acquires a depth image of the target obstacle's contour: the depth image data acquired by the camera are filtered and a connected-domain analysis segments out the image contour, so the depth image carries the image contour coordinate information of the target obstacle. Combining the camera's internal and external parameters, the contour width of the target obstacle is computed from this coordinate information, including the horizontal distance L_L from the obstacle's leftmost side to the robot's body center, the horizontal distance R_L from its rightmost side to the body center, and the obstacle's longitudinal height H. In this embodiment, based on L_L and R_L, the coordinate ranges of endpoints a, b, c and d are constrained so as to fix the position of the virtual rectangular frame abcd, framing on the robot's travel plane a rectangle that encloses the target obstacle's contour; the coordinates of the four endpoints relative to the robot's body center are determined, from which the coordinates of the frame's center point O can be calculated, so that the center of the virtual rectangular frame is the center O of the target obstacle. The coordinate ranges of endpoints a, b, c and d all lie beyond the two ends of the target obstacle within the TOF camera's field-of-view area. In this way, the shape and horizontal ground coverage of the target obstacle are analyzed, its three-dimensional contour features are restored, and the 3D coordinate information around it is detected, so that the obstacle situation in front of the robot can be located.
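Under the same assumptions as the earlier sketches, framing abcd from the recovered contour points can be done as below. The patent requires the endpoints to lie beyond the obstacle's two ends and the frame center to coincide with the obstacle center O; the margin value here is an assumed figure, not one the patent specifies.

```python
def fit_virtual_rectangle(ground_xy, margin=0.1):
    """Frame the virtual rectangle abcd on the travel plane from the
    obstacle's contour points (Nx2 array: lateral x, forward y, robot frame).
    Each side is pushed a small margin beyond the obstacle so the endpoints
    lie outside its two ends."""
    xs, ys = ground_xy[:, 0], ground_xy[:, 1]
    x_min, x_max = xs.min() - margin, xs.max() + margin   # beyond leftmost L_L / rightmost R_L
    y_min, y_max = ys.min() - margin, ys.max() + margin   # beyond nearest / farthest edge
    a, b = (x_min, y_max), (x_max, y_max)                 # far pair of endpoints
    c, d = (x_max, y_min), (x_min, y_min)                 # near pair of endpoints
    center = ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)   # center O = obstacle center
    return (a, b, c, d), center
```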
On the basis of the acquired depth image of the target obstacle's contour, the obstacle's depth information and the TOF camera's internal and external parameters, the image contour coordinate information of the target obstacle is converted from the camera's imaging plane into the world coordinate system using the triangle principle. The result of the conversion includes, within the overlap of the camera's viewing-angle range and effective ranging range: the horizontal distance from the obstacle's leftmost side to the robot's body center, the horizontal distance from its rightmost side to the body center, and the obstacle's longitudinal height information. The three-dimensional contour features of the obstacle are thereby restored and the 3D coordinate information around it is detected, so that the obstacle situation in front of the robot can be located.
As shown in Fig. 1, the robot's current walking direction is P, i.e. straight ahead of the robot body toward the target obstacle, corresponding to ray RP in Fig. 1. For the TOF camera's current field-of-view area at position R: the included angle formed by endpoint a of the virtual rectangular frame abcd relative to the current walking direction P (the angle between segment Ra and segment RP) is a first acute angle; the angle formed by endpoint b (between segment Rb and RP) is a second acute angle; and the angle formed by endpoint c (between segment Rc and RP) is a third acute angle. The sum of these three acute angles is less than 90 degrees, so position R is determined to be outside the virtual rectangular frame abcd; no collision warning signal is triggered and no obstacle avoidance action needs to be executed.
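A quick numerical check of this situation, with made-up coordinates since the figure gives none: robot at R, walking direction P straight ahead, frame abcd some distance in front.

```python
import math

def deflection(robot, heading, p):
    vx, vy = p[0] - robot[0], p[1] - robot[1]
    cosang = (vx * heading[0] + vy * heading[1]) / math.hypot(vx, vy)
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

R, P = (0.0, 0.0), (0.0, 1.0)                      # pose R, unit walking direction P
a, b, c = (-0.5, 2.0), (0.5, 2.0), (0.5, 3.0)      # three endpoints of abcd (hypothetical)
total = sum(deflection(R, P, p) for p in (a, b, c))
print(round(total, 1))   # ~37.5 degrees < 90: R is outside abcd, so no warning
```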
Embodiment Two:
As shown in Fig. 2, the robot has walked to position R1; the virtual rectangular frame abcd has already been framed, the same one as in Embodiment One, enclosing the contour of the same target obstacle. The robot's current walking direction is P1, corresponding to ray R1P1 in Fig. 2, facing the target obstacle in the field-of-view area in front of the robot body. For the TOF camera's current field-of-view area at position R1: the included angle formed by endpoint a of frame abcd relative to direction P1 (between segment R1a and segment R1P1) is a right angle, the angle formed by endpoint b (between R1b and R1P1) is acute, and the angle formed by endpoint c (between R1c and R1P1) is acute; the sum of the three angles is greater than 90 degrees, so position R1 is determined to be inside the virtual rectangular frame abcd. Next, the included angle between line R1O, connecting the robot's body center R1 to the center O of frame abcd, and the current traveling direction P1 (ray R1P1) is determined to be acute (less than 90 degrees), so the current walking direction is determined to tend toward collision with the target obstacle: continuing along P1 would fail to avoid the obstacle. As shown by the motion trend in Fig. 2, the robot at position R1 therefore triggers a collision warning signal and is instructed to start performing an obstacle avoidance action.
Embodiment Three:
As shown in Fig. 3, the robot has walked to position R2 according to a preset working mode; the virtual rectangular frame abcd has already been framed, the same one as in Embodiment One, enclosing the contour of the same target obstacle. The robot's current walking direction is P2, corresponding to ray R2P2 in Fig. 3, with the front of the robot body not facing the target obstacle. For the TOF camera's current field-of-view area at position R2: the included angles formed by endpoints a, b and c of frame abcd relative to direction P2 (between segments R2a, R2b and R2c and segment R2P2, respectively) are all obtuse; their sum is greater than 90 degrees, so position R2 is determined to be inside the virtual rectangular frame abcd. Next, the included angle between line R2O, connecting the robot's body center R2 to the center O of frame abcd, and the current traveling direction P2 (ray R2P2) is determined to be obtuse (greater than 90 degrees), so the current walking direction is determined not to tend toward collision with the target obstacle but to proceed away from the obstacle's center O; as shown by the motion trend in Fig. 3, the robot at position R2 does not trigger a collision warning signal.
In addition, the outline of the target obstacle may not lie entirely within the TOF camera's viewing-angle range and/or effective ranging range; in particular, if the target obstacle is too large, part of it falls outside these ranges and the depth information of that part is not acquired by the camera. This embodiment nevertheless ensures that the center of the virtual rectangular frame set by the robot is still the center of the target obstacle, thereby improving the robustness of collision early warning for a robot located inside the virtual rectangular frame.
A vision chip stores a program corresponding to the above obstacle collision warning method based on depth data, and is used to control a robot to trigger a collision warning signal before touching a target obstacle located in the overlap of the TOF camera's viewing-angle range and effective ranging range. The chip analyzes the shape and extent of the obstacle from the contour depth information output by the TOF camera and triggers the collision warning signal when the robot approaches the obstacle.
The above examples are given by way of illustration only and do not limit the embodiments. Other variations or modifications based on the above teachings will be apparent to those of ordinary skill in the art; it is neither necessary nor possible to enumerate all embodiments here. Obvious variations or modifications made on this basis remain within the scope of the invention.

Claims (7)

1. An obstacle collision warning method based on depth data, comprising:
calculating the actual physical size of the target obstacle from the depth image of the outline of the target obstacle currently acquired by a TOF camera, the depth information of the target obstacle, and the internal and external parameters of the TOF camera, and on that basis setting a virtual rectangular frame for surrounding the target obstacle, wherein the virtual rectangular frame lies on the travel plane of the robot;
when the robot walks into the virtual rectangular frame and the current walking direction of the robot is detected to tend toward collision with the target obstacle, controlling the robot to trigger a collision warning signal;
wherein the step of judging that the robot has walked into the virtual rectangular frame comprises:
judging whether the sum of the included angles formed by three different endpoints of the virtual rectangular frame relative to the current walking direction of the robot is less than 90 degrees; if so, determining that the robot has not walked into the virtual rectangular frame, otherwise determining that the robot has walked into the virtual rectangular frame;
wherein the actual physical size of the target obstacle comprises coordinate information of four different endpoints of the virtual rectangular frame;
and wherein the included angle formed by one endpoint of the virtual rectangular frame relative to the current walking direction of the robot is the deflection angle between the line connecting that endpoint to the center of the robot body and the current walking direction of the robot.
2. The obstacle collision warning method according to claim 1, wherein the step of judging, after the robot is inside the virtual rectangular frame, that the current traveling direction of the robot heads toward collision with the target obstacle comprises:
judging whether the included angle between the line connecting the robot's body center to the center of the virtual rectangular frame and the robot's current walking direction is acute; if so, determining that the current walking direction tends to collide with the target obstacle, otherwise determining that it does not.
3. The obstacle collision warning method according to claim 1, wherein the coordinate ranges of the endpoints of the virtual rectangular frame are constrained by the horizontal distance from the leftmost side of the target obstacle to the robot's body center and the horizontal distance from the rightmost side of the target obstacle to the robot's body center, such that the virtual rectangular frame encloses the outline of the target obstacle on the travel plane of the robot; and the coordinates of the four different endpoints of the virtual rectangular frame relative to the robot's body center are determined such that the center of the virtual rectangular frame is the center of the target obstacle;
wherein the acquired target obstacle is in the current field-of-view area of the TOF camera and located in front of the robot.
4. The obstacle collision warning method of claim 3, wherein the depth image of the outline of the target obstacle is obtained by filtering the depth image data acquired by the TOF camera and performing connected-domain analysis to yield segmented image contour coordinate information;
and the depth image data acquired by the TOF camera are depth image data of a target obstacle within the effective ranging range and viewing-angle range of the TOF camera.
5. The obstacle collision warning method according to claim 4, wherein, on the basis of acquiring the depth image of the outline of the target obstacle, the depth information of the target obstacle and the internal and external parameters of the TOF camera, the image contour coordinate information of the target obstacle is converted from the imaging plane of the TOF camera into the world coordinate system using the triangle principle, the result of the conversion including: within the overlap of the viewing-angle range and effective ranging range of the TOF camera, the horizontal distance from the leftmost side of the target obstacle to the robot's body center and the horizontal distance from the rightmost side of the target obstacle to the robot's body center.
6. The obstacle collision warning method according to claim 5, wherein, if the outline of the target obstacle does not lie entirely within the viewing-angle range and/or the effective ranging range of the TOF camera, the center of the virtual rectangular frame set by the robot remains the center of the target obstacle.
7. A vision chip, characterized in that the vision chip stores a program corresponding to the depth data-based obstacle collision warning method according to any one of claims 1 to 6, for controlling a robot to trigger a collision warning signal before touching a target obstacle in the effective field-of-view area of a TOF camera.
CN202011336291.3A (filed 2020-11-25) — Obstacle collision warning method based on depth data and visual chip — Active — CN112308033B (en)

Priority Applications (1)

Application Number — Priority Date — Filing Date — Title
CN202011336291.3A — 2020-11-25 — 2020-11-25 — Obstacle collision warning method based on depth data and visual chip (CN112308033B)


Publications (2)

Publication Number — Publication Date
CN112308033A (en) — 2021-02-02
CN112308033B (en) — 2024-04-05

Family

ID=74336021

Family Applications (1)

Application Number — Title — Priority Date — Filing Date
CN202011336291.3A (Active; granted as CN112308033B) — Obstacle collision warning method based on depth data and visual chip — 2020-11-25 — 2020-11-25

Country Status (1)

CN — CN112308033B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
CN115690374B * (priority 2023-01-03, published 2023-04-07), 江西格如灵科技有限公司: Interaction method, device and equipment based on model edge ray detection


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103253263A (en) * — 2012-02-17 / 2013-08-21 — 现代摩比斯株式会社 — Apparatus and method for detecting an obstacle and alerting to a collision
WO2015024407A1 (en) * — 2013-08-19 / 2015-02-26 — 国家电网公司 — Binocular vision navigation system and method based on a power robot
WO2016076449A1 (en) — 2014-11-11 / 2016-05-19 — Movon Corporation — Method and system for detecting an approaching obstacle based on image recognition
CN109344687A (en) * — 2018-08-06 / 2019-02-15 — 深圳拓邦股份有限公司 — Vision-based obstacle detection method and apparatus, and mobile device

Also Published As

Publication number Publication date
CN112308033A (en) 2021-02-02


Legal Events

Code — Title — Description
PB01 — Publication
SE01 — Entry into force of request for substantive examination
CB02 — Change of applicant information: Address after: 519000 2706, No. 3000, Huandao East Road, Hengqin New Area, Zhuhai, Guangdong; Applicant after: Zhuhai Yiwei Semiconductor Co., Ltd.; Address before: Room 105-514, No. 6 Baohua Road, Hengqin New District, Zhuhai City, Guangdong Province; Applicant before: AMICRO SEMICONDUCTOR Co., Ltd.
GR01 — Patent grant