CN112308033A - Obstacle collision warning method based on depth data and visual chip - Google Patents

Obstacle collision warning method based on depth data and visual chip

Info

Publication number
CN112308033A
Authority
CN
China
Prior art keywords
robot
target obstacle
rectangular frame
obstacle
virtual rectangular
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011336291.3A
Other languages
Chinese (zh)
Other versions
CN112308033B (en)
Inventor
戴剑锋
赖钦伟
肖刚军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Amicro Semiconductor Co Ltd
Original Assignee
Zhuhai Amicro Semiconductor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Amicro Semiconductor Co Ltd filed Critical Zhuhai Amicro Semiconductor Co Ltd
Priority to CN202011336291.3A priority Critical patent/CN112308033B/en
Publication of CN112308033A publication Critical patent/CN112308033A/en
Application granted granted Critical
Publication of CN112308033B publication Critical patent/CN112308033B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses an obstacle collision warning method based on depth data and a visual chip. The obstacle collision warning method comprises the following steps: calculating the actual physical size of the target obstacle from the depth image of the contour of the target obstacle currently collected by a TOF camera, the depth information of the target obstacle and the internal and external parameters of the TOF camera, and on that basis setting a virtual rectangular frame surrounding the target obstacle, the virtual rectangular frame lying on the traveling plane of the robot; when the robot walks into the virtual rectangular frame and its current walking direction is detected to tend toward a collision with the target obstacle, the robot is controlled to trigger a collision warning signal. Because the rectangular frame of collision early-warning significance is set according to the actual physical size of the target obstacle, the robot can avoid colliding with the obstacle in advance within the necessary position area, reducing the influence of the target obstacle on the normal work of the robot.

Description

Obstacle collision warning method based on depth data and visual chip
Technical Field
The invention relates to the technical field of collision early warning for intelligent robots, and in particular to an obstacle collision warning method based on depth data and a visual chip.
Background
At present, SLAM robots based on inertial navigation, vision and laser are increasingly popular; the household floor-sweeping robot is fairly representative. It localizes itself and builds a map of the indoor environment in real time by fusing vision, laser, gyroscope, accelerometer and wheel-odometer data, and then navigates according to the built map. However, a current pain point is that when the robot collides with certain obstacles it pushes them around, or it becomes entangled by wire-like obstacles. Therefore, a collision warning signal should be given to the robot before the obstacle avoidance operation is performed. In the prior art, Chinese patent CN110622085A relies on feedback from a physical collision sensor to obtain the warning signal while decelerating toward the target obstacle, so the robot is still likely to physically collide with the target obstacle, which affects its normal operation.
Disclosure of Invention
In order to solve the above technical problems, the invention discloses an obstacle collision warning method based on depth data that warns the robot of an impending collision simply and effectively. The specific technical scheme is as follows:
an obstacle collision warning method based on depth data, comprising: calculating and acquiring the actual physical size of the target obstacle according to the depth image of the contour of the target obstacle acquired by the TOF camera at present, the depth information of the target obstacle and the internal and external parameters of the TOF camera, and setting a virtual rectangular frame for surrounding the target obstacle on the basis, wherein the virtual rectangular frame is positioned on the traveling plane of the robot; when the robot walks to the inside of the virtual rectangular frame and detects that the current walking direction of the robot is tending to collide with the target obstacle, the robot is controlled to trigger a collision warning signal.
Compared with the prior art, this scheme sets a rectangular frame of collision early-warning significance based on the actual physical size of the target obstacle and triggers the robot's collision warning signal inside that frame, so that the robot avoids colliding with the obstacle in advance within the necessary position area and the influence of the target obstacle on the normal work of the robot is reduced.
Further, the step of judging that the robot has walked into the virtual rectangular frame includes: judging whether the sum of the included angles formed by three different end points of the virtual rectangular frame relative to the current walking direction of the robot is smaller than 90 degrees; if so, determining that the robot has not walked into the virtual rectangular frame, otherwise determining that the robot has walked into it. The actual physical size of the target obstacle comprises the coordinate information of four different end points of the virtual rectangular frame, namely the coordinates of the four end points relative to the center of the robot body. The included angle formed by one end point of the virtual rectangular frame relative to the current walking direction of the robot is the deflection angle that the line connecting that end point with the center of the robot body forms relative to the current walking direction of the robot.
This scheme judges whether the robot has walked into the virtual rectangular frame by using the relative angular position relation between the different end points of the frame and the real-time pose of the robot.
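As a concrete illustration of this entry test, here is a minimal Python sketch (names are ours, not the patent's) of the deflection angle and the 90-degree angle-sum criterion on the robot's traveling plane:

```python
import math

def deflection_angle(robot_pos, heading_rad, point):
    """Deflection angle of `point` relative to the robot's current walking
    direction: the angle in degrees (0..180) between the heading ray and
    the line from the robot's body center to `point`."""
    ang = math.atan2(point[1] - robot_pos[1], point[0] - robot_pos[0]) - heading_rad
    ang = (ang + math.pi) % (2 * math.pi) - math.pi   # wrap to [-pi, pi]
    return abs(math.degrees(ang))

def is_inside_virtual_frame(robot_pos, heading_rad, three_endpoints):
    """Entry test stated by the method: the robot counts as inside the
    virtual rectangular frame once the sum of the deflection angles of
    three different frame end points reaches 90 degrees."""
    return sum(deflection_angle(robot_pos, heading_rad, p)
               for p in three_endpoints) >= 90.0
```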
Further, the step of judging, once the robot is inside the virtual rectangular frame, that its current walking direction tends to collide with the target obstacle comprises: judging whether the included angle between the line connecting the center of the robot body with the center of the virtual rectangular frame and the current walking direction of the robot is acute; if so, determining that the current walking direction of the robot tends to collide with the target obstacle, otherwise determining that it does not. This scheme uses the relative angle between the center of the virtual rectangular frame and the real-time pose of the robot to judge whether the motion trend of a robot located inside the frame will lead to a collision with the target obstacle, without setting any safety gate threshold to restrict the robot's motion.
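Reusing deflection_angle from the previous sketch, the collision-tendency test then reduces to a single acute-angle check against the frame center O:

```python
def tends_to_collide(robot_pos, heading_rad, frame_center):
    """Collision-tendency test: inside the frame, the current walking
    direction tends toward the target obstacle exactly when the angle
    between the heading and the line from the body center to the frame
    center O is acute."""
    return deflection_angle(robot_pos, heading_rad, frame_center) < 90.0
```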
Further, the virtual rectangular frame enclosing the depth image of the contour of the target obstacle is set according to the horizontal distance between the leftmost side of the target obstacle and the center of the robot body and the horizontal distance between the rightmost side of the target obstacle and the center of the robot body, and the coordinates of the four different end points of this frame relative to the center of the robot body are determined such that the center of the virtual rectangular frame coincides with the center of the target obstacle.
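A minimal sketch of fitting such a frame, assuming the obstacle points have already been expressed in the robot body frame (x lateral, y forward) and using an illustrative safety margin; neither the margin value nor the axis convention is specified by the patent:

```python
import numpy as np

def build_virtual_frame(pts_robot, margin=0.10):
    """Fit the axis-aligned virtual rectangular frame abcd around the
    obstacle footprint on the traveling plane. pts_robot is an Nx2 array of
    obstacle points in the robot body frame (x lateral, y forward); margin
    (in meters) pads the frame so that its end points lie outside the two
    ends of the obstacle. Returns the four end points a, b, c, d and the
    center O."""
    x_min, x_max = pts_robot[:, 0].min() - margin, pts_robot[:, 0].max() + margin
    y_min, y_max = pts_robot[:, 1].min() - margin, pts_robot[:, 1].max() + margin
    a, b = (x_min, y_max), (x_max, y_max)
    c, d = (x_max, y_min), (x_min, y_min)
    center_O = ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
    return (a, b, c, d), center_O
```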
Further, the depth image of the contour of the target obstacle is the image contour coordinate information segmented out by filtering the depth image data collected by the TOF camera and analyzing its connected domains; the depth image data collected by the TOF camera is that of the target obstacle within the effective ranging range and the viewing angle range of the TOF camera. The shape and the horizontal ground coverage of the target obstacle are thereby analyzed.
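For illustration only, the filtering and connected-domain analysis could be realized with OpenCV as follows; the median-filter kernel, the ranging band and the largest-component heuristic are our assumptions, not the patent's:

```python
import cv2
import numpy as np

def segment_obstacle_contour(depth_m, d_min=0.2, d_max=3.0):
    """Segment the target obstacle contour from a TOF depth frame (meters):
    median filtering, masking to the effective ranging band, then
    connected-component analysis. Returns Nx2 (u, v) pixel coordinates of
    the contour, or None when nothing is in range."""
    depth = cv2.medianBlur(depth_m.astype(np.float32), 5)
    mask = ((depth > d_min) & (depth < d_max)).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n < 2:
        return None
    # pick the largest non-background component as the target obstacle
    target = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    contours, _ = cv2.findContours((labels == target).astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours[0].reshape(-1, 2)
```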
Further, on the basis of the depth image of the contour of the target obstacle, the depth information of the target obstacle and the internal and external parameters of the TOF camera, the image contour coordinate information of the target obstacle is converted from the imaging plane of the TOF camera into the world coordinate system using the triangulation principle. The conversion result comprises: within the overlapping area of the viewing angle range and the effective ranging range of the TOF camera, the horizontal distance between the leftmost side of the target obstacle and the center of the robot body, the horizontal distance between the rightmost side of the target obstacle and the center of the robot body, and the longitudinal height of the target obstacle. This restores the three-dimensional contour features of the target obstacle and helps detect the 3-dimensional coordinate information around it, so that the obstacle situation in front of the robot can be located.
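A hedged sketch of this imaging-plane-to-world conversion under a pinhole camera model, assuming a calibrated intrinsic matrix K and camera-to-robot extrinsics; the patent does not fix the camera model or the axis layout:

```python
import numpy as np

def contour_to_robot_frame(contour_px, depth_m, K, R_cam2robot, t_cam2robot):
    """Back-project contour pixels into the robot body frame. K is the 3x3
    intrinsic matrix; R_cam2robot, t_cam2robot are camera-to-robot
    extrinsics from calibration. Returns an Nx3 array; the leftmost and
    rightmost horizontal distances and the obstacle height then follow as
    min/max over the appropriate axes."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    pts = []
    for u, v in contour_px:
        z = depth_m[v, u]
        if z <= 0:                      # no valid TOF return at this pixel
            continue
        p_cam = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
        pts.append(R_cam2robot @ p_cam + t_cam2robot)
    return np.asarray(pts)
```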
Further, if the contour of the target obstacle does not lie entirely within the viewing angle range of the TOF camera and/or within its effective ranging range, the center of the virtual rectangular frame set by the robot is still the center of the target obstacle. In this scheme, if the target obstacle is too large, part of it falls outside the viewing angle range and/or the effective ranging range of the TOF camera, so the TOF camera collects no depth information for that part; keeping the center of the virtual rectangular frame at the center of the target obstacle improves the robustness of the collision early warning for a robot located inside the frame.
A visual chip stores a program corresponding to the obstacle collision warning method based on depth data, and is used to control a robot to trigger a collision warning signal before touching a target obstacle within the overlapping area of the viewing angle range and the effective ranging range of the TOF camera. The chip analyzes the shape and extent of the obstacle from the contour depth information output by the TOF camera and triggers the collision warning signal when the robot is close enough to the obstacle.
Drawings
Fig. 1 is a schematic diagram, disclosed in embodiment one of the present invention, of a robot that collects a depth image of the contour of a target obstacle (with center point O) and sets a virtual rectangular frame abcd to surround the target obstacle; the real-time position R of the robot in fig. 1 is not within the virtual rectangular frame abcd.
Fig. 2 is a schematic diagram of the robot disclosed in the second embodiment of the present invention, in which the real-time position R1 is within the virtual rectangular frame abcd and the current walking direction P1 of the robot tends to collide with the target obstacle.
Fig. 3 is a schematic diagram of the robot disclosed in the third embodiment of the present invention, in which the real-time position R2 is within the virtual rectangular frame abcd and the current walking direction P2 of the robot does not tend to collide with the target obstacle.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted that the full text of Chinese patent CN111624997A is incorporated into the present application by reference; based on the triangulation calculation method of CN111624997A, the internal and external parameters of the TOF camera are applied to the depth information collected by the TOF camera to calculate the relative coordinate position of the target obstacle within the field of view of the TOF camera, the longitudinal height of the space occupied by the target obstacle, and the horizontal distance (contour width) between the leftmost side and the rightmost side of the target obstacle.
The embodiment of the invention discloses an obstacle collision warning method based on depth data, which comprises the following steps: calculating the actual physical size of the target obstacle from the depth image of the contour of the target obstacle currently collected by the TOF camera, the depth information of the target obstacle and the internal and external parameters of the TOF camera, and on that basis setting a virtual rectangular frame surrounding the target obstacle, the frame lying on the traveling plane of the robot; then, when the robot walks inside this virtual rectangular frame and its current walking direction is detected to tend toward a collision with the target obstacle, the robot is controlled to trigger a collision warning signal. Compared with the prior art, a rectangular frame of collision early-warning significance is set according to the actual physical size of the target obstacle and the robot's collision warning signal is triggered inside that frame, so the robot avoids colliding with the obstacle in advance within the necessary position area, the influence of the target obstacle on the normal work of the robot is reduced, and the robot is reminded to re-plan its working path.
Specifically, the step of determining that the robot has walked inside the virtual rectangular frame includes: judging whether the sum of the included angles formed by three different end points of the virtual rectangular frame relative to the current walking direction of the robot is smaller than 90 degrees; if so, the robot has not walked into the virtual rectangular frame; otherwise, i.e. when this angle sum is greater than or equal to 90 degrees, the robot has entered the virtual rectangular frame. It should be noted that the actual physical size of the target obstacle comprises the coordinate information of the four different end points of the virtual rectangular frame. The theoretical basis for this judgment comes from the inscribed (circumferential) angle theorem: the virtual rectangular frame has a circumscribed circle, and when the sum of the included angles formed by three different end points relative to the current walking direction of the robot equals 90 degrees, the robot starts to enter the virtual rectangular frame. The included angle formed by one end point of the virtual rectangular frame relative to the current walking direction of the robot is the deflection angle that the line connecting that end point with the center of the robot body forms relative to the current walking direction. In this embodiment, the relative angular position relation between the different end points of the virtual rectangular frame and the real-time pose of the robot is used to judge that the robot has walked into the frame.
Specifically, the step of judging, once the robot is inside the virtual rectangular frame (including standing on one of its sides), that the current walking direction of the robot tends to collide with the target obstacle comprises: judging whether the included angle between the line connecting the center of the robot body with the center of the virtual rectangular frame and the current walking direction of the robot is acute; if so, the current walking direction tends to collide with the target obstacle, otherwise it does not. In this embodiment, the relative angle between the center of the virtual rectangular frame and the real-time pose of the robot is used to judge whether the motion trend of the robot inside the frame will lead to a collision with the target obstacle.
The depth image, also referred to as a range image, is an image in which the pixel value of each pixel is the distance between that pixel and the corresponding actually measured point on an obstacle; the deflection angle between each pixel and its corresponding measurement point is determined by the configuration parameters of the imaging device. The depth image directly reflects the geometric contour of the visible surface of every obstacle in the captured scene, and it can be converted into spatial point cloud data through coordinate conversion. Every obstacle described by the depth data in the depth image can serve as an obstacle image to be identified in subsequent processing. Here "obstacle" is to be understood broadly as including both objects temporarily placed on the traveling plane and objects that are not easily moved. Depending on the application environment, the traveling plane of the robot includes, but is not limited to, cement floor, painted floor, composite floor, solid wood floor, carpeted floor, table tops and glass surfaces. Objects temporarily placed on the traveling plane include, for example, a doorsill (can be crossed), a toy (collision prohibited) and an electric wire (crossing prohibited); objects that are not easily moved include, for example, sofas (the machine must not be allowed to enter when the height of the sofa bottom is lower than the machine height) and walls.
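The depth-image-to-point-cloud conversion mentioned here can be illustrated by a dense pinhole back-projection; a sketch under the same calibration assumptions as above, ignoring lens distortion:

```python
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Convert a depth image (meters, HxW) into an Nx3 camera-frame point
    cloud with the pinhole model; fx, fy, cx, cy are the camera
    intrinsics. Pixels with no valid TOF return (zero depth) are dropped."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]
```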
Embodiment one:
As shown in fig. 1, when the robot walks to position R, the target obstacle in fig. 1 lies in the field of view of the TOF camera arranged at the front end of the robot, namely in the overlapping area of the effective ranging range and the viewing angle range of the TOF camera, and the TOF camera collects a depth image of the contour of the target obstacle. This contour depth image is the image contour segmented out by filtering the depth image data of the target obstacle collected by the TOF camera and analyzing its connected domains, and it comprises the image contour coordinate information of the target obstacle. The image contour coordinate information is then combined with the internal and external parameters of the TOF camera to calculate the contour width of the target obstacle, including the horizontal distance L_L between the leftmost side of the target obstacle and the center of the robot body, the horizontal distance R_L between the rightmost side of the target obstacle and the center of the robot body, and the longitudinal height H of the target obstacle. In this embodiment, based on L_L and R_L, the coordinate ranges of the end points a, b, c and d are constrained and the position range of the virtual rectangular frame abcd is thereby determined, so that a virtual rectangular frame abcd capable of enclosing the contour of the target obstacle on the traveling plane of the robot is framed out; the coordinates of its four different end points relative to the center of the robot body can be determined, and the coordinates of its center point O can be calculated, so that the center O of the virtual rectangular frame is the center O of the target obstacle. The coordinate ranges of the end points a, b, c and d lie outside the two ends of the target obstacle within the field of view of the TOF camera. In this way the shape and horizontal ground coverage of the target obstacle are analyzed, its three-dimensional contour features are restored, and the 3-dimensional coordinate information around it can be detected, so that the obstacle situation in front of the robot can be located.
On the basis of the depth image of the contour of the target obstacle, the depth information of the target obstacle and the internal and external parameters of the TOF camera, the image contour coordinate information of the target obstacle is converted from the imaging plane of the TOF camera into the world coordinate system using the triangulation principle. The conversion result comprises: within the overlapping area of the viewing angle range and the effective ranging range of the TOF camera, the horizontal distance between the leftmost side of the target obstacle and the center of the robot body, the horizontal distance between the rightmost side of the target obstacle and the center of the robot body, and the longitudinal height of the target obstacle. This restores the three-dimensional contour features of the target obstacle and allows the 3-dimensional coordinate information around it to be detected, so that the obstacle situation in front of the robot can be located.
As shown in fig. 1, the current walking direction of the robot is P, corresponding to the ray RP of fig. 1, with the front of the robot body facing the target obstacle. In the current field of view of the TOF camera of the robot at position R, the included angle formed by end point a of the virtual rectangular frame abcd relative to the current walking direction P (the angle between segment Ra and ray RP) is a first acute angle, the included angle formed by end point b (between segment Rb and ray RP) is a second acute angle, and the included angle formed by end point c (between segment Rc and ray RP) is a third acute angle. The sum of these three acute angles is less than 90 degrees, so position R is determined to be outside the virtual rectangular frame abcd; no collision warning signal is triggered and no obstacle avoidance action is required.
Embodiment two:
As shown in fig. 2, when the robot walks to position R1, the virtual rectangular frame abcd has already been framed out and, as in embodiment one, frames the contour of the same target obstacle. The current walking direction of the robot is P1, corresponding to ray R1P1 of fig. 2, toward the target obstacle in the field of view in front of the robot body. For the current field of view of the TOF camera of the robot at position R1, the included angle formed by end point a of the virtual rectangular frame abcd relative to the current walking direction P1 (the angle between segment R1a and ray R1P1) is a right angle, the included angle formed by end point b (between segment R1b and ray R1P1) is acute, and the included angle formed by end point c (between segment R1c and ray R1P1) is acute; the sum of these three angles is greater than 90 degrees, so position R1 is determined to be inside the virtual rectangular frame abcd. Then, since the included angle between the line R1O connecting the body center R1 of the robot with the center O of the frame abcd and the current walking direction P1 (ray R1P1) is acute (smaller than 90 degrees), the current walking direction of the robot is determined to tend to collide with the target obstacle, and the robot would not avoid the target obstacle if it kept walking along direction P1. The robot at position R1 therefore triggers a collision warning signal and, as shown by the movement trend in fig. 2, starts to perform an obstacle avoidance or obstacle detouring action.
Embodiment three:
As shown in fig. 3, the robot walks to position R2 according to a preset working mode, and the virtual rectangular frame abcd has already been framed out and, as in embodiment one, frames the contour of the same target obstacle. The current walking direction of the robot is P2, corresponding to ray R2P2 of fig. 3, with the front of the robot body not facing the target obstacle. For the current field of view of the TOF camera of the robot at position R2, the included angle formed by end point a of the virtual rectangular frame abcd relative to the current walking direction P2 (the angle between segment R2a and ray R2P2) is obtuse, the included angle formed by end point b (between segment R2b and ray R2P2) is obtuse, and the included angle formed by end point c (between segment R2c and ray R2P2) is obtuse; the sum of these three obtuse angles is greater than 90 degrees, so position R2 is determined to be inside the virtual rectangular frame abcd. Then, since the included angle between the line R2O connecting the body center R2 of the robot with the center O of the frame abcd and the current walking direction P2 (ray R2P2) is obtuse (greater than 90 degrees), the current walking direction of the robot is determined not to tend to collide with the target obstacle but to advance away from the center O of the target obstacle; as shown by the movement trend in fig. 3, the robot at position R2 does not trigger a collision warning signal.
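Using the functions from the earlier sketches, the warning decisions of the three embodiments can be reproduced numerically; all coordinates below are made up for illustration:

```python
# Illustrative walk-through of the three embodiments, reusing
# is_inside_virtual_frame() and tends_to_collide() from the sketches
# above. A 1 m x 1 m frame abcd centered at O = (0, 1.5), with x lateral
# and y pointing straight ahead of the robot's start pose.
import math

a, b, c, d = (-0.5, 2.0), (0.5, 2.0), (0.5, 1.0), (-0.5, 1.0)
center_O = (0.0, 1.5)
ahead = math.pi / 2        # heading along +y
away = -math.pi / 2        # heading along -y

cases = [("R  (embodiment one, outside)",          (0.0, -1.0), ahead),
         ("R1 (embodiment two, inside, toward O)",  (0.0, 1.1),  ahead),
         ("R2 (embodiment three, inside, away)",    (0.4, 1.4),  away)]

for name, pos, heading in cases:
    inside = is_inside_virtual_frame(pos, heading, (a, b, c))
    warn = inside and tends_to_collide(pos, heading, center_O)
    print(f"{name}: inside={inside}, warning={'yes' if warn else 'no'}")
# Expected: no warning at R (outside), warning at R1, no warning at R2.
```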
In addition, if the contour of the target obstacle does not lie entirely within the viewing angle range of the TOF camera and/or within its effective ranging range, in particular if the target obstacle is so large that part of it falls outside these ranges, the TOF camera collects no depth information for that part. In this embodiment, however, the center of the virtual rectangular frame set by the robot remains the center of the target obstacle, which improves the robustness of the collision early warning for a robot located inside the frame.
A visual chip stores a program corresponding to the obstacle collision warning method based on depth data, and is used to control a robot to trigger a collision warning signal before touching a target obstacle within the overlapping area of the viewing angle range and the effective ranging range of the TOF camera. The chip analyzes the shape and extent of the obstacle from the contour depth information output by the TOF camera and triggers the collision warning signal when the robot is close enough to the obstacle.
It should be understood that the above embodiments are only for clear illustration and do not limit the implementations; there is no need, and no way, to exhaust all embodiments here. Other variations or modifications will be apparent to persons skilled in the art in light of the above description, and such obvious variations or modifications remain within the protection scope of the invention.

Claims (8)

1. An obstacle collision warning method based on depth data, comprising:
calculating and acquiring the actual physical size of the target obstacle according to the depth image of the contour of the target obstacle acquired by the TOF camera at present, the depth information of the target obstacle and the internal and external parameters of the TOF camera, and setting a virtual rectangular frame for surrounding the target obstacle on the basis, wherein the virtual rectangular frame is positioned on the traveling plane of the robot;
when the robot walks into the virtual rectangular frame and its current walking direction is detected to tend to collide with the target obstacle, the robot is controlled to trigger a collision warning signal.
2. The obstacle collision warning method according to claim 1, wherein the step of judging that the robot has traveled inside the virtual rectangular frame includes:
judging whether the sum of the included angles formed by three different end points of the virtual rectangular frame relative to the current walking direction of the robot is smaller than 90 degrees; if so, determining that the robot has not walked into the virtual rectangular frame, otherwise determining that the robot has walked into the virtual rectangular frame;
wherein the actual physical dimensions of the target obstacle comprise coordinate information of four different endpoints of the virtual rectangular box;
wherein the included angle formed by one end point of the virtual rectangular frame relative to the current walking direction of the robot is the deflection angle that the line connecting the end point with the center of the robot body forms relative to the current walking direction of the robot.
3. The obstacle collision warning method according to claim 2, wherein the step of determining, after the robot is inside the virtual rectangular frame, that the current walking direction of the robot tends to collide with the target obstacle comprises:
judging whether the included angle between the line connecting the center of the robot body with the center of the virtual rectangular frame and the current walking direction of the robot is an acute angle; if so, determining that the current walking direction of the robot tends to collide with the target obstacle, otherwise determining that it does not.
4. The obstacle collision warning method according to any one of claims 1 to 3, wherein a coordinate range of an end point of the virtual rectangular frame is limited based on a horizontal distance of a leftmost side of the target obstacle from a body center of the robot and a horizontal distance of a rightmost side of the target obstacle from the body center of the robot, so that the virtual rectangular frame is used to enclose a contour of the target obstacle on a travel plane of the robot; determining coordinates of four different end points of the virtual rectangular frame relative to the body center of the robot so that the center of the virtual rectangular frame is the center of the target obstacle;
the acquired target obstacle is in the current view field area of the TOF camera and is located in front of the robot.
5. The obstacle collision warning method according to claim 4, wherein the depth image of the outline of the target obstacle includes: carrying out filtering processing and connected domain analysis on depth image data acquired by a TOF camera to obtain segmented image contour coordinate information;
the depth image data collected by the TOF camera is the depth image data of the target obstacle in the effective distance measurement range of the TOF camera and the visual angle range of the TOF camera.
6. The obstacle collision warning method according to claim 5, wherein the image contour coordinate information of the target obstacle is converted into a world coordinate system from an imaging plane of a TOF camera by using a principle of triangulation on the basis of obtaining a depth image of the contour of the target obstacle, depth information of the target obstacle, and internal and external parameters of the TOF camera, wherein the conversion result comprises: and in an overlapping area of the visual angle range and the effective ranging range of the TOF camera, the horizontal distance between the leftmost side of the target obstacle and the center of the body of the robot, and the horizontal distance between the rightmost side of the target obstacle and the center of the body of the robot.
7. The obstacle collision warning method according to claim 6, wherein if the outline of the target obstacle is not all within the range of view angle of the TOF camera and/or within the range of effective distance measurement of the TOF camera, the center of the virtual rectangular frame set by the robot remains the center of the target obstacle.
8. A vision chip, characterized in that the vision chip stores a program corresponding to the obstacle collision warning method based on depth data according to any one of claims 1 to 7, and is used for controlling a robot to trigger a collision warning signal before touching a target obstacle in an effective view field area of a TOF camera.
CN202011336291.3A 2020-11-25 2020-11-25 Obstacle collision warning method based on depth data and visual chip Active CN112308033B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011336291.3A CN112308033B (en) 2020-11-25 2020-11-25 Obstacle collision warning method based on depth data and visual chip

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011336291.3A CN112308033B (en) 2020-11-25 2020-11-25 Obstacle collision warning method based on depth data and visual chip

Publications (2)

Publication Number Publication Date
CN112308033A true CN112308033A (en) 2021-02-02
CN112308033B CN112308033B (en) 2024-04-05

Family

ID=74336021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011336291.3A Active CN112308033B (en) 2020-11-25 2020-11-25 Obstacle collision warning method based on depth data and visual chip

Country Status (1)

Country Link
CN (1) CN112308033B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115690374A (en) * 2023-01-03 2023-02-03 江西格如灵科技有限公司 Interaction method, device and equipment based on model edge ray detection

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103253263A (en) * 2012-02-17 2013-08-21 现代摩比斯株式会社 Apparatus and method for detecting obstacle and alerting collision
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Binocular vision navigation system and method for a power robot
WO2016076449A1 (en) * 2014-11-11 2016-05-19 Movon Corporation Method and system for detecting an approaching obstacle based on image recognition
CN109344687A (en) * 2018-08-06 2019-02-15 深圳拓邦股份有限公司 The obstacle detection method of view-based access control model, device, mobile device


Also Published As

Publication number Publication date
CN112308033B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
US20230305573A1 (en) Method for detecting obstacle, self-moving robot, and non-transitory computer readable storage medium
CN112415998B (en) Obstacle classification obstacle avoidance control system based on TOF camera
CN107981790B (en) Indoor area dividing method and sweeping robot
Zhang et al. Visual-lidar odometry and mapping: Low-drift, robust, and fast
KR101776622B1 (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
KR100877072B1 (en) Method and apparatus of building map for a mobile robot and cleaning simultaneously
EP3104194B1 (en) Robot positioning system
CN112327878B (en) Obstacle classification and obstacle avoidance control method based on TOF camera
AU2017228620A1 (en) Autonomous coverage robot
EP4033324B1 (en) Obstacle information sensing method and device for mobile robot
WO2015024407A1 (en) Power robot based binocular vision navigation system and method based on
CN111624997A (en) Robot control method and system based on TOF camera module and robot
CN112327879A (en) Edge obstacle avoidance method based on depth information
CN113331743A (en) Method for cleaning floor by cleaning robot and cleaning robot
US11960296B2 (en) Method and apparatus for autonomous mobile device
CN114779777A (en) Sensor control method and device for self-moving robot, medium and robot
CN113610910B (en) Obstacle avoidance method for mobile robot
CN112308033B (en) Obstacle collision warning method based on depth data and visual chip
CN107647828A (en) The sweeping robot of fish-eye camera is installed
CN113848944A (en) Map construction method and device, robot and storage medium
CN112182122A (en) Method and device for acquiring navigation map of working environment of mobile robot
Pfeiffer et al. Ground truth evaluation of the Stixel representation using laser scanners
CN114903374A (en) Sweeper and control method thereof
Roennau et al. Robust 3D scan segmentation for teleoperation tasks in areas contaminated by radiation
Sun et al. Detection and state estimation of moving objects on a moving base for indoor navigation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 519000 2706, No. 3000, Huandao East Road, Hengqin new area, Zhuhai, Guangdong

Applicant after: Zhuhai Yiwei Semiconductor Co.,Ltd.

Address before: Room 105-514, No.6 Baohua Road, Hengqin New District, Zhuhai City, Guangdong Province

Applicant before: AMICRO SEMICONDUCTOR Co.,Ltd.

GR01 Patent grant
GR01 Patent grant