CN110852312A - Cliff detection method, mobile robot control method, and mobile robot


Info

Publication number: CN110852312A
Application number: CN202010035730.0A
Authority: CN (China)
Prior art keywords: area, cliff, preset, image, mobile robot
Legal status: Granted; currently Active
Other languages: Chinese (zh)
Other versions: CN110852312B
Inventor: 龚凯
Current assignee: Shanghai Flyco Electrical Appliance Co Ltd
Original assignee: Shenzhen Feike Robot Co Ltd
Application filed by Shenzhen Feike Robot Co Ltd; priority to CN202010035730.0A; application granted and published as CN110852312B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/10: Terrestrial scenes
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02: Control of position or course in two dimensions
    • G05D 1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D 1/0242: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D 1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A cliff detection method comprises: acquiring an image of the ground area in front of a robot body; and determining a cliff area in the front ground area based on depth image information and brightness information of the acquired image. When it is determined from the depth image information that the vertical distance from a detection area in the front ground area to the image acquisition unit is greater than a preset vertical distance, it is determined from the brightness information that the brightness of the image corresponding to the detection area is greater than or equal to a preset brightness value, and the size of the detection area satisfies a predetermined size condition, the detection area is determined to be a cliff area. The method can effectively detect cliffs in the front ground area. The application also provides a control method for a mobile robot and a mobile robot that can implement these methods. The method and device can effectively detect cliff areas in the front ground area and improve the accuracy of cliff-area detection.

Description

Cliff detection method, mobile robot control method, and mobile robot
Technical Field
The present application relates to the field of automatic control, and in particular, to a cliff detection method, a control method for a mobile robot, and a mobile robot.
Background
A mobile robot is a machine that can move and perform work automatically. In its working environment it often encounters surfaces with height differences, i.e., cliff areas (e.g., when sweeping along stair edges or on table tops). If detection of a cliff area is inaccurate and a false determination occurs during operation, the mobile robot is likely to fall off the cliff.
Disclosure of Invention
In view of the above, the present invention provides a cliff detection method, a mobile robot control method, and a mobile robot, which can improve the accuracy of detecting a cliff area.
A first aspect provides a mobile robot, comprising: a robot body, an image acquisition unit, and a processing unit;
the image acquisition unit is used for acquiring an image of a ground area in front of the robot body;
the processing unit is used for extracting depth image information and brightness information of the acquired image;
the processing unit is further used for determining a cliff area in the front ground area based on the depth image information and the brightness information;
when the processing unit determines, according to the depth image information, that the vertical distance from a detection area in the front ground area to the image acquisition unit is greater than a preset vertical distance, determines, according to the brightness information of the acquired image, that the brightness of the image corresponding to the detection area is greater than or equal to a preset brightness value, and determines that the size of the detection area satisfies a predetermined size condition, the processing unit determines that the detection area is a cliff area.
In another possible implementation manner, the mobile robot further includes an infrared sensor. When the processing unit determines, according to the depth image information, that the vertical distance from a detection area in the front ground area to the image acquisition unit is greater than the preset vertical distance, determines, according to the brightness information of the acquired image, that the brightness of the image corresponding to the detection area is smaller than the preset brightness value, and determines that the infrared light intensity of the detection area detected by the infrared sensor is less than or equal to a first preset infrared light intensity, where the size of the detection area satisfies the predetermined size condition, the processing unit determines that the detection area is a cliff area.
In another possible implementation manner, the processing unit is specifically configured to determine that the size of the detection area satisfies the predetermined size condition when it detects that the detection area can accommodate the projection of a driving wheel onto the working surface; or
the processing unit is specifically configured to determine that the size of the detection area satisfies the predetermined size condition when it detects that the area of the detection region is larger than a preset area threshold.
In another possible implementation manner, when the processing unit determines, according to the brightness information of the acquired image, that the brightness of the image corresponding to the detection area is greater than or equal to the preset brightness value, and the infrared light intensity of the detection area detected by the infrared sensor is less than or equal to a second preset infrared light intensity, the processing unit determines that the detection area is a cliff area, where the first preset infrared light intensity is less than the second preset infrared light intensity.
In an embodiment of the application, an image acquisition unit acquires an image of a ground area in front of a robot body; the processing unit extracts depth image information of the acquired image; based on the depth image information, a cliff region in the front ground region is determined. The vertical distance from the detection area to the image acquisition unit can be determined based on the depth image information, and the vertical distance can be used to determine whether a cliff area exists in the front ground area, thereby effectively detecting the cliff area in the front ground area.
A second aspect provides a mobile robot, comprising: a robot body, an image acquisition unit, and a processing unit;
the image acquisition unit is used for acquiring an image of a ground area in front of the robot body;
the processing unit is used for extracting depth image information and brightness information of the acquired image;
the processing unit is further used for determining a cliff area in the front ground area based on the extracted depth image information and the extracted brightness information;
the processing unit is also used for controlling the motion behavior of the mobile robot according to the position relation between the robot body and the cliff area;
the processing unit is specifically configured to, when detecting that the horizontal distance between the robot body and the cliff area is less than or equal to a first preset horizontal distance, control the mobile robot to stop moving, to avoid the cliff area, or to move along a motion path along the boundary of the cliff area.
In one possible implementation form of the method,
and the processing unit is specifically used for controlling the mobile robot to perform deceleration motion when detecting that the distance between the robot body and the cliff area is smaller than or equal to a second preset horizontal distance and larger than a first preset horizontal distance.
In another possible implementation form of the method,
and the processing unit is specifically used for generating a plurality of motion paths, scoring according to the position information of each motion path and the cliff area in the plurality of motion paths, and controlling the mobile robot to move along the motion path with the highest score.
In another possible implementation, the minimum distance from each of the plurality of motion paths to the cliff area is inversely proportional to the score of that motion path.
In another possible implementation form of the method,
and the processing unit is specifically used for acquiring the historical path from the memory of the mobile robot and controlling the mobile robot to walk along the historical path when the motion path avoiding the cliff area is not acquired.
In an embodiment of the application, the image acquisition unit acquires an image of the ground area in front of the robot body; the processing unit extracts depth image information from the acquired image, determines a cliff area in the front ground area based on the depth image information, and controls the motion behavior of the mobile robot based on the positional relationship between the robot body and the cliff area. Controlling the robot's motion from the positions of the robot body and the cliff area prevents the mobile robot from falling into the cliff area and reduces missed-coverage areas.
A third aspect provides a cliff detection method, including:
collecting an image of a ground area in front of a robot body;
extracting depth image information and brightness information of the acquired image;
determining a cliff area in the front ground area based on the depth image information and the brightness information;
wherein determining the cliff region in the front ground region based on the depth image information and the luminance information comprises:
and when the vertical distance from the detection area to the image acquisition unit in the front ground area is determined to be greater than the preset vertical distance according to the depth image information, the brightness of the image corresponding to the detection area is determined to be greater than or equal to the preset brightness value according to the brightness information of the acquired image, and the size of the detection area meets the preset size condition, determining that the detection area is the cliff area.
In one possible implementation form of the method,
and when the vertical distance between the detection area and the image acquisition unit in the front ground area is determined to be within a preset vertical distance interval according to the depth image information, and the size of the detection area meets a preset size condition, determining that the detection area is the cliff area.
In another possible implementation form of the method,
determining that the detection area is a cliff area when it is determined, according to the depth image information, that the vertical distance from the detection area in the front ground area to the image acquisition unit is greater than the preset vertical distance, it is determined, according to the brightness information of the acquired image, that the brightness of the image corresponding to the detection area is smaller than the preset brightness value, the detected infrared light intensity of the detection area is less than or equal to a first preset infrared light intensity, and the size of the detection area satisfies the predetermined size condition.
In another possible implementation manner, the size of the detection area is determined to satisfy the predetermined size condition when it is detected that the detection area can accommodate the projection of a driving wheel onto the working surface; or
the size of the detection area is determined to satisfy the predetermined size condition when it is detected that the area of the detection region is larger than a preset area threshold.
In another possible implementation form of the method,
determining that the detection area is a cliff area when it is determined, according to the brightness information of the acquired image, that the brightness of the image corresponding to the detection area is greater than or equal to the preset brightness value and the infrared light intensity of the detection area is less than or equal to a second preset infrared light intensity, where the first preset infrared light intensity is less than the second preset infrared light intensity.
A fourth aspect provides a control method of a mobile robot, including:
collecting an image of a ground area in front of a robot body, and extracting depth image information and brightness information of the collected image;
determining a cliff area in the front ground area based on the extracted depth image information and the extracted brightness information;
controlling the motion behavior of the mobile robot according to the position relation between the robot body and the cliff area;
wherein the controlling the motion behavior of the mobile robot according to the positional relationship between the robot body and the cliff area includes:
and when the horizontal distance between the robot body and the cliff area is detected to be smaller than or equal to a first preset horizontal distance, controlling the mobile robot to stop moving or avoid the cliff area or move along a motion path of the boundary of the cliff area.
In one possible implementation form of the method,
and when the horizontal distance between the robot body and the cliff area is detected to be smaller than or equal to a second preset horizontal distance and larger than a first preset horizontal distance, controlling the mobile robot to perform deceleration motion.
In another possible implementation form of the method,
and generating a plurality of motion paths, scoring according to the position information of each motion path and the cliff area in the plurality of motion paths, and controlling the mobile robot to move along the motion path with the highest score.
In another possible implementation, the minimum horizontal distance from each of the plurality of motion paths to the cliff area is inversely proportional to the score of that motion path.
In another possible implementation form of the method,
when no motion path avoiding the cliff area is obtained, retrieving a historical path from the memory of the mobile robot and controlling the mobile robot to travel along the historical path.
In another aspect, the present application provides a computer-readable storage medium comprising program instructions that, when invoked, perform the method of the third or fourth aspect.
According to the embodiments of the application, when it is determined from the depth image information that the vertical distance from the detection area in the front ground area to the image acquisition unit is greater than the preset vertical distance, the brightness of the image corresponding to the detection area is further compared with the preset brightness value. This reduces cases in which a low-reflectivity surface is misjudged as a cliff and improves the accuracy of cliff detection.
According to the embodiments of the application, after the cliff area is determined from the depth image information and brightness information of the detection area, a horizontal distance between the robot body and the cliff area that is less than or equal to the first preset horizontal distance indicates that the mobile robot is very close to the cliff area, and follow-up action is needed to prevent it from falling in. Setting this horizontal distance allows for the robot's inertia, reducing the risk that insufficient deceleration carries the mobile robot into the cliff area.
Drawings
FIG. 1A is a schematic diagram of a mobile robot in accordance with one embodiment of the present application;
FIG. 1B is another schematic diagram of a mobile robot according to one embodiment of the present application;
FIG. 2 is another schematic structural diagram of a mobile robot according to an embodiment of the present application;
FIG. 3 is a schematic flow diagram of a cliff detection method according to an embodiment of the present application;
FIG. 4 is a schematic view of a mobile robot and detection area according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of a control method of a mobile robot according to an embodiment of the present application;
FIG. 6 is a schematic diagram of the horizontal and vertical distances from a mobile robot to a detection area according to an embodiment of the present application.
Detailed Description
Referring to fig. 1A and 1B, an embodiment of a mobile robot 100 provided by the present application includes: a robot body 110, an image pickup unit 120, a dust suction unit 130, a left wheel 141, and a right wheel 142.
The robot body 110 includes a processing unit 111, a memory 112, and a driving unit 113.
The image acquisition unit 120 may include, but is not limited to: a ToF (Time of Flight) image sensor 121, an RGB sensor 122, or a structured light image sensor 123.
The ToF image sensor 121 may measure the distance from the robot body 110 to a reference point on the ground ahead from the time difference between emitting light and receiving its reflection (distance = speed of light × round-trip time / 2), and generate an image containing depth information and brightness information of the ground ahead from these distances.
The RGB sensor 122 can capture an RGB image of the ground in front, also referred to as a color image.
In one embodiment, the structured light image sensor 123 may include a spot projector and an infrared image receiver. The spot projector emits infrared light spots containing spatial coding information toward the ground ahead; the spots are reflected by object surfaces and received by the infrared image receiver. Because the spot pattern is modulated by objects at different distances, it encodes their distance information, and after computation an image containing depth information and brightness information of the ground ahead is obtained.
In another embodiment, the structured light image sensor 123 may include a spot projector, an infrared image receiver, and an RGB image sensor. Depth and brightness are recovered from the projected spot pattern as described above. The structured light image sensor 123 then uses the known positional relationship of its cameras to register the acquired RGB image with the depth image, obtaining a color image with depth information. Alternatively, the processing unit 111 of the mobile robot 100 may perform this registration using the known camera positional relationship.
It is understood that the image acquisition unit 120 may be at least one of the three sensors described above. The operating principles of these sensors are well known to those skilled in the art and are not described in detail here. The image capturing unit 120 is not limited to the above examples; other image sensors familiar to those skilled in the art may be used in other embodiments.
The dust suction unit 130 may suck up foreign matter such as dust or debris, and may further include a filter for the material it collects.
Fig. 2 shows a connection relationship between the respective constituent units in the mobile robot 100. The processing unit 111 is connected to the memory 112, the driving unit 113, the image capturing unit 120, and the obstacle sensor 150 through a bus, respectively. The driving unit 113 is connected to the left wheel 141, the right wheel 142, and the dust suction unit 130, respectively.
The processing unit 111 includes, but is not limited to: a central processing unit, a single-chip microcomputer, a digital signal processor, a microprocessor, and the like. The memory 112 is used to store instructions and data, and the processing unit 111 can read the instructions stored in the memory 112 to execute the corresponding functions. The memory 112 may include Random Access Memory (RAM) and Non-Volatile Memory (NVM). The non-volatile memory may include a hard disk drive (HDD), a solid state drive (SSD), a silicon disk drive (SDD), read-only memory (ROM), compact disc read-only memory (CD-ROM), magnetic tape, a floppy disk, an optical data storage device, and the like.
The obstacle sensor 150 is used to measure parameters such as the distance from the mobile robot to nearby objects and the height above the ground. The obstacle sensor 150 may include at least one of an infrared sensor, an ultrasonic sensor, a laser sensor, a collision sensor, or a camera. In the mobile robot shown in Fig. 2, the obstacle sensor 150 and the image pickup unit 120 are separate; however, the obstacle sensor 150 may also be integrated into the image pickup unit 120 as an integral part of it.
The driving unit 113 may be a motor that applies a driving force. The driving unit 113 is connected to the dust suction unit 130, the left wheel 141, and the right wheel 142, and may drive them under the control of the processing unit 111. Alternatively, the driving unit 113 includes a dust suction driving sub-unit connected to the dust suction unit 130, a left wheel driving sub-unit connected to the left wheel 141, and a right wheel driving sub-unit connected to the right wheel 142.
It is to be appreciated that in one or more embodiments, the mobile robot can also include an input-output unit, a position measurement unit, a power supply, a wireless communication unit, and the like.
In one embodiment of the mobile robot 100,
an image collecting unit 120 for collecting an image of a ground area in front of the robot body 110;
the processing unit 111 is configured to extract depth image information of the acquired image;
the processing unit 111 is further configured to determine a cliff area in the front ground area based on the depth image information.
In this embodiment, when the image capturing unit 120 includes the ToF image sensor 121, the ToF image sensor 121 may capture a depth image of a front ground area of the robot body 110.
When the image capturing unit 120 includes one RGB sensor 122, multiple RGB images of the front ground area captured by that sensor may be processed with a monocular reconstruction algorithm to obtain a depth image. For example, in one embodiment the camera (i.e., the RGB sensor 122) is moved, feature points in the images before and after the movement are matched, the pixel offsets caused by parallax are used to estimate the depths of the feature points, a sparse spatial point cloud is constructed, and the depth information of the current image is estimated from that point cloud. In another embodiment, a deep learning technique is used: a convolutional neural network is trained on time-sequenced depth image data, and successive images acquired by the monocular camera are then input into the network to estimate the depth information and brightness information of the images. These methods for converting a color image into an image containing depth and brightness information are prior art and are not detailed in this application; other existing methods may also be used, without limitation.
When the image capturing unit 120 includes two RGB sensors 122, the RGB images of the front ground area captured by the two sensors may be processed with a binocular reconstruction algorithm to obtain a depth image. For example, in one embodiment cameras at different positions capture images, and the depths of pixels are estimated from the pixel offsets caused by parallax together with the known camera positions, yielding an image containing depth information and brightness information. In another embodiment, a deep learning technique is used: a convolutional neural network is trained on time-sequenced depth image data, and successive images acquired by the binocular camera are then input into the network to estimate the depth information and brightness information of the images. These methods are likewise prior art; other existing methods may also be used, without limitation.
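As a hedged illustration of the binocular route (a sketch under stated assumptions, not the implementation claimed by this application), the following Python fragment applies OpenCV's semi-global block matcher to a rectified stereo pair; the focal length FOCAL_PX, baseline BASELINE_M, and file names are hypothetical placeholders that would come from camera calibration in practice.

```python
import cv2
import numpy as np

# Hypothetical calibration constants: focal length in pixels and
# stereo baseline in meters. Real values come from camera calibration.
FOCAL_PX = 580.0
BASELINE_M = 0.05

def depth_from_stereo(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
    """Estimate a depth map (meters) from a rectified grayscale stereo pair."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,   # must be divisible by 16
        blockSize=7,
    )
    # SGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan        # invalid / unmatched pixels
    return FOCAL_PX * BASELINE_M / disparity  # depth = f * B / d

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
depth = depth_from_stereo(left, right)
```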
When the image acquisition unit 120 includes the structured light image sensor 123, the structured light image sensor 123 may generate a structured light image having depth information.
In this embodiment, when the acquired image is a depth image, the depth image information is depth values of all or part of the pixel points in the depth image. When the acquired image is a non-depth image (e.g., an RGB image), the depth image information is depth values of all or part of pixel points in the depth image after the acquired image is converted into the depth image. The vertical distance between the front ground and the image capturing unit 120 can be determined according to the depth values of all or part of the pixel points, and thus the cliff region in the front ground can be identified.
As can be seen from the above, both the ToF image sensor 121 and the structured light image sensor 123 can directly acquire an image containing depth and brightness information of the ground ahead, and the processing unit 111 extracts that information directly from the stored data format (for example, a two-dimensional array holding depth and brightness per pixel); this extraction is prior art and is not detailed in this application. The RGB sensor, by contrast, acquires an RGB (color) image of the ground ahead, so the processing unit 111 must first convert the color image into an image containing depth and brightness information and then read those values from the converted image's data format.
In an alternative embodiment, when the processing unit 111 determines, according to the depth image information, that the vertical distance from the detection area in the front ground area to the image capturing unit 120 is within a preset vertical distance interval, and the size of the detection area satisfies the predetermined size condition, the processing unit 111 determines that the detection area is a cliff area. In this case, because the sensor's precision is relatively high, cliff and non-cliff regions correspond to different depth-value ranges, so whether the detection region is a cliff region can be determined from the depth values of the pixels in the image corresponding to that region.
In another alternative embodiment, the processing unit 111 is further configured to determine a cliff area in the front ground area based on the depth image information and the brightness information of the captured image.
In another alternative embodiment, when the processing unit 111 determines, according to the depth image information, that the vertical distance from the detection area in the front ground area to the image capturing unit 120 is greater than the preset distance, and determines, according to the brightness information of the captured image, that the brightness of the image corresponding to the detection area is greater than or equal to the preset brightness value, where the size of the detection area satisfies the predetermined size condition, the processing unit 111 determines that the detection area is a cliff area. The image corresponding to the detection area is also referred to as the detection area image.
In another alternative embodiment, when the processing unit 111 determines, according to the depth image information, that the vertical distance from the detection area in the front ground area to the image capturing unit 120 is greater than the preset distance, determines, according to the brightness information of the captured image, that the brightness of the image corresponding to the detection area is less than the preset brightness value, and determines that the infrared light intensity of the detection area detected by the infrared sensor is less than or equal to the first preset infrared light intensity, where the size of the detection area satisfies the predetermined size condition, the processing unit 111 determines that the detection area is a cliff area.
On the basis of the above optional embodiments, in another optional embodiment, the processing unit 111 is specifically configured to determine that the size of the detection area satisfies the predetermined size condition when it detects that the detection area can accommodate the projection of the driving wheel onto the working surface, and that the size condition is not satisfied when the detection area cannot accommodate that projection.
In another optional embodiment, when the processing unit 111 determines, according to the brightness information of the acquired image, that the brightness of the image corresponding to the detection area is greater than or equal to the preset brightness value, and the infrared light intensity of the detection area detected by the infrared sensor is less than or equal to the second preset infrared light intensity, the processing unit 111 determines that the detection area is a cliff area, where the first preset infrared light intensity is less than the second preset infrared light intensity.
In another embodiment of the mobile robot provided by the present application,
an image collecting unit 120 for collecting an image of a ground area in front of the robot body 110;
the processing unit 111 is configured to extract depth image information and brightness information of the acquired image;
a processing unit 111 for determining a cliff region in the front ground region based on the extracted depth image information and luminance information;
the processing unit 111 is also used for controlling the motion behavior of the mobile robot according to the positional relationship between the robot body 110 and the cliff area.
In this embodiment, determining the cliff area in the front ground area based on the depth image information of the acquired images and then controlling the motion behavior of the mobile robot reduces the possibility that the robot falls off a cliff.
In a further alternative embodiment of the method,
the processing unit 111 is specifically configured to control the mobile robot to stop moving, avoid the cliff, or move along a motion path of the boundary of the cliff area when detecting that the horizontal distance between the robot body 110 and the cliff area is smaller than or equal to a first preset horizontal distance.
In a further alternative embodiment of the method,
the processing unit 111 is specifically configured to control the mobile robot to perform deceleration motion when it is detected that the horizontal distance between the robot body 110 and the cliff area is smaller than or equal to a second preset horizontal distance and larger than a first preset horizontal distance.
In a further alternative embodiment of the method,
the processing unit 111 is specifically configured to generate a plurality of motion paths, score each of the plurality of motion paths based on the position information of the cliff area, and control the mobile robot to move along the motion path with the highest score.
In another alternative embodiment, the minimum horizontal distance from each of the plurality of motion paths to the cliff area is inversely proportional to the score of that motion path.
In a further alternative embodiment of the method,
specifically, when the motion path of the boundary of the cliff area is not acquired, the processing unit 111 acquires the historical path from the memory of the mobile robot and controls the mobile robot to travel along the historical path.
Referring to fig. 3, the present application provides one embodiment of a cliff detection method, which may be performed by the mobile robot 100, comprising:
Step 301: collect an image of the ground area in front of the robot body.
In this embodiment, the front ground area refers to a ground area to which the mobile robot is about to reach along the current motion path.
Step 302: extract depth image information from the acquired image.
In one or more embodiments, when the acquired image is a depth image, the depth image information is depth values of all or part of pixel points in the depth image. When the acquired image is a non-depth image (e.g., an RGB image), the depth image information is depth values of all or part of pixel points in the depth image after the acquired image is converted into the depth image. The detection area may be a front ground area acquired by the image acquisition unit 120, or a partial area of the front ground area acquired by the image acquisition unit 120.
Optionally, a target image area is preset in the mobile robot, and the ground area corresponding to the target image area is determined as the detection area according to the camera imaging coordinate system. Fig. 4 is a schematic view of the mobile robot and the detection area; in the coordinate system shown in Fig. 4, both axes are in centimeters. The set of ground reference points in the detection area 41 is denoted C(Pr). C(Pr) may be a lattice, and the ground reference points in the lattice may be, but are not limited to, uniformly distributed; for example, in a square lattice any two adjacent ground reference points are 5 cm apart. C(Pr) may also be a grid pattern of ground reference points.
Optionally, the acquired image is binarized according to the comparison between each pixel's depth value in the depth image and a preset depth value: for example, pixels whose depth value is greater than or equal to the preset depth value are set to gray value 0, and pixels whose depth value is less than the preset depth value are set to gray value 255. The pixels with gray value 0 are then marked as connected regions, and a connected region that satisfies the conditions is taken as the detection area. Qualifying connected regions include, but are not limited to, regions that can accommodate the projection of the driving wheel on the working surface, where the working surface is the surface on which the mobile robot 100 is currently operating. In particular, a qualifying connected region may be required to have an area larger than a preset area, which reduces misjudgments of cliff detection caused by noise pixels in the front area whose depth values are greater than or equal to the preset depth value. In contrast to the prior art, the image acquisition unit 120 in this embodiment acquires an image of the front ground area and evaluates one or more detection areas within it, so when a detection area is determined to be a cliff area its boundary is obtained directly, and the mobile robot 100 can plan a path in advance from the boundary of the cliff area.
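A minimal sketch of this binarization and connected-region filtering, assuming a depth image in meters; the two thresholds are placeholders, and the mask polarity is flipped relative to the 0/255 convention above because OpenCV treats nonzero pixels as foreground.

```python
import cv2
import numpy as np

# Assumed inputs: a depth image in meters and placeholder thresholds. The
# preset depth corresponds to the working surface plus the drop distance,
# and the minimum area approximates the driving wheel's footprint in pixels.
PRESET_DEPTH_M = 0.55
MIN_REGION_AREA_PX = 400

def find_candidate_regions(depth: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Return bounding boxes (x, y, w, h) of far-depth connected regions."""
    # Binarize: pixels at or beyond the preset depth become foreground (255).
    far_mask = np.where(depth >= PRESET_DEPTH_M, 255, 0).astype(np.uint8)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(far_mask, connectivity=8)
    boxes = []
    for i in range(1, num):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area > MIN_REGION_AREA_PX:  # filter depth-noise specks
            boxes.append((x, y, w, h))
    return boxes
```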
In the depth image, the depth value of a pixel represents distance: the larger the depth value, the farther away the point; the smaller the depth value, the closer. Optionally, the depth value is proportional to the distance; the correspondence may also be non-linear, and the specific mapping can be set according to the actual situation without limitation here.
When the acquired image is a depth image, an image area corresponding to the detection area in the depth image is determined, and depth image information in the image area can be extracted. When the acquired image is an RGB image, the image may be converted into a depth image, and then depth image information in the depth image may be extracted.
Step 303: determine a cliff area in the front ground area based on the depth image information.
In one or more embodiments, the vertical distance between the image acquisition unit 120 and the detection area may be determined from the shooting angle and the correspondence between depth values and distance, and whether the detection area is a cliff area may be determined by comparing this vertical distance with the vertical distance from the image acquisition unit 120 to the working surface. If the vertical distance to the detection area is greater than the vertical distance to the working surface, the detection area is lower than the working surface and is determined to be a cliff area; if it is less than or equal to the vertical distance to the working surface, the detection area is not lower than the working surface and is determined to be a non-cliff area. The shooting angle may be the angle between the lens of the image capturing unit 120 and the working surface, or between the lens and the vertical plane.
Alternatively, the vertical distance between the image capturing unit 120 and the detection area is determined from the measurement angle and the correspondence between depth values and distance, where the measurement angle is the angle between the light (or ultrasound) emitted by the ranging sensor in the image capturing unit 120 and the working surface, or between that emission and the vertical plane.
Alternatively, the depth value corresponding to the working surface is determined from the distance-to-depth correspondence and the vertical distance from the image capturing unit 120 to the working surface. Optionally, the detection area is determined to be a cliff area when the depth values of all pixels in its image are greater than the depth value corresponding to the working surface, and a non-cliff area when they are all less than or equal to it. Optionally, the detection area is determined to be a cliff area when a plurality of pixels have depth values greater than the working-surface depth value and those pixels exceed a preset proportion of all pixels in the detection area image; the preset proportion may be, but is not limited to, any value in [0.8, 1).
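The following sketch illustrates the working-surface comparison combined with the preset-proportion rule; the tilt angle, camera height, and ratio are assumed values, and treating all pixels in the region as sharing one ray angle is a simplification (a real implementation would use each pixel's ray direction from the camera intrinsics).

```python
import math
import numpy as np

# Illustrative constants (assumptions, not values from the application): the
# camera's tilt below the horizontal and its height above the work surface.
TILT_RAD = math.radians(55.0)
CAM_HEIGHT_M = 0.08
PRESET_RATIO = 0.9  # fraction of pixels that must look deeper than the floor

def is_below_work_surface(region_depth: np.ndarray) -> bool:
    """Classify a detection region from its per-pixel depth values (meters).

    The vertical drop of each point below the camera is approximated as
    depth * sin(tilt); points whose drop exceeds the camera height lie
    below the work surface.
    """
    vertical = region_depth * math.sin(TILT_RAD)
    deeper = np.count_nonzero(vertical > CAM_HEIGHT_M)
    return deeper / region_depth.size >= PRESET_RATIO
```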
In an alternative embodiment, the detection area is determined to be a cliff area when it is determined from the depth image information that the vertical distance from the detection area in the front ground area to the image acquisition unit lies within a preset vertical distance interval and the size of the detection area satisfies the predetermined size condition.
In one or more embodiments, each sensor in the image capturing unit has a corresponding depth measurement range; when the measured depth of an object falls within that range, the measurement accurately reflects the distance from the image capturing unit to the object.
In an alternative embodiment, the vertical distance from the image capturing unit 120 to the working surface is the lower limit of the preset vertical distance interval. Alternatively, since the mobile robot can pass over some detection areas that are lower than the working surface without falling, the lower limit of the interval may be greater than the vertical distance from the image capturing unit 120 to the working surface, but must not exceed the distance corresponding to a drop the robot cannot negotiate (i.e., the falling distance).
In an alternative embodiment, the upper limit of the preset vertical distance interval may be determined from the upper limit of the depth measurement range of the image capturing unit 120. A vertical distance from the detection area to the image capturing unit 120 within the preset interval indicates that the detection area may be a cliff area; when the detection area is lower than the working surface and its size satisfies the predetermined size condition, the mobile robot would fall into it on arrival, so the detection area is determined to be a cliff area.
The ground area ahead of the mobile robot 100 may contain low-reflectivity objects, such as a dark carpet, which absorb more light; the vertical distance detected from the depth information of such a detection area may then also be greater than or equal to the preset distance, so the mobile robot 100 may misjudge the carpeted area as a cliff area. Therefore, in another alternative embodiment of the present application, the cliff area in the front ground area is determined based on both the depth image information and the brightness information of the captured image.
Further, in the present application, when it is determined from the depth image information that the vertical distance from the detection area in the front ground area to the image capturing unit is greater than the preset vertical distance, the mobile robot 100 further compares the brightness of the image corresponding to the detection area with the preset brightness value, and determines that the detection area is a cliff area when that brightness is greater than or equal to the preset brightness value and the size of the detection area satisfies the predetermined size condition. Under the same illumination, the lower an object's reflectivity, the lower its brightness; the higher its reflectivity, the greater its brightness. By combining the vertical distance from the detection area to the image capturing unit with the image brightness of the detection area, even if an object with low reflectance (for example, a dark carpet) lies in the area ahead of the mobile robot 100, the possibility of misjudging it as a cliff is reduced. In the prior art, the detection area is determined to be a non-cliff area when the infrared light intensity detected by the infrared sensor is greater than a preset infrared light intensity, and a cliff area when it is smaller; when the mobile robot encounters a material whose reflectance resembles that of a cliff, i.e., a low-reflectance object (e.g., a black or wine-red carpet), the received infrared light intensity falls below the preset value and the material is misidentified as a cliff. Compared with the prior art, therefore, further comparing the brightness of the detection area image with the preset brightness value once the depth check has been passed reduces misjudgments caused by low reflectivity and improves the accuracy of cliff detection.
When the mobile robot 100 determines from the depth image information that the vertical distance from the detection area in the front ground area to the image capturing unit is greater than the preset vertical distance, and the brightness of the image corresponding to the detection area is smaller than the preset brightness value, the area ahead may be either a cliff area (for example, one covered by a low-reflectivity carpet) or a non-cliff area (for example, a low-reflectivity object). Thus, in another alternative embodiment, step 303 comprises:
when the vertical distance from the detection area to the image acquisition unit in the front ground area is determined to be greater than or equal to the preset distance according to the depth image information, the brightness of the image corresponding to the detection area is determined to be smaller than the preset brightness value according to the brightness information of the acquired image, and the detected infrared light intensity of the detection area is smaller than or equal to the first preset infrared light intensity, wherein the size of the detection area meets the preset size condition, the detection area is determined to be the cliff area.
In this embodiment, combining the depth image information and the image brightness information of the detection area has already established that a low-reflectivity object may lie ahead. To reduce false cliff determinations, the first preset infrared light intensity therefore cannot be too large; if it were, the detection area would mistakenly be determined to be a cliff area.
When the vertical distance from the detection area to the image acquisition unit is greater than or equal to the preset distance and the brightness of the corresponding image is less than the preset brightness value, the detection area has low reflectivity and may be a cliff area. Further, the detection area is determined to be a cliff area when its infrared light intensity is less than or equal to the first preset infrared light intensity, and a non-cliff area otherwise. With the single preset infrared light intensity configured in intensity-based infrared sensors in the prior art, low-reflectivity objects and low-reflectivity cliff areas are difficult to distinguish.
Embodiments of the present application also use the size of the detection area, which further reduces false determinations. In the robot's operating environment, some detection areas have a certain depth but are harmless: for example, a small hole in the working surface ahead that the robot cannot sink into and can pass over normally. Prior-art cliff detection would judge such an area to be a cliff, increasing misjudgments; a cleaning robot would then skip, and fail to clean, that area. Detecting cliffs in combination with the size of the detection area avoids this: if the detection area does not satisfy the predetermined size condition, the mobile robot 100 can pass over it without getting stuck.
On the basis of the above embodiments, in an alternative embodiment, the size of the detection area is determined to satisfy the predetermined size condition when it is detected that the detection area can accommodate the projection of the driving wheel on the working surface: the driving wheel could sink into such an area, and the mobile robot might then be unable to keep moving. If a non-cliff detection area cannot accommodate the projection of the driving wheel on the working surface, the driving wheel will not sink into it.
Optionally, the contour of the detection area is compared with the contour of the driving wheel's projection; if the former can contain the latter, the detection area can accommodate the projection of the driving wheel on the working surface. Optionally, the area of the detection region is computed in the coordinate system of the camera imaging model and compared with the preset projected area of the driving wheel on the working surface; if the detection region's area is larger, it is determined that the region can accommodate the projection, and otherwise that it cannot.
In an alternative embodiment, the size of the detection region is determined to satisfy the predetermined size condition when its area is detected to be larger than a preset area threshold, which may be set according to the area of the robot body 110.
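A brief sketch of the two size tests (the numeric values are assumptions for illustration); the contour-containment variant described above would compare shapes rather than areas, so the simple area comparison here is the cruder of the two options.

```python
# Illustrative size test (assumed values): the detection region's area in
# the camera's ground coordinate system is compared with the projected
# footprint of a driving wheel on the working surface.
WHEEL_PROJECTION_M2 = 0.0024  # e.g. a 6 cm x 4 cm wheel contact footprint
AREA_THRESHOLD_M2 = 0.01      # alternative: preset threshold from body size

def size_condition_met(region_area_m2: float,
                       use_wheel_test: bool = True) -> bool:
    """True if the region could swallow a wheel (or exceeds the threshold)."""
    if use_wheel_test:
        return region_area_m2 > WHEEL_PROJECTION_M2
    return region_area_m2 > AREA_THRESHOLD_M2
```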
As noted above, in the prior art the detection area is determined to be a non-cliff area when the infrared light intensity detected by the infrared sensor is greater than a preset infrared light intensity, and a cliff area when it is smaller; a material whose reflectance resembles that of a cliff (e.g., a black or wine-red carpet) is then misidentified as a cliff because the received infrared light intensity falls below the preset value.
In another optional embodiment, the method further comprises:
and when the brightness of the image corresponding to the detection area is determined to be greater than or equal to a preset brightness value according to the acquired image, and the infrared light intensity of the detection area detected by the infrared sensor is less than or equal to a second preset infrared light intensity, determining the detection area as a cliff area, wherein the first preset infrared light intensity is less than the second preset infrared light intensity.
In one or more embodiments, the second preset infrared light intensity may be the intensity used when measuring a highly reflective object or a highly reflective cliff, and it is greater than or equal to the preset infrared light intensity of the prior art.
In the present application, in the normal mode, the second preset infrared light intensity is used to judge whether the detection area is a cliff area: when the infrared light intensity of the detection area detected by the infrared sensor is greater than the second preset infrared light intensity, the detection area is determined to be a non-cliff area; when it is less than the second preset infrared light intensity, the detection area is determined to be a cliff area.
When the brightness of the detection area image is smaller than the preset brightness value, the infrared sensor is activated and the infrared light intensity of the detection area is compared with the first preset infrared light intensity: when the detected infrared light intensity is greater than the first preset infrared light intensity, the detection area is determined to be a non-cliff area; when it is smaller than the first preset infrared light intensity, the detection area is determined to be a cliff area. When an object with low reflectivity is encountered, using the lower first preset infrared light intensity to distinguish cliff areas from non-cliff areas reduces false cliff judgments and improves the accuracy of cliff identification.
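The brightness-gated two-threshold logic above can be summarized in a short sketch; all numeric values are assumptions for illustration, with the first preset intensity below the second as the embodiments require.

```python
# Sketch of the brightness-gated infrared decision; threshold values are
# illustrative assumptions (intensities normalized to 0..1, brightness as a
# mean gray level), with FIRST_PRESET_IR < SECOND_PRESET_IR.

FIRST_PRESET_IR = 0.2     # lower threshold, for dark (low-reflectivity) scenes
SECOND_PRESET_IR = 0.6    # higher threshold, used in the normal mode
PRESET_BRIGHTNESS = 80    # image brightness gate

def is_cliff(region_brightness: float, ir_intensity: float) -> bool:
    """Classify a detection region that already satisfied the depth check."""
    if region_brightness >= PRESET_BRIGHTNESS:
        # Normal mode: a weak infrared return below the higher threshold
        # indicates a cliff.
        return ir_intensity <= SECOND_PRESET_IR
    # Dark image: a dark carpet also returns weak infrared light, so only a
    # return below the lower first threshold is treated as a cliff, which
    # reduces false cliff judgments on low-reflectivity surfaces.
    return ir_intensity <= FIRST_PRESET_IR
```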
In the above embodiments of the present application, an image of the ground area in front of the robot body is acquired, depth image information and brightness information are extracted from it, and the cliff area in the front ground area is determined based on that information. In contrast to the prior art, the image acquisition unit 120 acquires an image of the front ground area and judges one or more detection areas within it, so that when a detection area is determined to be a cliff area, the boundary of the cliff area is obtained directly; the mobile robot 100 can then plan its path in advance based on that boundary.
Referring to fig. 5, the present application provides an embodiment of a method for controlling a mobile robot, which may be performed by the mobile robot 100, including:
Step 501: collecting an image of the ground area in front of the robot body.
Step 502: extracting depth image information and brightness information of the acquired image.
Step 503: determining a cliff area in the front ground area based on the extracted depth image information and brightness information.
Step 504: controlling the motion behavior of the mobile robot according to the positional relationship between the robot body and the cliff area.
Steps 501 to 503 are similar to steps 301 to 303 in the embodiment shown in fig. 3.
In this embodiment, determining the cliff area in the front ground area based on the depth image information of the acquired image, and then controlling the motion behavior of the mobile robot accordingly, reduces the possibility that the mobile robot falls off a cliff.
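Steps 501 to 504 amount to a per-cycle control loop. The skeleton below sketches that flow; the injected callables are hypothetical stand-ins for the units described in the embodiments, not an actual robot API.

```python
# Skeleton of steps 501-504; each stage is injected as a callable so the
# sketch stays self-contained. The Region layout is an assumption.
from typing import Callable, List, Tuple

Region = Tuple[float, float, float, float]  # assumed (x, y, w, h) ground patch

def control_cycle(capture: Callable[[], object],
                  extract: Callable[[object], Tuple[object, object]],
                  detect: Callable[[object, object], List[Region]],
                  act: Callable[[List[Region]], None]) -> None:
    """One pass of steps 501-504."""
    image = capture()                        # step 501: acquire front image
    depth, brightness = extract(image)       # step 502: depth + brightness
    cliff_areas = detect(depth, brightness)  # step 503: cliff areas
    act(cliff_areas)                         # step 504: motion control
```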
In an alternative embodiment, step 504 includes: when detecting that the horizontal distance between the robot body and the cliff area is smaller than or equal to a first preset horizontal distance, controlling the mobile robot to stop moving, to avoid the cliff area, or to move along a motion path following the boundary of the cliff area.
In this embodiment, a horizontal distance between the robot body and the cliff area that is smaller than or equal to the first preset horizontal distance indicates that the mobile robot is very close to the cliff area, and action must be taken to keep it from falling in. Specifically, the mobile robot may be controlled to stop moving, or to move along a motion path that avoids the cliff area; optionally, it is controlled to move along a motion path following the boundary of the cliff area. Setting this buffer distance leaves room to absorb the mobile robot's inertia, reducing the risk of falling into the cliff area through insufficient deceleration.
The first preset horizontal distance is a buffer distance set to absorb the inertial motion of the mobile robot. When the body of the mobile robot is circular, the first preset horizontal distance may be, but is not limited to, any value in the interval [1.2R, 1.5R], where R is the body radius.
In an alternative embodiment, the first preset horizontal distance is the distance from the center of the mobile robot to the center of the cliff area.
While the mobile robot 100 is moving, detecting from images whether a cliff exists in the front area may involve a time delay, so time must be left to react after a cliff is detected in order to keep the mobile robot 100 from falling. In another alternative embodiment, step 504 includes: when detecting that the horizontal distance between the robot body and the cliff area is smaller than or equal to a second preset horizontal distance and larger than the first preset horizontal distance, controlling the mobile robot to decelerate. The mobile robot 100 then has sufficient time to adjust its current speed and can smoothly avoid the cliff area.
In this embodiment, the second preset horizontal distance is another buffer distance set to absorb the inertial motion of the mobile robot; it is greater than the first preset horizontal distance.
When the horizontal distance between the robot body and the cliff area is smaller than or equal to the second preset horizontal distance and larger than the first preset horizontal distance, the robot body is approaching the cliff area, and the mobile robot is controlled to decelerate. When the robot body is circular, the second preset horizontal distance may be, but is not limited to, any value in the interval [1.2R, 1.5R], where R is the body radius. For example, when the second preset horizontal distance is 1.4R, the first preset horizontal distance may be 1.1R.
In an alternative embodiment, the second preset horizontal distance is the distance from the center of the mobile robot to the center of the cliff area.
In the prior art, the buffer distance set for the mobile robot is often small, so the robot easily falls into the cliff area because it cannot decelerate in time. Setting both the second and the first preset horizontal distance leaves room to absorb the mobile robot's inertia, reducing the risk of falling into the cliff area through insufficient deceleration.
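The two buffer distances define three distance bands, as in the minimal sketch below; the radius value and the exact multiples are illustrative assumptions (the text gives [1.2R, 1.5R] as one possible range).

```python
# Sketch of the two-buffer-distance control; R and the multiples are assumed.

R = 0.17                     # assumed body radius, meters
FIRST_PRESET_D = 1.2 * R     # inner buffer: stop / avoid / follow boundary
SECOND_PRESET_D = 1.5 * R    # outer buffer: decelerate (exceeds the first)

def motion_command(horizontal_distance_to_cliff: float) -> str:
    if horizontal_distance_to_cliff <= FIRST_PRESET_D:
        # Very close: stop, avoid the cliff, or follow its boundary.
        return "stop_or_avoid"
    if horizontal_distance_to_cliff <= SECOND_PRESET_D:
        # Within the outer buffer: shed speed so inertia cannot carry the
        # robot past the inner buffer before it can react.
        return "decelerate"
    return "continue"
```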
For ease of understanding, the measurement of the distance from the robot body 110 to the detection area is described below. The mobile robot 100 measures distance with a ranging sensor in the image acquisition unit 120. The ranging sensor may be, but is not limited to, a ToF image sensor, an infrared ranging sensor, an ultrasonic ranging sensor, or a laser ranging sensor.
Referring to fig. 6, in one embodiment, for a ground reference point P0 in the detection area, the ranging sensor measures the distance D1 from the image acquisition unit 120 to P0.
The horizontal distance D2 from the image acquisition unit 120 to P0 is determined from D1 and the angle θ between the ranging sensor and the working surface: D2 = D1 · cos θ.
The vertical distance D3 from the image acquisition unit 120 to P0 is determined likewise: D3 = D1 · sin θ.
In addition, the vertical distance D4 of the image acquisition unit 120 from the working surface may be measured in advance and stored in the memory of the mobile robot.
When D3 is greater than D4, P0 lies below the working surface.
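The geometry above can be worked as a short sketch; the variable names follow the text (D1 to D4 and θ), while the example numbers in the closing comment are assumptions.

```python
# Sketch of the D1-D4 ranging geometry described above.
import math

def locate_reference_point(d1: float, theta_rad: float, d4: float):
    """Resolve a ranging hit P0 into horizontal/vertical distances.

    d1: measured distance from the image acquisition unit to P0.
    theta_rad: angle between the ranging sensor and the working surface.
    d4: pre-measured height of the unit above the working surface.
    """
    d2 = d1 * math.cos(theta_rad)   # horizontal distance to P0
    d3 = d1 * math.sin(theta_rad)   # vertical drop from the unit to P0
    below_surface = d3 > d4         # drop beyond floor level: P0 is below it
    return d2, d3, below_surface

# Example (assumed values): unit 0.08 m high, tilted 30 degrees down, ray of
# 0.20 m. Then d3 = 0.20 * sin(30°) = 0.10 m > 0.08 m, so P0 is below the
# working surface, consistent with a cliff ahead.
```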
Alternatively, the ground reference point P1 in the detection area closest to the image acquisition unit 120 is selected, and the horizontal and vertical distances from the image acquisition unit 120 to P1 are taken as the horizontal and vertical distances from the image acquisition unit 120 to the detection area.
Optionally, the horizontal distance from the image acquisition unit 120 to the central point of the detection area is selected as the horizontal distance from the image acquisition unit 120 to the detection area, and the vertical distance from the image acquisition unit 120 to the central point of the detection area is selected as the vertical distance from the image acquisition unit 120 to the detection area.
It is understood that the distance from another ground reference point in the detection area to the image acquisition unit 120 may also be taken as the distance from the image acquisition unit 120 to the detection area; the application is not limited in this respect. The distance from the image acquisition unit 120 to the cliff area may be obtained in the same way.
Prior-art cliff detection methods cannot accurately acquire the boundary of a cliff. For example, when a cliff is detected with an infrared sensor mounted at the bottom of the mobile robot, only the area directly below the robot is examined; because that area is limited, the sensor can at best establish that a cliff exists there, and the boundary of the cliff area cannot be accurately known, so the mobile robot 100 cannot plan a path in advance. In the detection method of the above embodiments, by contrast, the image acquisition unit 120 acquires an image of the front ground area and judges one or more detection areas within it, so that when a detection area is determined to be a cliff area, its boundary is obtained directly. Having acquired the boundary, the mobile robot 100 can plan a path in advance, selecting one that is both safe and close to the cliff, and thus pass the cliff area smoothly.
In another alternative embodiment, step 504 includes: generating a plurality of motion paths, scoring each motion path according to position information of the motion path relative to the cliff area, and controlling the mobile robot to move along the motion path with the highest score.
Preferably, when the horizontal distance between the robot body and the cliff area is smaller than or equal to the second preset horizontal distance and larger than the first preset horizontal distance, indicating that the robot body is approaching the cliff area, the mobile robot is controlled to decelerate, a plurality of motion paths are generated, each is scored according to its position information relative to the cliff area, and the mobile robot is controlled to move along the highest-scoring path. Thus, while the mobile robot 100 is still a buffer distance away from the cliff area, it can select a safe path close to the cliff and pass the cliff area smoothly. When the mobile robot 100 is a cleaning robot, this keeps the robot from falling off the cliff while reducing missed cleaning.
In this embodiment, the plurality of motion paths may be generated from the motion model and travel speed of the mobile robot, for example with the Dynamic Window Approach (DWA) of the Robot Operating System (ROS). The smaller the minimum horizontal distance from a motion path to the cliff area, the higher the path's score; the larger that distance, the lower the score. A motion path along the boundary of the cliff area therefore scores highest.
Alternatively, the score may be based on the angle between the robot's traveling direction at the position closest to the cliff area and its traveling direction at the current position: the smaller the angle, the higher the score; the larger the angle, the lower the score. Or it may be based on the difference between the robot's speed at the position closest to the cliff area and its speed at the current position: the smaller the difference, the higher the score; the larger the difference, the lower the score. After scoring, the highest-scoring predicted path is taken as the target path. When scoring is based on the minimum horizontal distance from the motion path to the cliff area, selecting the highest-scoring path lets the mobile robot move safely while staying close to the cliff; when the mobile robot 100 is a cleaning robot, it can therefore cover more area and clean the floor more thoroughly.
In another alternative embodiment, the score of each motion path in the plurality of motion paths is inversely proportional to the path's minimum horizontal distance to the cliff area, which provides a fast way to compute path scores.
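The inverse-distance scoring can be sketched as follows; the path representation and the epsilon guard are assumptions, and a fuller scorer would also weigh the heading-angle and speed-difference terms described above.

```python
# Sketch of inverse-distance path scoring; representation is an assumption.
import math
from typing import List, Tuple

Point = Tuple[float, float]

def min_distance_to_cliff(path: List[Point], boundary: List[Point]) -> float:
    # Minimum horizontal distance between any path waypoint and any sampled
    # point on the cliff boundary.
    return min(math.hypot(px - bx, py - by)
               for px, py in path for bx, by in boundary)

def score_path(path: List[Point], boundary: List[Point]) -> float:
    # Score inversely proportional to the minimum horizontal distance, so a
    # path hugging the cliff boundary (without crossing it) scores highest.
    return 1.0 / (min_distance_to_cliff(path, boundary) + 1e-6)

def pick_best_path(paths: List[List[Point]],
                   boundary: List[Point]) -> List[Point]:
    return max(paths, key=lambda p: score_path(p, boundary))
```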
In another alternative embodiment, step 504 includes: when no motion path avoiding the cliff area is obtained, acquiring the historical path from the memory of the mobile robot and controlling the mobile robot to travel along it.
When no motion path avoiding the cliff area can be obtained, the ground ahead is impassable and the direction of motion must change. The historical path is then read from the memory of the mobile robot, the robot is controlled to return along it, and a new motion path is generated, so that motion can continue while the cliff area is avoided.
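Continuing the previous sketch (and reusing its Point, min_distance_to_cliff, and pick_best_path helpers), the fallback might look as follows; the clearance threshold and the memory layout are assumptions.

```python
# Sketch of the history-path fallback, built on the previous sketch's helpers.
from typing import List

SAFE_CLEARANCE = 0.05  # assumed minimum clearance to the cliff boundary, meters

def choose_path(candidates: List[List[Point]],
                boundary: List[Point],
                history_path: List[Point]) -> List[Point]:
    safe = [p for p in candidates
            if min_distance_to_cliff(p, boundary) > SAFE_CLEARANCE]
    if safe:
        return pick_best_path(safe, boundary)
    # Ground ahead is impassable: retrace the stored history path in reverse,
    # after which a new set of candidate paths can be generated.
    return list(reversed(history_path))
```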
It should be noted that, for the details of each step in the above method embodiments, reference may be made to the functional description of the corresponding functional module; they are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing over the prior art, may be embodied wholly or partly as a software product. The software product is stored in a storage medium and includes instructions for causing a device (which may be a personal computer, a server, a network device, a robot, a single-chip microcomputer, a chip, etc.) to execute all or some of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, or a magnetic or optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (19)

1. A mobile robot, comprising: the robot comprises a robot body, an image acquisition unit and a processing unit;
the image acquisition unit is used for acquiring an image of a ground area in front of the robot body;
the processing unit is used for extracting depth image information and brightness information of the acquired image;
the processing unit is further configured to determine a cliff area in the front ground area based on the depth image information and the brightness information;
when the processing unit determines that the vertical distance from a detection area to the image acquisition unit is greater than a preset vertical distance in the front ground area according to the depth image information, determines that the brightness of an image corresponding to the detection area is greater than or equal to a preset brightness value according to the brightness information of the acquired image, and determines that the size of the detection area meets a preset size condition, the processing unit determines that the detection area is a cliff area.
2. The mobile robot of claim 1, further comprising an infrared sensor, wherein when the processing unit determines that a vertical distance from a detection area to the image capturing unit in the front ground area is greater than a preset vertical distance according to the depth image information, determines that a brightness of an image corresponding to the detection area is less than a preset brightness value according to brightness information of the captured image, and determines that an infrared light intensity of the detection area detected by the infrared sensor is less than or equal to a first preset infrared light intensity, wherein a size of the detection area satisfies a predetermined size condition, the processing unit determines that the detection area is a cliff area.
3. The mobile robot according to claim 1 or 2, wherein the processing unit is configured to determine that the size of the detection area satisfies a predetermined size condition when detecting that the detection area can accommodate a projection of a driving wheel on a working surface; or
The processing unit is specifically configured to determine that the size of the detection region meets a predetermined size condition when it is detected that the area of the detection region is larger than a preset area threshold.
4. The mobile robot of claim 2, wherein when the processing unit determines that the brightness of the image corresponding to the detection area is greater than or equal to the preset brightness value according to the brightness information of the acquired image, and the infrared light intensity of the detection area detected by the infrared sensor is less than or equal to a second preset infrared light intensity, the processing unit determines that the detection area is a cliff area, wherein the first preset infrared light intensity is less than the second preset infrared light intensity.
5. A mobile robot, comprising: the robot comprises a robot body, an image acquisition unit and a processing unit;
the image acquisition unit is used for acquiring an image of a ground area in front of the robot body;
the processing unit is used for extracting depth image information and brightness information of the acquired image;
a processing unit further configured to determine a cliff region in the front ground region based on the extracted depth image information and the brightness information;
the processing unit is further used for controlling the motion behavior of the mobile robot according to the position relation between the robot body and the cliff area;
the processing unit is specifically configured to control the mobile robot to stop moving, avoid the cliff, or move along a movement path of a boundary of the cliff area when detecting that the distance between the robot body and the cliff area is smaller than or equal to a first preset horizontal distance.
6. The mobile robot of claim 5,
the processing unit is specifically configured to control the mobile robot to perform deceleration motion when it is detected that the distance between the robot body and the cliff area is smaller than or equal to a second preset horizontal distance and larger than the first preset horizontal distance.
7. The mobile robot of claim 5,
the processing unit is specifically configured to generate a plurality of motion paths, perform scoring based on position information of each of the plurality of motion paths and the cliff area, and control the mobile robot to move along the motion path with the highest score.
8. The mobile robot of claim 7, wherein, for each motion path in the plurality of motion paths, the minimum distance from the motion path to the cliff area is inversely proportional to the score of the motion path.
9. The mobile robot according to any one of claims 5 to 8,
and the processing unit is specifically configured to, when a motion path avoiding the cliff area is not acquired, acquire a historical path from a memory of the mobile robot, and control the mobile robot to travel along the historical path.
10. A cliff detection method, comprising:
collecting an image of a ground area in front of a robot body;
extracting depth image information and brightness information of the acquired image;
determining a cliff area in the front ground area based on the depth image information and the brightness information;
wherein the determining a cliff region in the front ground region based on the depth image information and the brightness information comprises:
and when the vertical distance from the detection area to the image acquisition unit in the front ground area is determined to be greater than the preset vertical distance according to the depth image information, the brightness of the image corresponding to the detection area is determined to be greater than or equal to the preset brightness value according to the brightness information of the acquired image, and the size of the detection area meets the preset size condition, determining that the detection area is the cliff area.
11. The method of claim 10, further comprising:
when it is determined that the vertical distance from a detection area to an image acquisition unit in the front ground area is greater than a preset vertical distance according to the depth image information, the brightness of an image corresponding to the detection area is determined to be less than a preset brightness value according to the brightness information of the acquired image, and the detected infrared light intensity of the detection area is less than or equal to a first preset infrared light intensity, wherein the size of the detection area meets a preset size condition, the detection area is determined to be a cliff area.
12. The method according to claim 10 or 11, wherein when the detection area is detected to be capable of accommodating the projection of the driving wheel on the working surface, the size of the detection area is determined to meet a predetermined size condition; or
And when the area of the detection area is detected to be larger than a preset area threshold value, determining that the size of the detection area meets a preset size condition.
13. The method of claim 11, further comprising:
and when the brightness of the image corresponding to the detection area is determined to be greater than or equal to the preset brightness value according to the brightness information of the acquired image, and the infrared light intensity of the detection area is less than or equal to a second preset infrared light intensity, determining that the detection area is a cliff area, wherein the first preset infrared light intensity is less than the second preset infrared light intensity.
14. A method for controlling a mobile robot, comprising:
collecting an image of a ground area in front of a robot body, and extracting depth image information and brightness information of the collected image;
determining a cliff area in the front ground area based on the extracted depth image information and the brightness information;
controlling the motion behavior of the mobile robot according to the position relation between the robot body and the cliff area;
wherein the controlling the motion behavior of the mobile robot according to the position relationship between the robot body and the cliff area comprises:
and when detecting that the horizontal distance between the robot body and the cliff area is smaller than or equal to a first preset horizontal distance, controlling the mobile robot to stop moving, avoid the cliff area or move along a motion path of the boundary of the cliff area.
15. The method of claim 14, further comprising:
and when the horizontal distance between the robot body and the cliff area is detected to be smaller than or equal to a second preset horizontal distance and larger than the first preset horizontal distance, controlling the mobile robot to perform deceleration motion.
16. The method of claim 14, further comprising:
generating a plurality of motion paths, scoring each motion path according to position information of the motion path and the cliff area, and controlling the mobile robot to move along the motion path with the highest score.
17. The method of claim 16, wherein, for each motion path in the plurality of motion paths, the minimum horizontal distance from the motion path to the cliff area is inversely proportional to the score of the motion path.
18. The method according to any one of claims 14 to 17, further comprising:
and when the motion path avoiding the cliff area is not acquired, acquiring a historical path from a memory of the mobile robot, and controlling the mobile robot to walk along the historical path.
19. A computer readable storage medium comprising program instructions for performing the method of any of claims 10 to 13 and/or 14 to 18 when invoked.
CN202010035730.0A 2020-01-14 2020-01-14 Cliff detection method, mobile robot control method, and mobile robot Active CN110852312B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010035730.0A CN110852312B (en) 2020-01-14 2020-01-14 Cliff detection method, mobile robot control method, and mobile robot


Publications (2)

Publication Number Publication Date
CN110852312A (en) 2020-02-28
CN110852312B CN110852312B (en) 2020-07-17




Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140267626A1 (en) * 2013-03-15 2014-09-18 Intuitive Surgical Operations, Inc. Intelligent manual adjustment of an image control element
WO2016082414A1 (en) * 2014-11-25 2016-06-02 中兴通讯股份有限公司 Brightness compensation method and device
CN108280807A (en) * 2017-01-05 2018-07-13 浙江舜宇智能光学技术有限公司 Monocular depth image collecting device and system and its image processing method
CN107045352A (en) * 2017-05-31 2017-08-15 珠海市微半导体有限公司 Based on how infrared robot obstacle-avoiding device, its control method and Robot side control method
CN108170134A (en) * 2017-11-15 2018-06-15 国电南瑞科技股份有限公司 A kind of robot used for intelligent substation patrol paths planning method
CN108235774A (en) * 2018-01-10 2018-06-29 深圳前海达闼云端智能科技有限公司 Information processing method, device, cloud processing equipment and computer program product
CN108876799A (en) * 2018-06-12 2018-11-23 杭州视氪科技有限公司 A kind of real-time step detection method based on binocular camera
CN110216661A (en) * 2019-04-29 2019-09-10 北京云迹科技有限公司 Fall the method and device of region recognition

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SOOKHYUN YANG ET AL: "Cliff-sensor-based low-level obstacle detection for a wheeled robot in an indoor environment", 2019 International Conference on Consumer Electronics *
CHEN CHANGYU: "Outdoor intelligent cleaning robot based on Raspberry Pi and STM32", Electronic Technology & Software Engineering *
HUANG HE: "Smart car with a robotic arm for line tracking, obstacle avoidance and following", Computer Fan *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111660294A (en) * 2020-05-18 2020-09-15 北京科技大学 Augmented reality control system of hydraulic heavy-duty mechanical arm
CN111631641A (en) * 2020-05-26 2020-09-08 珠海市一微半导体有限公司 Robot anti-falling detection method
CN111631641B (en) * 2020-05-26 2022-04-01 珠海一微半导体股份有限公司 Robot anti-falling detection method
CN111813103B (en) * 2020-06-08 2021-07-16 珊口(深圳)智能科技有限公司 Control method, control system and storage medium for mobile robot
CN111813103A (en) * 2020-06-08 2020-10-23 珊口(深圳)智能科技有限公司 Control method, control system and storage medium for mobile robot
CN111680673A (en) * 2020-08-14 2020-09-18 北京欣奕华科技有限公司 Method, device and equipment for detecting dynamic object in grid map
CN112561874A (en) * 2020-12-11 2021-03-26 杭州海康威视数字技术股份有限公司 Blocking object detection method and device and monitoring camera
CN112596527A (en) * 2020-12-17 2021-04-02 珠海市一微半导体有限公司 Robot jamming detection method based on slope structure, chip and cleaning robot
CN112596527B (en) * 2020-12-17 2023-10-24 珠海一微半导体股份有限公司 Robot clamping detection method based on slope structure, chip and cleaning robot
CN113110426A (en) * 2021-03-29 2021-07-13 深圳市优必选科技股份有限公司 Edge detection method, edge detection device, robot and storage medium
CN113112491B (en) * 2021-04-27 2023-12-19 深圳市优必选科技股份有限公司 Cliff detection method, cliff detection device, robot and storage medium
CN113112491A (en) * 2021-04-27 2021-07-13 深圳市优必选科技股份有限公司 Cliff detection method and device, robot and storage medium
CN113313052A (en) * 2021-06-15 2021-08-27 杭州萤石软件有限公司 Cliff area detection and mobile robot control method and device and mobile robot
CN113313052B (en) * 2021-06-15 2024-05-03 杭州萤石软件有限公司 Cliff area detection and mobile robot control method and device and mobile robot
CN113712473A (en) * 2021-07-28 2021-11-30 深圳甲壳虫智能有限公司 Height calibration method and device and robot
CN113524265A (en) * 2021-08-03 2021-10-22 汤恩智能科技(常熟)有限公司 Robot anti-falling method, robot and readable storage medium
WO2023078318A1 (en) * 2021-11-04 2023-05-11 珠海一微半导体股份有限公司 Laser point-based robot suspension determining method, map update method, and chip

Also Published As

Publication number Publication date
CN110852312B (en) 2020-07-17


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
TR01: Transfer of patent right
Effective date of registration: 20220801
Address after: 201600 555 Guangfulin East Road, Songjiang District, Shanghai
Patentee after: SHANGHAI FLYCO ELECTRICAL APPLIANCE Co.,Ltd.
Address before: 518109 area 401f, building D, gangzhilong Science Park, 6 Qinglong Road, Qinghua community, Longhua street, Longhua District, Shenzhen City, Guangdong Province
Patentee before: SHENZHEN FEIKE ROBOT Co.,Ltd.