CN111093019A - Terrain recognition, traveling and map construction method, equipment and storage medium


Info

Publication number
CN111093019A
Authority
CN
China
Prior art keywords
terrain
laser
autonomous mobile
area
structured light
Legal status
Pending
Application number
CN201911403757.4A
Other languages
Chinese (zh)
Inventor
唐宇存
单俊杰
谢凯旋
Current Assignee
Ecovacs Robotics Suzhou Co Ltd
Original Assignee
Ecovacs Robotics Suzhou Co Ltd
Application filed by Ecovacs Robotics Suzhou Co Ltd filed Critical Ecovacs Robotics Suzhou Co Ltd
Priority to CN201911403757.4A priority Critical patent/CN111093019A/en
Publication of CN111093019A publication Critical patent/CN111093019A/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/56 Provided with illuminating means
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

Embodiments of the present application provide a terrain recognition, traveling and map construction method, a device and a storage medium. In these embodiments, the terrain recognition method is applied to an autonomous mobile device equipped with a structured light module comprising a camera module and line laser emitters distributed on both sides of the camera module. The method comprises: while the autonomous mobile device is moving, acquiring an environment image of the area ahead with the structured light module, the environment image containing a laser line segment; and identifying terrain information, including terrain height, in the area ahead based on the laser line segment in the environment image. The laser line segment is formed when the line laser emitted by a line laser emitter strikes an object. The method improves the obstacle-crossing capability of the autonomous mobile device and reduces the risk of the device becoming trapped.

Description

Terrain recognition, traveling and map construction method, equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a terrain recognition, traveling and map construction method, apparatus, and storage medium.
Background
With the development of artificial intelligence technology, robots are becoming more intelligent. With a certain degree of artificial intelligence, a robot can autonomously navigate to a target area and complete the corresponding task, which has made robots increasingly popular.
At present, robots are generally equipped with sensors such as cameras and lidars that collect information about the surrounding environment and help the robot avoid obstacles during navigation and travel. In practice, however, robots still become trapped, and a new solution is needed to reduce this risk.
Disclosure of Invention
Aspects of the present application provide a terrain recognition, travel and map construction method, device and storage medium, to improve obstacle crossing capability of an autonomous mobile device and reduce a risk of the autonomous mobile device being trapped.
The embodiment of the application provides a terrain identification method, which is suitable for an autonomous mobile device, wherein the autonomous mobile device is provided with a structured light module, and the structured light module comprises a camera module and a line laser transmitter; the method comprises the following steps: in the process of moving the autonomous mobile equipment, acquiring an environment image in a front area by using a structured light module, wherein the environment image comprises a laser line segment; identifying terrain information in a front area based on laser line segments in the environment image, wherein the terrain information comprises terrain height; the laser line segment is formed by the line laser emitted by the line laser emitter after encountering an object.
The embodiment of the application also provides an environment map construction method, which is suitable for the autonomous mobile equipment, wherein the autonomous mobile equipment is provided with a structured light module, and the structured light module comprises a camera module and a line laser transmitter; the method comprises the following steps: in the process of traversing the working area, acquiring an environment image in the working area by using a structured light module, wherein the environment image comprises laser line segments; identifying the terrain position and height information thereof in the operation area based on the laser line segment in the environment image; constructing an environment map of the operation area according to the terrain position and the height information of the terrain position in the operation area; the laser line segment is formed by the line laser emitted by the line laser emitter after encountering an object.
The embodiment of the present application further provides a traveling method, which is applicable to an autonomous mobile device, and the method includes: determining that travel to a first area is required; planning a navigation path to a first area according to the obstacle crossing height of the autonomous mobile equipment and by combining the terrain position and the height information thereof recorded in the environment map; travel to the first area along a navigation path to the first area.
The embodiment of the application also provides a traveling method, which is suitable for the autonomous mobile equipment, wherein the autonomous mobile equipment is provided with a structured light module, and the structured light module comprises a camera module and a line laser transmitter; the method comprises the following steps: in the advancing process, a structured light module is used for collecting an environment image in a front area, wherein the environment image comprises a laser line segment; identifying topographic information in a front area based on the laser line segments in the environment image, wherein the topographic information comprises a topographic height; continuing to travel based on the terrain information within the forward area; the laser line segment is formed by the line laser emitted by the line laser emitter after encountering an object.
An embodiment of the present application further provides an autonomous mobile device, including: the device comprises a device body, wherein one or more memories, one or more processors and a structured light module are arranged on the device body; the structured light module includes: the camera module and the line laser transmitter; one or more memories for storing computer programs; one or more processors to execute a computer program to: in the process of moving the autonomous mobile equipment, acquiring an environment image in a front area by using a structured light module, wherein the environment image comprises a laser line segment; identifying terrain information in a front area based on laser line segments in the environment image, wherein the terrain information comprises terrain height; the laser line segment is formed by the line laser emitted by the line laser emitter after encountering an object.
An embodiment of the present application further provides an autonomous mobile device, including: the device comprises a device body, wherein one or more memories, one or more processors and a structured light module are arranged on the device body; the structured light module includes: the camera module and the line laser transmitter; one or more memories for storing computer programs; one or more processors to execute a computer program to: in the process of traversing the operation area, acquiring an environment image in the operation area, wherein the environment image comprises laser line segments; identifying the terrain position and height information thereof in the operation area based on the laser line segment in the environment image; constructing an environment map of the operation area according to the terrain position and the height information of the terrain position in the operation area; the laser line segment is formed by the line laser emitted by the line laser emitter after encountering an object.
An embodiment of the present application further provides an autonomous mobile device, including: the device comprises a device body, wherein one or more memories, one or more processors and a structured light module are arranged on the device body; the structured light module includes: the camera module and the line laser transmitter; one or more memories for storing computer programs; one or more processors to execute a computer program to: determining that travel to a first area is required; planning a navigation path to a first area according to the obstacle crossing height of the autonomous mobile equipment and by combining the terrain position and the height information thereof recorded in the environment map; travel to the first area along a navigation path to the first area.
An embodiment of the present application further provides an autonomous mobile device, including: the device comprises a device body, wherein one or more memories, one or more processors and a structured light module are arranged on the device body; the structured light module includes: the camera module and the line laser transmitter; one or more memories for storing computer programs; one or more processors to execute a computer program to: in the advancing process, a structured light module is used for collecting an environment image in a front area, wherein the environment image comprises a laser line segment; identifying topographic information in a front area based on the laser line segments in the environment image; performing travel control based on topographic information in the forward area; the laser line segment is formed by the line laser emitted by the line laser emitter after encountering an object.
Embodiments of the present application also provide a computer-readable storage medium storing computer instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the terrain recognition method, the environment map construction method, or the traveling methods provided by the embodiments of the present application.
In the embodiments of the present application, the autonomous mobile device is equipped with a structured light module that includes a camera module and a line laser emitter. With the structured light module, an environment image containing a laser line segment is collected for the area ahead, and the terrain information in front of the autonomous mobile device, such as terrain position, height and/or contour, can be identified more accurately from the laser line segment in the environment image. Furthermore, a more accurate environment map can be constructed from the high-precision terrain recognition result, which improves the accuracy of the navigation path planned for the autonomous mobile device based on that map, allows the device to cross obstacles successfully, and reduces the risk of the device becoming trapped; alternatively, guiding the autonomous mobile device to continue traveling based on the high-precision terrain recognition result likewise improves its obstacle-crossing capability and reduces the risk of it becoming trapped.
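Purely for illustration, the following sketch shows one way the path-planning idea summarized above could be realized in code: cells of the environment map whose recorded terrain height exceeds the obstacle-crossing height of the autonomous mobile device are treated as impassable before a path to the first area is planned. The grid representation, function names and planner interface are assumptions made for the example, not details of the present application.

```python
def traversable_cells(height_map, obstacle_crossing_height):
    """Mark each map cell as passable if the terrain height recorded there
    does not exceed the device's obstacle-crossing height."""
    return {cell: (height <= obstacle_crossing_height)
            for cell, height in height_map.items()}


def plan_path_to_first_area(start, goal, height_map, obstacle_crossing_height, planner):
    """Plan a navigation path to the first area over passable cells only.

    'planner' stands for any grid path planner (for example an A* search)
    that accepts a start cell, a goal cell and a passability predicate.
    """
    passable = traversable_cells(height_map, obstacle_crossing_height)
    return planner(start, goal, lambda cell: passable.get(cell, False))
```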
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1a is a schematic diagram illustrating a structure of a structured light module according to an exemplary embodiment of the present disclosure;
FIG. 1b is a schematic diagram illustrating an installation state and an application state of the structured light module on the autonomous mobile device according to the embodiment shown in FIG. 1 a;
FIG. 2a is a schematic diagram of another structured light module provided in an exemplary implementation of the present application;
FIG. 2b is a schematic diagram illustrating a structure of another structured light module provided in an exemplary embodiment of the present application;
FIG. 2c is a schematic structural diagram of another structured light module according to an exemplary embodiment of the present disclosure;
FIGS. 3a-3e are front, bottom, top, rear, and exploded views, respectively, of the structured light module provided in the embodiment of FIG. 2b;
FIG. 3f is a schematic diagram illustrating another structure of the structured light module according to the embodiment shown in FIG. 2 b;
FIG. 4a is a flowchart of a terrain recognition method provided in an exemplary embodiment of the present application;
FIG. 4b is a flow chart of one embodiment of step 41 in the examples of the present application;
FIG. 4c is a flow chart of another method of terrain recognition provided in an exemplary embodiment of the present application;
FIG. 4d is a flowchart of yet another terrain identification method provided in an exemplary embodiment of the present application;
FIG. 5a is a schematic diagram of a home type and a state in which a home service robot travels according to an exemplary embodiment of the present application;
fig. 5b is a schematic top view of the sweeping robot scanning the sliding door track using the structured light module according to the exemplary embodiment of the present disclosure;
FIG. 5c is a schematic high-level view of a sliding door track according to an exemplary embodiment of the present application;
FIG. 6 is a flowchart of an environment mapping method provided by an exemplary embodiment of the present application;
FIG. 7a is a flow chart of a method of travel provided by an exemplary embodiment of the present application;
FIG. 7b is a flow chart of another method of travel provided by an exemplary embodiment of the present application;
FIG. 8 is a flow chart of yet another method of travel provided by an exemplary embodiment of the present application;
fig. 9 is a schematic structural diagram of an autonomous mobile device according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Existing autonomous mobile devices are generally equipped with sensors that collect information about the surrounding environment and help the device avoid obstacles during navigation and travel; in practice, however, such devices still become trapped. In some embodiments of the present application, the autonomous mobile device uses a structured light module comprising a camera module and a line laser emitter. The structured light module collects environment images of the area ahead of the autonomous mobile device and can capture terrain information with high precision. Based on this information, the device can construct a more accurate environment map and navigate on it, or be guided directly while traveling, which improves its obstacle-crossing capability and reduces the risk of it becoming trapped.
It is noted that the various methods provided by the embodiments of the present application may be implemented by an autonomous mobile device. In the embodiments of the present application, the autonomous mobile device may be any mechanical device capable of moving through its environment with a high degree of autonomy, for example a robot, a cleaner, an unmanned vehicle, or the like. The robot may include a sweeping robot, an accompanying robot, a guiding robot, or the like. This explanation of "autonomous mobile device" applies to all embodiments of the present application and will not be repeated in the following embodiments.
Before describing the various methods provided in the embodiments of the present application in detail, the structured light module that can be used by an autonomous mobile device is described. In the embodiments of the present application, the autonomous mobile device is equipped with a structured light module, which here refers to any module that contains a line laser emitter and a camera module. In the structured light module, the line laser emitter emits line laser outward, and the camera module collects the environment images detected by the line laser. The line laser emitted by the line laser emitter lies within the field of view of the camera module and helps detect information such as the contour, height and/or width of objects within that field of view; the camera module acquires the environment images within its field angle.
The field angle of the camera module includes a vertical field angle and a horizontal field angle. In this embodiment, the field angle of the camera module is not limited, and a camera module with a suitable field angle may be selected according to the application requirements. The line laser emitted by the line laser emitter lies within the field of view of the camera module, and the angle between the laser line segment formed on an object surface and the horizontal plane is not limited; for example, the line laser may be parallel or perpendicular to the horizontal plane, or at any other angle, as determined by the application requirements.
In the embodiments of the present application, the implementation form of the line laser emitter is not limited, and it may be any device or product form capable of emitting line laser. For example, the line laser emitter may be, but is not limited to, a laser tube. Similarly, the implementation form of the camera module is not limited; any visual device capable of acquiring environment images is suitable for the embodiments of the present application. For example, the camera module may include, but is not limited to, a monocular camera, a binocular camera, and the like.
In the embodiments of the present application, the wavelength of the line laser emitted by the line laser emitter is not limited, and the line laser may have different colors, for example red laser or violet laser. Correspondingly, the camera module should be one capable of capturing the line laser emitted by the line laser emitter; for example, it may be an infrared camera, an ultraviolet camera, a starlight camera or a high-definition camera adapted to the wavelength of the emitted line laser.
In the embodiment of the present application, the number of the line laser emitters is not limited, and may be, for example, one, or two or more. Similarly, the number of the camera modules is not limited, and may be, for example, one, or two or more. Of course, in the embodiment of the present application, the installation position, the installation angle, and the like of the line laser emitter, and the installation position relationship between the line laser emitter and the camera module, and the like, are not limited.
The following briefly describes the structures and working principles of several structured light modules that may be used in the embodiments of the present application with reference to fig. 1a to 3 f. It should be understood by those skilled in the art that the following list of structured light modules is merely illustrative and that the structured light modules that can be used in the embodiments of the present application are not limited to these examples.
As shown in fig. 1a, a structured light module 100 mainly includes a line laser emitter 101 and a camera module 102. Optionally, the line laser emitter 101 may be installed above, below, to the left of or to the right of the camera module 102, as long as the line laser it emits is located within the field of view of the camera module 102. In fig. 1a, the line laser emitter 101 is shown, by way of example, mounted above the camera module 102. As shown in fig. 1b, in the structured light module 100, the laser line segment formed where the laser plane emitted by the line laser emitter 101 strikes an obstacle or the ground ahead is parallel to the ground and perpendicular to the traveling direction of the autonomous mobile device. This type of mounting may be referred to as horizontal mounting. Fig. 1b is a schematic diagram illustrating the installation state and an application state of the structured light module 100 on the autonomous mobile device.
As shown in fig. 1b, while the autonomous mobile device moves forward, the structured light module 100 may be controlled to operate in a certain manner, for example periodically (every 20 ms), to perform an environment detection and obtain a set of image data. Each frame of image data contains a laser line segment formed where the line laser strikes an object surface or the ground; each laser line segment yields a set of three-dimensional data, and the three-dimensional data on the laser line segments of a large number of environment images can form three-dimensional point cloud data.
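As a minimal sketch of how the periodic detections described above could be accumulated into three-dimensional point cloud data, the following code assumes a capture interface on the structured light module and a calibration object providing the structured-light triangulation; these names and the 20 ms period are illustrative assumptions, not details of the present application.

```python
import time


def triangulate_segment(pixels, calib):
    """Convert the 2D pixels of one laser line segment into 3D points.

    Placeholder for the structured-light triangulation that uses the known
    baseline between the line laser emitter and the camera module; the
    details depend on the module calibration 'calib'.
    """
    return [calib.pixel_to_3d(u, v) for (u, v) in pixels]


def collect_point_cloud(module, calib, period_s=0.02, n_frames=100):
    """Trigger the structured light module periodically and merge the 3D
    points recovered from each frame's laser line segment into one point
    cloud (a list of (x, y, z) tuples)."""
    cloud = []
    for _ in range(n_frames):
        frame = module.capture()                  # one exposure with the line laser on
        segment_pixels = frame.laser_line_pixels  # pixels belonging to the laser line segment
        cloud.extend(triangulate_segment(segment_pixels, calib))
        time.sleep(period_s)                      # e.g. one environment detection every 20 ms
    return cloud
```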
Further optionally, as shown in fig. 1a, the structured light module 100 may further include a main control unit 103, and the main control unit 103 may control the line laser transmitter 101 and the camera module 102 to operate. Optionally, the main control unit 103 controls exposure of the camera module 102 on one hand, and on the other hand, can control the line laser emitter 101 to emit line laser to the outside during exposure of the camera module 102, so that the camera module 102 collects an environment image detected by the line laser. In fig. 1a, the master control unit 103 is represented by a dashed box, which illustrates that the master control unit 103 is an optional unit.
As shown in fig. 2a, another structured light module 200a mainly includes: the camera module 201a and the line laser transmitters 202a distributed on two sides of the camera module 201 a. The structured light module 200a provided by the embodiment can be applied to an autonomous mobile device, the autonomous mobile device comprises a main controller, the main controller is respectively electrically connected with the camera module 201a and the line laser emitter 202a, and the camera module 201a and the line laser emitter 202a can be controlled to work.
Optionally, the main controller controls exposure of the camera module 201a on the one hand, and controls the line laser emitters 202a to emit line laser during the exposure of the camera module 201a on the other hand, so that the camera module 201a collects environment images detected by the line laser. The main controller may control the line laser emitters 202a located on the two sides of the camera module 201a to work simultaneously or alternately, which is not limited herein.
As shown in fig. 2b, another structured light module 200b mainly includes: a camera module 201b, line laser emitters 202b distributed on both sides of the camera module 201b, and a main control unit 203b. The main control unit 203b is electrically connected to the camera module 201b and the line laser emitters 202b and can control them to work. The line laser emitters 202b emit line laser outward under the control of the main control unit 203b; the camera module 201b collects the environment images detected by the line laser under the control of the main control unit 203b.
Optionally, the main control unit 203b performs exposure control on the camera module 201b on the one hand, and controls the line laser emitters 202b to emit line laser during the exposure of the camera module 201b on the other hand, so that the camera module 201b collects environment images detected by the line laser. The main control unit 203b may control the line laser emitters 202b located on the two sides of the camera module 201b to work simultaneously or alternately, which is not limited herein. When the structured light module 200b is applied to an autonomous mobile device, the main control unit 203b is further configured to provide the environment images to the autonomous mobile device, in particular to its main controller.
As shown in fig. 2c, another structured light module 200c mainly includes: the camera module 201c, the line laser transmitters 202c distributed on two sides of the camera module 201c, and the first control unit 203c and the second control unit 204 c. The first control unit 203c is electrically connected with the line laser emitter 202c, the second control unit 204c and the camera module 201c respectively; the camera module 201c is also electrically connected to the second control unit 204 c.
The second control unit 204c performs exposure control on the camera module 201c, and the synchronization signal generated by each exposure of the camera module 201c is output to the first control unit 203c. The first control unit 203c controls the line laser emitters 202c to work alternately according to the synchronization signal and provides a laser source distinguishing signal to the second control unit 204c; the second control unit 204c marks each environment image acquired by an exposure of the camera module 201c as a left or right image according to the laser source distinguishing signal. When the structured light module 200c is applied to an autonomous mobile device, the second control unit 204c is further configured to provide the marked environment images to the autonomous mobile device, in particular to its main controller.
In the structured light module shown in fig. 2a to 2c, the total number of line laser emitters is not limited, and may be two or more, for example. The number of the line laser emitters distributed on each side of the camera module is not limited, and the number of the line laser emitters on each side of the camera module can be one or more; in addition, the number of the line laser emitters on the two sides can be the same or different. In fig. 2a to 2c, the camera module is illustrated by providing one line laser emitter on each side of the camera module, but the invention is not limited thereto. For another example, 2, 3 or 5 line laser emitters are arranged on the left side and the right side of the camera module.
In the structured light modules shown in fig. 2a to 2c, the distribution of the line laser emitters on the two sides of the camera module is not limited; it may, for example, be uniform or non-uniform, symmetrical or asymmetrical. Uniform or non-uniform distribution may refer to the distribution among the line laser emitters on the same side of the camera module, or it may be understood as the distribution of the line laser emitters on both sides of the camera module taken as a whole. Symmetrical or asymmetrical distribution refers to whether the line laser emitters on the two sides of the camera module are, taken as a whole, arranged symmetrically; symmetry here covers both the number of emitters and their mounting positions. For example, in the structured light modules shown in fig. 2a to 2c, there are two line laser emitters, symmetrically distributed on the two sides of the camera module.
In the structured light modules shown in fig. 2a to 2c, the installation position relationship between the line laser emitters and the camera module is not limited; any installation position relationship in which the line laser emitters are distributed on the two sides of the camera module is applicable to the embodiments of the present application. The installation position relationship between the line laser emitters and the camera module is related to the application scenario of the structured light module and can be determined flexibly according to that scenario. The installation position relationship covers the following aspects:
Installation height: the line laser emitters and the camera module may be located at different heights. For example, the line laser emitters on both sides may be higher than the camera module, or the camera module may be higher than the line laser emitters on both sides; or the line laser emitter on one side may be higher than the camera module and the one on the other side lower. Of course, the line laser emitters and the camera module may also be located at the same height, which is preferred. For example, in actual use the structured light module may be mounted on a device (e.g., an autonomous mobile device such as a robot, a cleaner or an unmanned vehicle), in which case the line laser emitters and the camera module are at the same distance from the working surface (e.g., the floor) on which the device stands, e.g., 47 mm, 50 mm, 10 cm, 30 cm or 50 cm.
Installation distance: the installation distance is the mechanical distance (also referred to as the baseline distance) between the line laser emitter and the camera module, and it can be set flexibly according to the application requirements of the structured light module. The size of the measurement blind zone is determined, to a certain extent, by the mechanical distance between the line laser emitter and the camera module, the detection distance that the device (e.g., a robot) carrying the structured light module must meet, the diameter of the device, and similar factors. For a given device the diameter is fixed, while the measurement range and the mechanical distance between the line laser emitter and the camera module can be set flexibly as required, which means that neither the mechanical distance nor the blind zone range is a fixed value. On the premise of guaranteeing the measurement range (or performance) of the device, the blind zone should be kept as small as possible; a larger mechanical distance between the line laser emitter and the camera module gives a larger controllable distance range, which helps to better control the size of the blind zone.
In some application scenarios, the structured light module is applied to a sweeping robot, and may be mounted on a striking plate or a robot body of the sweeping robot, for example. For the sweeping robot, a reasonable mechanical distance range between the line laser emitter and the camera module is given as an example. For example, the mechanical distance between the line laser transmitter and the camera module may be greater than 20 mm. Further optionally, the mechanical distance between the line laser transmitter and the camera module is greater than 30 mm. Furthermore, the mechanical distance between the line laser transmitter and the camera module is larger than 41 mm. It should be noted that the mechanical distance range given here is not only applicable to the application of the structured light module to the sweeping robot, but also applicable to the application of the structured light module to other devices with the specification and size closer to or similar to that of the sweeping robot.
Emission angle: the emission angle refers to an included angle between a central line of the line laser emitted by the line laser emitter and an installation base line of the line laser emitter after the line laser emitter is installed. The installation baseline refers to a straight line where the line laser module and the camera module are located under the condition that the line laser module and the camera module are located at the same installation height. In the present embodiment, the emission angle of the line laser transmitter is not limited. The emission angle is related to the detection distance required by the equipment (such as a robot) where the structured light module is located, the radius of the equipment, and the mechanical distance between the line laser emitter and the camera module. Under the condition that the detection distance required to be met by equipment (such as a robot) where the structured light module is located, the radius of the equipment and the mechanical distance between the line laser transmitter and the camera module are determined, the transmitting angle of the line laser transmitter can be directly obtained through a trigonometric function relation, namely the transmitting angle is a fixed value.
Of course, if a specific emitting angle is required, the emitting angle can be adjusted by adjusting the detecting distance required to be satisfied by the device (such as a robot) where the structured light module is located and the mechanical distance between the line laser emitter and the camera module. In some application scenarios, in the case that the detection distance and the radius of the device (e.g. robot) where the structured light module is located need to satisfy are determined, the emission angle of the line laser emitter can be changed within a certain angle range by adjusting the mechanical distance between the line laser emitter and the camera module, for example, can be 50-60 degrees, but is not limited thereto.
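Purely as an illustration of the trigonometric relation mentioned above, the sketch below assumes one possible geometry: the center line of the line laser crosses the optical axis of the camera module at a given distance in front of the installation baseline, so the emission angle is the arctangent of that crossing distance over the mechanical (baseline) distance. This geometry is an assumption made only for the example; the present application does not specify the exact relation.

```python
import math


def emission_angle_deg(crossing_distance_mm, baseline_mm):
    """Angle between the laser center line and the installation baseline,
    assuming the laser center line crosses the camera's optical axis at
    crossing_distance_mm in front of the baseline:
        tan(angle) = crossing_distance / baseline
    """
    return math.degrees(math.atan2(crossing_distance_mm, baseline_mm))


# Under this assumed geometry, a 41 mm baseline with a crossing point about
# 59 mm ahead gives roughly 55 degrees, inside the 50-60 degree range above.
print(round(emission_angle_deg(59, 41), 1))  # ~55.2
```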
In order to facilitate use, the structured light module provided by the embodiments of the present application further includes, in addition to the camera module and the line laser emitters, bearing structures for carrying them. The bearing structure may be implemented in various ways, which are not limited here. In some optional embodiments, the bearing structure includes a fixing base and may further include a fixing cover used together with the fixing base. Taking the structured light module 200b shown in fig. 2b as an example, the structure of a structured light module with a fixing base and a fixing cover is described with reference to figs. 3a-3e, which are a front view, a bottom view, a top view, a rear view and an exploded view of the structured light module 200b; because of the viewing angles, each view does not show all the components, so only some of the components are labeled in figs. 3a-3e. As shown in figs. 3a-3e, the structured light module 200b further includes a fixing base 204b, on which the camera module and the line laser emitters are assembled.
Further optionally, as shown in fig. 3e, the fixing seat 204b includes: a main body 205b and end portions 206b located on both sides of the main body 205 b; wherein the camera module is assembled on the main body portion 205b, and the line laser transmitter is assembled on the end portion 206 b; the end face of the end portion 206b faces the reference surface, so that the center line of the line laser emitter and the center line of the camera module intersect at one point; the reference plane is a plane perpendicular to the end surface or the tangent to the end surface of the main body portion 205 b.
In an alternative embodiment, in order to facilitate fixing and reduce the influence of the device on the appearance of the structural optical module, as shown in fig. 3e, a groove 208b is formed in the middle of the main body 205b, and the camera module is installed in the groove 208 b; the end portion 206b is provided with a mounting hole 209b, and the line laser transmitter is mounted in the mounting hole 209 b. Further optionally, as shown in fig. 3e, the structured light module 200b is further equipped with a fixing cover 207b above the fixing base 204 b; a cavity is formed between the fixing cover 207b and the fixing base 204b to accommodate a connecting line of the camera module and the line laser transmitter. The fixing cover 207b and the fixing base 204b can be fixed by a fixing member. In fig. 3e, the fixing member is illustrated by taking the screw 210b as an example, but the fixing member is not limited to the screw implementation.
In an optional embodiment, the lens of the camera module is located inside the outer edge of the groove 208b, i.e. the lens is retracted inside the groove 208b, so that the lens can be prevented from being scratched or knocked, and the protection of the lens is facilitated.
In the embodiments of the present application, the shape of the end surface of the main body 205b is not limited; it may, for example, be a plane, or a curved surface recessed inward or bulging outward. The shape of the end surface of the main body portion 205b varies with the device on which the structured light module is installed. For example, if the structured light module is applied to an autonomous mobile device whose outer contour is circular or elliptical, the end surface of the main body portion 205b may be implemented as an inwardly recessed curved surface adapted to that contour. If the structured light module is applied to an autonomous mobile device with a square or rectangular outer contour, the end surface of the main body portion 205b may be implemented as a plane adapted to that contour. An autonomous mobile device with a circular or elliptical outer contour may be, for example, a sweeping robot or a window-cleaning robot with such a contour; likewise, an autonomous mobile device with a square or rectangular outer contour may be a sweeping robot or a window-cleaning robot with such a contour.
In an alternative embodiment, for an autonomous mobile device with a circular or elliptical outline, the structured light module is mounted on the autonomous mobile device, and in order to match the appearance of the autonomous mobile device more and maximize the utilization of the space of the autonomous mobile device, the radius of the curved surface of the main body 205b is the same as or approximately the same as the radius of the autonomous mobile device. For example, if the outline of the autonomous moving apparatus is circular and the radius range is 170mm, when the structured light module is applied to the autonomous moving apparatus, the radius of the curved surface of the main body portion may be 170mm or approximately 170mm, for example, may be in the range of 170mm to 172mm, but is not limited thereto.
Further, when the structured light module is applied to an autonomous mobile device with a circular or elliptical outer contour, the emission angle of the line laser emitters in the structured light module is mainly determined by the detection distance required by the autonomous mobile device, the radius of the device, and similar factors. In this scenario, the end surface (or the tangent of the end surface) of the main body portion of the structured light module is parallel to the installation baseline, so the emission angle of a line laser emitter can also be defined as the angle between the center line of the emitted line laser and the end surface, or the tangent of the end surface, of the main body portion. In some application scenarios, with the detection distance and radius of the autonomous mobile device determined, the emission angle of the line laser emitter may be implemented in the range of 50-60 degrees, but is not limited thereto.
The structured light module provided by the above embodiments of the present application is structurally stable, small in size and conforms to the overall appearance of the machine, which greatly saves space and allows it to support many types of autonomous mobile devices.
Further, the structured light modules shown in fig. 1a and fig. 2a to 2c may each further include a laser driving circuit. The laser driving circuit is electrically connected to the line laser emitter and is mainly used to amplify the control signal sent to the line laser emitter. In the structured light modules shown in fig. 2a to 2c, the number of laser driving circuits is not limited: several line laser emitters may share one laser driving circuit, or each line laser emitter may correspond to its own laser driving circuit, the latter being preferred. Fig. 3f illustrates this with the structured light module 200b, in which each line laser emitter 202b corresponds to one laser driving circuit 211b. The laser driving circuit 211b is mainly used to amplify the control signal sent by the main control unit 203b to the line laser emitter 202b and to provide the amplified signal to the line laser emitter 202b so as to control it. In the embodiments of the present application, the circuit structure of the laser driving circuit 211b is not limited; any circuit structure capable of amplifying a signal and providing the amplified signal to the line laser emitter 202b is suitable for the embodiments of the present application.
The structures of the several structured light modules described above are applicable to the following method embodiments of the present application, which are described in detail below with reference to fig. 4a to 9. Before that, the "terrain information" referred to in the embodiments of the present application is explained. In the embodiments of the present application, terrain information describes the various forms of the surface in the work area or the area ahead, and specifically the various undulating states commonly presented by fixed objects distributed on that surface. The terrain information in this embodiment includes, but is not limited to: the terrain position, terrain height, terrain width and/or terrain contour within the area ahead. For example, in a home environment, the terrain information mainly refers to the height, width and/or contour of floor features such as push-pull door rails, sliding door tracks, doorsills and steps.
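For illustration only, terrain information of this kind could be represented by a simple data structure such as the one below; the field names are assumptions chosen to mirror the list above and are not part of the present application.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class TerrainInfo:
    """Terrain detected in the work area or the area ahead,
    e.g. a sliding door track, doorsill or step."""
    position: Tuple[float, float]          # terrain position on the map (x, y)
    height: float                          # terrain height above the work surface
    width: Optional[float] = None          # terrain width, if recovered
    contour: Optional[List[Tuple[float, float, float]]] = None  # sampled contour points
```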
Fig. 4a is a flowchart of a terrain identification method according to an exemplary embodiment of the present application. The method is suitable for the autonomous mobile equipment, the autonomous mobile equipment is provided with a structured light module, and the structured light module comprises a camera module and a line laser transmitter. For the detailed structure and the operation principle of the structured light module, refer to the foregoing embodiments. As shown in fig. 4a, the method comprises the steps of:
40. During the movement of the autonomous mobile device, acquire an environment image of the area ahead using the structured light module, the environment image containing a laser line segment. The laser line segment is formed when the line laser emitted by the line laser emitter encounters an object.
41. Identify the terrain information in the area ahead based on the laser line segment in the environment image.
In this embodiment, the autonomous mobile device is equipped with a structured light module that includes a camera module and a line laser emitter. While the autonomous mobile device is moving, on the one hand the line laser emitter in the structured light module is controlled to emit line laser outward, and the line laser is reflected back when it encounters an object in the area ahead; on the other hand, the camera module in the structured light module is controlled to collect environment images of the area ahead. During this period, if the line laser detects an object in the area ahead, a laser line segment is formed on the object surface and captured by the camera module; in other words, the environment image collected by the camera module contains the laser line segment formed when the line laser emitted by the line laser emitter meets the object.
The front area refers to a range that can be recognized by the autonomous mobile device along the traveling direction during the operation of the autonomous mobile device, the environmental image of the front area changes along with the traveling of the autonomous mobile device, and the image information of the front area of the autonomous mobile device is different in different operation areas.
Further, the terrain information in the area ahead can be identified from the laser line segments in the collected environment images. A laser line segment contains a number of pixels, and each pixel corresponds to a terrain position in the area ahead. The pixels on the laser line segments of a large number of environment images can form point cloud data corresponding to the terrain, and the terrain information in the area ahead is obtained from this point cloud data. In this embodiment, the terrain information describes the various undulating states commonly presented by fixed objects distributed above the ground in the area ahead, including but not limited to the terrain position, terrain height, terrain width and/or terrain contour within that area. Since the detection precision of the line laser is very high, small changes of the terrain, such as small differences in height, width or contour, can be detected, so the terrain information identified in this embodiment has high precision. Furthermore, when the autonomous mobile device builds a map or travels based on this terrain information, its obstacle-crossing capability is improved and the risk of it becoming trapped while traveling is reduced.
The installation angle and the installation direction of the line laser transmitter on the autonomous mobile equipment influence the transmission angle of the line laser transmitter, the transmission angles are different, and the positions and the directions of laser line segments formed after the transmitted line laser meets an object in an environment image are different. Based on the above, after the environment image is collected by the structured light module, the autonomous mobile device can combine the installation position, direction and other information of the line laser emitter on the autonomous mobile device in the process of identifying the terrain information in the front area according to the information of the laser line segment in the environment image. The information such as the installation position and the direction of the line laser transmitter on the autonomous mobile equipment can be reflected in the conversion relation between the sensor coordinate system where the structured light module is located and the world coordinate system.
Based on the above analysis, in some embodiments, as shown in fig. 4b, identifying topographical information within the forward region based on the laser line segments in the environmental image includes: 410. calculating the position and the length of the laser line segment in the environment image based on an image recognition technology; 411. and according to the position and the length of the laser line segment in the environment image, and by combining the conversion relation between the coordinate system of the camera module and the world coordinate system, calculating the terrain information such as the terrain position, the terrain height and/or the terrain contour corresponding to the laser line segment.
The laser line segment comprises a plurality of pixel points, and the position of the laser line segment in the environment image actually refers to the positions of the pixel points on the laser line segment in the environment image, and is generally represented by pixel coordinates. Based on image processing and other technologies, the pixel coordinates of each pixel point on the laser line segment in the environment image can be calculated. The pixel coordinates of the pixels are calculated under a sensor coordinate system of the camera module, and can be converted into a world coordinate system by combining the conversion relationship between the sensor coordinate system of the camera module and an equipment coordinate system of the autonomous mobile equipment and the conversion relationship between the equipment coordinate system and the world coordinate system, and the position coordinates under the world coordinate system are the terrain positions detected by the line laser. The conversion relationship between the sensor coordinate system and the equipment coordinate system can be determined to a certain extent by the installation position relationship between the line laser emitter and the camera module in the structured light module, the structural parameters of the structured light module installed on the autonomous mobile equipment and the like.
In addition, according to the position of each pixel point on the laser line segment in the environment image, the height and other information of the pixel point from the working surface (such as the ground, the desktop or the glass surface) where the autonomous mobile device is located can be calculated, and the height information of the pixel point is the height information in the environment image. In the process of converting the coordinate system, the height information of the pixel points is converted into the world coordinate system, so that the terrain height corresponding to the corresponding terrain position is formed. Further alternatively, after obtaining the terrain position and the terrain height corresponding to each terrain position, other terrain information such as a terrain profile, a terrain width, and the like can be obtained.
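The coordinate chain described in steps 410 and 411 can be sketched as follows. The 3x3 intrinsic matrix, the 4x4 homogeneous transforms and the helper names are illustrative assumptions; the actual calibration and conversion relations depend on how the structured light module is installed on the autonomous mobile device.

```python
import numpy as np


def pixel_to_world(u, v, depth, K, T_device_from_cam, T_world_from_device):
    """Map one laser-line pixel to a terrain point in the world coordinate system.

    u, v                : pixel coordinates of a point on the laser line segment
    depth               : range of that point recovered by structured-light triangulation
    K                   : 3x3 intrinsic matrix of the camera module
    T_device_from_cam   : 4x4 transform, sensor (camera) coordinate system -> device coordinate system
    T_world_from_device : 4x4 transform, device coordinate system -> world coordinate system
    """
    # Back-project the pixel into the sensor coordinate system of the camera module.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    p_cam = np.append(depth * ray / ray[2], 1.0)           # homogeneous point in the camera frame
    # Sensor coordinate system -> device coordinate system -> world coordinate system.
    p_world = T_world_from_device @ T_device_from_cam @ p_cam
    x, y, z = p_world[:3]
    return (x, y), z   # terrain position in the world frame and terrain height above the work surface
```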
Further optionally, the distance from the autonomous mobile device to the terrain detected by the line laser can also be calculated from the time of flight of the line laser.
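For completeness, the time-of-flight relation mentioned above reduces to half the round-trip path of the laser; the sketch below only illustrates that arithmetic and assumes the module can time-stamp emission and reception, which the present application does not detail.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0


def distance_from_time_of_flight(round_trip_time_s):
    """Distance from the autonomous mobile device to the detected terrain:
    half of the round-trip path travelled by the line laser."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0
```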
In the embodiments of the present application, the operation of the line laser transmitter in the structured light module is not limited. In some embodiments, during the traveling process of the autonomous mobile device, the line laser emitters on two sides of the camera module may be controlled to operate simultaneously, that is, during the exposure of the camera module, the line laser emitters on two sides of the camera module emit line laser outwards at the same time, and at this time, the camera module operates in a full-width mode, that is, an environmental image captured by the camera module includes both environmental information in a front left area of the autonomous mobile device and environmental information in a front right area of the autonomous mobile device. Further optionally, in the simultaneous operation mode, the line laser emission directions of the line laser emitters located at both sides of the camera module intersect within the field of view of the camera module. Furthermore, the line laser emission directions of the line laser emitters positioned on the two sides of the camera module are intersected with the central line of the camera module.
In other embodiments, during the movement of the autonomous mobile device, the line laser emitters on the two sides of the camera module may be controlled to work alternately, that is, during each exposure of the camera module only one of the line laser emitters emits line laser outward, and the emitters on the two sides take turns. The camera module then works in a half-width mode, which depends on which line laser emitter is currently working. For example, if the left line laser emitter works during the current exposure, the camera module works in the right-half mode, that is, the environment image it captures contains the environment information in the front right area of the autonomous mobile device. Conversely, if the right line laser emitter works during the current exposure, the camera module works in the left-half mode, that is, the environment image it captures contains the environment information in the front left area of the autonomous mobile device.
Specifically, exposure control can be performed on the camera module, and the camera module generates a synchronization signal at each exposure; according to the synchronization signal generated at each exposure, the line laser emitters on the two sides of the camera module are controlled to operate simultaneously or alternately. For example, at each exposure of the camera module, the emitters on both sides are controlled to emit line laser simultaneously; the two line lasers intersect at a position point within the field of view of the camera module and continue to extend forward until they are projected onto an object. The camera module collects an environment image of the front area during the exposure, and if the line lasers reach the surface of an object within its field of view they are captured, so the environment image contains two laser line segments whose positions and/or lengths differ; the terrain information in the front area can then be identified from the laser line segments in the collected environment image. Alternatively, during one exposure of the camera module the left line laser emitter is controlled to emit line laser, the camera module collects an environment image of the front area during that exposure, and if the line laser reaches the surface of an object within the field of view it is captured, so the environment image contains one laser line segment; during the next exposure the right line laser emitter is controlled to emit line laser, and another environment image containing one laser line segment is collected in the same way. A complete environment image is then formed from every two successively collected environment images, and the terrain information in the front area is identified from the laser line segments in the complete environment image.
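The exposure-synchronized control described here could be sketched as follows; the driver calls (`camera.wait_for_sync()`, `left_laser.emit()` and so on) are hypothetical placeholders rather than an API defined by this application:

```python
def run_structured_light(camera, left_laser, right_laser, mode="alternate"):
    """Drive the line laser emitters in step with the camera exposures.

    mode="simultaneous": both emitters emit during every exposure (full-width image).
    mode="alternate":    the emitters take turns, one per exposure (half-width images).
    Yields each captured environment image together with its laser source.
    """
    use_left = True
    while camera.is_running():
        camera.wait_for_sync()                   # synchronization signal of the next exposure
        if mode == "simultaneous":
            left_laser.emit()
            right_laser.emit()
            source = "both"
        else:
            (left_laser if use_left else right_laser).emit()
            source = "left" if use_left else "right"
            use_left = not use_left              # alternate for the following exposure
        image = camera.capture()                 # environment image containing laser line segments
        yield image, source
```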
In this embodiment, in the alternating operation mode, after each environment image is acquired, the method further includes: marking the laser source corresponding to the environment image. The laser source is the line laser emitter on the left or right side of the camera module, and the marking result indicates whether the environment image is a left-half image or a right-half image.
By marking the laser source for the environment image acquired at each exposure of the camera module, the correspondence between the laser line segment in the currently acquired environment image and its laser source can be determined accurately, which helps identify the terrain information in the front area accurately in combination with the installation parameters of the line laser emitters and the laser line segments in the environment images. The environment images acquired over multiple exposures, together with their marked laser sources, form a terrain analysis data set of the front area, which accurately and effectively assists the construction of an environment map and/or the travel navigation of the autonomous mobile device and improves its operation.
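A trivial sketch of such tagging, assuming the frames already arrive as (image, laser_source) pairs (the data set layout is an illustrative assumption):

```python
def build_terrain_dataset(tagged_frames):
    """Group environment images by the line laser emitter that produced their laser line segment.

    tagged_frames: iterable of (image, laser_source) pairs, laser_source being "left" or "right".
    Returns a dict usable as a terrain analysis data set for the front area.
    """
    dataset = {"left": [], "right": []}
    for image, laser_source in tagged_frames:
        dataset[laser_source].append(image)
    return dataset
```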
In this embodiment, as shown in fig. 4c, after identifying the terrain information in the front area, the method of this embodiment further includes the following steps:
430. and determining the terrain position with the height lower than the obstacle crossing height of the autonomous mobile equipment in the front area according to the terrain height in the terrain information.
431. The autonomous mobile device is guided to continue traveling through the forward area from a terrain location having an elevation below an obstacle crossing elevation of the autonomous mobile device.
After the autonomous mobile device identifies the terrain information of the front area from the acquired environment image, the relatively high precision of this terrain information allows it to judge accurately which terrain positions in the front area are higher than its obstacle crossing height and which are lower. Based on these height differences, the autonomous mobile device can continue traveling through the front area from a terrain position lower than its obstacle crossing height, ensuring safer and more effective operation.
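A minimal sketch of this height comparison (the data layout and the numeric values are illustrative assumptions, loosely mirroring the 2 cm / 2.5 cm track heights and 2.2 cm obstacle crossing height of the scenario described later):

```python
def find_passable_positions(terrain, obstacle_crossing_height):
    """Split detected terrain into positions below and at-or-above the obstacle crossing height.

    terrain: list of ((x, y), height) pairs in metres, recovered from the laser line segments.
    """
    passable = [pos for pos, h in terrain if h < obstacle_crossing_height]
    blocked = [pos for pos, h in terrain if h >= obstacle_crossing_height]
    return passable, blocked

# With an obstacle crossing height of 0.022 m, a track point of 0.020 m is passable
# while a point of 0.025 m is not.
passable, blocked = find_passable_positions([((1.0, 2.0), 0.020), ((1.2, 2.0), 0.025)], 0.022)
```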
In this embodiment, as shown in fig. 4d, further optionally, after identifying the terrain information of the front area, the method of this embodiment further includes a step 43 of adding the terrain height in the terrain information to the corresponding terrain position in the environment map.
Specifically, the terrain heights of all terrain positions can be obtained by analyzing the terrain information, and the terrain height of each terrain position can then be added to the corresponding terrain position in the environment map, so that the environment map contains richer information. This provides richer information for the travel navigation of the autonomous mobile device, improves its obstacle crossing capability when traveling based on the environment map, and reduces the risk of it being trapped. For example, when the autonomous mobile device passes the same position again, it can judge, from the terrain height marked in the environment map, whether the terrain there is lower than its obstacle crossing height: if so, the position is passable; if not, it is not passable. If the position is passable, the device can continue traveling through it, which improves the operation effect; if not, it can re-plan the route or output alarm prompt information indicating that it is trapped, so that it does not become stuck by forcing a passage, and the risk of being trapped is reduced.
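A small sketch of how terrain heights might be written into, and later queried from, a grid-style environment map (the grid representation and resolution are assumptions; the application leaves the map format open):

```python
import math

def add_terrain_to_map(height_map, terrain, resolution=0.05):
    """Write detected terrain heights into a grid-style environment map.

    height_map: dict mapping (grid_x, grid_y) cells to heights in metres.
    terrain:    list of ((x, y), height) pairs in world coordinates.
    resolution: grid cell size in metres.
    """
    for (x, y), h in terrain:
        cell = (math.floor(x / resolution), math.floor(y / resolution))
        # Keep the largest height seen in a cell so passability is judged conservatively.
        height_map[cell] = max(height_map.get(cell, 0.0), h)
    return height_map

def is_passable(height_map, cell, obstacle_crossing_height):
    """On a later pass, query the stored height instead of re-measuring."""
    return height_map.get(cell, 0.0) < obstacle_crossing_height
```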
In the embodiments of the terrain identification method, the autonomous mobile device collects environment images of the front area by means of the structured light module containing line laser emitters. Thanks to the very high detection precision of line laser, the terrain information in the front area can be identified accurately, which helps distinguish terrain positions higher than the obstacle crossing height of the autonomous mobile device from those lower than it. A terrain position lower than the obstacle crossing height can then be selected for continuing to travel through the front area, improving safety during operation, improving the obstacle crossing capability of the autonomous mobile device, and reducing the risk of it being trapped.
For convenience of understanding, the terrain recognition method provided by the embodiment of the present application is described in detail by taking an autonomous mobile device as an example of a home service robot, and combining a scenario in which the home service robot executes a task in a home environment.
Application scenario example 1:
the home service robot mainly works in a home environment. Fig. 5a shows a house layout that is quite common in real life. The working area of the home service robot may include the master bedroom, the secondary bedroom, the living room, the kitchen, the bathroom, the balcony, and so on, and the different areas are connected by doors, such as swing doors or sliding doors; normally a threshold stone is installed below a swing door and a track is installed below a sliding door, and these clearly separate the different areas. However, due to factors such as ground flatness, installation accuracy and workmanship, the terrain is slightly uneven, so that some positions on the threshold stone or the track are higher than the average height and some are lower.
Specifically, taking a sweeping robot as an example: when the sweeping robot finishes working in the master bedroom and goes to the living room to clean, it must pass through a sliding door, and similarly, when it goes from the living room to the kitchen to clean, it must pass through a sliding door. In a real scenario the terrain heights at these doorways are not uniform; some are lower than the obstacle crossing height of the robot and some are higher. When the sweeping robot crosses a sliding door and works on such uneven ground in each area, it may have difficulty crossing the obstacle or even become trapped. If these small terrain differences can be identified during navigation, terrain below the obstacle crossing height of the robot can be selected for passage, thereby reducing the risk of the robot getting trapped. In this embodiment, a structured light module is installed on the front side of the sweeping robot; the structured light module includes a camera module and line laser emitters distributed on the two sides of the camera module. Fig. 5b shows a top view of the sweeping robot with the structured light module, where 10 is the track of the sliding door and 20 is the sweeping robot.
In this embodiment, when the home service robot needs to go from the living room to the kitchen, it must pass over the sliding door track separating the two areas, as shown in fig. 5a, for example when a sweeping robot performs a cleaning task or an air purifier performs an air purification task. In this process, the home service robot can control a line laser emitter in the structured light module to emit line laser during the exposure of the camera module; in order to capture the height of the sliding door track ahead, the line laser is perpendicular to the ground. The camera module is controlled to capture images of the front area that include the sliding door track. As the home service robot keeps moving, height information of each position of the sliding door track can be collected. The track height information detected by the camera module is shown in fig. 5c. As can be seen from fig. 5c, different positions of the sliding door track have different heights; fig. 5c shows one part of the track at a height of 2 cm and another part at 2.5 cm. The home service robot can then judge, in combination with its own obstacle crossing height, whether there is a position on the sliding door track lower than that height. Assuming the obstacle crossing height of the home service robot is 2.2 cm, there is such a position on the track, and the robot enters the kitchen from that position to perform its work task in the kitchen, as shown in fig. 5a. Optionally, if it is a sweeping robot, it can execute a cleaning task in the kitchen; if it is an air purifier, it can execute an air purification task in the kitchen.
Of course, if it is determined that there is no position on the sliding door track lower than the obstacle crossing height, the home service robot may send out an alarm prompt message to prompt the user to manually move it into the kitchen so that it can continue to execute its task there. Alternatively, in that case the home service robot may first judge whether there are other areas in which tasks need to be performed; if there are, it can first go to another area it is able to enter and continue executing tasks there, and finally send out the alarm prompt message asking the user to manually move it into the kitchen; if there are no other areas in which tasks need to be performed, it can send out the alarm prompt message directly so that the user can manually move it into the kitchen. It should be noted that the process of the home service robot going to other areas where tasks need to be performed is similar to the process of going from the living room to the kitchen and is not described again.
Further, if the home service robot supports multiple obstacle crossing heights, then when it is judged that there is no position on the sliding door track lower than the current obstacle crossing height, the obstacle crossing height can be increased until a passable position appears on the track. Alternatively, if the home service robot has been adjusted to the maximum obstacle crossing height it supports but there is still no position on the track lower than that height, an alarm prompt message can be sent out to prompt the user to manually move the home service robot into the kitchen.
The embodiment of the application provides an environment map construction method besides a terrain identification method. The environment map construction method is also suitable for the autonomous mobile equipment provided with the structured light module. Fig. 6 is a flowchart of an environment mapping method according to an exemplary embodiment of the present application, where as shown in fig. 6, the method includes:
60. in the process of traversing the operation area, the structured light module is used for collecting an environment image in the operation area; wherein, the environment image comprises laser line segments. The laser line segment is formed after line laser emitted by the line laser emitter meets an object.
61. And identifying the terrain position and the height information thereof in the working area based on the laser line segments in the environment image.
62. And constructing an environment map of the working area according to the terrain position and the height information of the terrain position in the working area.
In the embodiments of the present application, the autonomous mobile device is equipped with a structured light module that includes a camera module and line laser emitters. While the autonomous mobile device traverses the operation area, on one hand the line laser emitters in the structured light module are controlled to emit line laser outwards, which is reflected back after meeting objects in the operation area; on the other hand the camera module in the structured light module is controlled to collect environment images within the operation area. During this period, if the line laser detects an object in the operation area, a laser line segment is formed on the object surface and can be captured by the camera module; in other words, the environment image collected by the camera module contains the laser line segment formed after the line laser emitted by the line laser emitter meets the object.
Further, the terrain information within the operation area can be identified from the laser line segments in the collected environment images. Each laser line segment contains a plurality of pixel points, and each pixel point corresponds to a terrain position in the operation area. As the operation area is traversed, a large number of environment images can be collected, and the pixel points on the laser line segments in these images form point cloud data corresponding to the operation area; the terrain information within the operation area is then obtained from this point cloud data. In this embodiment, the terrain information describes the various undulating states commonly exhibited by fixed objects distributed above the ground in the operation area. The terrain information in this embodiment includes, but is not limited to: the terrain position, terrain height, terrain width, and/or terrain profile within the operation area, and so on.
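As a hedged sketch of turning those accumulated laser-line points into per-position terrain heights (the grid aggregation and the choice of keeping the maximum height per cell are simplifying assumptions, not a procedure fixed by this application):

```python
from collections import defaultdict

def terrain_from_point_cloud(point_cloud, resolution=0.05):
    """Derive per-cell terrain heights from laser-line points collected over a traversal.

    point_cloud: list of (x, y, z) points in the world frame, one per laser-line pixel.
    Returns a dict mapping (grid_x, grid_y) cells to the highest z observed there,
    i.e. the terrain height at that terrain position.
    """
    cells = defaultdict(float)
    for x, y, z in point_cloud:
        cell = (int(x // resolution), int(y // resolution))
        cells[cell] = max(cells[cell], z)
    return dict(cells)
```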
Generally, the installation angle and installation direction of a line laser emitter on the autonomous mobile device determine its emission angle, and with different emission angles the laser line segments formed in the environment image after the emitted line laser meets an object differ in position and direction. Based on this, after collecting environment images with the structured light module, the autonomous mobile device can, when identifying the terrain information in the operation area from the laser line segments, also take into account information such as the installation position and direction of the line laser emitters on the device. This information can be reflected in the conversion relationship between the coordinate system used by the structured light module and the world coordinate system.
Based on the above analysis, one embodiment of identifying topographical information within a work area based on laser line segments in an environmental image includes the steps of: calculating the position and the length of the laser line segment in the environment image based on an image recognition technology; and according to the position and the length of the laser line segment in the environment image, and by combining the conversion relation between the coordinate system of the camera module and the world coordinate system, calculating the terrain information such as the terrain position, the terrain height and/or the terrain contour corresponding to the laser line segment.
The laser line segment comprises a plurality of pixel points, and the position of the laser line segment in the environment image actually refers to the positions of the pixel points on the laser line segment in the environment image, and is generally represented by pixel coordinates. Based on image processing and other technologies, the pixel coordinates of each pixel point on the laser line segment in the environment image can be calculated. The pixel coordinates of the pixel points are calculated under a coordinate system used by the camera module, and can be converted into a world coordinate system by combining the conversion relation between the coordinate system used by the camera module and the world coordinate system, and the position coordinates under the world coordinate system are the terrain positions detected by the line laser. The conversion relationship between the sensor coordinate system and the equipment coordinate system can be determined to a certain extent by the installation position relationship between the line laser emitter and the camera module in the structured light module, the structural parameters of the structured light module installed on the autonomous mobile equipment and the like.
In addition, according to the position of each pixel point on the laser line segment in the environment image, the height of that point above the working surface (such as the ground, a desktop or a glass surface) on which the autonomous mobile device is located can also be calculated; this is the height information carried by the environment image. During the coordinate conversion, the height information of the pixel points is converted into the world coordinate system, forming the terrain height at the corresponding terrain position. Further optionally, after the terrain positions and the terrain height at each terrain position are obtained, other terrain information such as the terrain profile and the terrain width can also be derived.
After the topographic information in the work area is obtained, an environment map corresponding to the work area can be constructed according to the topographic information in the work area. When the environment map is constructed, the environment map is mainly constructed according to the terrain position and the corresponding terrain height.
Alternatively, an initial map corresponding to the work area may be acquired, and the initial map may be constructed in real time according to environmental information collected by other sensors (e.g., an LDS sensor or a visual sensor) on the autonomous mobile device, or may be constructed in advance. The coordinates of each position point within the work area are included in the initial map. Based on the above, the topographic information collected by the structured light module can be added to the initial map according to the coordinates of each position point in the working area contained in the initial map, so as to obtain the environment map corresponding to the working area. For example, height information corresponding to the topographic position may be added to the initial map according to the topographic position within the work area.
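A minimal sketch of enriching such an initial map with terrain heights collected by the structured light module (the dictionary-based map entries are an assumed representation; the application does not prescribe one):

```python
def annotate_initial_map(initial_map, terrain_heights):
    """Add structured-light terrain heights to an initial map built from other sensors.

    initial_map:     dict mapping position coordinates to map entries such as {"occupied": bool}.
    terrain_heights: dict mapping the same position coordinates to heights in metres.
    """
    for position, height in terrain_heights.items():
        entry = initial_map.setdefault(position, {"occupied": False})
        entry["terrain_height"] = height     # enrich the map with the terrain information
    return initial_map
```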
Alternatively, if the structured light module collects other environment information besides the terrain information, such as the positions of obstacles and boundaries in the working area, the initial map of the working area may also be constructed directly from this other environment information collected by the structured light module; the terrain information collected by the structured light module is then added to the initial map to obtain the environment map corresponding to the working area. For example, height information corresponding to a terrain position may be added to the initial map according to that terrain position within the work area.
The embodiment of the present application does not limit the implementation form of the environment map, and may be a grid map, a vector map, or the like.
Because the detection precision of line laser is very high, small changes in the terrain can be detected, for example small differences in height, width or contour. The terrain information identified in this embodiment is therefore highly precise, and adding it to the environment map enriches the information the map contains and improves its accuracy. As a result, the autonomous mobile device can obtain richer terrain information when navigating based on the environment map and travel accordingly, which improves its obstacle crossing capability and reduces the risk of it being trapped while traveling.
For example, when the autonomous mobile device passes the same position again, it can judge, from the terrain height marked in the environment map, whether the terrain there is lower than its obstacle crossing height: if so, the position is passable; if not, it is not passable. If the position is passable, the device can continue traveling through it, which improves the operation effect; if not, it can re-plan the route or output alarm prompt information indicating that it is trapped, so that it does not become stuck by forcing a passage, and the risk of being trapped is reduced.
For ease of understanding, the environment map construction method provided by the embodiments of the present application is described in detail below, taking a home service robot as an example of the autonomous mobile device and combining a scenario in which the home service robot executes tasks in a home environment.
Application scenario example 2:
the home service robot mainly works in a home environment. Fig. 5a shows a house layout that is quite common in real life. The working area of the home service robot may include the master bedroom, the living room, the secondary bedroom, the kitchen, the bathroom, the balcony, and so on, and the different areas are connected by doors, such as swing doors or sliding doors, with a threshold stone or a track installed below. However, due to factors such as ground flatness, installation accuracy and workmanship, the terrain is slightly uneven, so that some positions on the threshold stone or the track are higher than the average height and some are lower.
Specifically, when the home service robot travels from the master bedroom to the living room within its working area it must pass through a sliding door, and similarly, when it travels from the living room to the kitchen it must pass through a sliding door. Therefore, before the home service robot travels from one working area to another, it identifies the terrain information of the front area to judge whether obstacles exist there, such as a threshold stone or a part of a track higher than its obstacle crossing height, so as to determine whether it can pass safely.
In this embodiment, in order to improve the work efficiency of the home service robot, before it starts to execute a work task, the home service robot may be placed in the home environment shown in fig. 5a; the home service robot traverses the home environment, collects environment information in it, and constructs an environment map corresponding to the home environment from the collected environment information. Alternatively, while executing a work task in the home environment for the first time, the home service robot collects environment information as it works and constructs the environment map corresponding to the home environment from the collected information.
In this embodiment, a structured light module is installed on the front side of the home service robot; it includes a camera module and line laser emitters distributed on the two sides of the camera module. While traversing the home environment or while executing work, the home service robot can control a line laser emitter in the structured light module to emit line laser during the exposure of the camera module, which makes it convenient to collect the height information of a sliding door track, a push-pull strip and the like on the ground; the laser line segment formed by the line laser on an object surface is perpendicular to the ground. The camera module is controlled to capture environment images of the front area. As the home service robot keeps moving through the home environment, terrain information at various positions can be collected, such as the height and width of the push-pull strip below a sliding door and the height and width of a sliding door track.
The home service robot of this embodiment is also provided with an LDS sensor or a vision sensor. While traversing the home environment, it can collect other environment information through the LDS or vision sensor, such as the objects contained in each area (living room, master bedroom, secondary bedroom, kitchen, bathroom, balcony and so on) and their positions, and the positions of walls and doors in the home environment.
From the environment information collected by the LDS or vision sensor, an initial map corresponding to the home environment can be constructed, containing the objects and walls in the home environment and their positions. Further optionally, the initial map may be partitioned, and semantic labels may be attached to each partition to obtain an environment map containing scene areas such as the living room, master bedroom, secondary bedroom, kitchen, bathroom and balcony. Then, the terrain information collected by the structured light module, such as the positions, heights and widths of the sliding door tracks and push-pull strips, can be added to the positions of the corresponding sliding door tracks and push-pull strips in the initial map, yielding an environment map containing the terrain information. Using this environment map containing terrain information as the basis for travel or navigation allows the home service robot to find a path from one area to another accurately and improves its obstacle avoidance capability.
Fig. 7a is a flowchart of a method of traveling according to an exemplary embodiment of the present application, and as shown in fig. 7a, the method includes:
70. it is determined that travel to the first zone is required.
71. And planning a navigation path to the first area according to the obstacle crossing height of the autonomous mobile equipment and by combining the terrain position and the height information thereof recorded in the environment map.
72. Travel to the first area along a navigation path to the first area.
The autonomous mobile device of this embodiment may or may not include a structured light module; this is not limited here.
In this embodiment, the autonomous mobile device includes an environment map that includes not only position information of objects (obstacles), boundaries, passages, and the like in the working area of the autonomous mobile device, but also topographic information, such as a topographic position and height information thereof, in the working area. In this embodiment, the construction process of the environment map is not limited, and for example, the method provided in the foregoing embodiment shown in fig. 6 may be adopted for construction, but is not limited thereto.
In this embodiment, the autonomous mobile device needs to travel from its current location to a first area. The first area is any area other than the one where the autonomous mobile device is currently located. Which specific area of the environment the first area is may depend on the task the autonomous mobile device needs to perform. For example, in a home environment, if the autonomous mobile device needs to go to the living room to perform a cleaning task, the first area is the living room of the home environment; if it needs to go to the kitchen to perform a cleaning task, the first area is the kitchen. For another example, in a warehousing environment, if the autonomous mobile device needs to perform a transportation or inventory task at the location of a third rack, the first area is the area of the warehousing environment where that rack is located.
In this embodiment, the autonomous mobile device may plan a navigation path from the current location to the first area according to the terrain location and the altitude information thereof recorded in the environment map in combination with the obstacle crossing height thereof, and travel to the first area along the navigation path to the first area.
As shown in fig. 7b, one embodiment of step 71 includes:
710. in conjunction with the recorded terrain position in the environment map, it is determined whether the terrain having the altitude information must be traversed from the current position to the first area. If yes, go to steps 711 and 712; if not, go to step 713.
711. Determining a target terrain which must pass through and other path areas, and selecting a passable area with height information lower than the obstacle crossing height on the target terrain according to the height information of the target terrain.
712. Connect the passable area on the target terrain with the other path areas to form a navigation path to the first area, and proceed to step 72.
713. A navigation path to the first area is planned in a conventional manner in conjunction with the environment map and proceeds to step 72.
In this embodiment, in combination with the terrain positions recorded in the environment map and the position of the first area, it can be determined whether the autonomous mobile device must traverse terrain that carries height information. If it does not have to traverse such terrain, a navigation path to the first area can be planned in a conventional manner. If it must pass over terrain with height information, such as a threshold, a push-pull strip or a sliding door track, then its obstacle crossing height is used to judge whether there is an area on the target terrain whose height is lower than the obstacle crossing height. If there is, the autonomous mobile device can cross the target terrain, and a passable area below the obstacle crossing height is selected for it. Conversely, if no position on the target terrain is below the obstacle crossing height, the autonomous mobile device cannot cross the target terrain and cannot reach the first area; the task may then be modified, for example by abandoning the first area, or an alarm prompt may be output to tell the user that the first area cannot be reached. The user may manually move the autonomous mobile device into the first area, or, if the device supports multiple obstacle crossing heights, manually adjust its obstacle crossing height so that it can cross the target terrain.
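The decision flow of steps 710 to 713 could be sketched roughly as follows; `route_planner` and `find_crossing_terrain` stand in for the device's own path planner and map lookup, neither of which is specified by this application:

```python
def plan_to_first_area(current_pos, first_area, env_map, obstacle_crossing_height,
                       route_planner, find_crossing_terrain):
    """Plan a navigation path to the first area, taking stored terrain heights into account."""
    crossing_terrain = find_crossing_terrain(env_map, current_pos, first_area)
    if not crossing_terrain:
        # No terrain with height information has to be crossed: plan conventionally (step 713).
        return route_planner(current_pos, first_area)
    # Step 711: pick positions on the target terrain below the obstacle crossing height.
    passable = [pos for pos, height in crossing_terrain if height < obstacle_crossing_height]
    if not passable:
        return None          # not reachable: modify the task or output an alarm prompt
    # Step 712: connect one passable position with the remaining path areas.
    waypoint = passable[0]
    return route_planner(current_pos, waypoint) + route_planner(waypoint, first_area)
```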
Furthermore, by connecting the passable area on the target terrain with the other path areas, a navigation path from the autonomous mobile device to the first area is formed, and the autonomous mobile device can travel safely to the first area along this navigation path and carry out its work task.
In this embodiment, an environment map containing terrain information is used, so the autonomous mobile device can be navigated accurately, the risk of it being trapped is reduced, and its work efficiency is guaranteed. The embodiments of the present application also provide a traveling method. This traveling method is likewise applicable to an autonomous mobile device equipped with a structured light module that includes a camera module and line laser emitters. For the description of the structured light module, reference may be made to the foregoing embodiments, which is not repeated here. Fig. 8 is a flowchart of a traveling method provided by an exemplary embodiment of the present application; as shown in fig. 8, the method includes:
80. In the process of advancing, the structured light module is used for collecting an environment image in the front area; wherein, the environment image comprises laser line segments. The laser line segment is formed after line laser emitted by the line laser emitter meets an object.
81. And identifying the terrain information in the front area based on the laser line segments in the environment image.
82. Travel control is performed based on the topographic information in the front area.
In this embodiment, the autonomous mobile device is provided with the structured light module, and in the process of moving forward, the structured light module can be used for collecting the terrain information in the front area, so as to perform traveling control based on the terrain information in the front area.
The process by which the autonomous mobile device collects terrain information in the front area with the structured light module while traveling includes: during travel, on one hand the line laser emitters in the structured light module are controlled to emit line laser outwards, which is reflected back after meeting objects in the front area; on the other hand the camera module in the structured light module is controlled to collect environment images of the front area. During this period, if the line laser detects an object in the front area, a laser line segment is formed on the object surface and is captured by the camera module; in other words, the environment image collected by the camera module contains the laser line segment formed after the line laser emitted by the line laser emitter meets the object.
Optionally, according to the laser line segment in the acquired environment image, the topographic information in the front region may be identified, including: calculating the position and the length of the laser line segment in the environment image; and calculating the height of the terrain position corresponding to the laser line segment according to the position and the length of the laser line segment in the environment image and by combining the conversion relation between the coordinate system of the camera module and the world coordinate system.
The topographic information in this embodiment includes, but is not limited to: terrain position, terrain height, terrain width, and/or terrain contour within the forward region, etc. Since the line laser detection precision is very high, small changes of the terrain, such as small differences of the terrain in height, width, contour, etc., can be detected, and therefore the terrain information identified by the present embodiment has higher precision.
Further, the autonomous mobile device may perform travel control according to the topographic information in the front area.
For example, based on the terrain information in the front area, it may be judged whether there is a passable area whose height is lower than the obstacle crossing height of the autonomous mobile device; if so, the device continues to travel through the passable area; otherwise, the travel path of the autonomous mobile device is re-planned, or alarm prompt information is output to indicate that the device is trapped.
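A compact sketch of this travel-control decision; the `robot` object and its methods (`continue_to`, `replan`, `alarm`) are hypothetical placeholders for the device's actual motion and alert interfaces:

```python
def travel_control(terrain_ahead, obstacle_crossing_height, robot):
    """Decide how to proceed based on terrain detected in the front area.

    terrain_ahead: list of ((x, y), height) pairs identified from the laser line segments.
    """
    passable = [pos for pos, h in terrain_ahead if h < obstacle_crossing_height]
    if passable:
        robot.continue_to(passable[0])   # keep traveling through a passable position
    else:
        robot.replan()                   # or robot.alarm() to report being trapped
```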
Specifically, after the autonomous mobile device identifies the terrain information of the front area from the acquired environment image, the relatively high precision of this terrain information allows it to judge accurately which terrain positions in the front area are higher than its obstacle crossing height and which are lower; based on these height differences, it can choose to continue traveling through the front area from a terrain position lower than its obstacle crossing height, ensuring safer and more effective operation. If no terrain position in the front area is lower than the obstacle crossing height of the autonomous mobile device, the device can re-plan its travel path, identify the terrain information of another front area, or directly output alarm prompt information indicating that it is trapped. For example, when the autonomous mobile device passes a position, it can judge from the identified terrain height whether the terrain there is lower than its obstacle crossing height: if so, the position is passable; if not, it is not passable. If passable, the device can continue traveling through the position, which improves the operation effect; if not, it can re-plan the route or output alarm prompt information indicating that it is trapped, so that it does not become stuck by forcing a passage, and the risk of being trapped is reduced.
For ease of understanding, the traveling method provided by the embodiments of the present application is described in detail below, taking a home service robot as an example of the autonomous mobile device and combining a scenario in which the home service robot executes tasks in a home environment.
Application scenario example 3:
the home service robot mainly works in a home environment. In this embodiment, a structured light module is installed on the front side of the home service robot, and it includes a camera module and line laser emitters distributed on the two sides of the camera module. In the house layout shown in fig. 5a, the working area of the home service robot may include the master bedroom, the living room, the secondary bedroom, the kitchen, the bathroom, the balcony, and so on. As shown in fig. 5a, while traveling from the living room to the kitchen the home service robot passes over the sliding door track, and, due to factors such as ground flatness, installation accuracy and workmanship, the terrain is slightly uneven, so some positions on the track are higher than the average height and some are lower. In this embodiment, the home service robot can collect the height information of the sliding door track with the structured light module and judge whether there is a passable area on the track whose height is lower than its obstacle crossing height. It is assumed here that, as shown in fig. 5a, the track height in an area near its left portion is lower than the obstacle crossing height of the home service robot, so the robot can enter the kitchen over the sliding door track in that area and perform the corresponding work task in the kitchen.
Fig. 9 is a schematic structural diagram of an autonomous mobile device according to an exemplary embodiment of the present application. As shown in fig. 9, the autonomous mobile apparatus includes: the device comprises a device body 90, wherein one or more memories 91, one or more processors 92 and a structured light module 93 are arranged on the device body 90; the structured light module 93 includes: a camera module 931 and a line laser transmitter 932. In fig. 9, the line laser emitters 932 are illustrated as being distributed on both sides of the camera module 931, but the present invention is not limited thereto. For other implementation structures of the structured light module 93, reference may be made to the description in the foregoing embodiments, and further description is omitted here.
Wherein the one or more memories 91 are for storing computer programs; the one or more processors 92 are for executing computer programs for: in the process of moving of the autonomous mobile equipment, the structured light module 93 is used for collecting an environment image in a front area, wherein the environment image comprises a laser line segment formed after line laser emitted by the line laser emitter 932 meets an object; and identifying the terrain information in the front area based on the laser line segments in the environment image.
In an alternative embodiment, the one or more processors 92, when identifying topographical information in the forward region based on laser line segments in the environmental image, are specifically configured to: calculating the position and the length of the laser line segment in the environment image; and calculating the terrain position, the terrain height and/or the terrain contour corresponding to the laser line segment by combining the conversion relation between the coordinate system of the camera module and the world coordinate system according to the position and the length of the laser line segment in the environment image.
In an alternative embodiment, the one or more processors 92 are further configured to: control the line laser emitters on the two sides of the camera module to work simultaneously or alternately while the autonomous mobile device travels; in the simultaneous working mode, the line laser emission directions of the emitters on the two sides intersect within the field of view of the camera module.
In an alternative embodiment, the one or more processors 92, when controlling the line laser emitters on the two sides of the camera module to work simultaneously or alternately, are further configured to: control the line laser emitters on the two sides of the camera module to work simultaneously or alternately according to the synchronization signal generated at each exposure of the camera module.
In an alternative embodiment, the one or more processors 92 are further configured to, after each acquisition of the environmental image, in an alternate operation of the line laser emitters on both sides of the camera module: and marking a laser source corresponding to the environment image, wherein the laser source is a line laser emitter positioned on the left side or the right side of the camera module.
In an alternative embodiment, the one or more processors 92, after identifying the topographical information within the forward area, are further configured to: determining a terrain position with a height lower than the obstacle crossing height of the autonomous mobile equipment in a front area according to the terrain height in the terrain information; the autonomous mobile device is directed to continue traveling from the topographical location through the forward area.
In an alternative embodiment, the one or more processors 92, after identifying the topographical information within the forward area, are further configured to: and adding the terrain height in the terrain information to the corresponding terrain position in the environment map.
In an alternative embodiment, in order to protect the structured light module 93 from being damaged by external force, a striking plate is further installed on the front side of the apparatus body 90, and the striking plate is located outside the structured light module 93. The area of the striking plate corresponding to the structured light module 93 is opened with a window to expose the camera module 931 and the line laser transmitter 932 in the structured light module. Further optionally, windows are respectively opened on the striking plate corresponding to the camera module 931 and the line laser transmitter 932.
Further, the autonomous mobile device of the present embodiment may include some basic components, such as a communication component 94, a power component 95, a driving component 96, and the like, in addition to the various components mentioned above.
Wherein the one or more memories are primarily for storing a computer program executable by the master controller to cause the master controller to control the autonomous mobile device to perform a corresponding task. In addition to storing computer programs, the one or more memories may be configured to store other various data to support operations on the autonomous mobile device. Examples of such data include instructions for any application or method operating on the autonomous mobile device, map data of the environment/scene in which the autonomous mobile device is located, operating modes, operating parameters, and so forth.
The communication component is configured to facilitate wired or wireless communication between the device in which the communication component is located and other devices. The device where the communication component is located can access a wireless network based on a communication standard, such as Wifi, 2G or 3G, 4G, 5G or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may further include a Near Field Communication (NFC) module, Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and the like.
Optionally, the drive assembly may include drive wheels, drive motors, universal wheels, and the like. Optionally, the autonomous mobile device of this embodiment may be implemented as a sweeping robot; in that case it may further include a cleaning assembly, which may include a cleaning motor, a cleaning brush, a dusting brush, a dust collection fan, and the like. The basic components and their configurations differ between different autonomous mobile devices, and the embodiments of the present application give only some examples.
According to the autonomous mobile device of the embodiments of the present application, an environment image of the operation area can be collected by means of the structured light module containing the line laser emitters. Thanks to the very high detection precision of line laser, the terrain information in the operation area can be identified accurately, which helps distinguish terrain positions higher than the obstacle crossing height of the autonomous mobile device from those lower than it. A terrain position lower than the obstacle crossing height can then be selected for continuing travel through the front area, which improves safety during operation, improves the obstacle crossing capability of the autonomous mobile device, and reduces the risk of it being trapped.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising: in the process of moving the autonomous mobile equipment, acquiring an environment image in a front area by using a structured light module, wherein the environment image comprises a laser line segment; identifying topographic information in a front area based on the laser line segments in the environment image; the laser line segment is formed by the line laser emitted by the line laser emitter after encountering an object.
In addition to the above-described actions, when executed by one or more processors, the computer instructions may also cause the one or more processors to perform other actions, which may be described in detail in the method illustrated in fig. 4 a-4 d and will not be described again here.
The present application further provides an autonomous mobile apparatus, which has a structure similar to that of the autonomous mobile apparatus in the embodiment shown in fig. 9, and can be seen from fig. 9. The main difference between the autonomous mobile apparatus provided in this embodiment and the autonomous mobile apparatus shown in fig. 9 is that: the functions performed by the one or more processors executing the computer programs stored in memory vary. In the autonomous mobile device of this embodiment, the one or more processors execute computer programs stored in the one or more memories for:
in the process that the autonomous mobile equipment traverses the operation area, the structured light module is used for collecting an environment image in the operation area, wherein the environment image comprises a laser line segment formed after line laser emitted by a line laser emitter meets an object; identifying the terrain position and height information thereof in the operation area based on the laser line segment in the environment image; and constructing an environment map of the working area according to the terrain position and the height information of the terrain position in the working area.
In an alternative embodiment, the one or more processors 92, when identifying the terrain position and its altitude information within the work area based on the laser line segments in the environment image, are specifically configured to: calculating the position and the length of the laser line segment in the environment image; and calculating the terrain position and the height information corresponding to the laser line segment according to the position and the length of the laser line segment in the environment image and by combining the conversion relation between the coordinate system of the camera module and the world coordinate system.
In an optional embodiment, the one or more processors, when constructing the environment map of the working area according to the terrain position and the altitude information thereof in the working area, are specifically configured to: acquiring an initial map corresponding to the operation area, wherein the initial map is constructed according to environmental information acquired by other sensors on the autonomous mobile equipment; and according to the terrain position in the working area, adding the height information of the terrain position in the initial map to obtain an environment map of the working area.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising: in the process of traversing a working area, acquiring an environment image in the working area by using the structured light module, wherein the environment image comprises laser line segments; identifying the terrain position and height information thereof in the operation area based on the laser line segment in the environment image; constructing an environment map of the operation area according to the terrain position and the height information of the terrain position in the operation area; the laser line segment is formed by the line laser emitted by the line laser emitter after encountering an object.
In addition to the above-described actions, when executed by one or more processors, the computer instructions may also cause the one or more processors to perform other actions, which may be described in detail in the method illustrated in fig. 6 and will not be described again here.
Further embodiments of the present application also provide an autonomous mobile device, similar to the autonomous mobile device in the embodiment shown in fig. 9. The first difference from the autonomous mobile device of fig. 9 is that the autonomous mobile device of this embodiment may or may not include a structured light module; if it does include one, its structure can be seen in fig. 9. The second difference from the autonomous mobile device of fig. 9 is that the functions performed by the one or more processors when executing the computer programs stored in the memory are different. In the autonomous mobile device of this embodiment, the one or more processors execute the computer programs stored in the one or more memories to:
determining that travel to a first area is required; planning a navigation path to the first area according to the obstacle crossing height of the autonomous mobile device, in combination with the terrain positions and their height information recorded in the environment map; and traveling to the first area along the navigation path to the first area.
In an optional embodiment, the one or more processors, when planning the navigation path to the first area, are specifically configured to: judge, in combination with the terrain positions recorded in the environment map, whether terrain with height information has to be traversed on the way from the current position to the first area; if so, determine the target terrain and the other path areas that need to be traversed, and select, according to the height information of the target terrain, a passable area on the target terrain whose height information is lower than the obstacle crossing height; and connect the passable area on the target terrain with the other path areas to form the navigation path to the first area.
Further, the one or more processors are also configured to: plan the navigation path to the first area in a conventional manner if no terrain with height information needs to be traversed.
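As an illustration of this kind of height-aware planning, the following sketch treats the environment map as a grid and searches only cells that hold no hard obstacle or whose recorded height is below an assumed obstacle crossing height. The grid representation, the breadth-first search and all names and values are illustrative choices, not the specific planning scheme of this application.

    from collections import deque

    OBSTACLE_CROSSING_HEIGHT = 0.02        # assumed obstacle crossing height, metres

    def passable(cell, occupancy, heights):
        # A cell is passable if it holds no hard obstacle and any recorded terrain
        # height there is below the obstacle crossing height of the device.
        r, c = cell
        if occupancy[r][c] == 1:
            return False
        h = heights[r][c]
        return h is None or h < OBSTACLE_CROSSING_HEIGHT

    def plan_path(start, goal, occupancy, heights):
        # Breadth-first search over passable cells; returns a list of cells or None.
        rows, cols = len(occupancy), len(occupancy[0])
        queue, parent = deque([start]), {start: None}
        while queue:
            cur = queue.popleft()
            if cur == goal:
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = parent[cur]
                return path[::-1]
            r, c = cur
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                        and nxt not in parent
                        and passable(nxt, occupancy, heights)):
                    parent[nxt] = cur
                    queue.append(nxt)
        return None

Under these assumptions, low terrain such as a threshold is kept in the search space rather than treated as an obstacle, which is what allows the planned path to cross it.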
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising: determining that travel to a first area is required; planning a navigation path to the first area according to the obstacle crossing height of the autonomous mobile device, in combination with the terrain positions and their height information recorded in the environment map; and traveling to the first area along the navigation path to the first area.
In addition to the above actions, the computer instructions, when executed by the one or more processors, may also cause the one or more processors to perform other actions; for details, reference may be made to the methods illustrated in fig. 7a and fig. 7b, and these actions are not described again here.
Still another embodiment of the present application provides an autonomous mobile device whose structure is similar to that of the autonomous mobile device shown in fig. 9; reference may be made to fig. 9. The difference from the autonomous mobile device of fig. 9 is that the functions performed by the one or more processors executing the computer programs stored in the memory differ. In the autonomous mobile device of this embodiment, the one or more processors execute the computer programs stored in the one or more memories for:
during travel, collecting an environment image of the front area by using the structured light module, wherein the environment image includes a laser line segment; identifying terrain information in the front area based on the laser line segment in the environment image; and performing travel control based on the terrain information in the front area; the laser line segment is formed after the line laser emitted by the line laser emitter encounters an object.
In an alternative embodiment, the one or more processors, when identifying the terrain information in the front area based on the laser line segments in the environment image, are specifically configured to: calculate the position and the length of the laser line segment in the environment image; and calculate the terrain position and height corresponding to the laser line segment according to the position and length of the laser line segment in the environment image, in combination with the conversion relation between the coordinate system of the camera module and the world coordinate system.
In an alternative embodiment, the one or more processors, when performing travel control based on the terrain information in the front area, are specifically configured to: judge, according to the terrain information in the front area, whether the front area contains a passable region whose height information is lower than the obstacle crossing height of the autonomous mobile device; and if so, continue traveling across the passable region.
In an alternative embodiment, the one or more processors are further configured to: if not, replan the travel path of the autonomous mobile device, or output alarm prompt information indicating that the autonomous mobile device may be trapped.
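A minimal sketch of this travel-control decision is given below, assuming an obstacle crossing height of 2 cm and caller-supplied callbacks for driving, re-planning and alarming; none of these names or values come from this application.

    OBSTACLE_CROSSING_HEIGHT = 0.02        # assumed obstacle crossing height, metres

    def travel_control(terrain_heights, drive_forward, replan, raise_alarm):
        # terrain_heights: heights (metres) measured in the front area.
        # drive_forward, replan, raise_alarm: callbacks supplied by the caller;
        # this interface is illustrative and not defined by the application.
        if any(h < OBSTACLE_CROSSING_HEIGHT for h in terrain_heights):
            drive_forward()                # a passable region exists: keep travelling
        elif replan():
            pass                           # a new travel path was found
        else:
            raise_alarm("autonomous mobile device may be trapped")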
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising: during travel, collecting an environment image of the front area by using the structured light module, wherein the environment image includes a laser line segment; identifying terrain information in the front area based on the laser line segment in the environment image; performing travel control based on the terrain information in the front area; the laser line segment is formed after the line laser emitted by the line laser emitter encounters an object.
In addition to the above actions, the computer instructions, when executed by one or more processors, may also cause the one or more processors to perform other actions; for details, reference may be made to the method shown in fig. 8, and these actions are not described again here.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in the form of a computer readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (20)

1. A terrain identification method is suitable for autonomous mobile equipment, and is characterized in that the autonomous mobile equipment is provided with a structured light module, and the structured light module comprises a camera module and a line laser transmitter; the method comprises the following steps:
in the process of moving of the autonomous mobile equipment, acquiring an environment image in a front area by using the structured light module, wherein the environment image comprises a laser line segment;
identifying topographic information in the front area based on the laser line segments in the environment image;
the laser line segment is formed after line laser emitted by the line laser emitter meets an object.
2. The method of claim 1, wherein identifying topographical information within the forward region based on laser line segments in the environmental image comprises:
calculating the position and the length of the laser line segment in the environment image;
and calculating the terrain position, the terrain height and/or the terrain contour corresponding to the laser line segment according to the position and the length of the laser line segment in the environment image, in combination with the conversion relation between the coordinate system of the camera module and the world coordinate system.
3. The method of claim 1, wherein the line laser emitters are distributed on both sides of the camera module, the method further comprising:
controlling the line laser emitters on the two sides of the camera module to work simultaneously or alternately during movement of the autonomous mobile equipment;
wherein, in the simultaneous working mode, the line laser emission directions of the line laser emitters on the two sides intersect within the field of view of the camera module.
4. The method of claim 3, wherein controlling the line laser emitters on both sides of the camera module to operate simultaneously or alternately comprises:
controlling the line laser emitters on the two sides of the camera module to work simultaneously or alternately according to a synchronization signal generated at each exposure of the camera module.
5. The method of claim 4, wherein in the alternate mode of operation, each time an environmental image is acquired, the method further comprises:
marking a laser source corresponding to the environment image, wherein the laser source is a line laser emitter positioned on the left side or the right side of the camera module.
6. The method according to any one of claims 1-5, further comprising, after identifying the topographical information within the forward area:
determining, according to the terrain height in the terrain information, a terrain position in the front area whose height is lower than the obstacle crossing height of the autonomous mobile equipment;
directing the autonomous mobile equipment to continue traveling through the front area from that terrain position.
7. The method according to any one of claims 1-5, further comprising, after identifying the topographical information within the forward area:
and adding the terrain height in the terrain information to the corresponding terrain position in the environment map.
8. An environment map construction method is suitable for autonomous mobile equipment, and is characterized in that the autonomous mobile equipment is provided with a structured light module, and the structured light module comprises a camera module and a line laser transmitter; the method comprises the following steps:
in the process of traversing a working area, acquiring an environment image in the working area by using the structured light module, wherein the environment image comprises laser line segments;
identifying the terrain position and the height information thereof in the operation area based on the laser line segment in the environment image;
constructing an environment map of the operation area according to the terrain position and the height information of the terrain position in the operation area;
the laser line segment is formed after line laser emitted by the line laser emitter meets an object.
9. The method of claim 8, wherein constructing an environment map of the work area from the terrain position and elevation information thereof within the work area comprises:
acquiring an initial map corresponding to the operation area, wherein the initial map is constructed according to environmental information acquired by other sensors on the autonomous mobile equipment;
and adding, according to the terrain position in the working area, the height information of the terrain position to the initial map to obtain an environment map of the working area.
10. A method of travel for an autonomous mobile device, the method comprising:
determining that travel to a first area is required;
planning a navigation path to a first area according to the obstacle crossing height of the autonomous mobile equipment and by combining the terrain position and the height information thereof recorded in the environment map;
travel to the first area along a navigation path to the first area.
11. The method of claim 10, wherein planning a navigation path to a first area according to the obstacle crossing height of the autonomous mobile device in combination with the terrain position and height information thereof recorded in the environment map comprises:
judging, in combination with the terrain positions recorded in the environment map, whether terrain with height information has to be traversed on the way from the current position to the first area;
if so, determining the target terrain and the other path areas that need to be traversed, and selecting, according to the height information of the target terrain, a passable area on the target terrain whose height information is lower than the obstacle crossing height;
connecting the passable area on the target terrain with the other path areas to form the navigation path to the first area.
12. A traveling method is suitable for autonomous mobile equipment, and is characterized in that the autonomous mobile equipment is provided with a structured light module, and the structured light module comprises a camera module and a line laser transmitter; the method comprises the following steps:
in the advancing process, acquiring an environment image in a front area by using the structured light module, wherein the environment image comprises laser line segments;
identifying topographic information within the forward region based on laser line segments in the environmental image;
performing travel control based on topographic information within the forward area; the laser line segment is formed after line laser emitted by the line laser emitter meets an object.
13. The method of claim 12, wherein identifying topographical information within the forward region based on laser line segments in the environmental image comprises:
calculating the position and the length of the laser line segment in the environment image;
and calculating the terrain position and height corresponding to the laser line segment according to the position and length of the laser line segment in the environment image, in combination with the conversion relation between the coordinate system of the camera module and the world coordinate system.
14. The method of claim 12, wherein performing travel control based on topographical information within the forward region comprises:
judging, according to the terrain information in the front area, whether the front area contains a passable area whose height information is lower than the obstacle crossing height of the autonomous mobile equipment;
if present, proceed across the passable region.
15. The method of claim 14, further comprising:
if not, replanning the traveling path of the autonomous mobile equipment, or outputting alarm prompt information indicating that the autonomous mobile equipment is trapped.
16. An autonomous mobile device, comprising: the device comprises a device body, wherein one or more memories, one or more processors and a structured light module are arranged on the device body; the structured light module includes: the camera module and the line laser transmitter;
the one or more memories for storing a computer program; the one or more processors to execute the computer program to:
in the process of moving of the autonomous mobile equipment, acquiring an environment image in a front area by using the structured light module, wherein the environment image comprises a laser line segment;
identifying topographic information in the front area based on the laser line segments in the environment image;
the laser line segment is formed after line laser emitted by the line laser emitter meets an object.
17. An autonomous mobile device, comprising: the device comprises a device body, wherein one or more memories, one or more processors and a structured light module are arranged on the device body; the structured light module includes: the camera module and the line laser transmitter;
the one or more memories for storing a computer program; the one or more processors to execute the computer program to:
in the process that the autonomous mobile equipment traverses a working area, acquiring an environment image in the working area by using the structured light module, wherein the environment image comprises a laser line segment;
identifying the terrain position and the height information thereof in the operation area based on the laser line segment in the environment image;
constructing an environment map of the operation area according to the terrain position and the height information of the terrain position in the operation area;
the laser line segment is formed after line laser emitted by the line laser emitter meets an object.
18. An autonomous mobile device, comprising: the device comprises a device body, wherein one or more memories, one or more processors and a structured light module are arranged on the device body; the structured light module includes: the camera module and the line laser transmitter;
the one or more memories for storing a computer program; the one or more processors to execute the computer program to:
determining that travel to a first area is required;
planning a navigation path to a first area according to the obstacle crossing height of the autonomous mobile equipment and by combining the terrain position and the height information thereof recorded in the environment map;
travel to the first area along a navigation path to the first area.
19. An autonomous mobile device, comprising: the device comprises a device body, wherein one or more memories, one or more processors and a structured light module are arranged on the device body; the structured light module includes: the camera module and the line laser transmitter;
the one or more memories for storing a computer program; the one or more processors to execute the computer program to:
in the advancing process, acquiring an environment image in a front area by using the structured light module, wherein the environment image comprises laser line segments;
identifying topographic information within the forward region based on laser line segments in the environmental image;
performing travel control based on topographic information within the forward area; the laser line segment is formed after line laser emitted by the line laser emitter meets an object.
20. A computer-readable storage medium storing a computer program, which when executed by one or more processors causes the one or more processors to perform the steps of the method of any one of claims 1-15.
CN201911403757.4A 2019-12-30 2019-12-30 Terrain recognition, traveling and map construction method, equipment and storage medium Pending CN111093019A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911403757.4A CN111093019A (en) 2019-12-30 2019-12-30 Terrain recognition, traveling and map construction method, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111093019A true CN111093019A (en) 2020-05-01

Family

ID=70398170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911403757.4A Pending CN111093019A (en) 2019-12-30 2019-12-30 Terrain recognition, traveling and map construction method, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111093019A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106155066A (en) * 2016-09-29 2016-11-23 翁锦祥 A kind of mover carrying out road barrier detection and method for carrying
CN106200652A (en) * 2016-09-29 2016-12-07 翁锦祥 A kind of intelligent material conveying system and method for carrying
CN107819268A (en) * 2017-11-01 2018-03-20 中国科学院长春光学精密机械与物理研究所 The control method and device of laser power in 3 D scanning system
CN108444390A (en) * 2018-02-08 2018-08-24 天津大学 A kind of pilotless automobile obstacle recognition method and device
CN110393482A (en) * 2019-09-03 2019-11-01 深圳飞科机器人有限公司 Maps processing method and clean robot

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021135392A1 (en) * 2019-12-30 2021-07-08 科沃斯机器人股份有限公司 Structured light module and autonomous moving apparatus
CN111708360A (en) * 2020-05-15 2020-09-25 科沃斯机器人股份有限公司 Information acquisition method, device and storage medium
CN111708360B (en) * 2020-05-15 2023-01-31 科沃斯机器人股份有限公司 Information acquisition method, device and storage medium
WO2022007591A1 (en) * 2020-07-06 2022-01-13 追觅创新科技(苏州)有限公司 Linear laser beam-based method and device for obstacle avoidance
CN111743464A (en) * 2020-07-06 2020-10-09 追创科技(苏州)有限公司 Obstacle avoidance method and device based on line laser
EP4154788A4 (en) * 2020-07-06 2024-05-15 Dreame Innovation Technology (Suzhou) Co., Ltd. Linear laser beam-based method and device for obstacle avoidance
US11690490B2 (en) 2020-07-08 2023-07-04 Pixart Imaging Inc. Auto clean machine and auto clean machine control method
CN113907644A (en) * 2020-07-08 2022-01-11 原相科技股份有限公司 Automatic sweeper, control method of automatic sweeper and sweeping robot
CN112000093A (en) * 2020-07-15 2020-11-27 珊口(深圳)智能科技有限公司 Control method, control system and storage medium for mobile robot
CN112000093B (en) * 2020-07-15 2021-03-05 珊口(深圳)智能科技有限公司 Control method, control system and storage medium for mobile robot
CN112099504A (en) * 2020-09-16 2020-12-18 深圳优地科技有限公司 Robot moving method, device, equipment and storage medium
CN112596654A (en) * 2020-12-25 2021-04-02 珠海格力电器股份有限公司 Data processing method, data processing device, electronic equipment control method, device, equipment and electronic equipment
CN112596654B (en) * 2020-12-25 2022-05-17 珠海格力电器股份有限公司 Data processing method, data processing device, electronic equipment control method, device, equipment and electronic equipment
WO2022188364A1 (en) * 2021-03-08 2022-09-15 北京石头世纪科技股份有限公司 Line laser module and self-moving device
WO2022188366A1 (en) * 2021-03-08 2022-09-15 北京石头世纪科技股份有限公司 Line laser module and self-moving device
US11940806B2 (en) 2021-03-08 2024-03-26 Beijing Roborock Technology Co., Ltd. Line laser module and autonomous mobile device
WO2022205810A1 (en) * 2021-03-29 2022-10-06 追觅创新科技(苏州)有限公司 Structured light module and autonomous moving device
CN113221635A (en) * 2021-03-29 2021-08-06 追创科技(苏州)有限公司 Structured light module and autonomous mobile device
CN113240737A (en) * 2021-04-20 2021-08-10 云鲸智能(深圳)有限公司 Threshold identification method and device, electronic equipment and computer readable storage medium
CN113240737B (en) * 2021-04-20 2023-08-08 云鲸智能(深圳)有限公司 Method, device, electronic equipment and computer readable storage medium for identifying threshold
CN113670276A (en) * 2021-08-17 2021-11-19 山东中图软件技术有限公司 Underground passage mapping method, underground passage mapping equipment and underground passage storage medium based on UWB
CN113670276B (en) * 2021-08-17 2024-04-12 山东中图软件技术有限公司 UWB-based underground passage mapping method, device and storage medium
WO2023124788A1 (en) * 2021-12-28 2023-07-06 速感科技(北京)有限公司 Autonomous mobile device, control method therefor, apparatus, and storage medium

Similar Documents

Publication Publication Date Title
CN111093019A (en) Terrain recognition, traveling and map construction method, equipment and storage medium
CN111142526B (en) Obstacle crossing and operation method, equipment and storage medium
CN111090277B (en) Method, apparatus and storage medium for travel control
CN111123278B (en) Partitioning method, partitioning equipment and storage medium
US10328573B2 (en) Robotic platform with teach-repeat mode
US11407116B2 (en) Robot and operation method therefor
JP6772129B2 (en) Systems and methods for the use of optical mileage sensors in mobile robots
US20180361583A1 (en) Robotic platform with area cleaning mode
US8090193B2 (en) Mobile robot
CN110960138A (en) Structured light module and autonomous mobile device
US20180361584A1 (en) Robotic platform with long-term learning
KR20200029970A (en) A robot cleaner and a controlling method for the same
CN111708360B (en) Information acquisition method, device and storage medium
CN111083332B (en) Structured light module, autonomous mobile device and light source distinguishing method
CN110974083A (en) Structured light module and autonomous mobile device
CN110652256A (en) Mobile robot and control method
WO2020035902A1 (en) Mobile robot
CN106774295B (en) Distributed autonomous charging system for guided robot
US20140098218A1 (en) Moving control device and autonomous mobile platform with the same
US20230123512A1 (en) Robotic cleaning device with dynamic area coverage
JP2017021570A (en) Mobile robot
TWI759760B (en) Robot cleaner and method for controlling the same
CN212521620U (en) Structured light module and autonomous mobile device
CN116661458A (en) Robot travel control method, robot, and storage medium
KR20180038884A (en) Airport robot, and method for operating server connected thereto

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200501