CN110554696B - Robot system, robot and robot navigation method based on laser radar - Google Patents


Info

Publication number
CN110554696B
CN110554696B (application CN201910749492.7A)
Authority
CN
China
Prior art keywords
robot
distance
point cloud
image
determining
Prior art date
Legal status: Active
Application number
CN201910749492.7A
Other languages
Chinese (zh)
Other versions
CN110554696A (en)
Inventor
舒忠艳
张国栋
叶力荣
Current Assignee
Shenzhen Silver Star Intelligent Group Co Ltd
Original Assignee
Shenzhen Silver Star Intelligent Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Silver Star Intelligent Group Co Ltd
Priority to CN201910749492.7A
Publication of CN110554696A
Application granted
Publication of CN110554696B
Status: Active

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Land vehicles with means for defining a desired trajectory
    • G05D1/0221 Desired trajectory involving a learning process
    • G05D1/0223 Desired trajectory involving speed control of the vehicle
    • G05D1/0231 Land vehicles using optical position detecting means
    • G05D1/0234 Optical position detection using optical markers or beacons
    • G05D1/0236 Optical markers or beacons in combination with a laser
    • G05D1/0238 Optical position detection using obstacle or wall sensors
    • G05D1/024 Obstacle or wall sensors in combination with a laser
    • G05D1/0246 Optical position detection using a video camera in combination with image processing means
    • G05D1/0251 Extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D1/0257 Land vehicles using a radar
    • G05D1/0276 Land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A robot system, a robot, and a lidar-based robot navigation method. In embodiments of the invention, obstacle information about the robot's environment is detected by a lidar mounted on the robot; from the resulting image, the distances between obstacles are obtained and the first positions through which the robot can pass, or the second positions through which it cannot, are identified, and the robot is navigated accordingly. This reduces the probability of the robot becoming trapped, or lets it escape quickly once trapped, and so improves its working efficiency.

Description

Robot system, robot and robot navigation method based on laser radar
Technical Field
The invention relates to the field of intelligent robots, and in particular to a robot system, a robot, and a lidar-based robot navigation method.
Background
With the development of artificial-intelligence technology and rising living standards, robots are being applied ever more widely, in fields such as industry, agriculture, medical care, services, and smart homes.
In the prior art, most robots walk randomly and sense the environment with collision sensors mounted on the robot: when a collision sensor is triggered, the robot turns. Alternatively, the robot builds a map and walks in preset patterns such as zigzag or grid-shaped paths, again turning only after a collision sensor fires. In either case the robot cannot effectively judge where in the environment it can walk normally and where it cannot, so its working efficiency is low and it is easily trapped by objects in the working environment.
Disclosure of Invention
The technical problem addressed by the invention is to provide a robot system, a robot, and a lidar-based robot navigation method that overcome the low working efficiency of existing robots and their tendency to be trapped by objects in the working environment.
In order to solve the technical problem, the embodiment of the invention adopts the following technical scheme:
in a first aspect, the present invention provides a robot navigation method based on a laser radar, including:
determining that the robot is trapped;
acquiring an image frame acquired by a laser radar when the robot is trapped;
identifying an image, and obtaining a first distance between adjacent obstacles according to obstacle information in the identified image;
marking one or more first positions through which the robot can pass according to a first distance between adjacent obstacles;
and traversing the first positions in sequence.
In one embodiment of the invention, marking one or more first positions through which the robot can pass according to a first distance between adjacent obstacles comprises:
determining a minimum first distance between adjacent obstacles;
storing the position where the minimum first distance is greater than the diameter of the robot body as a first position.
In one embodiment of the present invention, said sequentially traversing said first locations comprises:
and sequencing according to the size of the first distance of the first position, and traversing from large to small according to the first distance in sequence.
In one embodiment of the present invention, said sequentially traversing said first locations comprises:
the distance from the robot to the first position is a second distance;
and sequencing according to the size of the second distance, and traversing from small to large according to the second distance in sequence.
In one embodiment of the present invention, the determining that the robot is trapped comprises:
in the working process of the robot, an environment map is established;
and when the working time of the robot in a certain area on the environment map reaches a first preset time, determining that the robot is trapped.
In an embodiment of the invention, the navigation method further includes setting a certain area on the environment map as an exclusion zone, and the robot avoids the exclusion zone in a subsequent operation process.
In one embodiment of the invention, the determining that the robot is trapped comprises:
the robot is preset with a collision sensor, and in the working process of the robot, the collision sensor is triggered for a certain number of times within a certain time, and then the robot is determined to be trapped.
In one embodiment of the present invention, the lidar based robot navigation method further includes:
and sequentially traversing the first position and then judging whether the cleaning robot is successfully released, if so, recovering the cleaning robot to be in a normal working state, otherwise, stopping the cleaning robot and sending alarm information.
In a second aspect, the present invention provides a robot comprising:
a robot main body;
a drive mechanism configured to drive the robot to move over a ground surface;
a laser radar configured to detect obstacle information around the robot; and
a control module configured to perform:
acquiring an image frame acquired by a laser radar when the robot is trapped;
identifying an image, and obtaining a first distance between adjacent obstacles according to obstacle information in the identified image;
marking one or more first positions through which the robot can pass according to a first distance between adjacent obstacles;
and traversing the first positions in sequence.
In one embodiment of the invention, marking one or more first locations through which the robot can pass based on a first distance between adjacent obstacles comprises:
determining a minimum first distance between adjacent obstacles;
storing the position where the minimum first distance is greater than the diameter of the robot body as a first position.
In a third aspect, the present invention provides another robot comprising:
a robot main body;
a drive mechanism configured to drive the robot to move over a ground surface;
a laser radar configured to detect obstacle information around the robot; and
a control module configured to perform:
acquiring an image frame acquired by a laser radar of the robot;
identifying an image, and obtaining a first distance between adjacent obstacles according to obstacle information in the identified image;
marking a first position where the robot can pass and a second position where the robot cannot pass according to a first distance between adjacent obstacles;
and marking the second position as a restriction feature, the control module controlling the robot to perform an avoidance operation according to the second-position information.
In one embodiment of the invention, the first position is a position where a minimum first distance between adjacent obstacles is greater than a diameter of the robot body, and the second position is a position where the minimum first distance between adjacent obstacles is no greater than the diameter of the robot body.
In a fourth aspect, the present invention provides a robotic system comprising:
a self-moving robot configured to:
acquiring an image frame acquired by a laser radar of the robot;
identifying an image, and obtaining a first distance between adjacent obstacles according to obstacle information in the identified image;
identifying a first position through which the robot can pass and a second position through which the robot cannot pass according to a first distance between adjacent obstacles; and
a mobile terminal configured to add a line segment at the second position, the line segment representing restriction information, and to transmit the restriction information to the robot, the robot performing an avoidance operation according to the restriction information.
Compared with the prior art, the technical scheme of the embodiment of the invention at least has the following beneficial effects:
in embodiments of the invention, obstacle information around the robot detected by the lidar mounted on the robot is acquired, the distances between obstacles are obtained from the image, and the first positions through which the robot can pass, or the second positions through which it cannot, are identified; the robot is then navigated accordingly. This reduces the probability of the robot becoming trapped, or lets it escape quickly once trapped, and so improves its working efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings described are only some embodiments of the present invention, and that other drawings can be obtained by those skilled in the art without inventive effort.
FIG. 1 is a perspective view of a robot embodying the present invention;
FIG. 2 is a bottom view of the robot in one embodiment of the present invention;
FIG. 3 is a flow chart of method steps performed by the control module;
FIG. 4 is a block diagram of a condition for determining that a robot is trapped in an embodiment of the invention;
FIG. 5 is a schematic diagram of a robot determining trapped in an embodiment of the present invention;
FIG. 6 is a diagram of the steps for acquiring lidar data in an embodiment of the present invention;
FIG. 7 is a flow chart of image recognition according to an embodiment of the present invention;
FIG. 8 is a schematic illustration of determining a minimum first distance in an embodiment of the present invention;
FIG. 9 is a flow chart of determining a first position in an embodiment of the present invention;
FIG. 10 is a diagram illustrating finding a minimum distance between two straight lines according to an embodiment of the present invention;
FIG. 11 is a schematic illustration of determining a minimum first distance in another embodiment of the present invention;
FIG. 12 is a block diagram of a robot traversing a first location in accordance with an embodiment of the present invention;
FIG. 13 is a schematic diagram of a robot escaping from a trapped state in accordance with an embodiment of the present invention;
FIG. 14 is a flowchart of the robot determining whether it has successfully escaped, in an embodiment of the present invention;
FIG. 15 is a block diagram of a robot in another embodiment of the present invention;
FIG. 16 is a flow chart of method steps performed by the control module in accordance with another embodiment of the present invention;
FIG. 17 is a block diagram of a robotic system in an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "front", "back", "left" and "right" in the present text refer to the forward direction of the robot, and the terms "top", "bottom", "up", "down", "horizontal" and "vertical" in the present text refer to the normal working state of the robot.
The invention is described using, as an example, a robot mainly intended for cleaning floors in a home environment; in other embodiments, the robot may be another kind of robot, such as a lawn robot, a service robot, or a robot for cleaning environments such as restaurants, stations, and airports.
Referring to fig. 1 and 2, fig. 1 is a perspective view of a robot according to an embodiment of the present invention, and fig. 2 is a bottom view of the robot according to an embodiment of the present invention. The robot 110 according to the present invention includes: a main body 10, a driving mechanism 20 for driving the robot 110 to move on the ground, a laser radar 30 for detecting obstacle information of an environment where the robot 110 is located, and a control module 40. The body 10 is generally circular in shape in this embodiment, and in other embodiments, the body 10 may be generally oval, triangular, D-shaped, or other shapes in shape. The driving mechanism 20 includes left and right driving wheels 21, the left and right driving wheels 21 are installed at left and right sides of a bottom of the main body 10, the bottom being a surface of the main body 10 facing the ground, and the driving mechanism 20 is configured to carry the robot 110 and drive the robot 110 to move on the ground. The driving mechanism 20 may further include an omni wheel 22, the omni wheel 22 is mounted at a position near the bottom of the main body 10, and the omni wheel 22 is a movable caster capable of rotating 360 degrees horizontally, so that the robot 110 can steer flexibly. The omni-directional wheel 22 may also be installed at a position near the rear of the bottom of the main body 10, and the left and right driving wheels 21 and the omni-directional wheel 22 are installed to form a triangle, so as to improve the walking smoothness of the robot 110.
The control module 40 is installed in the main body 10. The control module 40 may comprise several component-level controllers, each controlling its own component, or a single controller that controls all components. For example, the control module 40 may include a main control module provided in the main body 10, a drive control module that senses speed information of the driving mechanism 20 and controls the driving mechanism 20 to adjust the motion of the robot 110, a lidar control module that controls operation of the laser radar 30, and the like. Each component-level controller transmits its information to the main control module, which processes the information from each component and feeds corresponding control instructions back to each of them; all components communicate and exchange signals with the main control module at the centre. Alternatively, only one control module 40 may be provided, electrically connected to each of the other components to control their operation. The control module 40 may be a microcontroller unit such as a single-chip microcomputer, an FPGA, an ASIC, or a DSP.
Conceivably, when the robot 110 is a robot for cleaning floors in a domestic environment, the robot 110 further includes a cleaning assembly 50. The cleaning assembly 50 includes a rolling member 52 that is mounted transversely on the main body 10 and rotates about an axis substantially perpendicular to the advancing direction of the robot 110 to clean the floor; a suction port is formed in the middle of the position where the rolling member 52 is installed. The robot 110 further includes a storage box 70 that communicates with the rolling member 52 through the suction port so as to collect the dirt swept to the suction port by the rolling member 52. The rolling member 52 may be a brush, a rubber brush, or a brush integrated with rubber for sweeping the floor, in which case the storage box 70 mainly stores garbage, dust and debris. The rolling member 52 may also be a roller covered with a soft elastic layer for mopping the floor, in which case the storage box 70 mainly collects the dirt wiped up by the rolling member 52. The cleaning assembly 50 may further include an edge brush 51 rotatable about an axis substantially perpendicular to the floor; the edge brush 51 has a plurality of elongated bristles spaced about that axis and extending outward beyond the contour of the main body 10, so as to sweep debris lying outside the contour of the main body 10 toward its bottom. The robot 110 may further include a blower fan that draws the contaminants collected by the cleaning assembly 50 into the storage box 70.
The robot 110 may further include a collision detection assembly 60 installed around the periphery of the robot 110. When the robot 110 collides with an external object, the collision detection assembly 60 is triggered, and the control module 40 can control how the robot 110 walks according to the signal from the collision detection assembly 60.
The laser radar 30 is installed on top of the robot 110 and includes a laser emitter, a laser receiver, a drive motor, and an angle detection element. The drive motor spins the laser radar 30 at high speed; the laser emitter emits a laser beam that is reflected by obstacles in the environment and received by the laser receiver. When the laser radar 30 has rotated through three hundred and sixty degrees, it outputs one frame of image information, which consists of a number of point cloud data.
The laser radar 30 detects the environment of the robot 110 and yields the distances between adjacent obstacles, from which the first positions through which the robot 110 can pass are obtained; after the robot 110 is trapped, it can then be navigated out quickly, improving its working efficiency. Referring to fig. 3, fig. 3 is a flow chart of the method steps performed by the control module.
S10, it is determined that the robot 110 is trapped.
When the robot 110 performs a cleaning task in a home environment, it often drives under a table or a stool, or into objects left out by children at play, and gets stuck. In an embodiment of the present invention, please refer to fig. 4, which is a block diagram of the conditions for determining that the robot is trapped; there are two ways to determine that the robot 110 is trapped:
in a first mode, S101, the robot 110 works in a certain area on the environment map for a preset time. In the normal working process of the robot 110, an environment map is established to locate the position of the robot 110, and when the robot 110 repeatedly works in a certain area on the map for a long time, it can be determined that the robot 110 is trapped. The area may be a predetermined area threshold, and the cleaning coverage of the robot 110 is defined as the ratio of the area of the cleaned area to the total area, and the area of repeated cleaning is not counted again in the cleaned area, and is expressed as a percentage. If the cleaning coverage of the robot 110 at the previous time is a, the cleaning coverage at the current time is still a, and the time length from the previous time to the current time reaches the preset time, it is determined that the robot 110 is trapped. Coverage a may be a specific value, such as sixty percent; the coverage a may also be a range of values, such as sixty percent to sixty-one percent. In other embodiments, the certain area may also be determined directly by the position of the map where the robot 110 is located, please refer to fig. 5, where fig. 5 is a schematic diagram of the robot determining the trapped object in an embodiment of the present invention. When the working time of the robot 110 reaches the preset time, the circumscribed circle is virtualized by the active area B on the map where the robot 110 is located, the radius r of the circumscribed circle is calculated, and if the radius r is within the preset radius, it is determined that the robot 110 is trapped. In other embodiments, the laser radar-based robot navigation method further includes setting a certain area on the environment map as an exclusion zone, and the robot avoids the exclusion zone in a subsequent operation process. For example, in the trapped area B shown in fig. 5, which has only one trapped place, the robot is difficult to trap itself, and the area B on the environment map may be set as an exclusion zone, so that the robot avoids the exclusion zone in the subsequent work process, thereby reducing the probability of trapping the robot.
In the second way, S102, during operation of the robot 110, if the collision sensor is triggered a certain number of times within a certain period, the robot 110 is determined to be trapped. The collision detection assembly 60 is installed around the periphery of the robot 110; when the robot 110 drives under a stool with many legs, the collision detection assembly is triggered very easily as the robot 110 turns. A preset period can therefore be set, and if the number of times the collision detection assembly 60 is triggered within that period reaches a collision threshold, the robot 110 is determined to be trapped. After it is determined that the robot 110 is trapped, the process proceeds to step S20.
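A corresponding sketch of the collision-count check is given below; the window length and collision threshold are illustrative assumptions, and the bumper callback interface is hypothetical.

```python
from collections import deque

class TrappedByCollisionDetector:
    """Counts bumper triggers inside a sliding time window; the window length
    and collision threshold are illustrative values, not the patent's."""

    def __init__(self, window_s=30.0, collision_threshold=8):
        self.window_s = window_s
        self.collision_threshold = collision_threshold
        self._events = deque()

    def on_collision(self, timestamp_s):
        """Call once per bumper trigger; returns True when trapped."""
        self._events.append(timestamp_s)
        # Drop events that have fallen out of the sliding window.
        while self._events and timestamp_s - self._events[0] > self.window_s:
            self._events.popleft()
        return len(self._events) >= self.collision_threshold
```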
S20 acquires an image frame acquired by the lidar when the robot 110 is trapped. Referring specifically to fig. 6, fig. 6 is a diagram illustrating steps for acquiring lidar data in accordance with an embodiment of the present invention.
After the robot 110 is trapped, it stops walking, or stops performing the cleaning task while it stops walking; for example, the cleaning assembly 50 and the blower fan are also switched off, to reduce power consumption and prevent the robot 110 from running out of power and shutting down in the trapped spot. Step S21 is executed to acquire an image frame: the lidar scans one full circle of the environment around the robot 110, the laser point cloud data are stored, and one frame of image is output. Because the robot 110 has stopped walking while the lidar data are collected, distortion of the laser point cloud is reduced and the positioning accuracy of obstacles detected from the lidar data is improved. In other embodiments the robot 110 may keep moving, in which case the point cloud correction of step S22 is needed: while one frame of laser data is being acquired, the robot 110 has already moved along some trajectory, which distorts the frame, and as the distortion grows the positioning accuracy of the robot 110 suffers. The detection of obstacles in the robot's environment must additionally be converted according to the robot's pose, so obstacle detection accuracy is affected as well; finding the positions through which the robot 110 can escape requires high-accuracy localisation and navigation if the robot 110 is to escape quickly, so the point cloud distortion caused by the robot's motion must be taken into account. Since the degree of distortion is proportional to the robot's speed and inversely proportional to the lidar frequency, the point cloud can be corrected once the motion of the robot 110 is known. The point cloud may be corrected with a continuous-time trajectory model, or the motion of the robot 110 may be predicted from other sensors such as odometers, IMUs, cameras or the lidar itself, or a constant-velocity motion model may be built for motion prediction and the point cloud corrected accordingly. Because the robot 110 generally moves at low speed, correction with the constant-velocity model is simple and practical.
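The constant-velocity correction mentioned above could look roughly like the following sketch, which de-skews one frame of 2D lidar points given an assumed forward speed and yaw rate. The per-point timestamps and the NumPy representation are assumptions of the sketch, not the patent's implementation.

```python
import numpy as np

def deskew_scan(ranges, angles, timestamps, v, omega):
    """De-skews one lidar frame under a constant-velocity model.

    ranges, angles, timestamps: per-point measurements, with timestamps
    relative to the start of the frame. v is the assumed forward speed (m/s)
    and omega the yaw rate (rad/s) during the frame. Returns the points
    expressed in the robot frame at the start of the scan."""
    ranges = np.asarray(ranges, float)
    angles = np.asarray(angles, float)
    pts = np.stack([ranges * np.cos(angles), ranges * np.sin(angles)], axis=1)
    corrected = np.empty_like(pts)
    for i, (p, t) in enumerate(zip(pts, timestamps)):
        # Pose of the robot at time t relative to its pose at t = 0
        # (circular-arc motion for constant v and omega).
        theta = omega * t
        if abs(omega) > 1e-6:
            dx = v / omega * np.sin(theta)
            dy = v / omega * (1.0 - np.cos(theta))
        else:
            dx, dy = v * t, 0.0
        c, s = np.cos(theta), np.sin(theta)
        # Point measured in the frame at time t, mapped back to the frame at t = 0.
        corrected[i] = np.array([dx, dy]) + np.array([[c, -s], [s, c]]) @ p
    return corrected
```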
S23, the point cloud data are sampled. Each lidar frame contains a large amount of point cloud data, not all of which is accurate and reliable, and the sheer volume of data seriously affects the timeliness of the algorithm; the point cloud therefore needs to be sampled, i.e. part of the points are filtered out and suitable points are kept. Because the reflection intensity of a given point in the lidar point cloud is, in general, inversely proportional to the square of the distance from that point to the lidar, a first threshold can be preset, and reliable points are selected by testing whether the product of the reflection intensity and the square of the distance from the point to the lidar exceeds the first threshold: if it does, the point is reliable; if not, the point is unreliable and is filtered out. The output image frame is formed from the reliable point cloud data; it retains the salient feature information of the original point cloud while reducing the number of points, which lowers the complexity of later computation. The process then advances to step S30.
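A minimal sketch of the reliability filter of step S23 follows, assuming the lidar driver exposes per-point range, angle, and reflection intensity; the threshold value itself is sensor dependent and is an assumption here.

```python
import numpy as np

def filter_reliable_points(ranges, angles, intensities, first_threshold):
    """Keeps only the points whose reflection intensity times squared range
    exceeds the first threshold, as described in step S23."""
    ranges = np.asarray(ranges, dtype=float)
    angles = np.asarray(angles, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    reliable = intensities * ranges ** 2 > first_threshold
    return ranges[reliable], angles[reliable]
```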
S30, the image is recognised, and the first distances between adjacent obstacles are obtained from the obstacle information in the recognised image. Referring to fig. 7, fig. 7 is a flow chart of image recognition according to an embodiment of the invention. S31, the point cloud data are segmented and the obstacle information in the image is identified. For each frame of point cloud data, the point cloud is first divided into separate point cloud blocks, each block representing one obstacle. A second threshold is preset: if the distance between two consecutive points is smaller than the second threshold, the two points belong to the same block; if the distance between two consecutive points is larger than the second threshold, the frame is split at that point. One frame of point cloud data is thus divided into several point cloud blocks, each containing a number of points. Because the obstacles in the robot's environment are not uniformly distributed, the point density in the frame output after one lidar revolution varies; in general, points near the robot 110 are dense and points far from the robot 110 are sparse. The segmentation therefore uses an adaptive, variable second threshold: for example, when the distance from a point to the robot 110 is d, the segmentation threshold is chosen as d, and when the distance is 2d, the threshold is chosen as 2d; other linear or non-linear functions may also be used to define the adaptive threshold. In short, different second thresholds are chosen for points at different distances from the robot 110, so that the segmented blocks fit the obstacle features of the environment more closely. The distance from a point to the robot 110 is generally the distance from the point to the centre of the lidar mounted on the robot 110, and the lidar may be mounted at a forward, rearward, or central position on top of the robot 110; if the lidar is mounted at the central position and the robot 110 is circular, the centre of the lidar coincides with the centre of the robot 110. The process then proceeds to steps S32 and S33.
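The adaptive segmentation of step S31 might be sketched as follows, with an assumed proportionality constant k between the second threshold and the distance to the robot; the text's example corresponds to choosing the threshold equal to the distance itself, and other linear or non-linear functions may be substituted.

```python
import numpy as np

def segment_point_cloud(points, robot_xy=(0.0, 0.0), k=0.05):
    """Splits one frame of points (ordered by scan angle, at least one point)
    into point-cloud blocks using an adaptive second threshold proportional
    to the distance from the robot: threshold = k * d.
    Each returned block is an array of points representing one obstacle."""
    points = np.asarray(points, dtype=float)
    robot_xy = np.asarray(robot_xy, dtype=float)
    blocks, current = [], [points[0]]
    for prev, cur in zip(points[:-1], points[1:]):
        d_to_robot = np.linalg.norm(cur - robot_xy)
        second_threshold = k * d_to_robot
        if np.linalg.norm(cur - prev) < second_threshold:
            current.append(cur)                  # same obstacle
        else:
            blocks.append(np.array(current))     # tearing point: new block
            current = [cur]
    blocks.append(np.array(current))
    return blocks
```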
Step S32 extracts obstacle features and step S33 computes the first distances between adjacent obstacles. In the point cloud segmentation of step S31, the points at which the data are split into separate blocks are the tearing points, and each block of point cloud data is fitted to a straight line; the fitting method may be least squares, weighted least squares, or the like. Referring to fig. 8, a schematic diagram of determining the minimum first distance in an embodiment of the present invention: the adjacent obstacles are a first obstacle and a second obstacle, characterised by straight lines 1 and 2 fitted to the point cloud data. Line 1 carries the points M1 to M8 and line 2 the points N1 to N5; the first distances from each of M1 to M8 to each of N1 to N5 are computed in turn and stored.
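Steps S32 and S33 could be sketched as a line fit per block followed by the pairwise distance table between two adjacent blocks (the M-to-N distances of FIG. 8); the SVD-based total-least-squares fit used below is one possible least-squares variant, not necessarily the one the patent uses.

```python
import numpy as np

def fit_line(block):
    """Fits a point-cloud block to a straight line by total least squares
    (principal axis through the centroid). Returns (centroid, unit direction)."""
    centroid = block.mean(axis=0)
    _, _, vt = np.linalg.svd(block - centroid)
    return centroid, vt[0]

def pairwise_first_distances(block_a, block_b):
    """First distances between two adjacent obstacles: every point of block_a
    (M1..M8 in FIG. 8) against every point of block_b (N1..N5)."""
    diffs = block_a[:, None, :] - block_b[None, :, :]
    return np.linalg.norm(diffs, axis=2)   # shape (len(block_a), len(block_b))
```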
S40, one or more first positions through which the robot 110 can pass are marked according to the first distances between adjacent obstacles. Referring to fig. 9, fig. 9 is a flowchart of determining a first position according to an embodiment of the invention. Step S41 determines the minimum distance between adjacent obstacles; the minimum first distance can be obtained by comparing the stored first distances. In other embodiments, the minimum distance between obstacles can instead be found from the fitted straight lines: it is first determined whether a fitted line contains corner points, and if it does not, it is treated as a single straight line and the minimum first distance between the two lines is searched for directly. Referring to fig. 10 and 11, fig. 10 illustrates finding the minimum distance between two straight lines according to an embodiment of the present invention, and fig. 11 illustrates determining the minimum first distance in another embodiment.
Let the midpoint of line 1 be M' and the midpoint of line 2 be N', and compute the distance d1 between M' and N'. The two points on either side of N' are N2 and N4; compute the distance d2 from M' to N2 and the distance d3 from M' to N4 and compare them; in fig. 10, d2 is smaller than d3. The distances from M' to successive points in the direction of N2 are then computed until the distance d4 from M' to the end point N1 of line 2 is reached, so N1 is the end point yielding the minimum distance. Taking N1 as the new ranging start point, the end point M1 of obstacle 1 closest to obstacle 2 is found in the same way, and the distance d5 between N1 and M1 is computed; d5 is the minimum first distance between the adjacent first and second obstacles. If the fitted line is judged to contain corner points, the distances from every point on line 1 to every point on line 2 are computed in turn according to the method above, and the minimum first distance is found by comparison.
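The following sketch returns the minimum first distance between two blocks by exhaustive comparison, which is the fallback the text describes when corner points are present; the midpoint-walk of FIG. 10 is a shortcut that computes the same quantity with fewer comparisons and is not reproduced here.

```python
import numpy as np

def minimum_first_distance(block_a, block_b):
    """Minimum first distance between two adjacent obstacles, found by
    comparing every point pair; returns the gap width and the two closest
    points (the end points between which the robot would pass)."""
    d = np.linalg.norm(block_a[:, None, :] - block_b[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return d[i, j], block_a[i], block_b[j]
```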
S42 stores the position where the minimum first distance is greater than the diameter of the body of the robot 110 as the first position.
A position through which the robot 110 can pass must have a gap between the adjacent obstacles that is larger than the diameter of the robot's body. The diameter of the body of the robot 110 is stored in advance, and the minimum first distance between adjacent obstacles has been determined in the preceding steps; this minimum distance is compared with the body diameter, and every position whose minimum distance between adjacent obstacles exceeds the body diameter is stored as a first position through which the robot 110 can pass.
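Step S42 could be sketched as below, reusing the minimum-gap helper from the previous sketch; the dictionary layout for a first position is an assumption of the sketch.

```python
def mark_first_positions(obstacle_blocks, robot_diameter_m, min_gap_fn):
    """Marks first positions: gaps between adjacent obstacles wide enough for
    the robot body to pass. min_gap_fn returns (gap_width, point_a, point_b)
    for a pair of blocks, e.g. the minimum_first_distance sketch above."""
    first_positions = []
    for block_a, block_b in zip(obstacle_blocks[:-1], obstacle_blocks[1:]):
        gap, pa, pb = min_gap_fn(block_a, block_b)
        if gap > robot_diameter_m:
            midpoint = (pa + pb) / 2.0     # target point to drive through
            first_positions.append({"gap": gap, "target": midpoint})
    return first_positions
```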
S50, the first positions are traversed. Referring to fig. 12, fig. 12 is a block diagram of the robot traversing the first positions in an embodiment of the present invention. So that the robot 110 can escape as soon as possible, or determine whether it can escape by itself, there are two ways for the robot 110 to traverse the first positions; please refer to fig. 13, a schematic diagram of the robot escaping in an embodiment of the present invention. The straight lines in the figure represent obstacles in the environment of the robot 110; the obstacle information has already been obtained by processing the image frames acquired by the lidar, and the positions through which the robot 110 may pass are marked as the first positions X1, X2, and X3.
In the first way, S501, the first positions are sorted by their minimum first distance and traversed in order from largest to smallest. Sorted from largest to smallest, the minimum first distances of the first positions give the order X1, X3, X2; the robot 110 therefore visits X1 first, and if it cannot escape at X1 it visits X3, and if it cannot escape at X3 it visits X2. Traversal gives priority to the positions where escape is most likely, so as to reduce the time the robot 110 needs to escape.
In the second way, S502, the distance from the robot 110 to a first position is taken as the second distance, the first positions are sorted by second distance, and they are traversed in order from smallest to largest. The second distances from the robot 110 to the candidate escape positions X1, X2, and X3 are dx1, dx2, and dx3 respectively; sorted from smallest to largest they give the order dx2, dx1, dx3, so the robot 110 visits X2 first, then X1 if it cannot escape at X2, and then X3 if it cannot escape at X1. Traversal gives priority to the candidate escape position closest to the robot 110, so as to reduce the time the robot 110 needs to escape.
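Both traversal orderings can be expressed compactly; the function below applies the widest-gap-first order when no robot position is supplied and the closest-first order otherwise. The field names follow the earlier sketches and are illustrative.

```python
import math

def order_first_positions(first_positions, robot_xy=None):
    """Orders the first positions for traversal.

    If robot_xy is None, use the first strategy (largest minimum first
    distance, i.e. widest gap, first); otherwise use the second strategy
    (smallest second distance, i.e. closest to the robot, first)."""
    if robot_xy is None:
        return sorted(first_positions, key=lambda p: p["gap"], reverse=True)

    def second_distance(p):
        return math.hypot(p["target"][0] - robot_xy[0],
                          p["target"][1] - robot_xy[1])

    return sorted(first_positions, key=second_distance)
```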
Referring to fig. 14, fig. 14 is a flowchart of the robot determining whether it has successfully escaped, in an embodiment of the present invention. S60 checks whether the robot 110 escapes successfully while traversing the first positions. If the robot 110 has escaped during the traversal, it continues the cleaning task in step S70, i.e. the robot 110 restarts the cleaning assembly and the blower fan and cleans the floor. If the robot 110 has not escaped, it judges in step S80 whether all first positions have been traversed; if not, the robot 110 simply continues attempting to escape at the next first position. If all first positions have been traversed and the robot 110 still has not escaped, the process goes to step S90 and the robot 110 reports an error: it concludes that it cannot free itself from this environment and needs the user's assistance. The error may be reported by flashing a light on the robot 110, emitting an error sound, or sending a message to the mobile terminal to prompt the user.
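The overall escape loop of FIG. 14 might be sketched as follows; drive_to(), is_free() and report_error() are hypothetical robot methods used only to show the control flow, not an actual API.

```python
def try_to_escape(robot, ordered_first_positions):
    """Traverses the candidate first positions in order and reports the
    outcome (steps S50 to S90)."""
    for position in ordered_first_positions:
        robot.drive_to(position["target"])
        if robot.is_free():
            return True            # back to normal cleaning (step S70)
    robot.report_error()           # all first positions tried (step S90)
    return False
```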
In this embodiment, after the robot 110 is trapped, the obstacle information of the robot's environment detected by the lidar mounted on the robot 110 is acquired, the distances between obstacles are obtained from the image, the first positions through which the robot 110 can pass are determined, and the robot 110 is navigated to traverse those first positions to escape. After being trapped, the robot 110 thus considers only the positions from which it might escape and excludes the positions from which it cannot, which improves its escape efficiency, reduces the time it spends determining whether it can free itself, and improves its working efficiency.
In another embodiment of the present invention, a robot 110 is further provided, and the structure and the function of the robot 110 provided in this embodiment are substantially the same as those of the robot 110 described in the foregoing embodiment. The difference lies in that:
referring to fig. 15 and 16, fig. 15 is a block diagram of a robot according to another embodiment of the present invention, and fig. 16 is a flowchart of method steps executed by the control module according to another embodiment of the present invention. In this embodiment, the robot 110110 includes a driving mechanism 20, a laser radar 30, and a control module 40, and the structures and functions of the driving mechanism 20 and the laser radar 30 are the same as those described in the foregoing embodiment. The control module 40 is configured to execute the steps shown in fig. 16, where the executed step S20 obtains the image frames acquired by the laser radar of the robot 110 and the identification image of the step S30, and a process of obtaining the first distance between adjacent obstacles according to the obstacle information in the identified image is the same as the process described in the foregoing embodiment, and is not described herein again. In this embodiment, after the controller performs S30 recognition of the image, obtains a first distance between adjacent obstacles according to obstacle information in the recognized image, and then proceeds to step S401 to mark a first position where the robot 110 can pass and a second position where the robot 110 cannot pass according to the first distance between the adjacent obstacles. The process advances to step S402 to set a second position as a constraint characteristic, and the control module controls the robot 110 to perform avoidance operation according to the second position information. The first position and the second position are still determined by the minimum first distance between adjacent obstacles, and if the minimum first distance is greater than the diameter of the body of the robot 110, the first position is marked, and if the minimum first distance is less than or equal to the diameter of the body of the robot 110, the second position is marked. When the robot 110 starts to work or in the process of working, the first position and the second position may be displayed on the map when the map is created, and when the robot 110 performs path planning, the robot 110 may be controlled to perform avoidance operation by avoiding the second position or by walking to the second position by the robot 110, and the avoidance operation may be steering after backing a certain distance or direct steering. The robot 110 in this embodiment may be a service robot 110, an outdoor or indoor cleaning robot 110, or the like.
In this embodiment, while the robot 110 is working, the obstacle information of its environment detected by the lidar mounted on the robot 110 is acquired, the distances between obstacles are obtained from the image, the first positions where the robot 110 can pass and the second positions where it cannot are determined, and the robot's path is planned from those first and second positions so that the robot 110 avoids walking through positions it cannot pass. This reduces the probability of the robot 110 becoming trapped and improves its working efficiency.
In another embodiment, the present invention provides a robot system 100; please refer to fig. 17, a block diagram of the robot system in an embodiment of the present invention. The robot system 100 includes a robot 110 and a mobile terminal 120. The robot 110 includes a driving mechanism 20, a laser radar 30, and a control module 40, and the structures and functions of the driving mechanism 20 and the laser radar 30 are the same as in the foregoing embodiments. The control module 40 communicates with the lidar 30 and is configured to execute the control process of steps S20 to S401 shown in fig. 16: acquiring the image frame captured by the lidar of the robot 110 (S20), recognising the image and obtaining the first distances between adjacent obstacles from the obstacle information in the recognised image (S30), and marking, from those first distances, the first positions where the robot 110 can pass and the second positions where it cannot (S401), all as described in the foregoing embodiments and not repeated here.
The difference in this embodiment is that the robot 110 creates an environment map, and the first and second positions marked in step S401 are each represented by end points marked at their two ends, i.e. the two end points of the shortest line between the adjacent obstacles. The mobile terminal 120 communicates with the robot 110 to obtain its environment map, and the two end points of the shortest line between adjacent obstacles are presented on the map in different styles for first positions and for second positions, so that the user can tell them apart. The different presentation may be a difference in shape, size, colour, or dynamic behaviour of the two end points; for example, the two end points of the shortest line between adjacent obstacles at a first position may be drawn as rectangles while those at a second position are drawn as circles, so that the user can distinguish them. Through the mobile terminal the user can add a line segment at a second position, i.e. connect the two end points of the shortest line between the adjacent obstacles representing that second position and add the segment to the environment map. The connecting line represents restriction information, which is transmitted to the robot 110, and the robot 110 performs avoidance operations according to the restriction information.
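One plausible way to use the restriction information, sketched below, is to treat the user-drawn line segment as a virtual wall and reject any straight-line motion step that would cross it; the patent does not prescribe this exact avoidance mechanism, so the crossing test is an assumption of the sketch.

```python
import numpy as np

def _orient(a, b, c):
    """Sign of the turn a -> b -> c: +1 counter-clockwise, -1 clockwise, 0 collinear."""
    val = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (val > 0) - (val < 0)

def segments_cross(p1, p2, q1, q2):
    """True if segment p1-p2 strictly crosses segment q1-q2 (collinear
    touching cases are ignored in this sketch)."""
    return (_orient(p1, p2, q1) != _orient(p1, p2, q2)
            and _orient(q1, q2, p1) != _orient(q1, q2, p2))

def crosses_restriction(path_start, path_end, restriction_lines):
    """Checks whether a planned straight-line step crosses any restriction
    line, where each restriction line is the segment the user drew between
    the two end points of a second position on the mobile terminal."""
    p1, p2 = np.asarray(path_start, float), np.asarray(path_end, float)
    return any(segments_cross(p1, p2, np.asarray(a, float), np.asarray(b, float))
               for a, b in restriction_lines)
```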
In this embodiment, while the robot 110 is working, the obstacle information of its environment detected by the lidar mounted on the robot 110 is acquired, the distances between obstacles are obtained from the image, and the first positions where the robot 110 can pass and the second positions where it cannot are determined. Through the mobile terminal, the two end points of the shortest line between the adjacent obstacles at a second position are connected into a line, and that connecting line represents restriction information; during navigation the robot 110 therefore avoids the second positions it cannot pass, which reduces the probability of it becoming trapped and improves its working efficiency.
In the description of the specification, references to "one embodiment," "some embodiments," "an example," "a specific example" or "an alternative embodiment" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above-described embodiments do not limit the scope of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and principle of the above-described embodiments should be included in the protection scope of the technical solution.

Claims (9)

1. A robot navigation method based on laser radar comprises the following steps:
determining that the robot is trapped;
acquiring an image frame acquired by a laser radar when the robot is trapped;
identifying an image, and obtaining a first distance between adjacent obstacles according to obstacle information in the identified image;
marking one or more first locations through which the robot can pass based on a first distance between adjacent obstacles, comprising: determining a minimum first distance between adjacent obstacles; storing the location where the minimum first distance is greater than the diameter of the robot body as a first location;
traversing the first location, including: sorting according to the minimum first distance of the first position, and traversing from large to small according to the first distance in sequence; or the distance from the robot to the first position is a second distance; sorting according to the size of the second distance, and traversing from small to large according to the second distance in sequence;
wherein the recognizing the image comprises recognizing obstacle information in the image, and the recognizing the obstacle information in the image comprises:
determining a first threshold value, wherein if the product of the reflection intensity value of a point in the point cloud of the image frame and the square of the distance from that point to the laser radar is greater than the first threshold value, the point cloud is a reliable point cloud;
determining a second threshold associated with a location of the reliable point cloud to the robot;
dividing the reliable point cloud according to a second threshold value to obtain a plurality of point cloud blocks, wherein each point cloud block in the plurality of point cloud blocks represents an obstacle;
and taking the points segmented into the plurality of point cloud blocks as tearing points, and fitting each point cloud block into a straight line.
2. The lidar-based robot navigation method of claim 1, wherein the determining that the robot is trapped comprises:
the robot works normally and an environment map is established;
and when the working time of the robot in a certain area on the environment map reaches the preset time, determining that the robot is trapped.
3. The lidar-based robot navigation method of claim 2, further comprising: and setting a certain area on the environment map as an forbidden zone, and avoiding the forbidden zone by the robot in the following operation process.
4. The lidar-based robot navigation method of claim 1, wherein the determining that the robot is trapped comprises:
the robot is provided with a collision detection assembly, and in the working process of the robot, the number of times that the collision detection assembly is triggered within a preset time reaches a collision threshold value, and then the robot is determined to be trapped.
5. The lidar-based robot navigation method of claim 1, further comprising:
and sequentially traversing the first position and then judging whether the cleaning robot is successfully released, if so, recovering the cleaning robot to be in a normal working state, otherwise, stopping the cleaning robot and sending alarm information.
6. A robot, comprising:
a main body;
a drive mechanism configured to drive the robot to move over a ground surface;
a laser radar configured to detect obstacle information of an environment in which the robot is located; and
a control module configured to perform:
acquiring an image frame captured by the lidar when the robot is trapped;
identifying the image, and obtaining a first distance between adjacent obstacles according to obstacle information identified in the image;
marking one or more first positions through which the robot can pass according to the first distance between adjacent obstacles, comprising: determining a minimum first distance between adjacent obstacles; and storing a position at which the first distance is greater than the diameter of the robot body as a first position;
sequentially traversing the first positions, comprising: sorting the first positions according to their minimum first distances and traversing them in sequence from the largest first distance to the smallest; or taking the distance from the robot to each first position as a second distance, sorting the first positions according to the second distances, and traversing them in sequence from the smallest second distance to the largest;
wherein identifying the image comprises identifying obstacle information in the image, and identifying the obstacle information in the image comprises:
determining a first threshold, wherein if the product of the reflection intensity value of a point in the point cloud of the image frame and the square of the distance from the point to the lidar is greater than the first threshold, the point belongs to a reliable point cloud;
determining a second threshold related to the position of the reliable point cloud relative to the robot;
segmenting the reliable point cloud according to the second threshold to obtain a plurality of point cloud blocks, wherein each of the plurality of point cloud blocks represents an obstacle;
and taking the points at which the reliable point cloud is segmented into the plurality of point cloud blocks as tearing points, and fitting each point cloud block to a straight line.
7. A robot, comprising:
a main body;
a drive mechanism configured to drive the robot to move over a ground surface;
a laser radar configured to detect obstacle information of an environment in which the robot is located; and
a control module configured to perform:
acquiring an image frame captured by the lidar of the robot;
identifying the image, and obtaining a first distance between adjacent obstacles according to obstacle information identified in the image;
marking a first position through which the robot can pass and a second position through which the robot cannot pass according to the first distance between adjacent obstacles; wherein marking a first position through which the robot can pass according to the first distance between adjacent obstacles comprises: determining a minimum first distance between adjacent obstacles; and storing a position at which the first distance is greater than the diameter of the robot body as a first position;
traversing the first positions, comprising: sorting the first positions according to their minimum first distances and traversing them in sequence from the largest first distance to the smallest; or taking the distance from the robot to each first position as a second distance, sorting the first positions according to the second distances, and traversing them in sequence from the smallest second distance to the largest;
setting the second position as a constraint feature, wherein the control module controls the robot to perform an avoidance operation according to information of the second position;
wherein identifying the image comprises identifying obstacle information in the image, and identifying the obstacle information in the image comprises:
determining a first threshold, wherein if the product of the reflection intensity value of a point in the point cloud of the image frame and the square of the distance from the point to the lidar is greater than the first threshold, the point belongs to a reliable point cloud;
determining a second threshold related to the position of the reliable point cloud relative to the robot;
segmenting the reliable point cloud according to the second threshold to obtain a plurality of point cloud blocks, wherein each of the plurality of point cloud blocks represents an obstacle;
and taking the points at which the reliable point cloud is segmented into the plurality of point cloud blocks as tearing points, and fitting each point cloud block to a straight line.
8. A robot according to claim 7, characterized in that the first position is a position where the smallest first distance between adjacent obstacles is larger than the diameter of the robot body, and the second position is a position where the smallest first distance between adjacent obstacles is smaller than or equal to the diameter of the robot body.
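Following the definition in claim 8, the split of gaps between adjacent fitted obstacle segments into passable first positions and impassable second positions could be sketched as below. The segment object with a center attribute and the min_segment_distance routine are placeholders for whatever geometry representation is actually used.

```python
def classify_gaps(obstacle_segments, robot_diameter, min_segment_distance):
    """Classify gaps between adjacent obstacle segments.

    obstacle_segments: fitted line segments, one per point cloud block,
        ordered as they appear in the scan; each is assumed to expose a
        .center point (e.g. a 2-element numpy array).
    min_segment_distance(a, b): placeholder returning the minimum
        distance between two segments (the 'first distance').
    """
    first_positions, second_positions = [], []
    for seg_a, seg_b in zip(obstacle_segments[:-1], obstacle_segments[1:]):
        gap = min_segment_distance(seg_a, seg_b)          # first distance
        midpoint = 0.5 * (seg_a.center + seg_b.center)    # candidate position
        if gap > robot_diameter:
            first_positions.append((midpoint, gap))       # robot can pass
        else:
            second_positions.append((midpoint, gap))      # robot cannot pass
    return first_positions, second_positions
```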
9. A robotic system, comprising:
a robot configured to:
acquiring an image frame captured by the lidar of the robot;
identifying the image, and obtaining a first distance between adjacent obstacles according to obstacle information identified in the image, wherein identifying the image comprises identifying obstacle information in the image, and identifying the obstacle information in the image comprises: determining a first threshold, wherein if the product of the reflection intensity value of a point in the point cloud of the image frame and the square of the distance from the point to the lidar is greater than the first threshold, the point belongs to a reliable point cloud; determining a second threshold related to the position of the reliable point cloud relative to the robot; segmenting the reliable point cloud according to the second threshold to obtain a plurality of point cloud blocks, wherein each of the plurality of point cloud blocks represents an obstacle; and taking the points at which the reliable point cloud is segmented into the plurality of point cloud blocks as tearing points, and fitting each point cloud block to a straight line;
identifying a first position through which the robot can pass and a second position through which the robot cannot pass according to the first distance between adjacent obstacles;
traversing the first positions, comprising: sorting the first positions according to their minimum first distances and traversing them in sequence from the largest first distance to the smallest; or taking the distance from the robot to each first position as a second distance, sorting the first positions according to the second distances, and traversing them in sequence from the smallest second distance to the largest; and
a mobile terminal configured to add a line segment representing restriction information at the second position and transmit the restriction information to the robot, wherein the robot executes an avoidance operation according to the restriction information;
wherein marking a first position through which the robot can pass according to the first distance between adjacent obstacles comprises: determining a minimum first distance between adjacent obstacles; and storing a position at which the first distance is greater than the diameter of the robot body as a first position.
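One way the restriction information of claim 9 could be realized is as a virtual-wall line segment sent from the mobile terminal, which the robot then refuses to cross. The message format and the intersection test below are purely illustrative assumptions, not the claimed protocol.

```python
from dataclasses import dataclass

@dataclass
class RestrictionSegment:
    """Hypothetical message from the mobile terminal for a second position:
    a virtual wall between two map coordinates."""
    x1: float
    y1: float
    x2: float
    y2: float

def violates_restriction(p, q, seg):
    """True if the planned motion step p -> q crosses the restriction segment.

    Standard 2D segment intersection via orientation tests (collinear and
    endpoint-touching cases are ignored in this sketch); the robot can use
    it to reject motions that would violate the restriction.
    """
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    r1, r2 = (seg.x1, seg.y1), (seg.x2, seg.y2)
    d1, d2 = orient(r1, r2, p), orient(r1, r2, q)
    d3, d4 = orient(p, q, r1), orient(p, q, r2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)
```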
CN201910749492.7A 2019-08-14 2019-08-14 Robot system, robot and robot navigation method based on laser radar Active CN110554696B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910749492.7A CN110554696B (en) 2019-08-14 2019-08-14 Robot system, robot and robot navigation method based on laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910749492.7A CN110554696B (en) 2019-08-14 2019-08-14 Robot system, robot and robot navigation method based on laser radar

Publications (2)

Publication Number Publication Date
CN110554696A CN110554696A (en) 2019-12-10
CN110554696B true CN110554696B (en) 2023-01-17

Family

ID=68737601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910749492.7A Active CN110554696B (en) 2019-08-14 2019-08-14 Robot system, robot and robot navigation method based on laser radar

Country Status (1)

Country Link
CN (1) CN110554696B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110908388B (en) * 2019-12-17 2023-08-11 小狗电器互联网科技(北京)股份有限公司 Robot trapped detection method and robot
CN111331602B (en) * 2020-03-13 2021-07-13 湖南格兰博智能科技有限责任公司 Scene recognition software algorithm applied to bedded mite removing robot
CN111427357A (en) * 2020-04-14 2020-07-17 北京石头世纪科技股份有限公司 Robot obstacle avoidance method and device and storage medium
CN112894824B (en) * 2021-02-08 2022-11-29 深圳市普渡科技有限公司 Robot control method and robot
CN113110498A (en) * 2021-05-08 2021-07-13 珠海市一微半导体有限公司 Robot escaping method based on single-point TOF
CN113419546B (en) * 2021-08-24 2021-11-26 新石器慧通(北京)科技有限公司 Unmanned vehicle control method, device, medium and electronic equipment
CN116540690A (en) * 2022-01-26 2023-08-04 追觅创新科技(苏州)有限公司 Robot navigation method, device, robot and storage medium
CN115877852B (en) * 2023-02-22 2023-06-13 深圳市欧拉智造科技有限公司 Robot motion control method, robot, and computer-readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104460666A (en) * 2014-10-27 2015-03-25 上海理工大学 Robot autonomous obstacle avoidance moving control method based on distance vectors
CN108932736A (en) * 2018-05-30 2018-12-04 南昌大学 Two-dimensional laser radar Processing Method of Point-clouds and dynamic robot pose calibration method
CN109085841A (en) * 2018-09-28 2018-12-25 北京奇虎科技有限公司 A kind of method and device that control robot is cleaned

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101949277B1 (en) * 2012-06-18 2019-04-25 엘지전자 주식회사 Autonomous mobile robot
CN104864863B (en) * 2014-02-21 2019-08-27 联想(北京)有限公司 A kind of routing resource and electronic equipment
CN105832252A (en) * 2015-01-14 2016-08-10 东莞缔奇智能股份有限公司 Autonomous mobile robot and control method thereof
CN108481321B (en) * 2017-01-09 2020-07-28 广东宝乐机器人股份有限公司 Robot movement control method and robot
CN109480708B (en) * 2018-12-19 2021-02-23 珠海市一微半导体有限公司 Position reminding method of cleaning robot


Also Published As

Publication number Publication date
CN110554696A (en) 2019-12-10

Similar Documents

Publication Publication Date Title
CN110554696B (en) Robot system, robot and robot navigation method based on laser radar
CN109947109B (en) Robot working area map construction method and device, robot and medium
JP7374547B2 (en) Exploration methods, devices, mobile robots and storage media
CN112415998B (en) Obstacle classification obstacle avoidance control system based on TOF camera
US11013385B2 (en) Automatic cleaning device and cleaning method
CN106821157A (en) The cleaning method that a kind of sweeping robot is swept the floor
CN112004645A (en) Intelligent cleaning robot
WO2021120999A1 (en) Autonomous robot
CN110580047B (en) Anti-falling traveling method of autonomous robot and autonomous robot
CN108628318B (en) Congestion environment detection method and device, robot and storage medium
KR20160144682A (en) Moving robot and controlling method thereof
CN113741438A (en) Path planning method and device, storage medium, chip and robot
CN110946508A (en) Control method and device of sweeping robot using laser radar and camera
WO2021238001A1 (en) Robot travelling control method and system, robot, and readable storage medium
CN212489787U (en) Mopping robot
CN112423639B (en) Autonomous walking type dust collector
US11960296B2 (en) Method and apparatus for autonomous mobile device
CN114779777A (en) Sensor control method and device for self-moving robot, medium and robot
EP4191360A1 (en) Distance measurement device and robotic vacuum cleaner
CN113848944A (en) Map construction method and device, robot and storage medium
CN113786125A (en) Operation method, self-moving device and storage medium
US11986137B2 (en) Mobile robot
CN112674645A (en) Robot edge cleaning method and device
CN115755935A (en) Method for filling indoor map obstacles
CN110716554B (en) Vision-based household robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518110 1701, building 2, Yinxing Zhijie, No. 1301-72, sightseeing Road, Xinlan community, Guanlan street, Longhua District, Shenzhen, Guangdong Province

Applicant after: Shenzhen Yinxing Intelligent Group Co.,Ltd.

Address before: 518110 Building A1, Yinxing Hi-tech Industrial Park, Guanlan Street Sightseeing Road, Longhua District, Shenzhen City, Guangdong Province

Applicant before: Shenzhen Silver Star Intelligent Technology Co.,Ltd.

GR01 Patent grant