CN113532461A - Robot autonomous obstacle avoidance navigation method, equipment and storage medium - Google Patents


Info

Publication number
CN113532461A
CN113532461A (application CN202110772616.0A)
Authority
CN
China
Prior art keywords: robot, sequence, obstacle avoidance, moving, movement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110772616.0A
Other languages
Chinese (zh)
Other versions
CN113532461B (en)
Inventor
高岩
郝虹
尹青山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong New Generation Information Industry Technology Research Institute Co Ltd
Original Assignee
Shandong New Generation Information Industry Technology Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong New Generation Information Industry Technology Research Institute Co Ltd
Priority to CN202110772616.0A
Publication of CN113532461A
Application granted
Publication of CN113532461B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26 Navigation specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3407 Route searching; Route guidance specially adapted for specific applications
    • G01C21/343 Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
    • G01C21/3415 Dynamic re-routing, e.g. recalculating the route when the user deviates from the calculated route or after detecting real-time traffic data or accidents
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/02 Systems using the reflection or reradiation of acoustic waves using reflection of acoustic waves
    • G01S15/04 Systems determining presence of a target
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting


Abstract

An embodiment of this specification provides a robot autonomous obstacle avoidance navigation method comprising the following steps. If an obstacle is determined to exist in the robot's preset path from the detection result of an ultrasonic radar on that path, an image captured by a camera mounted on the robot is acquired, and the robot's position and posture information, its current movement sequence, and the movement sequence of the preset path are input into a pre-trained first movement model, which outputs a predicted obstacle avoidance movement sequence comprising the robot's movement direction and movement speed. The predicted obstacle avoidance movement sequence and the camera image are then input into a pre-trained second movement model, which outputs a scoring sequence for the robot. Finally, the robot's optimal obstacle avoidance movement sequence is determined from the sum of the scores in each scoring sequence, and the robot updates its movement path according to that optimal sequence.

Description

Robot autonomous obstacle avoidance navigation method, equipment and storage medium
Technical Field
The present disclosure relates to the field of machine learning technologies, and in particular, to a method, a device, and a storage medium for autonomous obstacle avoidance navigation of a robot.
Background
With the rapid development of computer technology, sensor technology, and artificial intelligence, robotics has matured considerably. Mobile robots are the most widely used type and play an increasingly important role in industries such as home services, aerospace, and manufacturing, completing work well in specific environments. However, because a mobile robot may operate in highly variable environments such as industrial parks, residential quarters, and construction sites, it will inevitably encounter obstacles of many kinds that prevent it from following its preset path. Whether a robot can avoid obstacles and navigate autonomously is therefore one of the key technologies for completing preset tasks in an unknown environment.
Existing obstacle avoidance navigation methods rely mainly on three-dimensional lidar point cloud data. They avoid obstacles by sensing obstacle distance, but lidar is expensive, produces large volumes of data, requires relatively complex computation, and places high demands on the computing platform. Other autonomous obstacle avoidance technologies likewise suffer from complex structure, expensive hardware, and high maintenance costs, and cannot meet the rapidly growing development needs of robots.
A method is therefore needed that reduces the cost of obstacle avoidance while maintaining the accuracy of the mobile robot's obstacle avoidance navigation.
Disclosure of Invention
One or more embodiments of the present specification provide a method, a device, and a storage medium for autonomous obstacle avoidance navigation of a robot, so as to solve the following technical problem: how to provide an autonomous obstacle avoidance navigation method that reduces obstacle avoidance cost while ensuring the accuracy of the mobile robot's obstacle avoidance navigation.
One or more embodiments of the present disclosure adopt the following technical solutions:
one or more embodiments of the present specification provide a method for autonomous obstacle avoidance navigation of a robot, including:
if it is determined, from the detection result of an ultrasonic radar on the robot's preset path, that an obstacle exists in the preset path, acquiring an image captured by a camera mounted on the robot, inputting the robot's position and posture information, the robot's current movement sequence, and the movement sequence of the robot's preset path into a pre-trained first movement model, and outputting a predicted obstacle avoidance movement sequence; wherein the predicted obstacle avoidance movement sequence comprises the movement direction and the movement speed of the robot;
inputting the predicted obstacle avoidance movement sequence and the image captured by the camera into a pre-trained second movement model, and outputting a scoring sequence for the robot; wherein the scoring sequence comprises a score for each item in the predicted obstacle avoidance movement sequence, the values in the scoring sequence corresponding one-to-one to those in the predicted obstacle avoidance movement sequence; and
determining the optimal obstacle avoidance movement sequence of the robot according to the sum of the scores in the scoring sequence, so that the robot updates its movement path according to the optimal obstacle avoidance movement sequence.
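For concreteness, the sequences referred to above can be sketched as simple data structures; every field name here is illustrative and not taken from the patent:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Pose:
    """Position and posture information of the robot (illustrative)."""
    x: float          # map coordinate, metres
    y: float          # map coordinate, metres
    heading: float    # orientation, radians

@dataclass
class Move:
    """One item of a movement sequence: a direction and a speed."""
    direction: float  # target heading, radians
    speed: float      # metres per second

# A movement sequence is an ordered list of moves; a scoring sequence
# pairs exactly one score with each move (one-to-one correspondence).
MoveSequence = List[Move]
ScoreSequence = List[float]

plan: MoveSequence = [Move(0.0, 0.5), Move(0.3, 0.4)]
scores: ScoreSequence = [0.9, 0.7]
assert len(plan) == len(scores)
```

The one-to-one correspondence between moves and scores is what later lets a sequence's value be computed as a simple sum.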
Optionally, in one or more embodiments of the present specification, determining that an obstacle exists in the preset path according to the detection result of the ultrasonic radar on the robot's preset path specifically comprises:
predetermining the ground clearance of the robot's lowest structure and using it as a preset threshold for ultrasonic radar detection;
if the height of an obstacle detected by the ultrasonic radar is below the preset threshold, filtering out the detection information of the obstacle; and
if the height of an obstacle detected by the ultrasonic radar is at or above the preset threshold, determining that an obstacle exists in the preset path and recording the detection result of the ultrasonic radar.
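A minimal sketch of this height-threshold filter (the detection record fields are assumed for illustration, not specified by the patent):

```python
def filter_detections(detections, clearance):
    """Discard ultrasonic detections whose obstacle height is below the
    robot's ground clearance (they can be driven over); keep and record
    everything at or above the threshold."""
    return [d for d in detections if d["height"] >= clearance]

# A 5 cm cable is filtered out; a 30 cm box is recorded as an obstacle.
detections = [{"name": "cable", "height": 0.05},
              {"name": "box", "height": 0.30}]
recorded = filter_detections(detections, clearance=0.10)
print([d["name"] for d in recorded])  # ['box']
```

Thresholding on the chassis clearance keeps the radar from triggering avoidance for objects the robot can safely pass over.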
Optionally, in one or more embodiments of the present specification, before inputting the robot's position and posture information, current movement sequence, and preset-path movement sequence into the pre-trained first movement model, the method further comprises:
acquiring a preset navigation map of the robot's working environment, wherein the preset navigation map comprises the obstacle-free passable paths in the working environment, so that the robot's preset path is obtained from those paths; and
acquiring the movement sequence of the preset path from the preset path, and acquiring the robot's position and posture information in real time through the navigation map, wherein the position and posture information comprises at least the coordinates of the robot's position.
Optionally, in one or more embodiments of the present specification, before inputting the robot's position and posture information, current movement sequence, and preset-path movement sequence into the pre-trained first movement model, the method further comprises:
constructing a deployment environment the same as or similar to the robot's working environment to determine a data set containing robot obstacle avoidance data;
intercepting the robot's obstacle avoidance data and dividing it by a preset period into obstacle avoidance data segments corresponding to several independent time segments, wherein each segment comprises at least the robot's position and posture sequence within the time segment, the images captured by the robot within the time segment, and the robot's movement sequence within the time segment; and
inputting the position and posture sequence of one time segment and the position and posture sequence of the following time segment into the first movement model for training, so as to obtain a first movement model that meets the requirements.
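The segmentation step can be sketched as follows, assuming the obstacle avoidance log is a time-ordered list of (timestamp, record) pairs and the preset period is given in seconds:

```python
def split_into_segments(log, period):
    """Cut a time-ordered log into independent segments, starting a new
    segment whenever `period` seconds have elapsed since the current
    segment began."""
    segments, current, start = [], [], None
    for t, record in log:
        if start is None:
            start = t
        elif t - start >= period:
            segments.append(current)
            current, start = [], t
        current.append((t, record))
    if current:
        segments.append(current)
    return segments

# Five samples cut into 1-second segments: [2, 2, 1] samples each.
log = [(0.0, "a"), (0.4, "b"), (1.0, "c"), (1.5, "d"), (2.1, "e")]
print([len(s) for s in split_into_segments(log, period=1.0)])  # [2, 2, 1]
```

In the patent's training scheme, consecutive segments then form (input, target) pairs: one segment's pose sequence predicts the next segment's.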
Optionally, in one or more embodiments of the present specification, constructing a deployment environment the same as or similar to the robot's working environment to determine a data set containing robot obstacle avoidance data specifically comprises:
selecting a deployment environment for the robot training process according to the robot's working environment, with obstacles of the kinds likely to appear placed in the deployment environment;
controlling the robot to perform obstacle avoidance movements in the deployment environment and recording the robot's obstacle avoidance data during those movements, the data comprising at least the robot's movement trajectory data and its position and posture data; and
using the collected obstacle avoidance data as the data set for training the first movement model.
Optionally, in one or more embodiments of the present specification, before controlling the robot to perform obstacle avoidance movements in the deployment environment and recording its obstacle avoidance data, the method further comprises:
if a movable obstacle exists in the robot's working environment, placing a moving object the same as or similar to that obstacle in the deployment environment; and
simulating a movement trajectory for the moving object so that it moves along that trajectory, thereby ensuring the reliability of the robot training process.
Optionally, in one or more embodiments of the present specification, before inputting the predicted obstacle avoidance movement sequence and the camera image into the pre-trained second movement model and outputting the robot's scoring sequence, the method further comprises:
inputting consecutive images captured by the camera into the first training model of the second movement model, extracting image feature values, and outputting an image feature map; and
inputting the image feature map together with the movement action sequences of the current time segment and the following time segment into the second training model of the second movement model to output a scoring sequence.
Optionally, in one or more embodiments of the present specification, determining the robot's optimal obstacle avoidance movement sequence according to the sum of the scores in the scoring sequence, so that the robot updates its movement path according to the optimal sequence, specifically comprises:
adding the scores in the scoring sequence in turn to obtain the sum of the scores of the scoring sequence;
taking that sum as the value score of the corresponding predicted obstacle avoidance movement sequence, where a higher value score means a higher priority for that sequence; and
selecting the predicted obstacle avoidance movement sequence with the highest priority as the robot's optimal obstacle avoidance movement sequence and re-planning the path according to it, so that the robot avoids the obstacle in the preset path.
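A sketch of the priority ranking: each candidate sequence's value score is the sum of its per-action scores, and the highest-valued candidate becomes the optimal obstacle avoidance movement sequence (the candidate data here is made up):

```python
def best_sequence(candidates):
    """candidates: list of (move_sequence, score_sequence) pairs, where
    each score sequence pairs one score with each move. The candidate
    whose score sum is highest has the highest priority and wins."""
    return max(candidates, key=lambda pair: sum(pair[1]))[0]

candidates = [
    ([("left", 0.4), ("forward", 0.6)], [0.2, 0.5]),   # value score 0.7
    ([("right", 0.4), ("forward", 0.6)], [0.6, 0.5]),  # value score 1.1
]
print(best_sequence(candidates))  # [('right', 0.4), ('forward', 0.6)]
```

Summing scores treats every step of a sequence as equally weighted; a sequence with one very bad step can still win if its other steps score well, which is why the scoring model must penalise dangerous actions strongly.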
One or more embodiments of the present specification provide an apparatus for autonomous obstacle avoidance navigation of a robot, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
if it is determined, from the detection result of an ultrasonic radar on the robot's preset path, that an obstacle exists in the preset path, acquire an image captured by a camera mounted on the robot, input the robot's position and posture information, the robot's current movement sequence, and the movement sequence of the robot's preset path into a pre-trained first movement model, and output a predicted obstacle avoidance movement sequence; wherein the predicted obstacle avoidance movement sequence comprises the movement direction and the movement speed of the robot;
input the predicted obstacle avoidance movement sequence and the image captured by the camera into a pre-trained second movement model, and output a scoring sequence for the robot; wherein the scoring sequence comprises a score for each item in the predicted obstacle avoidance movement sequence, the values in the scoring sequence corresponding one-to-one to those in the predicted obstacle avoidance movement sequence; and
determine the optimal obstacle avoidance movement sequence of the robot according to the sum of the scores in the scoring sequence, so that the robot updates its movement path according to the optimal obstacle avoidance movement sequence.
One or more embodiments of the present specification provide a non-transitory computer storage medium storing computer-executable instructions configured to:
if it is determined, from the detection result of an ultrasonic radar on the robot's preset path, that an obstacle exists in the preset path, acquire an image captured by a camera mounted on the robot, input the robot's position and posture information, the robot's current movement sequence, and the movement sequence of the robot's preset path into a pre-trained first movement model, and output a predicted obstacle avoidance movement sequence; wherein the predicted obstacle avoidance movement sequence comprises the movement direction and the movement speed of the robot;
input the predicted obstacle avoidance movement sequence and the image captured by the camera into a pre-trained second movement model, and output a scoring sequence for the robot; wherein the scoring sequence comprises a score for each item in the predicted obstacle avoidance movement sequence, the values in the scoring sequence corresponding one-to-one to those in the predicted obstacle avoidance movement sequence; and
determine the optimal obstacle avoidance movement sequence of the robot according to the sum of the scores in the scoring sequence, so that the robot updates its movement path according to the optimal obstacle avoidance movement sequence.
At least one technical solution adopted in the embodiments of this specification can achieve the following beneficial effects. Obstacle avoidance movement is triggered only after an obstacle has been detected by a low-cost ultrasonic radar, which makes the obstacle avoidance navigation process targeted. The environment is perceived through images from an on-board camera, which are analysed with image processing techniques to provide environment information and complete obstacle detection; this reduces detection cost, and compared with obstacle avoidance data derived from lidar 3D point cloud data, the data volume is relatively small and easy to process. After the predicted obstacle avoidance movement sequences are obtained with a deep learning model, they are ranked by priority to obtain the optimal movement sequence, so the path is updated along an optimal route, making the robot's autonomous obstacle avoidance navigation efficient and reliable.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some of the embodiments of this specification; those skilled in the art can derive other drawings from them without creative effort.
In the drawings:
fig. 1 is a flowchart illustrating a method for autonomous obstacle avoidance navigation of a robot according to one or more embodiments of the present disclosure;
fig. 2 is a schematic internal structural diagram of an apparatus for autonomous obstacle avoidance navigation of a robot according to one or more embodiments of the present disclosure;
fig. 3 is a schematic diagram of an internal structure of a non-volatile memory according to one or more embodiments of the present disclosure.
Detailed Description
The embodiment of the specification provides a method, equipment and medium for autonomous obstacle avoidance navigation of a robot.
A mobile robot is a machine that can perform specified tasks. Its working mode closely resembles that of a human: it can complete tasks with a huge workload and also achieve demanding, fine-grained production targets. Obstacle avoidance means that, while moving, the robot uses its sensors to perceive dynamic or static obstacles on the planned route, updates its path in real time according to an algorithm, avoids the obstacles, and finally reaches its destination.
Intelligent robots such as sweeping robots, home service robots, and patrol robots are increasingly widely used in the patrol monitoring of public environments, in industrial production, and in the many services of home life. People increasingly expect robots to move flexibly, efficiently, and intelligently, and to have autonomous obstacle avoidance navigation capability. Autonomous obstacle avoidance navigation is one of the key indicators of a robot's degree of intelligence: it reflects the robot's ability to handle unknown obstacles and is one of the key technologies for completing preset tasks in an unknown environment. A mobile robot in an unknown, complex, dynamic, unstructured environment must be able, without human intervention, to perceive its surroundings with its on-board sensors, model the environment, avoid obstacles autonomously, and at the same time minimise the time and energy consumed.
Existing navigation obstacle avoidance methods rely mainly on lidar 3D point cloud data and avoid obstacles by sensing obstacle distance information. In practice, however, acquiring 3D point cloud data is expensive, the data volume produced by the lidar is large, the computation is complex, and the demands on the computing platform are high.
To solve the above technical problem, an embodiment of this specification provides a robot autonomous obstacle avoidance navigation method in which the robot's motion trajectory is analysed from consecutive images captured by a camera to obtain obstacle avoidance data. The obstacle avoidance data are input into the first movement model, which outputs a predicted obstacle avoidance movement sequence as a predicted path; a scoring model then scores each action in the sequence to obtain an overall score for the predicted sequence. The scores determine the priority of each predicted obstacle avoidance movement sequence, from which the optimal movement sequence is chosen to update the movement path. The lidar is replaced by a camera with relatively low cost and data volume: the environment is perceived through camera images, which are analysed with image processing techniques to provide environment information and complete obstacle detection. This reduces detection cost, and compared with obstacle avoidance data derived from lidar 3D point cloud data, the data volume is relatively small and easy to process. After the predicted obstacle avoidance movement sequences are obtained with a deep learning model, they are ranked by priority to obtain the optimal movement sequence, so the path is updated along an optimal route, making the robot's autonomous obstacle avoidance navigation efficient and reliable.
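Putting the pieces together, the overall flow of the method, detect, predict candidate sequences, score, select, can be sketched with stand-in models. Both model arguments below are assumed callables used only to show the wiring, not the patent's trained networks:

```python
def avoid_obstacle(pose, current_seq, planned_seq, image,
                   first_model, second_model, n_candidates=3):
    """Sample candidate avoidance sequences from the first movement model,
    score each candidate with the second movement model, and return the
    candidate whose score sum (its value score) is highest."""
    candidates = [first_model(pose, current_seq, planned_seq)
                  for _ in range(n_candidates)]
    scored = [(seq, second_model(seq, image)) for seq in candidates]
    return max(scored, key=lambda pair: sum(pair[1]))[0]

# Stand-in models: the first cycles through fixed candidate sequences,
# the second scores each (direction, speed) action by its speed.
proposals = iter([[("fwd", 1.0)], [("left", 2.0)], [("fwd", 3.0)]])
first = lambda pose, cur, plan: next(proposals)
second = lambda seq, img: [speed for _, speed in seq]
print(avoid_obstacle(None, [], [], None, first, second))  # [('fwd', 3.0)]
```

The sketch assumes the first model can propose several distinct candidates per query; the patent leaves the number of candidates and the sampling mechanism to the implementation.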
To enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are obviously only some, not all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of this specification.
In one or more embodiments of the present disclosure, the method runs on the server terminal that controls the robot's autonomous obstacle avoidance navigation, or on the corresponding execution units of that server.
The technical solution provided in the present specification is described in detail below with reference to the accompanying drawings.
As shown in fig. 1, one or more embodiments of the present disclosure provide a flow chart of a method for autonomous obstacle avoidance navigation of a robot.
The process in fig. 1 may include the following steps:
s101: if the fact that an obstacle exists in the preset path is determined according to the detection result of the ultrasonic radar on the preset path of the robot, acquiring an image shot by a camera installed on the robot, inputting position and posture information of the robot, a current movement sequence of the robot and a movement sequence in the preset path of the robot into a pre-trained first movement model, and outputting a predicted obstacle avoidance movement sequence; wherein the predicted obstacle avoidance movement sequence comprises a movement direction of the robot and a movement speed of the robot.
In one or more embodiments of the present disclosure, determining that an obstacle exists in the preset path according to the detection result of the ultrasonic radar on the robot's preset path specifically comprises:
predetermining the ground clearance of the robot's lowest structure and using it as a preset threshold for ultrasonic radar detection;
if the height of an obstacle detected by the ultrasonic radar is below the preset threshold, filtering out the detection information of the obstacle; and
if the height of an obstacle detected by the ultrasonic radar is at or above the preset threshold, determining that an obstacle exists in the preset path and recording the detection result of the ultrasonic radar.
In one or more embodiments of the present disclosure, before inputting the robot's position and posture information, current movement sequence, and preset-path movement sequence into the pre-trained first movement model, the method further comprises:
acquiring a preset navigation map of the robot's working environment, wherein the preset navigation map comprises the obstacle-free passable paths in the working environment, so that the robot's preset path is obtained from those paths; and
acquiring the movement sequence of the preset path from the preset path, and acquiring the robot's position and posture information in real time through the navigation map, wherein the position and posture information comprises at least the coordinates of the robot's position.
In one or more embodiments of the present disclosure, before inputting the robot's position and posture information, current movement sequence, and preset-path movement sequence into the pre-trained first movement model, the method further comprises:
constructing a deployment environment the same as or similar to the robot's working environment to determine a data set containing robot obstacle avoidance data;
intercepting the robot's obstacle avoidance data and dividing it by a preset period into obstacle avoidance data segments corresponding to several independent time segments, wherein each segment comprises at least the robot's position and posture sequence within the time segment, the images captured by the robot within the time segment, and the robot's movement sequence within the time segment; and
inputting the position and posture sequence of one time segment and the position and posture sequence of the following time segment into the first movement model for training, so as to obtain a first movement model that meets the requirements.
In one or more embodiments of the present specification, constructing a deployment environment the same as or similar to the robot's working environment to determine a data set containing robot obstacle avoidance data specifically comprises:
selecting a deployment environment for the robot training process according to the robot's working environment, with obstacles of the kinds likely to appear placed in the deployment environment;
controlling the robot to perform obstacle avoidance movements in the deployment environment and recording the robot's obstacle avoidance data during those movements, the data comprising at least the robot's movement trajectory data and its position and posture data; and
using the collected obstacle avoidance data as the data set for training the first movement model.
In one or more embodiments of the present specification, before controlling the robot to perform obstacle avoidance movements in the deployment environment and recording its obstacle avoidance data, the method further comprises:
if a movable obstacle exists in the robot's working environment, placing a moving object the same as or similar to that obstacle in the deployment environment; and
simulating a movement trajectory for the moving object so that it moves along that trajectory, thereby ensuring the reliability of the robot training process.
As robots spread into homes, parks, factories, and other settings, mobile robots that replace manual work in guidance, patrol, and monitoring have attracted increasing attention. To complete its work efficiently while avoiding irreparable damage from severe collisions, autonomous obstacle avoidance and path re-planning is one of the functions a mobile robot must have during service. Environmental perception is a necessary condition for obstacle avoidance and navigation while the robot moves; avoiding obstacles in an unknown or partially unknown environment requires acquiring surrounding environment information through sensors, including the size, shape, and position of obstacles.
As deep learning matures on visual tasks, a deep learning model can be trained to output a predicted obstacle avoidance moving path. When producing the obstacle avoidance data set for the mobile robot, an environment that is the same as or similar to the robot's working environment is constructed for exploration, so that the trained model is closer to the actual working environment, the recognition and obstacle avoidance error is smaller, and the result is more reliable. After the deployment environment for training the robot is selected, obstacles that may appear in practice can be set in it. When the robot encounters an obstacle, it is controlled to perform obstacle avoidance movement in the deployment environment, and the robot's movement track, its position and posture information, and the continuous first-person-view pictures shot by the camera mounted on the robot during the obstacle avoidance movement are recorded as obstacle avoidance data. Since the robot only triggers the obstacle avoidance process after encountering an obstacle, the data generated during obstacle avoidance movement is intercepted to obtain the obstacle avoidance data proper. The obstacle avoidance data generated in the robot's simulated obstacle avoidance process is collected as the data set for training the first movement model.
It should be noted that static or movable obstacles may exist during the robot's simulated obstacle avoidance. If a movable obstacle exists in the robot's working environment, a moving object that is the same as or similar to that obstacle needs to be arranged when performing simulation training and collecting the data set, and the moving object is made to follow a simulated movement track in the deployed training environment so as to collect reliable obstacle avoidance data that matches the working environment. For example, a museum may deploy a mobile explaining robot in place of a human guide; while moving and explaining inside the museum, the robot inevitably encounters various visitors who linger or move about. Various moving people therefore need to be placed in the training environment, moving along the tracks most visitors follow in the museum, so that the explaining robot can achieve efficient obstacle avoidance.
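For illustration only (the patent does not specify an implementation), replaying a moving obstacle's recorded track in the deployment environment can be sketched as linear interpolation between waypoints; the function name, waypoint tuple layout, and step size are assumptions:

```python
def simulate_obstacle(waypoints, step):
    """Generate positions along a moving obstacle's trajectory by
    linear interpolation between consecutive 2D waypoints, so the
    deployment environment can replay typical visitor movement while
    collecting training data. `step` is the spacing between emitted
    positions (illustrative)."""
    positions = []
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        dx, dy = x1 - x0, y1 - y0
        dist = (dx * dx + dy * dy) ** 0.5
        n = max(1, int(dist / step))
        for i in range(n):
            # emit n evenly spaced points along this leg (end excluded)
            positions.append((x0 + dx * i / n, y0 + dy * i / n))
    positions.append(waypoints[-1])  # include the final waypoint once
    return positions
```

The emitted positions can then drive the moving object's controller at a fixed tick while the robot records obstacle avoidance data around it.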
A preset period is set according to the accuracy of the moving path required by the robot, and the intercepted obstacle avoidance data is divided into a plurality of independent time segments to obtain the obstacle avoidance data segment corresponding to each segment. The position and posture sequence of the robot in a time segment and the position and posture sequence corresponding to the following time segment are input into the first movement model for training, so as to obtain a first movement model meeting the requirements. For example, for a mobile robot with a lower obstacle avoidance accuracy requirement, the division period of the independent time segments can be appropriately lengthened; for a mobile robot with a higher accuracy requirement, such as a museum tour robot whose obstacle avoidance path must be more precise, the division period of the independent time segments can be shortened, so as to obtain a first movement model meeting the requirements.
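The segmentation step above can be sketched as follows; this is an assumed data layout (per-tick records carrying a pose, an image, and a movement action), not the patent's own format:

```python
from dataclasses import dataclass

@dataclass
class AvoidanceSegment:
    poses: list   # (x, y, heading) samples within this time segment
    images: list  # first-person-view frames captured in this segment
    moves: list   # (direction, speed) movement actions in this segment

def split_into_segments(samples, period, dt):
    """Divide an intercepted obstacle-avoidance recording into
    independent time segments of `period` seconds, given per-tick
    samples logged every `dt` seconds."""
    per_seg = max(1, int(period / dt))  # ticks per independent segment
    segments = []
    for i in range(0, len(samples), per_seg):
        chunk = samples[i:i + per_seg]
        segments.append(AvoidanceSegment(
            poses=[s["pose"] for s in chunk],
            images=[s["image"] for s in chunk],
            moves=[s["move"] for s in chunk],
        ))
    return segments
```

Shortening `period` yields finer-grained segments, matching the text's note that higher-precision robots use a shorter division period.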
Using scanning data acquired by an ultrasonic radar, instead of conventional laser radar 3D point cloud data, to detect obstacles on the robot's travel path reduces detection cost, saves the data analysis and calculation process, and suits a variety of computing platforms.
Before ultrasonic radar monitoring, the ground clearance of the robot's bottommost structure needs to be determined in advance and used as the lowest threshold in ultrasonic radar detection. If a detected obstacle is lower than this threshold, its height is below the ground clearance of the robot's lowest structure; the robot can pass over it, and its information does not need to be collected for subsequent obstacle avoidance. For example, when a sweeping robot is cleaning, garbage lower than its chassis structure can simply be collected, and re-planning the moving path at that moment would prevent the sweeping robot from finishing its basic work. If the height of the obstacle detected by the ultrasonic radar is higher than or equal to the preset threshold, the obstacle can be determined to be an unavoidable obstacle on the robot's preset travel path; its information needs to be collected, and the ultrasonic radar's detection result is recorded as an obstacle.
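A minimal sketch of this height gate, with assumed field names for the ultrasonic detections:

```python
def filter_detections(detections, chassis_clearance):
    """Split ultrasonic detections into obstacles that must be
    recorded and detections that can be discarded, using the robot's
    bottommost-structure ground clearance as the lowest threshold.
    Obstacles below the clearance can be driven over."""
    record, discard = [], []
    for d in detections:
        (record if d["height"] >= chassis_clearance else discard).append(d)
    return record, discard
```

Only the `record` list would trigger the subsequent obstacle avoidance navigation work.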
Considering that some obstacles in the real world are small and/or light, the robot can remove them by collision without re-planning its path. For example, an inspection robot that replaces security personnel in monitoring a community environment may scan a vertically placed pop can with its ultrasonic radar on the preset inspection path. Although the can may be taller than the robot's bottom structure, and thus count as an obstacle blocking normal passage, the inspection robot is generally large enough that colliding with the can will not damage it, so the path does not need to be re-planned. Therefore, in one or more embodiments of the present specification, determining that an obstacle exists in the robot's preset path according to the detection result of the ultrasonic radar may further include:
determining the volume of the robot according to the robot model, and presetting an obstacle volume threshold for ultrasonic detection according to that volume;
if the volume of the obstacle detected by the ultrasonic radar is lower than the obstacle volume threshold, determining that the obstacle is a collidable obstacle and filtering out its detection information;
and if the volume of the obstacle detected by the ultrasonic radar is higher than or equal to the obstacle volume threshold, determining that the obstacle is a non-collidable obstacle and recording the detection result of the ultrasonic radar.
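The volume gate can be sketched the same way; the labels and threshold derivation here are illustrative assumptions, not the patent's exact rule:

```python
def classify_obstacle(obstacle_volume, volume_threshold):
    """Classify a detected obstacle against a volume threshold derived
    from the robot's own volume: small/light obstacles are collidable
    (the robot may push through without re-planning), larger ones are
    non-collidable and must be recorded."""
    if obstacle_volume < volume_threshold:
        return "collidable"
    return "non-collidable"
```

For the pop-can example, a can's volume would fall below a patrol robot's threshold, so its detection information is filtered out rather than recorded.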
After the ultrasonic radar determines that an obstacle exists in the robot's preset path, the image shot by the camera mounted on the robot is acquired, and the position information of the obstacle can be obtained from the continuous images shot by the camera. In addition, the position and posture information of the robot, the current movement sequence of the robot, and the movement sequence of the robot on the preset path at the next moment are input into the pre-trained first movement model, which outputs a predicted obstacle avoidance movement sequence comprising the robot's movement direction and movement speed.
Before inputting the position and posture information of the robot, the current movement sequence of the robot, and the movement sequence in the robot's preset path into the pre-trained first movement model, a preset navigation map of the robot's working environment can be retrieved through the Internet or other legal channels according to that working environment. The barrier-free passing routes of the passable areas in the working environment can be obtained from the preset navigation map, and once the start point and end point of the robot's task are known, the robot's preset path can be obtained from these barrier-free routes. The moving direction and moving speed of the robot can be specified according to the preset path; that is, the movement sequence of the robot in the preset path can be determined from the preset path. Meanwhile, the position and posture information of the robot during movement can be obtained in real time from the navigation map's positioning; for example, the robot's coordinate position on the navigation map can serve as its position and posture information at the current moment.
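As an illustrative sketch (the patent does not fix a representation), a movement sequence can be derived from the preset path by treating the barrier-free route as a 2D waypoint polyline; the (heading, speed) encoding is an assumption:

```python
import math

def movement_sequence_from_path(waypoints, speed):
    """Derive a (heading_degrees, speed) movement sequence from the
    preset barrier-free path read off the navigation map, one entry
    per path leg."""
    seq = []
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        # heading of this leg, measured counterclockwise from the +x axis
        heading = math.degrees(math.atan2(y1 - y0, x1 - x0))
        seq.append((heading, speed))
    return seq
```

The resulting sequence, together with the robot's current map coordinates as its position and posture information, forms the first movement model's input described above.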
S102: inputting the predicted obstacle avoidance moving sequence and the image shot by the camera into a pre-trained second movement model, and outputting a scoring sequence of the robot; the scoring sequence comprises a scoring value for each item in the predicted obstacle avoidance moving sequence, and the values in the scoring sequence correspond one-to-one to the values in the predicted obstacle avoidance moving sequence.
In one or more embodiments of the present disclosure, before the inputting the predicted obstacle avoidance movement sequence and the image captured by the camera into a second movement model trained in advance and outputting the scoring sequence of the robot, the method further includes:
inputting the continuous images shot by the camera into a first training model of the second movement model, extracting feature values of the images, and outputting an image feature map;
and inputting the image feature map and the movement action sequence corresponding to the time segment and the following time segment into a second training model of the second movement model to output the scoring sequence.
After the ultrasonic radar triggers the robot's obstacle avoidance navigation work and the predicted obstacle avoidance moving sequence is obtained from the first movement model, the predicted sequence is analyzed by the second movement model to obtain the optimal moving sequence for the obstacle avoidance process, making that process efficient and reliable.
The second movement model is composed of a first training model and a second training model. The first training model, for example a convolutional neural network model, extracts features from the images shot by the camera mounted on the robot and outputs a feature map of each image. The feature map of the image and the obstacle avoidance moving sequence corresponding to the time segment and the following time segment are then input into the second training model, for example a long short-term memory (LSTM) network model, to obtain the scoring sequence corresponding to the predicted obstacle avoidance moving sequence. It should be noted that the values in the scoring sequence correspond one-to-one to the respective values in the predicted obstacle avoidance moving sequence.
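A minimal PyTorch sketch of such a two-stage second movement model follows. Every architectural choice (layer sizes, a 2D action encoding, one scalar score per action) is an assumption for illustration; the patent only names the model families:

```python
import torch
import torch.nn as nn

class ScoringModel(nn.Module):
    """Sketch of the second movement model: a small CNN (the first
    training model) turns each camera frame into a feature vector, and
    an LSTM (the second training model) scores every action in the
    predicted obstacle avoidance moving sequence, one-to-one."""
    def __init__(self, action_dim=2, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 16-dim feature
        )
        self.lstm = nn.LSTM(16 + action_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # one scoring value per step

    def forward(self, frames, actions):
        # frames: (B, T, 3, H, W); actions: (B, T, action_dim)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(torch.cat([feats, actions], dim=-1))
        return self.head(out).squeeze(-1)  # (B, T): scoring sequence
```

The output length equals the action-sequence length, matching the one-to-one correspondence the text requires.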
S103: and determining the optimal obstacle avoidance moving sequence of the robot according to the sum of the scoring values in the scoring sequence, so that the robot updates a moving path according to the optimal moving sequence.
In one or more embodiments of the present disclosure, the determining, according to a sum of scoring values in the scoring sequence, an optimal obstacle avoidance moving sequence of the robot, so that the robot updates a moving path according to the optimal moving sequence specifically includes:
sequentially adding the scoring values in the scoring sequence to obtain the sum of the scoring values of the scoring sequence;
taking the sum of the scoring values in the scoring sequence as the value score of the corresponding predicted obstacle avoidance moving sequence; wherein the higher the value score, the higher the priority of the predicted obstacle avoidance moving sequence;
and selecting the predicted obstacle avoidance moving sequence with the highest priority as the optimal obstacle avoidance moving sequence of the robot, and re-planning the path according to it, so that the robot avoids the obstacle in the preset path.
The scoring values in the scoring sequence are added in turn to obtain the total scoring value of the sequence. As described in step S102, the values in the scoring sequence correspond one-to-one to the values in the corresponding predicted obstacle avoidance moving sequence; that is, the scoring sequence is a score for each action in the predicted obstacle avoidance moving sequence. The total scoring value of a scoring sequence serves as the priority of its predicted obstacle avoidance moving sequence, from which the optimal obstacle avoidance moving sequence is obtained. As recorded in step S101, an obstacle avoidance moving sequence includes the robot's moving speed and moving direction. Therefore, after the optimal obstacle avoidance moving sequence is obtained from the scoring sequences, the robot's moving path can be re-planned, so that the robot avoids obstacles in the preset path efficiently and reliably. For example, after detecting an obstacle, a mobile robot moving due north at 2 m/s may update to a path heading 10° east of north at 1 m/s, and must then move along the updated path to avoid the obstacle appearing in the preset path.
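The selection step reduces to summing each candidate's scoring sequence and keeping the maximum; a sketch, with tie-breaking toward the earliest candidate as an assumption:

```python
def select_optimal_sequence(candidates, scoring_sequences):
    """Pick the predicted obstacle avoidance moving sequence whose
    scoring sequence sums highest. `scoring_sequences[i]` holds the
    per-action scoring values for `candidates[i]`, one-to-one."""
    totals = [sum(scores) for scores in scoring_sequences]
    best = max(range(len(totals)), key=totals.__getitem__)
    return candidates[best]
```

The returned sequence of (direction, speed) actions is what the robot's re-planned moving path follows.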
As shown in fig. 2, in one or more embodiments of the present specification, there is provided an apparatus for autonomous obstacle avoidance navigation of a robot, the apparatus including:
at least one processor 201; and
a memory 202 communicatively coupled to the at least one processor 201; wherein
the memory 202 stores instructions executable by the at least one processor 201, and the instructions are executed by the at least one processor 201 to enable the at least one processor 201 to:
if the fact that an obstacle exists in the preset path is determined according to the detection result of the ultrasonic radar on the preset path of the robot, acquiring an image shot by a camera installed on the robot, inputting position and posture information of the robot, a current movement sequence of the robot and a movement sequence in the preset path of the robot into a pre-trained first movement model, and outputting a predicted obstacle avoidance movement sequence; wherein the predicted obstacle avoidance movement sequence comprises a movement direction of the robot and a movement speed of the robot;
inputting the predicted obstacle avoidance moving sequence and the image shot by the camera into a pre-trained second moving model, and outputting a scoring sequence of the robot; the scoring sequence comprises scoring values of all items in the predicted obstacle avoidance moving sequence, and the values in the scoring sequence correspond to the values in the predicted obstacle avoidance moving sequence in a one-to-one mode;
and determining the optimal obstacle avoidance moving sequence of the robot according to the sum of the scoring values in the scoring sequence, so that the robot updates a moving path according to the optimal moving sequence.
As shown in fig. 3, in one or more embodiments of the present description, a non-volatile storage medium is provided, storing computer-executable instructions 301, the computer-executable instructions 301 configured to:
if the fact that an obstacle exists in the preset path is determined according to the detection result of the ultrasonic radar on the preset path of the robot, acquiring an image shot by a camera installed on the robot, inputting position and posture information of the robot, a current movement sequence of the robot and a movement sequence in the preset path of the robot into a pre-trained first movement model, and outputting a predicted obstacle avoidance movement sequence; wherein the predicted obstacle avoidance movement sequence comprises a movement direction of the robot and a movement speed of the robot;
inputting the predicted obstacle avoidance moving sequence and the image shot by the camera into a pre-trained second moving model, and outputting a scoring sequence of the robot; the scoring sequence comprises scoring values of all items in the predicted obstacle avoidance moving sequence, and the values in the scoring sequence correspond to the values in the predicted obstacle avoidance moving sequence in a one-to-one mode;
and determining the optimal obstacle avoidance moving sequence of the robot according to the sum of the scoring values in the scoring sequence, so that the robot updates a moving path according to the optimal moving sequence.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The above description is merely one or more embodiments of the present disclosure and is not intended to limit the present disclosure. Various modifications and alterations to one or more embodiments of the present description will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of one or more embodiments of the present specification should be included in the scope of the claims of the present specification.

Claims (10)

1. A robot autonomous obstacle avoidance navigation method is characterized by comprising the following steps:
if the fact that an obstacle exists in the preset path is determined according to the detection result of the ultrasonic radar on the preset path of the robot, acquiring an image shot by a camera installed on the robot, inputting position and posture information of the robot, a current movement sequence of the robot and a movement sequence in the preset path of the robot into a pre-trained first movement model, and outputting a predicted obstacle avoidance movement sequence; wherein the predicted obstacle avoidance movement sequence comprises a movement direction of the robot and a movement speed of the robot;
inputting the predicted obstacle avoidance moving sequence and the image shot by the camera into a pre-trained second moving model, and outputting a scoring sequence of the robot; the scoring sequence comprises scoring values of all items in the predicted obstacle avoidance moving sequence, and the values in the scoring sequence correspond to the values in the predicted obstacle avoidance moving sequence in a one-to-one mode;
and determining the optimal obstacle avoidance moving sequence of the robot according to the sum of the scoring values in the scoring sequence, so that the robot updates a moving path according to the optimal moving sequence.
2. The method for robot autonomous obstacle avoidance navigation according to claim 1, wherein the determining that an obstacle exists in the preset path according to a detection result of an ultrasonic radar on the preset path of the robot specifically comprises:
predetermining the ground clearance of the bottommost structure of the robot, and using it as a preset threshold for ultrasonic radar detection;
if the height of the obstacle detected by the ultrasonic radar is lower than the preset threshold value, filtering detection information of the obstacle;
and if the height of the obstacle detected by the ultrasonic radar is higher than or equal to the preset threshold value, determining that the obstacle exists in the preset path, and recording the detection result of the ultrasonic radar.
3. The method of claim 1, wherein before inputting the position and orientation information of the robot, the current movement sequence of the robot, and the movement sequence in the preset path of the robot into the first pre-trained movement model, the method further comprises:
acquiring a preset navigation map of the working environment of the robot according to the working environment of the robot; the preset navigation map comprises a barrier-free passing path in the robot work environment, so that a preset path of the robot is obtained according to the barrier-free passing path;
acquiring a movement sequence in the preset path of the robot according to the preset path, and acquiring position and posture information of the robot in real time through the navigation map; wherein the position and orientation information at least includes: and coordinate information of the position of the robot.
4. The method of claim 1, wherein before inputting the position and orientation information of the robot, the current movement sequence of the robot, and the movement sequence in the preset path of the robot into the first pre-trained movement model, the method further comprises:
constructing a deployment environment which is the same as or similar to the working environment of the robot to determine a data set containing the obstacle avoidance data of the robot;
intercepting obstacle avoidance data of the robot, and dividing the obstacle avoidance data into obstacle avoidance data segments corresponding to a plurality of independent time segments according to a preset period; wherein the obstacle avoidance data segment at least comprises: the robot comprises a position and posture sequence of the robot in the time slice, an image shot by the robot in the time slice and a movement sequence of the robot in the time slice;
and inputting the position and posture sequence of the time segment of the robot and the position and posture sequence corresponding to the time segment after the time segment into a first moving model for training so as to train a first moving model meeting the requirements.
5. The method of claim 4, wherein the constructing a deployment environment the same as or similar to the robot working environment to determine the data set including the robot obstacle avoidance data includes:
selecting a deployment environment of the robot training process according to the working environment of the robot; wherein, a plurality of obstacles which can appear are arranged in the deployment environment;
controlling the robot to perform obstacle avoidance movement in the deployment environment, and recording obstacle avoidance data of the robot in the obstacle avoidance movement; wherein the obstacle avoidance data at least comprises: the moving track data of the robot and the position and posture data of the robot;
and taking the collected obstacle avoidance data of the robot as a data set for training the first mobile model.
6. The method of claim 5, wherein before controlling the robot to perform obstacle avoidance movement in the deployment environment and recording obstacle avoidance data of the robot in the obstacle avoidance movement, the method further comprises:
if a movable obstacle exists in the working environment of the robot, setting a moving object which is the same as or similar to the movable obstacle in the deployment environment;
and simulating the moving track of the moving object to enable the moving object to move according to the moving track so as to ensure the reliability of the robot training process.
7. The method of claim 4, wherein before inputting the predicted obstacle avoidance movement sequence and the image captured by the camera into a second movement model trained in advance and outputting the scoring sequence of the robot, the method further comprises:
inputting continuous images shot by the camera into a first training model of the second movement model, extracting characteristic values of the images, and outputting an image characteristic diagram of the images;
and inputting the image characteristic graph and the predicted obstacle avoidance moving sequence corresponding to the time segment and the time segment after the time segment into a second training model of the second moving model to output a grading sequence.
8. The method according to claim 1, wherein the determining an optimal obstacle avoidance moving sequence of the robot according to a sum of scoring values in the scoring sequence so that the robot updates a moving path according to the optimal moving sequence specifically comprises:
sequentially adding the scoring values in the scoring sequence to obtain the sum of the scoring values of the scoring sequence;
taking the sum of the scoring values in the scoring sequence as a corresponding value score of the predicted obstacle avoidance moving sequence; wherein the higher the value score, the higher the predicted obstacle avoidance movement sequence priority;
and selecting the predicted obstacle avoidance moving sequence with high priority as the optimal obstacle avoidance moving sequence of the robot, and re-planning a path according to the optimal obstacle avoidance moving sequence of the robot so as to enable the robot to avoid obstacles in the preset path.
9. An apparatus for autonomous obstacle avoidance navigation of a robot, the apparatus comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to:
if the fact that an obstacle exists in the preset path is determined according to the detection result of the ultrasonic radar on the preset path of the robot, acquiring an image shot by a camera installed on the robot, inputting position and posture information of the robot, a current movement sequence of the robot and a movement sequence in the preset path of the robot into a pre-trained first movement model, and outputting a predicted obstacle avoidance movement sequence; wherein the predicted obstacle avoidance movement sequence comprises a movement direction of the robot and a movement speed of the robot;
inputting the predicted obstacle avoidance moving sequence and the image shot by the camera into a pre-trained second moving model, and outputting a scoring sequence of the robot; the scoring sequence comprises scoring values of all items in the predicted obstacle avoidance moving sequence, and the values in the scoring sequence correspond to the values in the predicted obstacle avoidance moving sequence in a one-to-one mode;
and determining the optimal obstacle avoidance moving sequence of the robot according to the sum of the scoring values in the scoring sequence, so that the robot updates a moving path according to the optimal moving sequence.
10. A non-volatile storage medium storing computer-executable instructions configured to:
if the fact that an obstacle exists in the preset path is determined according to the detection result of the ultrasonic radar on the preset path of the robot, acquiring an image shot by a camera installed on the robot, inputting position and posture information of the robot, a current movement sequence of the robot and a movement sequence in the preset path of the robot into a pre-trained first movement model, and outputting a predicted obstacle avoidance movement sequence; wherein the predicted obstacle avoidance movement sequence comprises a movement direction of the robot and a movement speed of the robot;
inputting the predicted obstacle avoidance moving sequence and the image shot by the camera into a pre-trained second moving model, and outputting a scoring sequence of the robot; the scoring sequence comprises scoring values of all items in the predicted obstacle avoidance moving sequence, and the values in the scoring sequence correspond to the values in the predicted obstacle avoidance moving sequence in a one-to-one mode;
and determining the optimal obstacle avoidance moving sequence of the robot according to the sum of the scoring values in the scoring sequence, so that the robot updates a moving path according to the optimal moving sequence.
CN202110772616.0A 2021-07-08 2021-07-08 Robot autonomous obstacle avoidance navigation method, equipment and storage medium Active CN113532461B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110772616.0A CN113532461B (en) 2021-07-08 2021-07-08 Robot autonomous obstacle avoidance navigation method, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113532461A true CN113532461A (en) 2021-10-22
CN113532461B CN113532461B (en) 2024-02-09

Family

ID=78098288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110772616.0A Active CN113532461B (en) 2021-07-08 2021-07-08 Robot autonomous obstacle avoidance navigation method, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113532461B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114047757A (en) * 2021-11-05 2022-02-15 季华实验室 Multi-AGV path evaluation planning method
CN114265416A (en) * 2022-02-28 2022-04-01 季华实验室 AGV trolley control method and device, electronic equipment and storage medium
TWI832420B (en) * 2021-12-22 2024-02-11 友達光電股份有限公司 Cleaning path planning method and robotic vacuum cleaner
CN117908031A (en) * 2024-01-20 2024-04-19 广东图灵智新技术有限公司 Autonomous navigation system of robot

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108255182A (en) * 2018-01-30 2018-07-06 上海交通大学 A kind of service robot pedestrian based on deeply study perceives barrier-avoiding method
CN110069056A (en) * 2018-01-24 2019-07-30 南京机器人研究院有限公司 A kind of ambulation control method applied to sweeping robot
CN110989631A (en) * 2019-12-30 2020-04-10 科沃斯机器人股份有限公司 Self-moving robot control method, device, self-moving robot and storage medium
CN111045428A (en) * 2019-12-27 2020-04-21 深圳前海达闼云端智能科技有限公司 Obstacle avoidance method, mobile robot and computer-readable storage medium
CN111381594A (en) * 2020-03-09 2020-07-07 兰剑智能科技股份有限公司 AGV space obstacle avoidance method and system based on 3D vision
CN111542836A (en) * 2017-10-04 2020-08-14 华为技术有限公司 Method for selecting action for object by using neural network
CN112445209A (en) * 2019-08-15 2021-03-05 纳恩博(北京)科技有限公司 Robot control method, robot, storage medium, and electronic apparatus
CN112650235A (en) * 2020-03-11 2021-04-13 南京奥拓电子科技有限公司 Robot obstacle avoidance control method and system and robot
CN112766499A (en) * 2021-02-02 2021-05-07 电子科技大学 Method for realizing autonomous flight of unmanned aerial vehicle through reinforcement learning technology
WO2021103987A1 (en) * 2019-11-29 2021-06-03 深圳市杉川机器人有限公司 Control method for sweeping robot, sweeping robot, and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ÁRTUR I. KÁROLY et al.: "Optical flow-based segmentation of moving objects for mobile robot navigation using pre-trained deep learning models", 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC) *
Liu Zuoshi; Luo Aihua; Tong Junhua: "Research on obstacle avoidance control methods for fully autonomous robots", 煤矿机电, no. 02
Shi Hongyan, Sun Maoxiang, Sun Changzhi: "Path planning method for mobile robots in unknown environments", 沈阳工业大学学报, no. 01

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114047757A (en) * 2021-11-05 2022-02-15 季华实验室 Multi-AGV path evaluation planning method
CN114047757B (en) * 2021-11-05 2023-05-19 季华实验室 Multi-AGV path evaluation planning method
TWI832420B (en) * 2021-12-22 2024-02-11 友達光電股份有限公司 Cleaning path planning method and robotic vacuum cleaner
CN114265416A (en) * 2022-02-28 2022-04-01 季华实验室 AGV trolley control method and device, electronic equipment and storage medium
CN117908031A (en) * 2024-01-20 2024-04-19 广东图灵智新技术有限公司 Autonomous navigation system of robot

Also Published As

Publication number Publication date
CN113532461B (en) 2024-02-09

Similar Documents

Publication Publication Date Title
CN113532461B (en) Robot autonomous obstacle avoidance navigation method, equipment and storage medium
Dai et al. Fast frontier-based information-driven autonomous exploration with an MAV
CN107861508B (en) Local motion planning method and device for mobile robot
US11688081B2 (en) Method of performing simultaneous localization and mapping with respect to a salient object in an image
Frey et al. Locomotion policy guided traversability learning using volumetric representations of complex environments
CN109163722B (en) Humanoid robot path planning method and device
CN113705636B (en) Method and device for predicting track of automatic driving vehicle and electronic equipment
Bruce et al. Learning deployable navigation policies at kilometer scale from a single traversal
KR102629036B1 (en) Robot and the controlling method thereof
US11514363B2 (en) Using a recursive reinforcement model to determine an agent action
Prieto et al. A methodology to monitor construction progress using autonomous robots
KR20160048530A (en) Method and apparatus for generating pathe of autonomous vehicle
KR20210063791A (en) System for mapless navigation based on dqn and slam considering characteristic of obstacle and processing method thereof
CN113433937A (en) Heuristic exploration-based layered navigation obstacle avoidance system and layered navigation obstacle avoidance method
CN114003035A (en) Method, device, equipment and medium for autonomous navigation of robot
US11467598B2 (en) Method of estimating position in local area of large space and robot and cloud server implementing thereof
Bohlmann et al. Autonomous person following with 3D LIDAR in outdoor environment
Caley et al. Data-driven comparison of spatio-temporal monitoring techniques
Liao et al. TSM: Topological scene map for representation in indoor environment understanding
CN111975775B (en) Autonomous robot navigation method and system based on multi-angle visual perception
KR102420090B1 (en) Method of drawing map by identifying moving object and robot implementing thereof
Visser et al. Amsterdam Oxford Joint Rescue Forces-Team Description Paper-Virtual Robot competition-Rescue Simulation League-RoboCup 2008
Wang et al. Path planning model of mobile robots in the context of crowds
CN117054444B (en) Method and system for pipeline detection
Laugier et al. Steps towards safe navigation in open and dynamic environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant