CN114740851A - Abnormality recovery method and device, robot and storage medium - Google Patents

Abnormality recovery method and device, robot and storage medium

Info

Publication number
CN114740851A
CN114740851A (application CN202210364846.8A)
Authority
CN
China
Prior art keywords
robot
virtual wall
label
navigation
position point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210364846.8A
Other languages
Chinese (zh)
Inventor
陈军
周冲
万永辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Keenlon Intelligent Technology Co Ltd
Original Assignee
Shanghai Keenlon Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Keenlon Intelligent Technology Co Ltd filed Critical Shanghai Keenlon Intelligent Technology Co Ltd
Priority to CN202210364846.8A priority Critical patent/CN114740851A/en
Publication of CN114740851A publication Critical patent/CN114740851A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses an abnormality recovery method, an abnormality recovery device, a robot and a storage medium. The method comprises the following steps: when the robot navigation is abnormal and a first label image of a preset type label is acquired, acquiring a first position point corresponding to the label; acquiring a second position point corresponding to the current position of the robot; determining the relative position of the label and a preset virtual wall according to the first position point and the second position point; and controlling the robot to recover normal navigation according to the relative position. The technical scheme provided by the embodiment of the invention solves the following problem: when the navigation state of the robot is interrupted and the robot cannot continue moving, the moving path is planned on the default assumption that the label is located inside the virtual wall, so that the normal navigation function cannot be recovered when the label is actually located outside the virtual wall. The scheme thereby achieves the beneficial effects of improving the success rate and reliability of abnormality recovery.

Description

Abnormality recovery method and device, robot and storage medium
Technical Field
The present invention relates to robotics, and in particular, to an abnormality recovery method, apparatus, robot, and storage medium.
Background
At present, more and more robots are entering human social life to replace part of human labor; in the service industry in particular, their applications are increasingly widespread and diverse. Positioning by means of labels is one of the navigation and positioning modes of a robot. During movement, the robot may passively leave its originally planned moving path for abnormal reasons such as being pushed by a person, so that the navigation state of the robot is interrupted.
This problem can be addressed by determining, when the robot triggers navigation recovery, the label corresponding to the robot's current position according to the acquired image data, and, when the current position of the robot is outside a preset virtual wall, planning a moving path for the robot from the current position to the position of that label. However, this approach assumes by default that all labels are located inside the virtual wall and does not consider that a label may actually be located outside the virtual wall, in which case the normal navigation function cannot be recovered.
Disclosure of Invention
The invention provides an abnormality recovery method, an abnormality recovery device, a robot and a storage medium, which are used for improving the success rate and the reliability of abnormality recovery.
According to an aspect of the present invention, there is provided an abnormality recovery method applied to a robot, the method including:
when the robot navigation is abnormal and a first label image of a preset type label is acquired, acquiring a first position point corresponding to the label;
acquiring a second position point corresponding to the current position of the robot;
determining the relative position of the label and a preset virtual wall according to the first position point and the second position point;
and controlling the robot to recover normal navigation according to the relative position.
According to another aspect of the present invention, there is provided an abnormality recovery apparatus arranged in a robot, the apparatus including:
the first position point acquisition module is used for acquiring a first position point corresponding to a preset type label when the robot navigation is abnormal and a first label image of the label is acquired;
the second position point acquisition module is used for acquiring a second position point corresponding to the current position of the robot;
the relative position determining module is used for determining the relative position of the label and a preset virtual wall according to the first position point and the second position point;
and the navigation recovery control module is used for controlling the robot to recover normal navigation according to the relative position.
According to another aspect of the present invention, there is provided a robot including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the method of recovering from an exception as set forth in any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions for causing a processor to implement the method for recovering from an exception according to any one of the embodiments of the present invention when the computer instructions are executed.
According to the technical scheme of the embodiment of the invention, when the robot navigation is abnormal and a first label image of a preset type label is acquired, a first position point corresponding to the label is acquired; a second position point corresponding to the current position of the robot is acquired; the relative position of the label and a preset virtual wall is determined according to the first position point and the second position point; and the robot is controlled to recover normal navigation according to the relative position. By first judging the relative position of the label and the virtual wall and then controlling the robot to recover normal navigation in a manner targeted to that relative position, the technical scheme solves the problem that the normal navigation function cannot be recovered when the label is assumed by default to be inside the virtual wall but is actually located outside it, and achieves the beneficial effects of improving the success rate and reliability of abnormality recovery.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
Fig. 1 is a flowchart of an exception recovery method according to an embodiment of the present invention;
fig. 2 is a flowchart of an exception recovery method according to a second embodiment of the present invention;
Fig. 3 is a schematic position diagram according to a second embodiment of the present invention;
fig. 4 is a schematic structural diagram of an abnormality recovery apparatus according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of a robot for implementing an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," "target," and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in other sequences than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example one
Fig. 1 is a flowchart of an abnormality recovery method according to an embodiment of the present invention, where this embodiment is applicable to a situation where a robot recovers normal navigation after a navigation abnormality, and the method may be executed by an abnormality recovery apparatus provided in an embodiment of the present invention, where the apparatus may be implemented by software and/or hardware, and the apparatus may be integrated in a robot. Referring to fig. 1, the method for recovering from an exception provided in this embodiment includes:
s110, when the robot navigation is abnormal and a first label image of a preset type label is acquired, acquiring a first position point corresponding to the label.
A robot navigation abnormality may be a situation in which the navigation state is abnormal, for example the robot navigation is interrupted. The cause of the navigation abnormality may be human or an internal system fault; for example, the robot is pushed by a person into an area where path planning cannot be performed normally, or the robot travels abnormally into such an area, which is not limited in this embodiment.
Image data of the current position can be collected through the image acquisition device of the robot, a label image is determined from the image data, and the label type corresponding to the label image is determined, so as to acquire a label image of the preset type. Each label can correspond to a preset collectable range, that is, the label image can be collected when the robot is located within that range. A label is a landmark whose position is fixed in advance, for example pasted on an indoor ceiling, and is used for the robot to perform indoor positioning and navigation according to the identification information and the position information of the label. The identification information and position information of each label, such as an ID and map coordinates, may be recorded in the navigation map. Specifically, a label may be composed of an arrangement of a plurality of circular marks.
The label type can comprise no label, an illegal label, a legal label and the like, and when the label type is no label, namely when the image acquisition equipment does not acquire the label, the image data does not contain the label data; when the type of the label is an illegal label, the current label can be indicated to be a label which is not recorded in the current navigation map, namely the image data contains label image data, but when the robot identifies the label image data, the position information corresponding to the label cannot be obtained; when the type of the tag is a legal tag, it indicates that the current tag is a tag recorded in the current navigation map, that is, the image data includes tag image data, and when the robot identifies the tag image data, tag position information corresponding to the tag can be obtained, and the tag position information may be latitude and longitude information of a position where the tag is located or coordinates in the navigation map, and the like.
The first label image of the preset type label can be a label image of a legal label, and when the first label image of the legal label is acquired, the first label image data is identified, so that the first position point corresponding to the label can be obtained according to the label position information contained in the first label image.
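As a concrete illustration of this classification, the following Python sketch (not part of the patent; the map structure, label IDs, and function names are assumptions made for illustration) looks up a detected label ID in the navigation map and returns the label type together with the first position point when the label is legal.

```python
from typing import Optional, Tuple

# Hypothetical navigation map: label ID -> map coordinates of the label (meters).
NAV_MAP_LABELS = {
    "label_017": (3.2, 7.5),
    "label_042": (5.8, 7.5),
}

def classify_label(detected_id: Optional[str]) -> Tuple[str, Optional[Tuple[float, float]]]:
    """Classify a detection as 'no_label', 'illegal', or 'legal'.

    For a legal label, also return the first position point recorded in the map.
    """
    if detected_id is None:                 # the image data contains no label
        return "no_label", None
    if detected_id not in NAV_MAP_LABELS:   # label not recorded in the navigation map
        return "illegal", None
    return "legal", NAV_MAP_LABELS[detected_id]

if __name__ == "__main__":
    print(classify_label("label_017"))      # ('legal', (3.2, 7.5))
    print(classify_label("label_999"))      # ('illegal', None)
```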
And S120, acquiring a second position point corresponding to the current position of the robot.
The second position point is a current position of the robot, and may be determined according to information such as a single first position point, an installation height of the image capturing device, a capturing angle, and a height of the label, which is not limited in this embodiment.
In this embodiment, optionally, the obtaining a second position point corresponding to the current position of the robot includes:
judging whether other label images of the preset type labels except the first label image exist in the image acquisition range of the robot or not;
and if so, determining the second position point according to the first label image and the other label images.
The image capturing range of the robot may be a preset capturing range of an image capturing device of the robot, for example, a 360 ° shooting range along the circumferential direction of the robot. One or more image acquisition devices can be arranged at a preset position of the robot, and an acquisition range corresponding to a single image acquisition device or an acquisition range set corresponding to a plurality of image acquisition devices is used as a preset acquisition range. The preset position may be the top of the robot, in which case the tag may be located on the ceiling of the room.
And judging whether other label images of the preset type labels except the first label image exist in the image acquisition range of the robot, for example, whether images of other legal labels except the first label image are acquired or not.
If the label image exists, the corresponding position point of each label image is identified from the first label image and the other label images, and the position points are jointly calculated by a method such as triangulation to obtain a second position point.
This avoids the error introduced when the second position point is determined from a single first position point alone and improves the accuracy of determining the second position point, thereby improving the accuracy of subsequently determining the relative position of the label and the preset virtual wall.
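A minimal sketch of such a joint estimate is shown below, assuming each label observation provides the label's map coordinates together with the label's offset from the robot expressed in a map-aligned frame; the per-label estimates are simply averaged. This is an illustrative simplification, not the patent's prescribed triangulation.

```python
from statistics import mean
from typing import List, Tuple

Point = Tuple[float, float]

def estimate_second_position(observations: List[Tuple[Point, Point]]) -> Point:
    """Fuse several label observations into one estimate of the robot position.

    Each observation pairs a label's map coordinates with that label's offset from
    the robot (map-aligned), so a single observation gives robot = label - offset;
    several observations are averaged to reduce the single-label error.
    """
    estimates = [(lx - ox, ly - oy) for (lx, ly), (ox, oy) in observations]
    return mean(p[0] for p in estimates), mean(p[1] for p in estimates)

if __name__ == "__main__":
    obs = [((3.2, 7.5), (0.4, 2.1)),   # first label image
           ((5.8, 7.5), (3.0, 2.0))]   # another label image within the capture range
    print(estimate_second_position(obs))   # approximately (2.8, 5.45)
```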
S130, determining the relative position of the label and a preset virtual wall according to the first position point and the second position point.
The preset virtual wall is a predetermined virtual wall, for example, a virtual wall set in the robot mapping process can be used for separating a legal driving area and an illegal driving area of the robot. The relative position of the label and the preset virtual wall can be that the label is positioned in the preset virtual wall or the label is positioned outside the preset virtual wall.
The relative position between the label and the preset virtual wall is determined according to the first position point and the second position point. When the robot can acquire a legal label but navigation is still abnormal, the relative position relationship between the first position point and the preset virtual wall can be determined with the help of the second position point, for example by constructing the connection line between the first position point and the second position point and checking whether this connection line intersects the preset virtual wall, which is not limited in this embodiment.
And S140, controlling the robot to recover normal navigation according to the relative position.
According to the different relative positions, the robot can be controlled in different ways to recover normal navigation. Illustratively, when the relative position is that the label is located inside the preset virtual wall, the robot can be controlled, according to the first position point, to return to the path it had originally planned inside the virtual wall before the navigation state became abnormal, and normal navigation is recovered. Because the robot is positioned through the first position point anyway, controlling the robot through the first position point to recover navigation requires no additional position points, so the robot can quickly recover normal navigation, as sketched below. When the relative position is that the label is located outside the preset virtual wall, other position information can be acquired, and the robot is controlled to return to the originally planned path inside the virtual wall according to that other position information to resume normal navigation, which is not limited in this embodiment.
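The branch structure described above can be summarised in a few lines; the sketch below is purely illustrative (the strategy names are assumptions, not the patent's terminology).

```python
def choose_recovery_strategy(relative_position: str) -> str:
    """Pick a recovery strategy from the relative position of the label and the virtual wall."""
    if relative_position == "inside":
        # Label inside the virtual wall: return toward the first position point and
        # continue the originally planned path, no extra position points needed.
        return "return_via_first_position_point"
    # Label outside the virtual wall: additional position information is required
    # (for example a target navigation position point, see the second embodiment).
    return "return_via_additional_position_information"

if __name__ == "__main__":
    print(choose_recovery_strategy("inside"))
    print(choose_recovery_strategy("outside"))
```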
When labels are set, many labels are often used in order to improve the accuracy of robot positioning. In an illegal area of the robot, i.e. outside the virtual wall, a label may still be set to help the robot perform positioning, even though by default the robot cannot travel in that area. This ensures accurate positioning of the robot while avoiding labels in some legal areas becoming too dense and unattractive; some legal areas may also be unsuitable for attaching labels, for example because of the arrangement of lamps, ornaments and the like.
The technical scheme provided by this embodiment takes into account the case in which the label is located outside the preset virtual wall: when the robot navigation is abnormal and the first label image of the preset type label is acquired, the first position point corresponding to the label is acquired; a second position point corresponding to the current position of the robot is acquired; and the relative position of the label and the preset virtual wall is determined according to the first position point and the second position point.
The robot is then controlled to recover normal navigation according to the relative position. This avoids the problem that, when the label is actually located outside the virtual wall, normal navigation cannot be recovered by a recovery method that assumes the label is inside the virtual wall. By applying a recovery mode matched to each relative position, the success rate and reliability of abnormality recovery are improved.
Example two
Fig. 2 is a flowchart of an abnormality recovery method according to a second embodiment of the present invention. This embodiment supplements the explanation of the process of determining the relative position between the label and the preset virtual wall according to the first position point and the second position point. Compared with the above scheme, this scheme is specifically refined in that determining the relative position of the label and the preset virtual wall according to the first position point and the second position point comprises:
determining a first connecting line according to the first position point and the second position point;
judging whether an intersection point exists between the first connecting line and a virtual wall side line corresponding to the preset virtual wall;
if no intersection point exists, determining that the relative position of the label and the preset virtual wall is that the label is located on the outer side of the preset virtual wall. Specifically, the flow of the abnormality recovery method is shown in fig. 2:
s210, when the robot navigation is abnormal and a first label image of a preset type label is acquired, acquiring a first position point corresponding to the label.
And S220, acquiring a second position point corresponding to the current position of the robot.
And S230, determining a first connecting line according to the first position point and the second position point.
The first connection line AB may be formed by connecting the first position point and the second position point on the basis of a navigation map of the robot.
S240, judging whether the first connecting line and a virtual wall sideline corresponding to the preset virtual wall have an intersection point.
The virtual wall edge line corresponding to the preset virtual wall may be a line obtained by fitting the data points corresponding to the preset virtual wall; it can be a straight line or a curve set during the mapping process.
Whether an intersection point exists between the first connection line and the virtual wall edge line corresponding to the preset virtual wall can be judged analytically, by solving the equations of the first connection line and the virtual wall edge line, or graphically, by drawing the first connection line and the virtual wall edge line in the navigation map and checking whether they cross.
And S250, if no intersection point exists, determining that the relative position of the label and the preset virtual wall is that the label is located on the outer side of the preset virtual wall.
If the intersection point does not exist, namely the label and the robot are located on the same side of the preset virtual wall, and the robot is located on the outer side of the preset virtual wall, the relative position of the label and the preset virtual wall is determined to be that the label is located on the outer side of the preset virtual wall.
In this embodiment, optionally, if the intersection point exists, it is determined that the label is located on the inner side of the preset virtual wall.
That is, if the intersection point exists, the relative position of the label and the preset virtual wall is determined to be that the label is located on the inner side of the preset virtual wall.
If the intersection point exists, that is, the label and the robot are located on opposite sides of the preset virtual wall while the robot is located on the outer side, the relative position of the label and the preset virtual wall is determined to be that the label is located on the inner side of the preset virtual wall.
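For a virtual wall edge that can be treated as a straight segment, the judgement in S240 and the classification in S250 reduce to a standard two-segment orientation test. The Python sketch below is illustrative only; the patent does not prescribe a particular algorithm, and a real edge line may be a fitted curve handled segment by segment. All coordinates and names are assumptions.

```python
from typing import Tuple

Point = Tuple[float, float]

def _cross(a: Point, b: Point, c: Point) -> float:
    """Signed area of triangle abc: >0 counter-clockwise, <0 clockwise, 0 collinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1: Point, p2: Point, q1: Point, q2: Point) -> bool:
    """True if segment p1-p2 (first connecting line) strictly crosses segment q1-q2 (wall edge)."""
    d1, d2 = _cross(q1, q2, p1), _cross(q1, q2, p2)
    d3, d4 = _cross(p1, p2, q1), _cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0      # endpoints on opposite sides of each other's line

def label_relative_position(first_point: Point, second_point: Point,
                            wall_start: Point, wall_end: Point) -> str:
    """S240/S250: no intersection means the label lies outside the wall, otherwise inside."""
    crosses = segments_intersect(first_point, second_point, wall_start, wall_end)
    return "inside" if crosses else "outside"

if __name__ == "__main__":
    wall = ((0.0, 5.0), (10.0, 5.0))        # simplified straight wall edge at y = 5
    robot = (4.0, 7.0)                      # second position point, outside the wall
    print(label_relative_position((4.5, 3.0), robot, *wall))   # line crosses the wall -> 'inside'
    print(label_relative_position((6.0, 8.0), robot, *wall))   # same side as robot   -> 'outside'
```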
And S260, controlling the robot to recover normal navigation according to the relative position.
In this embodiment, optionally, the controlling the robot to resume normal navigation according to the relative position includes:
determining a second virtual wall area according to a second intersection point of the first connecting line and the virtual wall sideline;
and releasing the limit of the second virtual wall area, and controlling the robot to reach the inner side of the preset virtual wall through the second virtual wall area to recover normal navigation.
The intersection point of the first connection line and the virtual wall edge line is recorded as a second intersection point, and the second virtual wall area is determined according to the second intersection point. Taking the second intersection point as the center, the area of the virtual wall within a preset range is determined as the second virtual wall area, and the size of the preset range can be determined according to the size of the robot and the position of the image acquisition device, so that the robot can subsequently pass smoothly through the second virtual wall area, which is not limited in this embodiment. Considering that the virtual wall edge line may be a curve, in order to ensure that the robot passes smoothly, a portion of the virtual wall edge line with a length of 1.5 to 2 times the robot width may be used as the virtual wall area. For example, if the robot is 80 cm wide and the image acquisition device is arranged at the center of the robot width, the virtual wall edge line extending 60 cm to each side of the second intersection point may be used as the virtual wall area, i.e. the total length of the virtual wall area is 120 cm, which is 1.5 times the robot width. If the image acquisition device is not arranged at the center of the robot width, the length of the virtual wall area can be twice the robot width, so that the robot can pass smoothly.
The restriction of the second virtual wall area is released so that the robot can move from the outside of the virtual wall to the inside through the second virtual wall area; the return may be performed by controlling the robot to move to the first position point along the first connection line, which is not limited in this embodiment. If an obstacle avoidance distance is set for the virtual wall, the obstacle avoidance distance limit can also be removed while the robot returns to the inner side, so that the robot does not come too close to the still-restricted portion of the virtual wall, trigger obstacle avoidance, and fail to return smoothly to the inside of the virtual wall. Normal navigation is then recovered on the inner side of the virtual wall, continuing the navigation state that was originally interrupted inside the preset virtual wall; for example, the current moving path is planned and followed according to the first position and the destination position corresponding to the originally planned path before the navigation state became abnormal, without needing to return to the originally planned path itself, thereby improving the moving efficiency of the robot.
The second virtual wall area is determined from the first connecting line, itself determined by the first position point and the second position point, and the virtual wall edge line, and its restriction is released. Thus, when the label is located on the inner side of the preset virtual wall, the passage area through which the robot reaches the inner side from the outer side of the preset virtual wall can be determined directly according to the label, which is efficient, and normal navigation can subsequently be recovered on the inner side of the preset virtual wall, improving the efficiency of abnormality recovery.
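The sizing rule discussed above (1.5 times the robot width with a centred camera, 2 times otherwise) can be turned into a small helper that returns the end points of the temporarily released segment along the wall direction. This is a sketch under those stated assumptions, treating the wall locally as a straight edge; names are illustrative.

```python
import math
from typing import Tuple

Point = Tuple[float, float]

def second_wall_area(intersection: Point, wall_start: Point, wall_end: Point,
                     robot_width_m: float, camera_centered: bool = True) -> Tuple[Point, Point]:
    """End points of the wall segment whose restriction is temporarily released.

    The opening is centred on the second intersection point and spans 1.5x the
    robot width when the image acquisition device sits at the width centre,
    otherwise 2x, as in the example above.
    """
    factor = 1.5 if camera_centered else 2.0
    half = factor * robot_width_m / 2.0
    dx, dy = wall_end[0] - wall_start[0], wall_end[1] - wall_start[1]
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm           # unit direction along the wall edge
    cx, cy = intersection
    return (cx - half * ux, cy - half * uy), (cx + half * ux, cy + half * uy)

if __name__ == "__main__":
    # 0.8 m wide robot, centred camera: 1.2 m opening, 0.6 m to each side.
    print(second_wall_area((4.25, 5.0), (0.0, 5.0), (10.0, 5.0), 0.8))
    # approximately ((3.65, 5.0), (4.85, 5.0))
```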
In this embodiment, optionally, the controlling the robot to reach the inner side of the preset virtual wall through the second virtual wall area to resume normal navigation includes:
controlling the robot to move to the second intersection point;
planning a navigation path according to the second intersection point and the task destination position of the robot;
and controlling the robot to move according to the navigation path, and recovering the limitation of the second virtual wall area.
The robot may be controlled along the first connection line to move from the second position point to the second intersection point, that is, the first connection line is taken as the travel path of the robot, which is not limited in the present embodiment.
And planning a navigation path according to the second intersection point and the task target position of the robot, wherein the task target position of the robot is the position of a destination corresponding to the task executed before the robot navigation is abnormal. And controlling the robot to move to a task destination position according to the planned navigation path.
And planning a navigation path according to the second intersection point and the task destination position of the robot, and controlling the robot to move according to the navigation path, so that the normal navigation can be recovered after the robot returns to the inner side of the preset virtual wall, the interrupted task before the navigation abnormality is continuously completed, and the abnormality recovery efficiency and the task execution efficiency are improved. And the restriction of the second virtual wall area is recovered, namely the second virtual wall area is used as a temporary passing area, so that the influence of long-time restriction removal on subsequent movement of the robot is avoided, and the influence of abnormal recovery on the operation of the robot is reduced.
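The sequence of releasing the second virtual wall area, crossing it, replanning to the task destination and then restoring the restriction can be sketched as follows. Every class and method here is a hypothetical placeholder standing in for the robot's real planner and motion interface; only the control flow is meant to mirror the steps above.

```python
from contextlib import contextmanager
from typing import Iterator, List, Tuple

Point = Tuple[float, float]

class WallArea:
    """Placeholder for a releasable virtual-wall segment."""
    def __init__(self) -> None:
        self.restricted = True
    def release(self) -> None:
        self.restricted = False
    def restore(self) -> None:
        self.restricted = True

class Robot:
    """Placeholder robot interface; real planning and motion calls are assumed."""
    def move_to(self, p: Point) -> None:
        print(f"moving to {p}")
    def plan_path(self, start: Point, goal: Point) -> List[Point]:
        return [start, goal]                 # trivially direct path for the sketch
    def follow(self, path: List[Point]) -> None:
        print(f"following path {path}")

@contextmanager
def released(area: WallArea) -> Iterator[None]:
    """Temporarily lift the virtual-wall restriction and always restore it afterwards."""
    area.release()
    try:
        yield
    finally:
        area.restore()                       # the second virtual wall area becomes restricted again

def resume_through_opening(robot: Robot, area: WallArea,
                           second_intersection: Point, task_destination: Point) -> None:
    """Move to the second intersection, then follow a newly planned path to the task destination."""
    with released(area):
        robot.move_to(second_intersection)
        robot.follow(robot.plan_path(second_intersection, task_destination))

if __name__ == "__main__":
    resume_through_opening(Robot(), WallArea(), (4.25, 5.0), (8.0, 2.0))
```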
In this embodiment, optionally, the controlling the robot to resume normal navigation according to the relative position includes:
determining a target navigation position point from candidate navigation position points according to the first position point;
determining a second connecting line according to the second position point and the target navigation position point;
determining a first virtual wall area according to a first intersection point of the second connecting line and the virtual wall sideline;
and releasing the limitation of the first virtual wall area, and controlling the robot to reach the inner side of the preset virtual wall through the first virtual wall area to recover normal navigation.
The navigation position points are moving path points determined during the construction of the navigation map, so that the robot can conveniently plan paths according to them; they are used for passing through and stopping at when the robot actually runs, and they are all located inside the preset virtual wall. The candidate navigation position points may be all navigation position points; the target navigation position point is determined from the candidate navigation position points according to the first position point, and the candidate navigation position point closest to the first position point may be determined as the target navigation position point.
And connecting the second position point and the target navigation position point to determine a second connecting line, wherein the target navigation position point is positioned in the virtual wall, so that an intersection point exists between the second connecting line and the virtual wall side line, and the intersection point is recorded as a first intersection point. The first virtual wall area is determined according to the first intersection point, where the area of the virtual wall in the preset range is determined as the first virtual wall area by taking the first intersection point as a center, and the size of the preset range may be determined according to the size of the robot, so that the robot can subsequently move smoothly through the first virtual wall area, which is not limited in this embodiment.
The restriction of the first virtual wall area is released so that the robot can move from the outside of the virtual wall to the inside of the virtual wall through the first virtual wall area.
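A minimal sketch of the two geometric steps above, choosing the candidate navigation position point closest to the first position point and locating the first intersection point of the second connecting line with the wall edge, is given below. The straight wall edge, the coordinates and the function names are illustrative assumptions.

```python
from typing import List, Optional, Tuple

Point = Tuple[float, float]

def nearest_navigation_point(first_point: Point, candidates: List[Point]) -> Point:
    """Target navigation position point: the candidate closest to the first position point."""
    return min(candidates,
               key=lambda c: (c[0] - first_point[0]) ** 2 + (c[1] - first_point[1]) ** 2)

def segment_intersection(p1: Point, p2: Point, q1: Point, q2: Point) -> Optional[Point]:
    """Intersection point of segments p1-p2 and q1-q2, or None if they do not cross."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    sx, sy = q2[0] - q1[0], q2[1] - q1[1]
    denom = rx * sy - ry * sx
    if denom == 0:
        return None                                    # parallel or collinear segments
    t = ((q1[0] - p1[0]) * sy - (q1[1] - p1[1]) * sx) / denom
    u = ((q1[0] - p1[0]) * ry - (q1[1] - p1[1]) * rx) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return p1[0] + t * rx, p1[1] + t * ry
    return None

if __name__ == "__main__":
    second_point = (6.0, 8.0)                          # robot, outside the wall
    target = nearest_navigation_point((6.5, 8.5), [(2.0, 3.0), (7.0, 2.0)])
    first_cross = segment_intersection(second_point, target, (0.0, 5.0), (10.0, 5.0))
    print(target, first_cross)                         # (7.0, 2.0) (6.5, 5.0)
```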
The return may be performed by having the robot travel into the virtual wall along the second connecting line. Alternatively, the robot can obtain its moving path backup information and derive the return path in reverse from that backup information, so as to move to the inner side of the virtual wall along the return path; in this case, the range of the first virtual wall area can be determined according to the path backup information, and the restriction of the first virtual wall area can be restored after the return.
The moving path backup information is route information recorded in the historical moving process of the robot. Alternatively, after the robot is started, position information of a passing position in the moving process of the robot may be recorded in real time by a device such as a gyroscope sensor on the robot and stored as the backup information of the moving path, where the position information may be coordinate point information in a map coordinate system of a navigation map stored by the robot, and this embodiment is not limited thereto.
The return path may be planned according to the content of the moving path backup information and the order in which it was recorded. For example, if the backup information recorded since the robot was started is the point sequence A0, A1, A2, ..., An, the moving path can be planned in the order An, An-1, An-2, ..., A0, so that the robot retraces its original path from the current position back to the inner side of the preset virtual wall. It should be noted that, when returning to the inner side of the preset virtual wall, the required portion of the backup information can be determined according to the return requirement, and not all of the backup information needs to be used. Illustratively, the return path ends once the inner side of the preset virtual wall is reached. For example, if An through An-4 are located outside the preset virtual wall and An-5 is located on the inner side, the robot only needs to move from An to An-5, and the first virtual wall area can be determined according to An-6 and An-5, without moving all the way back to A1.
Planning the return path according to the moving path backup information enables the robot to return to an area close to where the navigation abnormality occurred and to continue the navigation state that was originally interrupted inside the preset virtual wall, improving the working efficiency of the robot. After the return, a moving path may be planned from the return position to the target position to resume normal navigation, where the return position may be a position on the originally planned path or a position inside the virtual wall other than the originally planned path, which is not limited in this embodiment.
If the return position is a position on the originally planned path, the robot can continue to move along the originally planned path, saving computation. If the return position is a position within the path-planning region other than the originally planned path, the current moving path can be planned and followed according to the return position and the destination position corresponding to the original moving path, without returning to the originally planned path, thereby improving the moving efficiency of the robot.
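The truncated-reverse return path described above can be sketched as follows; the inside-the-wall test is supplied by the caller, and the simple below-the-line predicate used in the example is an assumption, not the patent's own check.

```python
from typing import Callable, List, Tuple

Point = Tuple[float, float]

def return_path_from_backup(backup: List[Point],
                            is_inside_wall: Callable[[Point], bool]) -> List[Point]:
    """Walk the recorded points A0..An in reverse and stop at the first point inside the wall.

    Only the tail of the backup information that is actually needed for the return
    is used, mirroring the An to An-5 example above.
    """
    path: List[Point] = []
    for point in reversed(backup):          # An, An-1, ..., A0
        path.append(point)
        if is_inside_wall(point):
            break                           # reached the inner side; no need to go back further
    return path

if __name__ == "__main__":
    backup = [(1.0, 1.0), (2.0, 2.0), (3.0, 4.0), (4.0, 6.0), (5.0, 7.0)]   # A0..A4
    inside = lambda p: p[1] < 5.0           # simplified: the legal area lies below the wall y = 5
    print(return_path_from_backup(backup, inside))
    # [(5.0, 7.0), (4.0, 6.0), (3.0, 4.0)]
```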
The first virtual wall area is determined through the target navigation position point and the second position point, and the limitation of the first virtual wall area is removed, so that when the label is located on the outer side of the preset virtual wall, the robot can be controlled to return to the inner side of the preset virtual wall from the outer side of the preset virtual wall through the first virtual wall area, abnormal recovery failure when the label is located on the outer side of the preset virtual wall is avoided, and the success rate of the abnormal recovery is improved.
Fig. 3 is a schematic position diagram according to a second embodiment of the present invention, as shown in fig. 3, a virtual wall edge line 31 is determined according to data points 311 in a preset virtual wall. A preset virtual wall interior is located between the two virtual wall edge lines 31. The navigation position points 331 are located inside the preset virtual wall, the corresponding moving path 33 is planned according to each navigation position point 331, the tag 32a is located inside the preset virtual wall, and the tag 32b is located outside the preset virtual wall.
In the embodiment of the invention, a first connecting line is determined according to the first position point and the second position point; whether an intersection point exists between the first connecting line and the virtual wall edge line corresponding to the preset virtual wall is judged; and if no intersection point exists, the relative position of the label and the preset virtual wall is determined to be that the label is located on the outer side of the preset virtual wall. This establishes the relative position relationship between the label and the preset virtual wall and makes the determination of the label position more targeted. It also makes it easier to choose, when the label is located on the outer side of the preset virtual wall, how to subsequently control the robot to recover normal navigation, avoids the problem that normal navigation cannot be recovered when abnormality recovery is performed by a method that assumes the label is inside the virtual wall, and improves the success rate of abnormality recovery.
EXAMPLE III
Fig. 4 is a schematic structural diagram of an abnormality recovery apparatus according to a third embodiment of the present invention. The apparatus can be implemented in hardware and/or software, is configured on a robot, can execute the abnormality recovery method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method. As shown in fig. 4, the apparatus includes:
a first position point obtaining module 410, configured to obtain a first position point corresponding to a preset type tag when the robot is abnormally navigated and a first tag image of the tag is acquired;
a second position point obtaining module 420, configured to obtain a second position point corresponding to the current position of the robot;
a relative position determining module 430, configured to determine a relative position between the tag and a preset virtual wall according to the first position point and the second position point;
and a navigation recovery control module 440, configured to control the robot to recover normal navigation according to the relative position.
On the basis of the above technical solutions, optionally, the relative position determining module includes:
a first connection line determining unit configured to determine a first connection line according to the first position point and the second position point;
the intersection point existence judging unit is used for judging whether an intersection point exists between the first connecting line and a virtual wall side line corresponding to the preset virtual wall;
and the first relative position determining unit is used for determining that the relative position of the label and the preset virtual wall is that the label is positioned on the outer side of the preset virtual wall if the intersection existence judging unit judges that no intersection point exists.
On the basis of the above technical solutions, optionally, the navigation recovery control module includes:
the target navigation position point determining unit is used for determining a target navigation position point from the candidate navigation position points according to the first position point;
the second connecting line determining unit is used for determining a second connecting line according to the second position point and the target navigation position point;
the first virtual wall area determining unit is used for determining a first virtual wall area according to a first intersection point of the second connecting line and the virtual wall sideline;
and the first navigation recovery control unit is used for removing the limitation of the first virtual wall area and controlling the robot to reach the inner side of the preset virtual wall through the first virtual wall area to recover normal navigation.
On the basis of the above technical solutions, optionally, the apparatus further includes:
and the second relative position determining unit is used for determining that the relative position of the label and the preset virtual wall is that the label is positioned at the inner side of the preset virtual wall if the intersection point existence judging unit judges that the intersection point exists.
On the basis of the above technical solutions, optionally, the navigation recovery control module includes:
a second virtual wall area determining unit, configured to determine a second virtual wall area according to a second intersection point of the first connection line and the virtual wall edge line;
and the second navigation recovery control unit is used for removing the limitation of the second virtual wall area and controlling the robot to reach the inner side of the preset virtual wall through the second virtual wall area to recover normal navigation.
On the basis of the above technical solutions, optionally, the second navigation recovery control unit includes:
a first movement control subunit, configured to control the robot to move to the second intersection;
the navigation path planning unit is used for planning a navigation path according to the second intersection point and the task destination position of the robot;
and the second movement control subunit is used for controlling the robot to move according to the navigation path and recovering the limitation of the second virtual wall area.
On the basis of the foregoing technical solutions, optionally, the second location point obtaining module includes:
a label image existence judging unit, configured to judge whether other label images of the preset type labels other than the first label image exist within an image acquisition range of the robot;
and a second position point determining unit configured to determine the second position point according to the first label image and the other label images if the label image existence judging unit judges that the other label images exist.
Example four
Fig. 5 shows a schematic structural diagram of a robot 10 that can be used to implement an embodiment of the invention. A robot is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. A robot may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 5, the robot 10 includes at least one processor 11, and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM)12, a Random Access Memory (RAM)13, and the like, wherein the memory stores a computer program executable by the at least one processor, and the processor 11 can perform various suitable actions and processes according to the computer program stored in the Read Only Memory (ROM)12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data necessary for the operation of the robot 10 can also be stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
A number of components in the robot 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the robot 10 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. The processor 11 performs the various methods and processes described above, such as an exception recovery method.
In some embodiments, the exception recovery method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed on the robot 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the above-described exception recovery method may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the exception recovery method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described herein may be implemented on a robot having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the robot. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so that the defects of high management difficulty and weak service expansibility in the traditional physical host and VPS service are overcome.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An abnormality recovery method applied to a robot, comprising:
when the robot navigation is abnormal and a first label image of a preset type label is acquired, acquiring a first position point corresponding to the label;
acquiring a second position point corresponding to the current position of the robot;
determining the relative position of the label and a preset virtual wall according to the first position point and the second position point;
and controlling the robot to recover normal navigation according to the relative position.
2. The method of claim 1, wherein determining the relative position of the tag to a predetermined virtual wall based on the first location point and the second location point comprises:
determining a first connecting line according to the first position point and the second position point;
judging whether an intersection point exists between the first connecting line and a virtual wall side line corresponding to the preset virtual wall;
if no intersection point exists, determining that the relative position of the label and the preset virtual wall is that the label is located on the outer side of the preset virtual wall.
3. The method of claim 2, wherein controlling the robot to resume normal navigation based on the relative position comprises:
determining a target navigation position point from candidate navigation position points according to the first position point;
determining a second connecting line according to the second position point and the target navigation position point;
determining a first virtual wall area according to a first intersection point of the second connecting line and the virtual wall sideline;
and releasing the limitation of the first virtual wall area, and controlling the robot to reach the inner side of the preset virtual wall through the first virtual wall area to recover normal navigation.
4. The method of claim 2, further comprising:
if the intersection point exists, determining that the relative position of the label and the preset virtual wall is that the label is located on the inner side of the preset virtual wall.
5. The method of claim 4, wherein controlling the robot to resume normal navigation based on the relative position comprises:
determining a second virtual wall area according to a second intersection point of the first connecting line and the virtual wall sideline;
and releasing the limit of the second virtual wall area, and controlling the robot to reach the inner side of the preset virtual wall through the second virtual wall area to recover normal navigation.
6. The method of claim 5, wherein controlling the robot to resume normal navigation through the second virtual wall area to the inside of the preset virtual wall comprises:
controlling the robot to move to the second intersection point;
planning a navigation path according to the second intersection point and the task destination position of the robot;
and controlling the robot to move according to the navigation path, and recovering the limitation of the second virtual wall area.
7. The method of claim 1, wherein obtaining a second location point corresponding to the current location of the robot comprises:
judging whether other label images of the preset type labels except the first label image exist in the image acquisition range of the robot or not;
and if so, determining the second position point according to the first label image and the other label images.
8. An abnormality recovery device disposed in a robot, comprising:
the first position point acquisition module is used for acquiring a first position point corresponding to a preset type label when the robot navigation is abnormal and a first label image of the label is acquired;
the second position point acquisition module is used for acquiring a second position point corresponding to the current position of the robot;
the relative position determining module is used for determining the relative position of the label and a preset virtual wall according to the first position point and the second position point;
and the navigation recovery control module is used for controlling the robot to recover normal navigation according to the relative position.
9. A robot, characterized in that the robot comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method of exception recovery of any one of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions for causing a processor to implement the method of recovering from an exception as recited in any one of claims 1-7 when the computer instructions are executed.
CN202210364846.8A 2022-04-07 2022-04-07 Abnormity recovery method and device, robot and storage medium Pending CN114740851A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210364846.8A CN114740851A (en) 2022-04-07 2022-04-07 Abnormity recovery method and device, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210364846.8A CN114740851A (en) 2022-04-07 2022-04-07 Abnormity recovery method and device, robot and storage medium

Publications (1)

Publication Number Publication Date
CN114740851A true CN114740851A (en) 2022-07-12

Family

ID=82278905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210364846.8A Pending CN114740851A (en) 2022-04-07 2022-04-07 Abnormity recovery method and device, robot and storage medium

Country Status (1)

Country Link
CN (1) CN114740851A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination