CN115040038A - Robot control method and device and robot - Google Patents

Robot control method and device and robot

Info

Publication number
CN115040038A
CN115040038A
Authority
CN
China
Prior art keywords
dynamic object
area
information
target
preset task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210712326.1A
Other languages
Chinese (zh)
Inventor
江建文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Ezviz Software Co Ltd
Original Assignee
Hangzhou Ezviz Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Ezviz Software Co Ltd
Priority to CN202210712326.1A
Publication of CN115040038A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40 Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4061 Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40 Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4011 Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0219 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory ensuring the processing of the whole working surface
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L2201/00 Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04 Automatic control of the travelling movement; Automatic obstacle detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

Embodiments of the invention provide a robot control method and device, and a robot, relating to the technical field of robots. The method comprises the following steps: while a target robot executes a preset task in a designated area, determining the position points through which a dynamic object passes in the sub-area where the preset task has been completed; after the target robot completes the preset task for the designated area, constructing a target path passing through all the position points based on their determined positional relationships; and controlling the robot to move along the target path and executing the preset task. Compared with the related art, the scheme provided by the embodiments of the invention enhances the cleaning effect of the robot.

Description

Robot control method and device and robot
Technical Field
The invention relates to the technical field of robots, in particular to a robot control method and device and a robot.
Background
As cleaning robots become more and more capable, more and more users choose to use a cleaning robot for cleaning the floor. For example, in a home, a user may clean the living-room floor with a cleaning robot; in a shopping mall, a cleaning robot may clean the floor of each storey.
In the related art, a cleaning robot generally cleans a designated area according to a user's instruction, for example, when a user issues an instruction to the cleaning robot to clean the floor of a kitchen, the cleaning robot cleans the floor of the kitchen according to the user's instruction; when a user gives an instruction to the cleaning robot to clean the floor at a designated position in the workshop, the cleaning robot cleans the floor at the designated position in the workshop and the like in accordance with the user instruction.
However, while the cleaning robot is cleaning a designated area, already-cleaned areas may be contaminated again by the movement of people, pets, and other dynamic objects. Because such dynamic objects take different forms, a cleaning robot in the prior art cannot effectively distinguish between them and cannot construct an accurate, optimized path when re-cleaning the area; as a result, the cleaning path is disordered or redundant and the cleaning efficiency is low.
Disclosure of Invention
The embodiment of the invention aims to provide a robot control method, a robot control device and a robot, so as to enhance the cleaning effect of the robot. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a robot control method, where the method includes:
when a target robot executes a preset task on a specified area, determining each position point through which a dynamic object passes in a sub-area where the preset task is completed;
after the target robot completes the preset task on the designated area, constructing a target path passing through all the position points based on the determined position relation of all the position points;
and controlling the target robot to move along the target path and executing the preset task.
Optionally, in a specific implementation manner, the determining each position point through which the dynamic object existing in the sub-region that has completed the preset task passes includes:
and tracking the dynamic object existing in the sub-area which has completed the preset task by utilizing a tracking algorithm to obtain each position point through which the dynamic object passes.
Optionally, in a specific implementation manner, the tracking, by using a tracking algorithm, of the dynamic object existing in the sub-area where the preset task has been completed to obtain the position points through which the dynamic object passes includes:
acquiring area scene information collected at preset time intervals;
performing dynamic object detection on each piece of area scene information to obtain a detection result and, when the detection result indicates that a dynamic object exists in the sub-area where the preset task has been completed, determining the position information of the detected dynamic object;
and determining, based on the determined pieces of position information, the position points through which the dynamic object existing in the sub-area passes.
Optionally, in a specific implementation manner, the method further includes:
for the first piece of area scene information in which a dynamic object is detected, determining characteristic information of the detected dynamic object and adding an object identifier to the detected dynamic object;
for each piece of area scene information whose acquisition time is later than that of the first piece, when the obtained detection result indicates that a dynamic object exists in the sub-area, determining target characteristic information of the detected dynamic object and judging whether specified characteristic information matching the target characteristic information exists among the characteristic information of the dynamic objects detected in earlier area scene information;
if such specified characteristic information exists, adding a specified object identifier to the dynamic object detected in this area scene information, wherein the specified object identifier is the object identifier that was added to the dynamic object whose characteristic information is the specified characteristic information;
if it does not exist, adding a new object identifier to the dynamic object detected in this area scene information.
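The identifier-assignment logic above can be sketched as follows. This is a minimal illustration, not the patented implementation: the feature representation (a plain numeric vector) and the distance threshold used to decide that two pieces of characteristic information "match" are assumptions made for the example.

```python
import math

MATCH_THRESHOLD = 1.0  # assumed threshold for "matching" characteristic information


def assign_object_id(target_feature, known_objects, next_id):
    """Return (object_id, next_id).

    known_objects maps object identifier -> feature vector. If a previously
    seen feature matches the target feature, reuse that object's identifier
    (the "specified object identifier"); otherwise register the feature and
    allocate a new identifier.
    """
    for obj_id, feature in known_objects.items():
        if math.dist(target_feature, feature) <= MATCH_THRESHOLD:
            return obj_id, next_id  # matched: reuse the existing identifier
    known_objects[next_id] = target_feature  # new dynamic object
    return next_id, next_id + 1
```

In practice the characteristic information would come from a detector (for example an appearance embedding), and the matching rule would be chosen to suit it.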
the determining, based on the determined position information, position points through which the dynamic object passes in the sub-region where the preset task has been completed includes:
and determining each position point through which the dynamic object passes in the sub-area which has completed the preset task based on the object identification added to all the detected dynamic objects and all the determined position information.
Optionally, in a specific implementation manner, the determining, based on the object identifiers added to all the detected dynamic objects and all the determined position information, of the position points through which the dynamic object passes in the sub-area where the preset task has been completed includes:
performing path construction on the pieces of position information of dynamic objects bearing the same object identifier to obtain at least one initial path;
and, for each initial path, discretizing the path according to a preset discretization step length to obtain a plurality of discrete points, and determining, based on those discrete points, the position points through which dynamic objects pass in the sub-area where the preset task has been completed.
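A minimal sketch of the path "dispersion" (discretization) step described above, assuming an initial path given as a polyline of (x, y) points; the step length is the preset discretization step, and the minimum-length filter mentioned elsewhere in the text (discretizing only paths longer than a preset length) would be applied before calling it.

```python
import math


def discretize_path(path, step):
    """Walk a polyline and emit discrete points every `step` units of arc
    length, always keeping the first and last points of the path."""
    pts = [path[0]]
    dist_since_last = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        travelled = 0.0
        # Emit a point each time the accumulated arc length reaches `step`.
        while seg > 0 and dist_since_last + (seg - travelled) >= step:
            travelled += step - dist_since_last
            t = travelled / seg
            pts.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            dist_since_last = 0.0
        dist_since_last += seg - travelled
    if pts[-1] != path[-1]:
        pts.append(path[-1])
    return pts
```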
Optionally, in a specific implementation manner, the performing path construction on the pieces of position information of dynamic objects bearing the same object identifier to obtain at least one initial path includes:
for each object identifier, if the number of pieces of position information of the dynamic object bearing that identifier is greater than a preset number, performing path construction on those pieces of position information to obtain an initial path.
Optionally, in a specific implementation manner, the discretizing of each initial path according to a preset discretization step length to obtain a plurality of discrete points includes:
discretizing each initial path whose length is greater than a preset length according to the preset discretization step length to obtain a plurality of discrete points.
Optionally, in a specific implementation manner, the area scene information includes: image information and/or laser point clouds.
Optionally, in a specific implementation manner, the constructing a target path passing through all the location points based on the determined location relationships of all the location points includes:
constructing, based on the determined positional relationships of all the position points and the connectivity relationship between every two of the position points, a target path that takes the current position of the target robot as its starting point and passes through all the position points;
wherein the connectivity relationship is a relationship determined based on whether an obstacle exists between each two position points.
Optionally, in a specific implementation manner, the target robot is a cleaning robot, and the preset task is a cleaning task.
In a second aspect, an embodiment of the present invention provides a robot control apparatus, including:
the system comprises a sensor, a controller and a controller, wherein the sensor is used for acquiring the movement information of a target robot and the spatial information of the environment where the target robot is located when the target robot executes a preset task on a specified area;
the processor is used for determining a sub-region which has completed the preset task according to the movement information of the target robot acquired by the sensor and the space information of the environment where the target robot is located; when a target robot executes a preset task on a specified area, determining each position point through which a dynamic object passes in a sub-area where the preset task is completed; after the target robot completes the preset task to the designated area, constructing a target path passing through all the position points based on the determined position relation of all the position points; and controlling the target robot to move along the target path and executing the preset task.
Alternatively, in one particular implementation,
the sensor is also used for acquiring the regional scene information of the target robot in the moving process;
the processor is specifically configured to acquire region scene information acquired at preset time intervals, perform dynamic object detection on each region scene information to obtain a detection result, and determine position information of a detected dynamic object when the detection result indicates that the dynamic object exists in a sub-region where the preset task is completed; and determining each position point through which the dynamic object existing in the sub-area which has completed the preset task passes based on each determined position information.
Optionally, in a specific implementation manner, the processor is specifically configured to:
performing path construction on the pieces of position information of dynamic objects bearing the same object identifier to obtain at least one initial path;
and, for each initial path, discretizing the path according to a preset discretization step length to obtain a plurality of discrete points, and determining, based on those discrete points, the position points through which dynamic objects pass in the sub-area where the preset task has been completed.
In a third aspect, an embodiment of the present invention provides a robot, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the steps of any one of the method embodiments when executing the program stored in the memory.
In a fourth aspect, the present invention provides a computer-readable storage medium in which a computer program is stored; when executed by a processor, the computer program implements the steps of any of the above method embodiments.
In a fifth aspect, embodiments of the present invention also provide a computer program product comprising instructions, which when run on a computer, cause the computer to perform the steps of any of the above-described method embodiments.
The embodiment of the invention has the following beneficial effects:
As can be seen from the above, by applying the scheme provided by the embodiment of the present invention, while the target robot executes the preset task in the designated area, the position points through which a dynamic object passes in the sub-area where the preset task has been completed can be determined; therefore, after the target robot completes the preset task for the designated area, a target path passing through all the position points can be constructed based on their determined positional relationships, and the robot can then be controlled to move along the target path and execute the preset task.
Based on this, after completing the preset task for the designated area, the target robot can execute the preset task again at the position points passed by each dynamic object in the sub-areas where the task had already been completed during its execution. This avoids the situation in which the movement of a dynamic object spoils the result already achieved at those position points and prevents the required execution effect from being reached; since the preset task is executed again at those points, every position in the designated area can achieve the execution effect required by the preset task.
For example, when the cleaning robot executes a cleaning task in a designated area, the position points passed by people, pets, and the like in the already-cleaned sub-areas can be recorded. After the cleaning robot finishes the cleaning task for the designated area, it is controlled to move along a target path constructed from all the determined position points and to clean each recorded position point a second time, removing the secondary contamination caused to the cleaned area by the movement of people, pets, and the like. In this way, the cleaning effect of the cleaning robot can be enhanced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a robot control method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an embodiment of determining position information of a dynamic object according to the present invention;
FIG. 3 is a diagram illustrating another embodiment of determining location information of a dynamic object according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating another embodiment of determining location information of a dynamic object according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating an embodiment of determining discrete points according to the present invention;
FIG. 6 is a diagram illustrating an embodiment of constructing a target path according to the present invention;
fig. 7 is a schematic structural diagram of a robot control device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a robot according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived from the embodiments given herein by one of ordinary skill in the art, are within the scope of the invention.
In the related art, a robot generally cleans a designated area according to a user's instruction, for example, when the user gives an instruction to the robot to clean the floor of a kitchen, the robot cleans the floor of the kitchen according to the user's instruction; when a user gives an instruction to the robot to clean the floor at a designated position in the workshop, the robot cleans the floor at the designated position in the workshop according to the user instruction.
However, while the cleaning robot is cleaning a designated area, already-cleaned areas may be contaminated again by the movement of people, pets, and other dynamic objects. Because such dynamic objects take different forms, a cleaning robot in the prior art cannot effectively distinguish between them and cannot construct an accurate, optimized path when re-cleaning the area; as a result, the cleaning path is disordered or redundant and the cleaning efficiency is low.
In order to solve the above technical problem, an embodiment of the present invention provides a robot control method.
The method can be suitable for various application scenes in which the robot needs to be controlled to execute preset tasks, for example, in a household, the robot is controlled to clean a specified area; in a factory, the robot is controlled to patrol a work area, and the like. The method may be applied to a robot, or may be applied to other electronic devices that communicate with the robot, for example, a management server of the robot. Based on this, the embodiment of the present invention does not specifically limit the application scenario and the execution subject of the method.
The robot control method provided by the embodiment of the invention can comprise the following steps:
when a target robot executes a preset task on a specified area, determining each position point through which a dynamic object passes in a sub-area where the preset task is completed;
after the target robot completes the preset task on the designated area, constructing a target path passing through all the position points based on the determined position relation of all the position points;
and controlling the target robot to move along the target path and executing the preset task.
As can be seen from the above, by applying the scheme provided by the embodiment of the present invention, while the target robot executes the preset task in the designated area, the position points through which a dynamic object passes in the sub-area where the preset task has been completed can be determined; therefore, after the target robot completes the preset task for the designated area, a target path passing through all the position points can be constructed based on their determined positional relationships, and the robot can then be controlled to move along the target path and execute the preset task.
Based on this, after completing the preset task for the designated area, the target robot can execute the preset task again at the position points passed by each dynamic object in the sub-areas where the task had already been completed during its execution. This avoids the situation in which the movement of a dynamic object spoils the result already achieved at those position points and prevents the required execution effect from being reached; since the preset task is executed again at those points, every position in the designated area can achieve the execution effect required by the preset task.
For example, when the cleaning robot executes a cleaning task in a designated area, the position points passed by people, pets, and the like in the already-cleaned sub-areas can be recorded. After the cleaning robot finishes the cleaning task, it is controlled to move along a target path constructed from all the determined position points and to clean each recorded position point a second time, removing the secondary contamination caused by the movement of people, pets, and the like. Moreover, because a single target path containing all the position points is constructed, the disorder of the cleaning path and the repetition between paths can be reduced. In this way, the cleaning efficiency of the cleaning robot can be improved and its cleaning effect on the designated area enhanced.
Hereinafter, a robot control method according to an embodiment of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a robot control method according to an embodiment of the present invention, and as shown in fig. 1, the method may include the following steps S101 to S103.
S101: when a target robot executes a preset task on a designated area, determining each position point through which a dynamic object passes in a sub-area where the preset task is completed;
when the target robot is controlled to execute a preset task on a designated area, the target robot can detect whether a dynamic object exists in a sub-area which has completed the preset task in real time, so that when at least one dynamic object exists in the sub-area, each position point where each dynamic object passes in the sub-area can be determined.
The designated area may be a single area or a plurality of areas; the areas may be of any shape within the same plane or may lie in different planes, and the embodiment of the present invention is not particularly limited in this regard.
For example, the designated area may be three different floors of the same building, any room in a building, a square region within a room, and so on.
The preset task may be a cleaning task or a patrol task; correspondingly, the target robot may be a cleaning robot or a patrol robot. The embodiment of the present invention does not specifically limit the type of the target robot or the type of the preset task.
Optionally, in a specific implementation manner, the target robot is a cleaning robot, and the preset task is a cleaning task.
In this specific implementation, the target robot may be a cleaning robot, and the preset task may be a cleaning task.
That is, when the cleaning robot performs a cleaning task on a designated area, whether a dynamic object exists in a cleaned sub-area or not can be detected in real time, and when at least one dynamic object exists in the sub-area, each position point where each dynamic object passes in the sub-area can be determined.
When determining the position points through which a dynamic object in the sub-area passes, each detected dynamic object may be tracked using a tracking algorithm to obtain the position points it passes through; alternatively, area information within the sub-area may be collected at a preset collection interval and the position points of each dynamic object determined from the collected information. The embodiment of the present invention is not particularly limited in this regard.
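The second option (periodic collection) can be sketched as follows; the per-frame detection format and the `cleaned_area` predicate are assumptions made for illustration, standing in for the robot's detector and its map of finished sub-areas.

```python
def collect_position_points(detections, cleaned_area):
    """Accumulate position points per dynamic object.

    detections: iterable of per-interval detection lists, each entry being
        (object_id, (x, y)) for one detected dynamic object.
    cleaned_area: predicate telling whether a point lies in a sub-area that
        has already completed the preset task.
    Returns a dict mapping object_id -> ordered list of position points.
    """
    tracks = {}
    for frame in detections:
        for obj_id, pos in frame:
            if cleaned_area(pos):  # only already-cleaned sub-areas matter
                tracks.setdefault(obj_id, []).append(pos)
    return tracks
```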
For clarity, the execution manner of the above step S101 will be illustrated in the following.
S102: after the target robot completes a preset task on a designated area, constructing a target path passing through all the position points based on the determined position relation of all the position points;
when the target robot executes a preset task on the designated area, the sub-area which has completed the preset task continuously increases with the increase of the duration of the target robot executing the preset task, so that the number of the determined position points through which each dynamic object passes in the sub-area can also increase, and thus, when the target robot completes the preset task on the designated area, all the determined position points can be obtained. Further, after the target robot completes the predetermined task to the designated area, a target path passing through all the position points can be constructed based on the positional relationship of all the position points.
The positional relationship may include the direction and distance between any two of the position points, or between any position point and the target robot; the embodiment of the present invention is not particularly limited in this regard.
Alternatively, after obtaining all the position points, a target path passing through all the position points may be determined based on the positional relationship of all the position points, with the current position of the target robot as a starting point.
In some cases, an obstacle may exist between two position points; therefore, when constructing the target path passing through all the position points, the connectivity relationship between every two position points needs to be considered.
Based on this, optionally, the target path passing through all the position points may be determined based on the positional relationships of the position points and the connectivity between every two of them, with the current position of the target robot as the starting point.
The connectivity between two position points is determined by whether an obstacle exists between them: when an obstacle exists between two position points, they are not connected; when no obstacle exists, they are connected. As shown in fig. 6, an obstacle exists between point 14 and point 15, so points 14 and 15 can be determined to be non-connected; no obstacle exists between point 14 and point 1, so points 14 and 1 can be determined to be connected.
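A minimal connectivity test in this spirit, assuming an occupancy-grid view of the environment: two position points are treated as connected when no obstacle cell lies on the straight segment between them. The sampling-based check is an illustrative simplification, not the method claimed in the patent.

```python
def connected(p, q, obstacles, samples=100):
    """Return True when no obstacle cell lies on the straight segment
    between position points p and q (sampled at `samples` + 1 points)."""
    (x0, y0), (x1, y1) = p, q
    for i in range(samples + 1):
        t = i / samples
        # Round the sampled point to the nearest grid cell.
        cell = (round(x0 + t * (x1 - x0)), round(y0 + t * (y1 - y0)))
        if cell in obstacles:
            return False
    return True
```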
Alternatively, taking the current position of the target robot as the starting point, the position point closest to the current position among those not yet chosen as path points may be determined as the second path point of the target path; then, among the position points not yet chosen, the one closest to the second path point is determined as the third path point, and so on. Once all the position points have been traversed and all the path points of the target path determined, connecting the path points in the order in which they were determined yields the target path passing through all the position points.
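The greedy nearest-neighbour ordering just described can be sketched as follows; this minimal version considers distances only and ignores connectivity.

```python
import math


def build_target_path(start, points):
    """Order position points greedily: starting from the robot's current
    position, repeatedly pick the closest not-yet-chosen position point
    as the next path point of the target path."""
    remaining = list(points)
    path = [start]
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(path[-1], p))
        path.append(nxt)
        remaining.remove(nxt)
    return path
```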
Optionally, based on the positional relationship of all the position points and the connectivity between every two of them, each position point may be taken in turn as a starting point: a path point of the target path is determined among the position points that are connected to the starting point and have not yet been determined as path points; then the next path point of the target path is determined among the position points that are connected to that path point and have not yet been determined as path points; all the position points are determined as path points in turn, and the path points are connected in the order in which they were determined, so as to construct a plurality of candidate paths passing through all the position points. The path length of each candidate path may then be determined, and the candidate path that minimizes the total distance traveled by the target robot, in moving from its current position to the starting point of the candidate path and then along the candidate path through all the position points, may be taken as the target path.
Of course, the methods for constructing the target path listed above are merely illustrative and not limiting; the method provided by the embodiment of the present invention is applicable whenever a plurality of position points have been determined and a target path is to be constructed from them.
S103: and controlling the target robot to move along the target path and executing a preset task.
After the target path is obtained, the target robot can be controlled to move along the target path and execute the preset task.
In this way, after the target path is obtained, the target robot may be controlled to move to each path point of the target path in sequence from the current position of the target robot and to perform the preset task during the movement, so that, when moving to the end point of the target path, the target robot may pass through all the position points and may perform the preset task again for each position point.
For example, when the target robot is a cleaning robot and the preset task is a cleaning task, after the cleaning robot completes the preset task to the designated area, the cleaning robot may be controlled to move from a current position of the cleaning robot to a starting point of the target path, and further, the cleaning robot may be controlled to move along the target path and to perform the cleaning task again. Accordingly, when the cleaning robot moves to the path end point of the target path, the cleaning robot passes through all the position points and performs the cleaning task again for all the position points.
As can be seen from the above, with the solution provided by the embodiment of the present invention, after the target robot completes the preset task for the designated area, it may execute the preset task again for each position point passed by each dynamic object in the sub-areas where the preset task had already been completed during execution of the preset task. This avoids the situation in which the movement of a dynamic object spoils the execution result of the preset task at the position points it passes, so that the execution effect required by the preset task cannot be achieved; by executing the preset task again for the position points passed by the dynamic objects, every position in the designated area can achieve the execution effect required by the preset task.
For example, when the cleaning robot executes a cleaning task in a designated area, each position point passed by a person, a pet, or the like in the already-cleaned sub-areas can be recorded. After the cleaning robot finishes the cleaning task for the designated area, it is controlled to move along the target path constructed from all the determined position points and to clean each recorded position point a second time, removing the secondary pollution caused by the movement of persons, pets, and the like in the cleaned areas. In this way, the cleaning effect of the cleaning robot can be enhanced.
Next, a specific implementation manner of determining each position point through which a dynamic object existing in the sub-area having completed the preset task passes in step S101 is described as an example.
Optionally, in a specific implementation manner, the step S101 may include the following step 11:
step 11: and tracking the dynamic object existing in the sub-area which has completed the preset task by utilizing a tracking algorithm to obtain each position point through which the dynamic object passes.
In this specific implementation manner, a tracking algorithm may be used to track each dynamic object appearing in the sub-region where the preset task has been completed, and determine each position point through which each dynamic object passes.
The tracking algorithm may be a frame difference method, an optical flow method, or a similarity measurement algorithm, which are all reasonable, and in the embodiment of the present invention, the tracking algorithm is not specifically limited.
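As an illustration of one of the listed options, the frame difference method compares consecutive frames pixel by pixel and treats pixels that changed significantly as belonging to a moving object. The sketch below uses synthetic grayscale frames and an assumed change threshold; it shows the idea only and is not the tracking algorithm of the embodiment itself:

```python
import numpy as np

def frame_difference(prev_frame, curr_frame, threshold=25):
    """Mark pixels whose intensity changed by more than `threshold` between
    consecutive frames as moving. Returns the boolean motion mask and the
    centroid of the moving region (or None if nothing moved).
    The threshold value is an illustrative assumption."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return mask, None
    ys, xs = np.nonzero(mask)
    return mask, (float(xs.mean()), float(ys.mean()))

# Two synthetic 8-bit grayscale frames; a bright blob appears between them
prev = np.zeros((10, 10), dtype=np.uint8)
curr = prev.copy()
curr[4:6, 4:6] = 200          # object occupies rows 4-5, cols 4-5
mask, centroid = frame_difference(prev, curr)
```

Repeating this over successive pairs of frames yields a sequence of centroids, i.e. the position points the moving object passes through.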
Optionally, in a specific implementation manner, the step 11 may include the following steps 111-113:
step 111: acquiring regional scene information acquired according to a preset time interval;
step 112: performing dynamic object detection on the scene information of each area to obtain a detection result, and determining the position information of the detected dynamic object when the detection result represents that the dynamic object exists in the sub-area which has completed the preset task;
step 113: and determining each position point through which the dynamic object existing in the sub-area which has completed the preset task passes based on each determined position information.
In this specific implementation manner, when the target robot executes the preset task to the designated area, the acquisition device carried by the target robot can acquire the area scene information of the sub-area in which the preset task is completed, which is located in the self acquisition area, in real time according to the self acquisition frequency. Thus, when determining whether a dynamic object exists in the sub-region where the preset task is completed, a preset time interval may be set, and the region scene information collected according to the preset time interval may be acquired.
The preset time interval may be equal to the acquisition period corresponding to the acquisition frequency of the acquisition device, or an integer multiple of it; for example, when the acquisition period of the acquisition device is 3 seconds, the preset time interval may be 3 seconds, 6 seconds, or 12 seconds, all of which are reasonable; the preset time interval is not specifically limited in the embodiment of the present invention.
Optionally, when the preset time interval is the same as the acquisition time period corresponding to the acquisition frequency of the acquisition device, acquiring the area scene information according to the preset time interval, where the acquired area scene information is all area scene information acquired by the acquisition device carried by the target robot;
optionally, when the preset time interval is different from the acquisition period corresponding to the acquisition frequency of the acquisition device, acquiring the area scene information according to the preset time interval yields only part of the area scene information collected by the acquisition device carried by the target robot. For example, when the acquisition period of the acquisition device is 4 seconds and the preset time interval is 8 seconds, after one piece of area scene information is acquired, the next collected piece is skipped and the one after it is acquired, i.e., every other piece of collected area scene information is used.
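The relationship between the device's acquisition period and the preset time interval can be illustrated with a small sketch; the function name and the representation of captures as a list of timestamps are assumptions:

```python
def select_frames(timestamps, interval):
    """Keep only the scene captures spaced at least `interval` seconds apart.
    `timestamps` are capture times (seconds) at the device's own frequency."""
    selected = []
    last = None
    for t in timestamps:
        if last is None or t - last >= interval:
            selected.append(t)
            last = t
    return selected

# Device captures every 4 s; the preset interval is 8 s -> every other frame
captures = [0, 4, 8, 12, 16, 20]
assert select_frames(captures, 8) == [0, 8, 16]
# When the interval equals the acquisition period, every capture is used
assert select_frames(captures, 4) == captures
```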
After the plurality of pieces of area scene information of the sub-area that has completed the preset task are acquired according to the preset time interval, dynamic object detection may be performed on each acquired piece of area scene information to determine whether a dynamic object exists in it, and a detection result is obtained. Then, when the detection result indicates that a dynamic object exists in the sub-area that has completed the preset task, the position information of the detected dynamic object can be determined; further, each position point through which each dynamic object existing in that sub-area passes can be determined based on each piece of determined position information.
When the regional scene information of the sub-region where the preset task is completed is obtained, the acquisition devices carried by the target robot are different, and the obtained regional scene information can be different. For example, the acquisition device carried by the target robot may be an image sensor, a laser radar, an image sensor and a laser radar, and further, the acquired regional scene information may be image information, a laser point cloud, or an image information and a laser point cloud, which are all reasonable and not specifically limited in the embodiment of the present invention.
Optionally, in a specific implementation manner, the area scene information includes: image information and/or laser point clouds.
In this specific implementation manner, when the acquisition device mounted on the target robot includes an image sensor and/or a laser radar, the acquired area scene information includes image information and/or laser point cloud.
That is, when the acquisition device mounted on the target robot is an image sensor, the acquired regional scene information may be image information; when the acquisition equipment carried by the target robot is a laser radar, the acquired regional scene information can be laser point cloud; when the collecting devices carried by the target robot are an image sensor and a laser radar, the acquired regional scene information can be image information and laser point cloud.
After the area scene information is acquired, dynamic object detection can be performed on each area scene information, and when a dynamic object exists in the sub-area where the preset task is completed, the position information of the detected dynamic object is determined.
Wherein, the number of the detected dynamic objects can be one or more for each area scene information.
Alternatively, as shown in fig. 2, when the collecting device mounted on the target robot is an image sensor, the acquired area scene information may include image information of the sub-area that has completed the preset task. After the image information is acquired, whether a dynamic object exists in the sub-area may be determined based on a target recognition algorithm, for example, whether a person or a pet exists in the image information. Further, when it is determined that a dynamic object exists in the sub-area, the position of the dynamic object and its distance to the target robot may be recognized based on a visual algorithm, and the position information of the dynamic object may be obtained. The visual algorithm may be a self-supervised algorithm, a gradient descent method, or Newton's method, all of which are reasonable; the visual algorithm is not specifically limited in the embodiment of the present invention.
Optionally, when the target robot carries multiple image sensors, the acquired regional scene information may include multiple pieces of image information of the sub-region where the preset task has been completed, and further, whether a dynamic object exists in the sub-region may be determined based on a target recognition algorithm, and when it is determined that a dynamic object exists in the sub-region, the position of the dynamic object and the distance between the dynamic object and the target robot may be predicted based on the disparity between the multiple pieces of image information, so as to obtain the position information about the dynamic object.
Optionally, as shown in fig. 3, when the collecting device carried by the target robot is a laser radar, the obtained area scene information may include the laser point cloud of the sub-area where the preset task is completed. After the laser point cloud is obtained, whether a dynamic object exists in the sub-area or not can be judged based on a point cloud algorithm, and then when the dynamic object exists in the sub-area, the position of the dynamic object and the distance between the dynamic object and the target robot can be identified based on the point cloud algorithm. The point cloud algorithm may be a 3D point cloud algorithm, or may be other algorithms, which are all reasonable.
Optionally, as shown in fig. 4, when the collection device mounted on the target robot is an image sensor and a laser radar, the acquired area scene information may include image information and a laser point cloud of the sub-area that has completed the preset task. After the image information and the laser point cloud are obtained, whether a dynamic object exists in the sub-area can be judged based on the projection of the laser point cloud on the image, then, when the dynamic object exists in the sub-area, the position of the dynamic object can be identified based on a visual algorithm, the target distance between the dynamic object and a target robot is estimated based on the point cloud corresponding to the dynamic object, and therefore the position information of the dynamic object is obtained.
After determining that a dynamic object exists in the sub-region where the preset task is completed, the position information of the dynamic object may be obtained based on the region scene information, and further, each passing position point of the dynamic object in the sub-region may be determined based on each determined position information.
The points corresponding to all the determined position information in the designated area can be determined as each position point of the dynamic object passing through the sub-area; or according to a preset selection rule, selecting a part of points from the points corresponding to all the determined position information in the designated area as each position point passed by the dynamic object in the sub-area. For example, among points corresponding to all the determined position information in the designated area, points having a distance greater than a designated distance from the obstacle are selected as position points where the dynamic object passes through the sub-area.
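The obstacle-clearance selection rule mentioned above might look like the following sketch; representing obstacles as points and using a straight-line clearance threshold are both simplifying assumptions for illustration:

```python
import math

def filter_position_points(points, obstacles, min_clearance):
    """Keep only the points whose distance to every obstacle exceeds
    `min_clearance`, i.e. the rule of selecting points farther than a
    designated distance from the obstacles."""
    return [p for p in points
            if all(math.dist(p, o) > min_clearance for o in obstacles)]

pts = [(0, 0), (2, 0), (5, 5)]
obstacles = [(2, 1)]
# (2, 0) lies within 1.5 m of the obstacle and is dropped
kept = filter_position_points(pts, obstacles, 1.5)
```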
Optionally, in a specific implementation manner, the step 11 further includes the following steps 11A to 11D:
step 11A: determining the characteristic information of the detected dynamic object aiming at the scene information of the first area of the detected dynamic object, and adding an object identifier for the detected dynamic object;
step 11B: for each piece of area scene information whose acquisition time is after that of the first piece of area scene information, when the obtained detection result indicates that a dynamic object exists in the sub-area, determining the target feature information of the detected dynamic object, and judging whether, among the feature information of the dynamic objects detected for the previous piece of area scene information, there exists specified feature information matching the target feature information; if so, executing step 11C; otherwise, executing step 11D:
step 11C: adding a specified object identifier to the dynamic object detected for this piece of area scene information; wherein the specified object identifier is the object identifier that was added to the dynamic object whose feature information is the specified feature information;
step 11D: adding a new object identifier to the dynamic object detected for this piece of area scene information;
accordingly, in this specific implementation manner, the step 113, based on the determined position information, of determining each position point, where the dynamic object existing in the sub-area that has completed the preset task passes, may include the following step 1131:
step 1131: and determining each position point through which the dynamic object passes in the sub-area which has finished the preset task based on the object identification added to all the detected dynamic objects and all the determined position information.
In this specific implementation manner, after obtaining the plurality of area scene information of the sub-area where the preset task has been completed, each obtained area scene information may be analyzed to determine whether a dynamic object exists in the area scene information. Furthermore, for the first region scene information of the detected dynamic object, the feature information of the detected dynamic object may be determined, and an object identifier may be added to the detected dynamic object.
And then, according to the sequence of acquiring the scene information of each region, carrying out dynamic object detection on the scene information of each region after the acquisition time of the scene information of the first region according to the scene information of each region, and obtaining a detection result. Therefore, when the obtained detection result represents that a dynamic object exists in the sub-area, the target characteristic information of the detected dynamic object is determined, and whether the specified characteristic information matched with the target characteristic information exists in the characteristic information of the dynamic object detected aiming at the scene information of the previous area or not is judged.
In the case where specified feature information matching the target feature information exists, it indicates that the dynamic object detected for the current piece of area scene information is among the dynamic objects detected for the previous piece of area scene information. The dynamic object having the target feature information and the dynamic object having the specified feature information should therefore carry the same object identifier, so the specified object identifier that was added to the dynamic object having the specified feature information may be determined and added to the dynamic object having the target feature information.
That is, in the case where specified feature information matching the target feature information exists, the object identifier that was added to the dynamic object whose feature information is the specified feature information is also added to the dynamic object, detected for the current piece of area scene information, whose feature information is the target feature information. In this way, the same dynamic object can be tracked by determining whether the feature information of the dynamic objects detected for consecutive pieces of area scene information is the same.
Accordingly, in the case where no specified feature information matching the target feature information exists, it indicates that the dynamic object detected for the current piece of area scene information is not among the dynamic objects detected for the previous piece of area scene information. The dynamic object having the target feature information may therefore be regarded as a newly appearing dynamic object, and a new object identifier may be added to it.
The new object identifier is different from the object identifiers added to all dynamic objects detected before this piece of area scene information.
Illustratively, suppose that for the first piece of area scene information in which dynamic objects are detected, dynamic objects A, B, and C are detected, their target feature information is determined to be a, b, and c respectively, and object identifiers are added to the three dynamic objects: a1 to dynamic object A, b1 to dynamic object B, and c1 to dynamic object C. For the next piece of area scene information, dynamic objects D, E, and F are detected, and their target feature information is determined to be a, b, and f respectively. Matching a, b, and f against the feature information a, b, and c determined for the first piece of area scene information shows that the target feature information of dynamic object D matches the feature information of dynamic object A, and that of dynamic object E matches the feature information of dynamic object B. It may then be determined that dynamic object D and dynamic object A are the same dynamic object, so object identifier a1 is added to dynamic object D; likewise, dynamic object E and dynamic object B are the same dynamic object, so object identifier b1 is added to dynamic object E. In addition, since the target feature information f of dynamic object F matches none of a, b, and c, a new object identifier f1 may be added to dynamic object F.
In this way, by adding the same object identifier to the same dynamic object detected according to different regional scene information and adding different object identifiers to different dynamic objects detected according to different regional scene information, after all the obtained regional scene information is detected, the corresponding relationship among the dynamic object, the object identifier and the position information can be established, so that each piece of position information of the dynamic object can be obtained according to the dynamic object with the same object identifier. Therefore, for each detected dynamic object, the respective position information of the dynamic object can be determined, and further, the respective position points through which the dynamic object passes can be determined.
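Steps 11A-11D can be sketched as follows; the feature representation (plain labels) and the equality-based matching function are simplifying assumptions standing in for whatever feature extraction and matching the embodiment actually uses:

```python
import itertools

def assign_object_ids(frames, match):
    """Per frame, each detected object either inherits the identifier of a
    matching object from the PREVIOUS frame (step 11C) or receives a fresh
    identifier (steps 11A / 11D). `frames` is a list of lists of feature
    descriptors; `match` decides whether two descriptors are the same object."""
    counter = itertools.count(1)
    history = []                       # (feature, id) pairs, one list per frame
    for frame in frames:
        prev = history[-1] if history else []
        current = []
        for feat in frame:
            matched = next((oid for pf, oid in prev if match(pf, feat)), None)
            current.append((feat, matched if matched is not None else next(counter)))
        history.append(current)
    return history

# Features are plain labels here; exact equality plays the role of matching.
# Mirrors the worked example: a and b persist, f is a newly appearing object.
frames = [["a", "b", "c"], ["a", "b", "f"]]
history = assign_object_ids(frames, lambda x, y: x == y)
```

Grouping the per-frame detections by identifier then yields, for each tracked object, the sequence of positions from which its path is built.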
Optionally, in a specific implementation manner, the step 1131, based on the object identifiers added to all the detected dynamic objects and all the determined position information, of determining the position points through which the dynamic objects existing in the sub-area that has completed the preset task pass, may include the following steps 21 to 22:
step 21: performing path construction on each position information of the dynamic object added with the same object identifier to obtain at least one initial path;
step 22: and dispersing the initial path according to a preset dispersion step length to obtain a plurality of dispersion points aiming at each initial path, and determining each position point through which a dynamic object in a sub-region which finishes a preset task passes based on the plurality of dispersion points.
In this specific implementation manner, after object identifiers have been added to the dynamic objects in each piece of area scene information, path construction may be performed, based on the added object identifiers, on the pieces of position information of the dynamic objects carrying the same object identifier, so as to obtain an initial path for each dynamic object.
Because the time and the path of each dynamic object appearing in the sub-area where the preset task is completed can be different, the frequency of each dynamic object appearing in the collected area scene information is different, and the quantity of the collected position information of the dynamic objects is also different. Based on this, for a dynamic object with too little position information, the initial path of the dynamic object may not be constructed.
Optionally, the step 21 may include the following steps 211:
step 211: and for each object identifier, if the number of the position information of the dynamic object added with the object identifier is greater than the preset number, constructing a path for each position information of the dynamic object added with the object identifier to obtain an initial path.
In this specific implementation manner, for each object identifier, the number of the position information of the dynamic object to which the object identifier is added may be determined, and when the number of the added object identifiers is greater than the preset number, the path construction may be performed on each position information of the dynamic object to which the object identifier is added, so as to obtain an initial path.
Furthermore, for dynamic objects whose position information is not greater than the preset number, the position information of the dynamic objects can be ignored in the process of constructing the target path.
The preset number may be 5 or 10, which is reasonable and is not specifically limited in the embodiment of the present invention.
That is to say, for each object identifier, when the number of pieces of position information of the dynamic object carrying that object identifier is greater than the preset number, the dynamic object may be considered to have been present in the sub-area that has completed the preset task for a relatively long time; a moving path may then be constructed for that dynamic object to obtain its initial path, and the initial path is discretized according to the preset discretization step to obtain a plurality of discrete points.
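The count filter of step 211 might be sketched as follows; the dictionary representation of tracks and the default threshold of 5 (taken from the example preset number) are assumptions:

```python
def build_initial_paths(tracks, min_points=5):
    """An initial path is constructed only for dynamic objects with more
    than `min_points` pieces of position information; shorter tracks are
    ignored when constructing the target path."""
    return {oid: list(positions) for oid, positions in tracks.items()
            if len(positions) > min_points}

# Object a1 was observed 8 times, object b1 only twice
tracks = {"a1": [(i, 0) for i in range(8)], "b1": [(0, 0), (1, 1)]}
paths = build_initial_paths(tracks)
# only a1 yields an initial path
```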
In addition, since the speed, the step frequency, the step length, and the like of different dynamic objects during the moving process may be different, the length of the moving track of each dynamic object in the sub-region where the preset task is completed may be different, and for a dynamic object with a short initial path, the initial path of the dynamic object may not be discretized.
Then, for each initial path, the initial path may be discretized according to a preset discrete step length, and then, a plurality of discrete points may be obtained. In this way, based on the obtained plurality of discrete points, each position point through which the dynamic object passes in the sub-area where the preset task is completed can be determined.
The preset discrete step length may be set according to actual needs, for example, 3 meters and 5 meters, which are reasonable and not specifically limited in the embodiment of the present invention.
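Discretizing an initial path according to a preset discretization step can be sketched as follows; representing the path as a polyline of waypoints and the 3-meter step from the example are assumptions:

```python
import math

def discretize_path(waypoints, step):
    """Sample points along a polyline every `step` meters, starting from
    the first waypoint. The returned discrete points serve as the position
    points from which the target path is constructed."""
    points = [waypoints[0]]
    carried = 0.0                       # distance covered since the last sample
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        seg = math.dist((x0, y0), (x1, y1))
        d = step - carried
        while d <= seg:
            t = d / seg
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += step
        carried = (carried + seg) % step
    return points

# A 10 m straight initial path discretized with a 3 m step
pts = discretize_path([(0, 0), (10, 0)], 3)
# samples at 0 m, 3 m, 6 m, and 9 m along the path
```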
After obtaining the plurality of discrete points, all the discrete points can be determined as each position point through which a dynamic object existing in a sub-area which has completed a preset task passes; or selecting part of the discrete points from all the discrete points as each position point through which the dynamic object existing in the sub-area which has completed the preset task passes.
Optionally, the step 22 may include the following step 221:
step 221: and dispersing each initial path with the length larger than the preset length according to a preset dispersion step length to obtain a plurality of dispersion points.
In this specific implementation manner, after a plurality of initial paths are obtained, the length of each initial path may be determined, and the initial paths with lengths greater than a preset length are discretized according to a preset discrete step length to obtain a plurality of discrete points.
The preset length may be 8 meters or 20 meters, which is reasonable and is not particularly limited in the embodiment of the present invention.
That is, for each dynamic object, when the length of the initial path of the dynamic object is not greater than the preset length, the initial path of the dynamic object may be ignored in constructing the target path.
When determining each passing position point of each dynamic object in the sub-area where the preset task has been completed based on the obtained plurality of discrete points, the plurality of discrete points may be used as each passing position point of the dynamic object in the sub-area where the preset task has been completed, or a part of the plurality of discrete points may be selected as each passing position point of the dynamic object in the sub-area where the preset task has been completed according to a preset condition.
Compared with controlling the target robot to retrace each determined initial path in turn, discretizing the initial paths and using the obtained discrete points as the path points for constructing the target path allows the moving path of the target robot during re-execution of the preset task to be planned better and shortens the path that passes through all the position points. Further, when the target robot is controlled to move along the target path and execute the preset task again, the task execution efficiency of the target robot can be improved and its energy consumption reduced.
To facilitate an understanding of this particular implementation, reference is made to fig. 5 for an example.
As shown in fig. 5, the initial path 1, the initial path 2, and the initial path 3 are: and respectively constructing the obtained initial paths of the three objects based on the detected position information of the three dynamic objects.
Then, each initial path is discretized according to the preset discretization step to obtain a plurality of discrete points. Discretizing the initial path 1 yields discrete points a, b, c, d, and e; discretizing the initial path 2 yields discrete points f, g, h, i, j, and k; discretizing the initial path 3 yields discrete points l, m, n, and o. Further, the obtained discrete points a-o can each be taken as a position point passed by a dynamic object in the sub-area that has completed the preset task.
Considering that obstacles may exist in the designated area and that the position points may be distributed near the obstacles, when constructing the target path passing through all the position points, whether an obstacle exists between every two position points may be taken into account; when an obstacle exists between two position points, the route between them needs to be planned so as to bypass the obstacle.
On the basis of the foregoing specific implementation manners, optionally, in a specific implementation manner, the step S102, constructing a target path passing through all the location points based on the determined location relationships of all the location points, may include the following step 31:
step 31: constructing a target path that takes the current position of the target robot as the starting point and passes through all the position points, based on the determined positional relationship of all the position points and the connectivity between every two of the position points;
wherein the connectivity is a relationship determined based on whether an obstacle exists between each two position points.
In this specific implementation manner, after all the position points through which the dynamic object passes are obtained, it may be determined whether an obstacle exists between every two of the all the position points, and then, a communication relationship between every two of the all the position points is determined.
Optionally, if an obstacle exists between two position points, the connectivity relationship between them is "disconnected"; conversely, if no obstacle exists between two position points, the connectivity relationship between them is "connected".
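Determining whether each pair of position points is connected or disconnected can be sketched as a line-of-sight test. Modeling obstacles as axis-aligned rectangles and checking sampled points along the segment are simplifying assumptions made here only for illustration; a real implementation would test against the robot's occupancy map.

```python
def blocked(p, q, obstacles, samples=100):
    """Return True if the straight segment p -> q passes through any
    axis-aligned rectangular obstacle given as (xmin, ymin, xmax, ymax)."""
    for i in range(samples + 1):
        t = i / samples
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        for (xmin, ymin, xmax, ymax) in obstacles:
            if xmin <= x <= xmax and ymin <= y <= ymax:
                return True
    return False

def connectivity(points, obstacles):
    """Pairwise connectivity matrix: entry [i][j] is True when no obstacle
    lies between position points i and j (points are never self-connected)."""
    n = len(points)
    return [[i != j and not blocked(points[i], points[j], obstacles)
             for j in range(n)] for i in range(n)]
```

A pair separated by a rectangle is marked disconnected, while a pair whose segment clears every obstacle is marked connected.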
In this way, after the connectivity relationship between each pair of position points is determined, a target path that takes the current position of the robot as a starting point and passes through all the position points may be constructed based on the positional relationships of all the position points and the connectivity relationships between them.
For example, the target path may be constructed using a path planning algorithm, a genetic algorithm, or the like. The path planning algorithm may be, for example, Dijkstra's algorithm; the embodiment of the present invention is not particularly limited in this respect.
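As a concrete instance of such a path planning algorithm, the following is a minimal Dijkstra sketch over the position points. The boolean adjacency matrix `connected` (derived from the pairwise obstacle checks) and Euclidean edge weights are assumptions made for illustration; none of these names come from this embodiment.

```python
import heapq
import math

def dijkstra(points, connected, start):
    """Shortest-path distances from index `start` to every position point,
    over a graph whose edges are the connected pairs, weighted by
    Euclidean distance."""
    n = len(points)
    dist = [math.inf] * n
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale queue entry
        for v in range(n):
            if connected[u][v]:
                w = math.dist(points[u], points[v])
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
    return dist
```

Such distances can be used, for example, to route between two position points whose direct segment is blocked by an obstacle.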
Optionally, taking the current position of the target robot as the starting point, among the position points connected to the current position, the position point closest to the current position is determined as the second path point of the target path, and the starting point and the second path point are connected. Then, among the position points connected to the second path point that have not yet been determined as path points, the position point closest to the second path point is determined as the third path point, and the second and third path points are connected. By analogy, all position points are traversed, each path point of the target path is determined in turn, and the path points are connected in the order in which they were determined, yielding a target path passing through all the position points.
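The nearest-connected-point rule just described can be sketched as follows; passing the connectivity test in as a callable is an illustrative choice so that the sketch stays self-contained.

```python
import math

def greedy_order(start, points, connected):
    """Order position points greedily: from the current path point, move to
    the nearest not-yet-visited point that it is connected to, as described
    in the text. `connected(a, b)` decides whether segment a -> b is
    obstacle-free. Stops early if no remaining point is reachable."""
    path = [start]
    remaining = list(points)
    current = start
    while remaining:
        candidates = [p for p in remaining if connected(current, p)]
        if not candidates:
            break  # no connected, unvisited point left
        nxt = min(candidates, key=lambda p: math.dist(current, p))
        path.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return path
```

With no obstacles (every pair connected), three collinear points are simply visited in order of increasing distance from the start.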
Illustratively, as shown in fig. 6, points 1-15 are all the discrete points obtained by discretizing three initial paths after those initial paths have been constructed, and points 1-15 are all determined as the position points through which the dynamic object passes in the sub-area where the preset task has been completed; the circle denotes the target robot and the rectangles denote obstacles. Thus, based on the positional relationships and connectivity relationships between points 1-15, a target path as shown in fig. 6 can be constructed, in which the target robot passes through all the position points in the order: point 1-point 2-point 3-point 4-point 5-point 6-point 7-point 8-point 9-point 10-point 11-point 12-point 13-point 14-point 1-point 15.
By adopting this method embodiment of the invention, different dynamic objects in the designated area can be distinguished and their respective trajectories accurately constructed, so that the cleaning robot can clean along an accurately constructed, optimized path. This reduces the confusion and redundancy in path planning that arise when targets cannot be distinguished, and thus improves the cleaning efficiency of the cleaning robot.
Based on the same inventive concept, the embodiment of the present invention further provides a robot control apparatus corresponding to the robot control method shown in fig. 1 provided in the embodiment of the present invention.
Fig. 7 is a schematic structural diagram of a robot control apparatus according to an embodiment of the present invention, and as shown in fig. 7, the apparatus may include:
the sensor 710 is used for acquiring the movement information of the target robot and the spatial information of the environment where the target robot is located when the target robot executes a preset task on a specified area;
a processor 720, configured to determine a sub-region where the preset task has been completed according to the movement information of the target robot acquired by the sensor and the spatial information of the environment where the target robot is located; when a target robot executes a preset task on a specified area, determining each position point through which a dynamic object passes in a sub-area where the preset task is completed; after the target robot completes the preset task on the designated area, constructing a target path passing through all the position points based on the determined position relation of all the position points; and controlling the target robot to move along the target path and executing the preset task.
As can be seen from the above, with the solution provided by the embodiment of the present invention, after the target robot completes the preset task for the designated area, it can execute the preset task again at each position point through which a dynamic object passed in the sub-areas where the preset task had already been completed. This avoids the situation in which the movement of a dynamic object spoils the execution result already achieved at those position points, so that by re-executing the preset task at them, every position in the designated area achieves the execution effect required by the preset task.
For example, when the cleaning robot executes a cleaning task on a designated area, each position point passed by a person, a pet, or the like within the already-cleaned sub-area can be recorded. After the cleaning robot finishes the cleaning task for the designated area, it is controlled to move along the target path constructed from all the determined position points and to clean each recorded position point a second time, removing the secondary pollution caused by the movement of people, pets, and the like over the cleaned area. In this way, the cleaning effect of the cleaning robot is enhanced.
Optionally, in a specific implementation manner, the processor 720 is specifically configured to:
and tracking the dynamic object existing in the sub-area which has completed the preset task by utilizing a tracking algorithm to obtain each position point through which the dynamic object passes.
Alternatively, in one particular implementation,
the sensor 710 is further configured to acquire regional scene information of the target robot during the moving process;
the processor 720 is specifically configured to acquire region scene information acquired at preset time intervals, perform dynamic object detection on each region scene information to obtain a detection result, and determine position information of a detected dynamic object when the detection result indicates that a dynamic object exists in a sub-region where the preset task has been completed; and determining each position point through which the dynamic object existing in the sub-area which has completed the preset task passes based on each determined position information.
Optionally, in a specific implementation manner, the processor 720 is specifically configured to:
determining, for the first area scene information in which a dynamic object is detected, the characteristic information of the detected dynamic object, and adding an object identifier to the detected dynamic object;
for each piece of area scene information acquired after the first area scene information, when the obtained detection result indicates that a dynamic object exists in the sub-area, determining target characteristic information of the detected dynamic object, and judging whether specified characteristic information matching the target characteristic information exists among the characteristic information of the dynamic objects detected in the previous area scene information;
if such specified characteristic information exists, adding a specified object identifier to the dynamic object detected in the area scene information, wherein the specified object identifier is the object identifier that was added to the dynamic object whose characteristic information is the specified characteristic information;
if no such specified characteristic information exists, adding a new object identifier to the dynamic object detected in the area scene information;
and determining each position point through which the dynamic object passes in the sub-area which has completed the preset task based on the object identification added to all the detected dynamic objects and all the determined position information.
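The identifier-assignment logic described above (reuse an identifier when features match a previously seen object, otherwise mint a new one) can be sketched as follows. The similarity function, the matching threshold, and the use of integer identifiers are assumptions made for illustration; the embodiment does not fix a particular feature representation.

```python
def assign_ids(detections, previous, similarity, threshold=0.8):
    """Match each detected object's features against the objects from the
    previous frame; reuse an identifier on a match, else create a new one.

    `detections`: feature descriptors of the current frame's detections.
    `previous`:   dict {object_id: feature descriptor} from the last frame.
    `similarity(a, b)` and `threshold` stand in for whatever matching
    criterion an implementation actually uses (both hypothetical here).
    Returns a dict {object_id: feature descriptor} for the current frame.
    """
    assigned = {}
    next_id = max(previous, default=0) + 1  # assumes integer identifiers
    for feat in detections:
        best_id, best_sim = None, threshold
        for oid, prev_feat in previous.items():
            s = similarity(feat, prev_feat)
            # each previous identifier may be claimed at most once
            if s >= best_sim and oid not in assigned:
                best_id, best_sim = oid, s
        if best_id is None:
            best_id, next_id = next_id, next_id + 1  # new dynamic object
        assigned[best_id] = feat
    return assigned
```

With an exact-match similarity, a re-detected object keeps its old identifier while an unseen object receives a fresh one, which is what allows per-object trajectories to be built afterwards.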
Optionally, in a specific implementation manner, the processor 720 is specifically configured to:
performing path construction on the pieces of position information of the dynamic objects to which the same object identifier has been added, to obtain at least one initial path;
and, for each initial path, discretizing the initial path according to a preset discretization step to obtain a plurality of discrete points, and determining, based on the plurality of discrete points, each position point through which a dynamic object passes in the sub-area where the preset task has been completed.
Optionally, in a specific implementation manner, the processor 720 is specifically configured to:
and, for each object identifier, if the number of pieces of position information of the dynamic object to which the object identifier has been added is greater than a preset number, performing path construction on those pieces of position information to obtain an initial path.
Optionally, in a specific implementation manner, the processor 720 is specifically configured to:
and discretizing each initial path whose length is greater than a preset length according to the preset discretization step to obtain a plurality of discrete points.
Optionally, in a specific implementation manner, the area scene information includes: image information and/or laser point clouds.
Optionally, in a specific implementation manner, the processor 720 is specifically configured to:
constructing, based on the determined positional relationships of all the position points and the connectivity relationship between each pair of position points, a target path that takes the current position of the target robot as a starting point and passes through all the position points;
wherein the connectivity relationship is a relationship determined based on whether an obstacle exists between each pair of position points.
Optionally, in a specific implementation manner, the target robot is a cleaning robot, and the preset task is a cleaning task.
The embodiment of the present invention further provides a robot, as shown in fig. 8, including a processor 801, a communication interface 802, a memory 803, and a communication bus 804, where the processor 801, the communication interface 802, and the memory 803 communicate with each other through the communication bus 804.
a memory 803 for storing a computer program;
the processor 801 is configured to implement the steps of any of the robot control methods according to the embodiments of the present invention described above when executing the program stored in the memory 803.
The communication bus mentioned above for the robot may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the robot and other devices.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
In yet another embodiment provided by the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, which, when being executed by a processor, realizes the steps of any of the robot control methods described above.
In a further embodiment provided by the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the robot control methods of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), a semiconductor medium (e.g., Solid State Disk (SSD)), or the like.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a progressive manner; the same and similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus, robot, computer-readable storage medium, and computer program product embodiments are described simply because they are substantially similar to the method embodiments; for relevant details, reference may be made to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (15)

1. A robot control method, characterized in that the method comprises:
when a target robot executes a preset task on a specified area, determining each position point through which a dynamic object passes in a sub-area where the preset task is completed;
after the target robot completes the preset task on the designated area, constructing a target path passing through all the position points based on the determined position relation of all the position points;
and controlling the target robot to move along the target path and executing the preset task.
2. The method according to claim 1, wherein the determining of the position points through which the dynamic object existing in the sub-area having completed the preset task passes comprises:
and tracking the dynamic object existing in the sub-area which has completed the preset task by utilizing a tracking algorithm to obtain each position point through which the dynamic object passes.
3. The method according to claim 2, wherein the tracking, by using a tracking algorithm, the dynamic object existing in the sub-area where the preset task has been completed to obtain each position point through which the dynamic object passes, comprises:
acquiring regional scene information acquired according to a preset time interval;
performing dynamic object detection on the scene information of each area to obtain a detection result, and determining the position information of the detected dynamic object when the detection result represents that the dynamic object exists in the sub-area which has completed the preset task;
and determining each position point through which the dynamic object existing in the sub-area which has completed the preset task passes based on each determined position information.
4. The method of claim 3, further comprising:
determining, for the first area scene information in which a dynamic object is detected, the characteristic information of the detected dynamic object, and adding an object identifier to the detected dynamic object;
for each piece of area scene information acquired after the first area scene information, when the obtained detection result indicates that a dynamic object exists in the sub-area, determining target characteristic information of the detected dynamic object, and judging whether specified characteristic information matching the target characteristic information exists among the characteristic information of the dynamic objects detected in the previous area scene information;
if such specified characteristic information exists, adding a specified object identifier to the dynamic object detected in the area scene information, wherein the specified object identifier is the object identifier that was added to the dynamic object whose characteristic information is the specified characteristic information;
if no such specified characteristic information exists, adding a new object identifier to the dynamic object detected in the area scene information;
the determining, based on the determined position information, position points through which the dynamic object passes in the sub-region where the preset task has been completed includes:
and determining each position point through which the dynamic object passes in the sub-area which has completed the preset task based on the object identification added to all the detected dynamic objects and all the determined position information.
5. The method according to claim 4, wherein the determining, based on the object identifiers added to all the detected dynamic objects and all the determined position information, each position point through which a dynamic object existing in the sub-area that has completed the preset task passes comprises:
performing path construction on the pieces of position information of the dynamic objects to which the same object identifier has been added, to obtain at least one initial path;
and, for each initial path, discretizing the initial path according to a preset discretization step to obtain a plurality of discrete points, and determining, based on the plurality of discrete points, each position point through which a dynamic object passes in the sub-area where the preset task has been completed.
6. The method according to claim 5, wherein performing path construction on the pieces of position information of the dynamic objects to which the same object identifier has been added to obtain at least one initial path comprises:
for each object identifier, if the number of pieces of position information of the dynamic object to which the object identifier has been added is greater than a preset number, performing path construction on those pieces of position information to obtain an initial path.
7. The method according to claim 5 or 6, wherein discretizing each initial path according to a preset discretization step to obtain a plurality of discrete points comprises:
discretizing each initial path whose length is greater than a preset length according to the preset discretization step to obtain a plurality of discrete points.
8. The method of claim 3, wherein the regional scene information comprises: image information and/or laser point clouds.
9. The method according to claim 1, wherein constructing the target path passing through all the position points based on the determined positional relationships of all the position points comprises:
constructing, based on the determined positional relationships of all the position points and the connectivity relationship between each pair of position points, a target path that takes the current position of the target robot as a starting point and passes through all the position points;
wherein the connectivity relationship is a relationship determined based on whether an obstacle exists between each pair of position points.
10. The method of claim 1, wherein the target robot is a cleaning robot and the predetermined task is a cleaning task.
11. A robot control apparatus, characterized in that the apparatus comprises:
the system comprises a sensor, a controller and a controller, wherein the sensor is used for acquiring the movement information of a target robot and the space information of the environment where the target robot is located when the target robot executes a preset task on a specified area;
the processor is used for determining a sub-region which has completed the preset task according to the movement information of the target robot acquired by the sensor and the space information of the environment where the target robot is located; when a target robot executes a preset task on a specified area, determining each position point through which a dynamic object passes in a sub-area which finishes the preset task; after the target robot completes the preset task on the designated area, constructing a target path passing through all the position points based on the determined position relation of all the position points; and controlling the target robot to move along the target path and executing the preset task.
12. The apparatus of claim 11,
the sensor is also used for acquiring the regional scene information of the target robot in the moving process;
the processor is specifically configured to acquire region scene information acquired at preset time intervals, perform dynamic object detection on each region scene information to obtain a detection result, and determine position information of a detected dynamic object when the detection result indicates that the dynamic object exists in a sub-region where the preset task is completed; and determining each position point through which the dynamic object existing in the sub-area which has completed the preset task passes based on each determined position information.
13. The apparatus of claim 11, wherein the processor is specifically configured to:
performing path construction on the pieces of position information of the dynamic objects to which the same object identifier has been added, to obtain at least one initial path;
and, for each initial path, discretizing the initial path according to a preset discretization step to obtain a plurality of discrete points, and determining, based on the plurality of discrete points, each position point through which a dynamic object passes in the sub-area where the preset task has been completed.
14. A robot, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1-10 when executing a program stored in the memory.
15. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of the claims 1-10.
CN202210712326.1A 2022-06-22 2022-06-22 Robot control method and device and robot Pending CN115040038A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210712326.1A CN115040038A (en) 2022-06-22 2022-06-22 Robot control method and device and robot


Publications (1)

Publication Number Publication Date
CN115040038A true CN115040038A (en) 2022-09-13

Family

ID=83163645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210712326.1A Pending CN115040038A (en) 2022-06-22 2022-06-22 Robot control method and device and robot

Country Status (1)

Country Link
CN (1) CN115040038A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110946509A (en) * 2018-09-27 2020-04-03 广东美的生活电器制造有限公司 Sweeping method of sweeping robot and sweeping device of sweeping robot
CN111643011A (en) * 2020-05-26 2020-09-11 深圳市杉川机器人有限公司 Cleaning robot control method and device, cleaning robot and storage medium
CN111700546A (en) * 2020-06-24 2020-09-25 深圳市银星智能科技股份有限公司 Cleaning method of mobile robot and mobile robot
WO2020248458A1 (en) * 2019-06-14 2020-12-17 江苏美的清洁电器股份有限公司 Information processing method and apparatus, and storage medium
CN112168066A (en) * 2020-09-30 2021-01-05 深圳市银星智能科技股份有限公司 Control method and device for cleaning robot, cleaning robot and storage medium
CN113156956A (en) * 2021-04-26 2021-07-23 珠海市一微半导体有限公司 Robot navigation method, chip and robot
CN113219985A (en) * 2021-05-27 2021-08-06 九天创新(广东)智能科技有限公司 Road planning method and device for sweeper and sweeper
CN113238247A (en) * 2021-03-30 2021-08-10 陈岳明 Robot positioning and navigation method, device and equipment based on laser radar
CN113509104A (en) * 2021-04-25 2021-10-19 珠海格力电器股份有限公司 Cleaning method, storage medium and cleaning robot
CN113520246A (en) * 2021-07-30 2021-10-22 珠海一微半导体股份有限公司 Mobile robot compensation cleaning method and system
CN113907651A (en) * 2021-10-21 2022-01-11 珠海一微半导体股份有限公司 Control method and chip for bipartite robot and bipartite robot
CN114129092A (en) * 2021-12-08 2022-03-04 上海景吾智能科技有限公司 Cleaning area planning system and method for cleaning robot



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination