CN114643580A - Robot control method, device and equipment

Info

Publication number: CN114643580A (application CN202210325814.7A; granted as CN114643580B)
Authority: CN (China)
Prior art keywords: robot, camera, sub-area, boundary
Legal status: Granted; Active
Application number: CN202210325814.7A
Other languages: Chinese (zh)
Other versions: CN114643580B (en)
Inventors: 王春茂, 张文聪
Current Assignee: Hangzhou Hikrobot Technology Co Ltd
Original Assignee: Hangzhou Hikrobot Technology Co Ltd
Application filed by Hangzhou Hikrobot Technology Co Ltd
Priority to CN202210325814.7A
Publication of CN114643580A; application granted; publication of CN114643580B

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J 9/161: Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J 9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1661: Programme controls characterised by task planning, object-oriented languages
    • B25J 9/1664: Programme controls characterised by motion, path, trajectory planning
    • B25J 13/00: Controls for manipulators

Abstract

The application provides a robot control method, apparatus, and device. The motion area in which robots equipped with TOF cameras are located is divided into at least one sub-area; each sub-area corresponds to at least one preset camera code, and the camera codes corresponding to adjacent sub-areas are different. The method comprises the following steps: predicting, based on the current position information reported by a robot and the acquired motion information of the robot, the boundary-crossing moment at which the robot will reach the boundary between the current sub-area and the next sub-area; determining whether the number of robots in the next sub-area reaches a preset threshold at the boundary-crossing moment; and, if not, issuing to the robot an unoccupied camera code corresponding to the next sub-area at the boundary-crossing moment, so that after reaching the next sub-area the robot determines the anti-interference parameter corresponding to the camera code and controls the TOF camera deployed on it to work based on that parameter, thereby achieving interference resistance for a large number of robots carrying TOF cameras.

Description

Robot control method, device and equipment
Technical Field
The application relates to the technical field of robots, in particular to a robot control method, device and equipment.
Background
The principle of a TOF (Time of Flight) camera is as follows: the camera emits modulated light, which is reflected when it encounters an object. By measuring the time difference or phase difference between emitting the light and receiving the reflection, the TOF camera calculates the distance between itself and the object and generates a depth image or a three-dimensional image of the object.
Because a TOF camera can measure the distance to an object, in the field of robotics TOF cameras are usually deployed on robots to realize obstacle avoidance. However, when multiple robots carrying TOF cameras share one motion area, the cameras can interfere with one another. For example, light emitted by TOF camera A deployed on robot A may be received by TOF camera B on robot B, so that camera B cannot correctly calculate the distance from an object to itself and robot B cannot correctly avoid obstacles.
To solve the interference problem among multiple TOF cameras, several anti-interference methods have been proposed, but the existing methods can only keep a limited number of TOF cameras in a given area from interfering with each other. In an actual intelligent warehousing scene, the number of robots carrying TOF cameras can reach the hundreds, and in such a scene the anti-interference effect of the existing methods is poor.
Disclosure of Invention
In view of this, the present application provides a robot control method, apparatus, and device, which are used to achieve interference resistance of a large number of TOF cameras.
Specifically, the method is realized through the following technical scheme:
according to a first aspect of the application, a robot control method is provided, the method is applied to a server, a motion area where a robot provided with a TOF camera is located is divided into at least one sub-area, each sub-area corresponds to at least one preset camera code, and the camera codes corresponding to adjacent sub-areas are different; each sub-region has a different camera code assigned to the robot within the sub-region, the method comprising:
predicting the boundary crossing time when the robot reaches the boundary of the current sub-area and the next sub-area based on the current position information reported by the robot and the obtained motion information of the robot;
determining whether the number of robots in the next sub-area reaches a preset threshold at the moment of crossing;
and if not, sending the camera code which is not occupied and corresponds to the next sub-region at the time of crossing the boundary to the robot, so that the robot determines the anti-interference parameter corresponding to the camera code after reaching the next sub-region, and controlling the TOF camera deployed on the robot to work based on the anti-interference parameter.
Optionally, the method further includes:
if so, controlling the robot to stop moving;
and when the number of the robots in the next sub-region is monitored to be lower than the preset threshold value, triggering the robots to move, and allocating unoccupied camera codes corresponding to the next sub-region to the robots, so that after the robots reach the next sub-region, anti-interference parameters corresponding to the camera codes and matched with a locally configured anti-interference method are determined, and TOF cameras deployed on the robots are controlled to work based on the anti-interference parameters.
Optionally, the method further includes:
and when the robot is detected to enter the next subarea, recovering the camera code of the robot in the subarea.
Optionally, the size of the sub-region is related to a distance threshold at which no interference occurs between any two TOF cameras;
when the working mode of the TOF camera deployed by the robot is a time division multiplexing mode, the anti-interference parameters comprise light-emitting time delay;
when the working mode of the TOF camera deployed by the robot is a frequency division multiplexing mode, the anti-interference parameters comprise modulation frequency;
and under the condition that the working mode of the TOF camera deployed by the robot is a code division multiplexing mode, the anti-interference parameters comprise camera codes.
Optionally, the divided sub-regions are arranged in a honeycomb shape.
Optionally, if the current position information is reported by the robot upon determining that the distance from its current position to the area boundary is within a preset range, the robot control method is executed when the current position information reported by the robot is received;
or,
if the current position information is reported periodically by the robot, the robot control method is executed when it is determined that the distance from the robot's current position to the area boundary is within a preset range.
Optionally, the motion information includes: the path information of the robot reaching the task target and the motion rate of the robot;
predicting the boundary-crossing time when the robot reaches the boundary of the current sub-area and the next sub-area based on the current position information reported by the robot and the obtained motion information of the robot, wherein the step comprises the following steps:
determining a next sub-area to be reached by the robot based on the path information of the robot reaching the task target, the current position information and the division condition of the motion area, and determining the area boundary between the sub-area and the next sub-area;
determining a distance from a current position to the boundary of the area based on the path information of the robot reaching the task target and the current position information;
and determining the boundary crossing time when the robot reaches the boundary of the area based on the distance and the movement speed of the robot.
According to a second aspect of the present application, there is provided a robot control method applied to a robot provided with a TOF camera; the motion area of the robot is divided into at least one subarea; each sub-area corresponds to at least one preset camera code, and the camera codes corresponding to the adjacent sub-areas are different; each sub-region has a different camera code assigned to the robot within the sub-region, the method comprising:
reporting the current position information of the robot to a server, so that the server predicts the boundary crossing time when the robot reaches the boundary of the sub-region and the next sub-region based on the reported current position information and the obtained motion information of the robot, and predicts and issues the unoccupied camera codes corresponding to the next sub-region at the boundary crossing time when the number of robots in the next sub-region at the boundary crossing time is determined to be less than a preset threshold value;
receiving a camera code issued by the server;
and determining an anti-interference parameter corresponding to the camera code, and controlling the TOF camera on the robot to work based on the anti-interference parameter after the robot reaches the next sub-region.
Optionally, when the working mode of the TOF camera on the robot is a time division multiplexing mode, the anti-interference parameter includes a light-emitting time delay; the server is synchronous with TOF camera clocks deployed on the robots;
the determining of the anti-interference parameter corresponding to the camera code and controlling the operation of the TOF camera on the robot based on the anti-interference parameter after the anti-interference parameter reaches the next sub-region comprises:
determining a light-emitting time delay corresponding to the camera code; wherein, the light-emitting time delays corresponding to different camera codes are different;
and sending the light-emitting time delay to the TOF camera deployed on the robot, so that the TOF camera emits light according to the light-emitting time delay.
Optionally, in a case that the working mode of the TOF camera on the robot is a frequency division multiplexing mode, the anti-interference parameter includes a modulation frequency;
the determining of the anti-interference parameter corresponding to the camera code and controlling the operation of the TOF camera on the robot based on the anti-interference parameter after the anti-interference parameter reaches the next sub-region comprises:
determining a modulation frequency corresponding to the camera code; wherein, the modulation frequencies corresponding to different camera codes are different;
sending the modulation frequency to the TOF camera deployed on the robot, so that the TOF camera emits light modulated at the modulation frequency.
Optionally, in a case that the operation mode of the TOF camera on the robot is a code division multiplexing mode, the anti-interference parameter includes a camera code;
the determining of the anti-interference parameters corresponding to the camera codes and matched with the locally configured anti-interference method and controlling the operation of the TOF camera on the robot based on the anti-interference parameters comprise:
and sending the camera code to a TOF camera on the robot, so that the TOF camera takes the camera code as a code word of laser pulse code, codes the laser pulse to be transmitted according to the code word, and sends the coded laser pulse.
According to a third aspect of the present application, a robot control apparatus is provided, where the apparatus is applied to a server, a motion area where a robot configured with TOF cameras is located is divided into at least one sub-area, each sub-area corresponds to at least one preset camera code, and the camera codes corresponding to adjacent sub-areas are different; each subarea has a different camera code assigned to the robot in the subarea, and the device comprises:
the prediction unit is used for predicting the boundary crossing time when the robot reaches the boundary of the current sub-area and the next sub-area based on the current position information reported by the robot and the obtained motion information of the robot;
the determining unit is used for determining whether the number of the robots in the next sub-area reaches a preset threshold value at the moment of crossing the boundary;
and the issuing unit is used for issuing the camera codes which are not occupied and correspond to the next sub-region at the time of crossing the boundary to the robot if the camera codes are not occupied, so that the anti-interference parameters corresponding to the camera codes are determined after the robot reaches the next sub-region, and the TOF cameras deployed on the robot are controlled to work based on the anti-interference parameters.
According to a fourth aspect of the present application, there is provided a robot control apparatus applied to a robot provided with a TOF camera; the motion area of the robot is divided into at least one subarea; each sub-area corresponds to at least one preset camera code, and the camera codes corresponding to the adjacent sub-areas are different; each subarea has a different camera code assigned to the robot in the subarea, and the device comprises:
the reporting unit is used for reporting the current position information of the robot to the server, so that the server predicts the boundary crossing time when the robot reaches the zone boundary of the local sub-zone and the next sub-zone based on the reported current position information and the obtained motion information of the robot, and when the number of robots in the next sub-zone does not reach a preset threshold value at the boundary crossing time, the server predicts and issues the unoccupied camera codes corresponding to the next sub-zone at the boundary crossing time;
the receiving unit is used for receiving the camera code issued by the server;
and the determining unit is used for determining an anti-interference parameter corresponding to the camera code and controlling the TOF camera on the robot to work based on the anti-interference parameter after the robot reaches the next sub-region.
According to a fifth aspect of the present application, there is provided an electronic device comprising a readable storage medium and a processor;
wherein the readable storage medium is configured to store machine executable instructions;
the processor is used for reading the machine executable instructions on the readable storage medium and executing the instructions to realize the robot control method.
The application provides a zone-controlled robot control method, in which the motion area of the robots is divided into a plurality of sub-areas and each sub-area adopts an existing anti-interference mode. On the one hand, by controlling the motion of the robots, the server ensures that the number of TOF cameras in each sub-area does not exceed the maximum number supported by the existing anti-interference mode, so that the TOF cameras within each sub-area do not interfere with each other. On the other hand, adjacent sub-areas are configured with different camera codes representing different anti-interference parameters, so that the TOF cameras of adjacent sub-areas do not interfere with each other. On these two bases, each sub-area achieves both the interference resistance of the maximum number of TOF cameras supported by the existing anti-interference method and interference resistance against TOF cameras in adjacent areas, thereby solving the interference problem of a large number of TOF cameras.
Drawings
FIG. 1 is a schematic diagram of a networking architecture of a robotic control system shown in an exemplary embodiment of the present application;
FIG. 2 is a schematic view of a motion region partition shown in an exemplary embodiment of the present application;
FIG. 3 is a flow chart of a robot control method shown in an exemplary embodiment of the present application;
FIG. 4 is a schematic illustration of a robot motion profile shown in an exemplary embodiment of the present application;
FIG. 5 is a flow chart illustrating a method of robot control according to an exemplary embodiment of the present application;
FIG. 6 is a diagram illustrating a hardware configuration of an electronic device according to an exemplary embodiment of the present application;
FIG. 7 is a block diagram of a robot control apparatus shown in an exemplary embodiment of the present application;
fig. 8 is a block diagram of another robot control device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may, depending on the context, be interpreted as "upon" or "when" or "in response to a determination".
The existing anti-interference methods can only keep a limited number of TOF cameras in a given area from interfering with each other. For scenes with a large number of TOF cameras, the existing anti-interference modes work poorly.
In view of this, the present application provides a zone-controlled robot control method, in which the motion area of the robots is divided into a plurality of sub-areas and each sub-area adopts an existing anti-interference mode. On the one hand, by controlling the motion of the robots, the server ensures that the number of TOF cameras in each sub-area does not exceed the maximum number supported by the existing anti-interference mode, so that the TOF cameras within each sub-area do not interfere with each other. On the other hand, adjacent sub-areas are configured with different camera codes representing different anti-interference parameters, so that the TOF cameras of adjacent sub-areas do not interfere with each other. On these two bases, each sub-area achieves both the interference resistance of the maximum number of TOF cameras supported by the existing anti-interference method and interference resistance against TOF cameras in adjacent areas, thereby solving the interference problem of a large number of TOF cameras.
Specifically, in the present application, a motion region where a robot configured with a TOF camera is located is divided into at least one sub-region, each sub-region corresponds to at least one preset camera code, and the camera codes corresponding to adjacent sub-regions are different.
And the service end predicts the boundary crossing time when the robot reaches the boundary of the current sub-area and the next sub-area based on the current position information reported by the robot and the acquired motion information of the robot. The server side can determine whether the number of the robots in the next sub-area reaches a preset threshold value at the boundary crossing moment. If not, the server predicts the camera codes which correspond to the next sub-region at the boundary-crossing time and are not occupied before the boundary-crossing time is reached, and issues the predicted camera codes to the robot, so that the anti-interference parameters corresponding to the camera codes are determined after the robot reaches the next sub-region, and the TOF cameras deployed on the robot are controlled to work based on the anti-interference parameters.
Therefore, on one hand, in the application, the motion area of the robot is divided into a plurality of sub-areas, and each sub-area adopts the existing anti-interference mode. The server side adopts a strategy of zone control, before the robot crosses a zone, whether the number of robots in the next sub-zone reaches a preset threshold value is determined, and when the number of robots in the next sub-zone does not reach the preset threshold value, the robot is controlled to move to the next sub-zone and unoccupied camera codes corresponding to the next sub-zone are distributed to the robot, so that the number of TOF cameras in each sub-zone is controlled not to exceed the maximum number of TOF cameras supported by the existing anti-interference mode, and TOF cameras in each sub-zone are guaranteed not to interfere with each other.
On the other hand, the camera codes corresponding to the adjacent sub-regions are different, and different camera codes correspond to different anti-interference parameters, so that the anti-interference parameters adopted by the TOF cameras on the robot in the adjacent sub-regions are different, and the plurality of TOF cameras in the adjacent sub-regions do not interfere with each other.
Referring to fig. 1, fig. 1 is a schematic diagram of a networking architecture of a robot control system according to an exemplary embodiment of the present application.
In the present application, a networking architecture of a robot control system includes: a server and a plurality of robots equipped with TOF cameras.
The robot can be connected with the server through a wireless network, and communication between the robot and the server is achieved.
The server is used for managing the robot, and the server may be a device such as a central server and a data center, and here, the server is only exemplarily described and is not specifically limited.
A robot here refers to a movable robot and may include: AGVs (Automated Guided Vehicles), industrial robots, consumer robots, entertainment robots, unmanned aerial vehicles, and the like. The robot is only exemplarily described here and is not specifically limited.
Before introducing the robot control method provided by the present application, the sub-region division method provided by the present application and the camera code corresponding to each sub-region are introduced.
1) Division mode of sub-area
In the present application, a motion region in which a robot equipped with a TOF camera is located is divided into at least one sub-region.
For example, as shown in fig. 2, in an alternative implementation, a motion region in which the robot equipped with the TOF camera is located may be divided into at least one sub-region by means of cellular division, so that the divided sub-regions are arranged in a honeycomb manner. Of course, in practical applications, the sub-area may be divided in other manners, such as dividing the sub-area into a plurality of rectangular blocks. The division manner of the sub-region is only exemplarily illustrated here, and is not particularly limited.
2) Size of the sub-area
In the present application, the sub-regions may be divided based on the distance threshold at which any two TOF cameras do not interfere. In other words, the size of a sub-region is related to this distance threshold; for example, the size of the sub-region may be greater than or equal to the distance threshold at which no interference occurs between any two TOF cameras.
The distance threshold value at which any two TOF cameras do not interfere is the minimum distance at which any two TOF cameras do not interfere. When the distance between the two TOF cameras is larger than or equal to the distance threshold value, the light emitted by any one of the TOF cameras cannot reach the other TOF camera due to the limitation of the emission power of the TOF camera, so that the TOF cameras do not interfere with each other.
In the application, when the size of the sub-region is larger than or equal to the distance threshold value that no interference occurs between any two TOF cameras, it can be ensured that TOF cameras in any two sub-regions spaced by one sub-region do not interfere with each other.
For example, as shown in fig. 2, one hexagon in fig. 2 is one sub-region. Assuming that interference does not occur when the distance between the two TOF cameras is at least 10 meters, the distance threshold is determined to be 10 meters.
When dividing the sub-regions, it is ensured that the diagonal line of each hexagon (i.e., the size of the sub-region) in fig. 2 is greater than or equal to 10 meters.
The size of the sub-region may be the length, width, diagonal, diameter, etc. of the sub-region. For example, the sub-region is a square, and the size of the sub-region may be the side length of the sub-region or the diagonal of the sub-region. For another example, when the sub-region is circular, the size of the sub-region may be the diameter of the sub-region. The sub-region size is only exemplarily illustrated here and is not particularly limited.
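To make the sizing rule concrete, the following is a minimal sketch, assuming regular hexagonal sub-regions whose relevant size is the longest diagonal (twice the side length); the function name and the printed example are illustrative, not taken from the patent:

```python
def min_hexagon_side(distance_threshold_m: float) -> float:
    """Smallest side length of a regular hexagonal sub-region whose longest
    diagonal (2 * side) is at least the distance threshold at which any two
    TOF cameras no longer interfere."""
    return distance_threshold_m / 2.0

# With the 10 m threshold from the example above, each hexagon needs a side
# of at least 5 m so that its diagonal is >= 10 m.
print(min_hexagon_side(10.0))  # 5.0
```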
3) Camera coding corresponding to sub-regions
In this application, the notion of camera code has been proposed, and different camera codes correspond different anti-interference parameters, and many TOF cameras utilize different anti-interference parameters can realize the anti-interference each other of many TOF cameras.
Specifically, in the embodiment of the present application, at least one camera code is configured for each sub-region. In other words, each sub-region corresponds to at least one camera code. Within each sub-region, different camera codes are assigned to different robots, which ensures that TOF cameras in the same sub-region do not interfere with each other when working simultaneously.
In order to prevent TOF cameras of adjacent sub-regions from interfering with each other, the camera codes corresponding to adjacent sub-regions are set to be different. When a robot moves in a certain sub-region, the TOF camera deployed on it works with the anti-interference parameters corresponding to one of the camera codes of that sub-region. Because the TOF cameras in adjacent sub-regions have different camera codes, they adopt different anti-interference parameters, so the TOF cameras in adjacent sub-regions do not interfere with each other.
4) Camera coding structure
In an embodiment of the present application, a camera encoding includes: region coding and robot coding.
The region code refers to a code corresponding to each region. In the present application, the region codes of adjacent regions are different. The region codes corresponding to non-adjacent sub-regions may be the same or different, and are only exemplary and not specifically limited herein.
The robot code is a code preset for the robots within a sub-area. The number of robot codes corresponding to each sub-region equals the maximum number of TOF cameras the sub-region allows to work without interference. For example, if the existing anti-interference mode can guarantee that at most N TOF cameras work in a sub-region without interference, then N robot codes are preset, one per TOF camera. The robot codes in different sub-regions may be the same or different.
For example, in the present application, the area codes of the adjacent sub-areas are set to be different, but the robot codes corresponding to different sub-areas are the same. As shown in fig. 2, each hexagon in fig. 2 represents a sub-region, and the numbers in the hexagons represent the region code of the sub-region.
Assuming that the maximum number of TOF cameras allowed for each sub-region without interference is 3, the robot code for each sub-region is 01, 10, and 11.
As shown in fig. 2, assuming that the region code of the sub-region represented by the hexagon at the center of fig. 2 is 00, and the robot codes corresponding to the sub-region are 01, 10 and 11, the camera codes corresponding to the sub-region are 0001, 0010 and 0011, respectively.
Assuming that the code of the sub-region represented by the hexagon directly below the central hexagon in fig. 2 is 11, and the robot codes corresponding to the sub-region are 01, 10 and 11, the camera codes corresponding to the sub-region are 1101, 1110 and 1111, respectively.
It can be seen that the camera codes corresponding to the sub-region 11 and the sub-region 00 (i.e. the adjacent sub-regions) can be made different by the way of the region code plus the robot code.
Here, the construction method of the camera code is only exemplarily described, and in practical applications, other methods may be adopted, which are merely exemplary and are not specifically limited.
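As a sketch of this construction (the helper below is illustrative, not part of the patent), the camera codes of a sub-region can be formed by concatenating its region code with each preset robot code, which reproduces the values from the example above:

```python
def camera_codes(region_code: str, robot_codes: list[str]) -> list[str]:
    """Camera code = region code + robot code, so adjacent sub-regions
    (which have different region codes) never share a camera code."""
    return [region_code + rc for rc in robot_codes]

ROBOT_CODES = ["01", "10", "11"]  # at most 3 non-interfering TOF cameras per sub-region

print(camera_codes("00", ROBOT_CODES))  # ['0001', '0010', '0011']
print(camera_codes("11", ROBOT_CODES))  # ['1101', '1110', '1111']
```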
Referring to fig. 3, fig. 3 is a flowchart illustrating a robot control method according to an exemplary embodiment of the present application. The method can be applied to the server shown in fig. 1.
First, the timing for executing the robot control method will be described. In the present application, the server may execute the robot control method when the robot approaches the boundary of the area.
In order to meet the requirement, in an optional implementation manner, the robot detects whether the distance between the current position of the robot and the area boundary of the next area is within a preset range, and if the distance between the current position of the robot and the area boundary of the next area is within the preset range, the robot reports the current position information of the robot to the central server. And the central server executes the robot control method after receiving the current position information reported by the robot.
In another alternative implementation, the robot may periodically report its current position. When receiving the current position reported by the robot, the central server can detect whether the path from the current position of the robot to the area boundary is within a preset range. And if the path from the current position of the robot to the zone boundary is in a preset range, executing the anti-interference method.
Here, the timing of executing the robot control method is only described as an example, but in an actual application, the server may execute the robot control method when receiving an interference-free command transmitted from the outside. And is not particularly limited herein.
Next, the robot control method provided in the present application will be described in detail.
The robot control method includes:
step 301: and the service end predicts the boundary crossing time when the robot reaches the boundary of the current sub-area and the next sub-area based on the current position information reported by the robot and the acquired motion information of the robot.
Step 301 will be described in detail below with reference to steps 3011 to 3013.
Step 3011: and the server receives the current position reported by the robot.
In implementation, the robot can report its robot information to the server periodically, or when it detects that its distance to the area boundary is within the preset distance range. The robot information includes: the current position of the robot, whether the robot has reached its task target, the robot's current movement direction and speed, and so on. The robot information is merely exemplified here and not specifically limited.
The server can extract the current position information of the robot from the robot information reported by the robot.
Step 3012: the service end obtains the motion information of the robot.
In an alternative implementation manner, the motion information may include: path information of the robot reaching the task goal, a velocity of the robot.
Specifically, after the robot receives a task, the server may plan for it the path and motion rate to the task target. For example, if the robot receives a transfer task, the path and motion rate to the transfer table may be planned for it; if the robot receives a charging task, the path and motion rate to the charging station may be planned for it. This is not specifically limited here.
The server therefore records the correspondence between each robot identifier and the path and motion rate planned for that robot toward its current task target. When obtaining the motion information of the robot, the server can look up the path information and motion rate corresponding to the robot in this correspondence and use them as the robot's motion information.
Step 3013: and the service end predicts the boundary crossing time when the robot reaches the boundary of the current sub-area and the next sub-area based on the current position information reported by the robot and the acquired motion information of the robot.
When the method is implemented, the server side can determine the next sub-area to be reached by the robot and determine the area boundary between the sub-area and the next sub-area based on the path information of the robot to the task target and the current position information of the robot.
Then, the server can determine the distance from the current position to the area boundary based on the path information of the robot reaching the task target and the current position information.
Finally, the server side can determine the boundary crossing time when the robot reaches the boundary of the area based on the distance and the movement rate of the robot. For example, the server may divide the distance by the movement rate to obtain the boundary crossing time when the robot reaches the boundary of the area.
For example, as shown in fig. 4, each hexagon in fig. 4 represents a sub-area, and the path of the robot to reach the task target is assumed to be represented by a broken line with arrows in fig. 4. As can be seen from fig. 4, the robot will span sub-areas 1, 2, 3 and 4. Wherein, the point B in the sub-area 4 is a task target, and the point A is the current position of the robot.
In implementation, the server may determine that the current sub-region is region 2, the next sub-region is region 3, and the region boundary is the boundary between region 2 and region 3 based on the path information (i.e., the broken line with an arrow in fig. 4) and the current position information (i.e., point a) of the robot.
Then, the server may determine the distance (i.e., line segment AC in fig. 4) from the current position (i.e., point a) to the boundary of the area (i.e., the boundary of area 2 and area 3) based on the path information (i.e., the broken line with an arrow in fig. 4) of the robot to reach the task target.
The server then determines the moment of cross-border when the robot reaches the zone boundary based on the distance (i.e., line segment AC in fig. 4) and the rate of movement of the robot.
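A minimal sketch of this prediction, assuming the remaining path distance to the boundary (segment AC) has already been measured along the planned path; the numeric values in the usage line are hypothetical:

```python
import time

def predict_crossing_time(distance_to_boundary_m: float,
                          speed_m_per_s: float,
                          now_s: float | None = None) -> float:
    """Boundary-crossing moment = current time + remaining path distance
    to the boundary divided by the robot's planned motion rate."""
    if speed_m_per_s <= 0:
        raise ValueError("robot is not moving toward the boundary")
    if now_s is None:
        now_s = time.time()
    return now_s + distance_to_boundary_m / speed_m_per_s

# A robot 6 m from boundary point C along its path, moving at 1.5 m/s,
# is predicted to cross 4 s from now.
print(predict_crossing_time(6.0, 1.5, now_s=0.0))  # 4.0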
Step 302: and the server determines whether the number of the robots in the next sub-area reaches a preset threshold value at the moment of crossing.
The preset threshold is related to the maximum number of TOF cameras that the TOF camera working mode supports operating simultaneously without mutual interference. In other words, if the working mode supports at most N TOF cameras working simultaneously without interference, the preset threshold is related to N.
For example, assuming that the operation mode of the TOF camera is a time division multiplexing mode, and the time division multiplexing mode can support 3 TOF cameras to operate simultaneously at maximum without mutual interference, the preset threshold may be 3. The preset threshold value is merely exemplary and is not particularly limited.
In the implementation of step 302, the robot information of each sub-area is maintained on the server. The robot information comprises a robot identification and a camera code allocated to the robot.
When the robot enters a sub-area, the identification of the robot and the camera code allocated to the robot are added to the robot information corresponding to the sub-area.
When the robot leaves a sub-area, the robot identifier and the camera code allocated to the robot are deleted from the robot information corresponding to the sub-area.
Based on the above, when the cross-border time is reached, the server side can count the number of robots in the next sub-area based on the robot information in the next sub-area. Then, the server may detect whether the number of robots in the next sub-area reaches a preset threshold.
Step 303: and if not, the server side issues the camera code which is not occupied and corresponds to the next sub-region at the time of crossing the boundary to the robot, so that the robot determines the anti-interference parameter corresponding to the camera code after reaching the next sub-region, and controls the TOF camera deployed on the robot to work based on the anti-interference parameter.
When the method is realized, the corresponding relation between each sub-region identifier and the camera code corresponding to each sub-region is maintained on the server.
Each sub-region identifier corresponds to at least one camera code. Each camera code is either in an occupied state or an idle state. The occupied state indicates that the camera code has been assigned to a TOF camera deployed on a robot in, or about to enter, the sub-region; the idle state indicates that the camera code has not been assigned.
The camera encoding state is continuously updated.
For example, when a robot moves from a first area to a second area, the server will recycle the camera code of the robot in the first area, that is, set the camera code of the robot in the first sub-area to an idle state.
For another example, after the server allocates the camera code of the second sub-area to the robot, the server may set the camera code of the second sub-area allocated to the robot to the occupied state.
The correspondence between the sub-region identifiers and the camera codes at a certain time is shown, for example, in Table 1.

Table 1 (schematic): each row pairs a sub-region identifier with the camera codes corresponding to that sub-region and the occupied or idle state of each code.
In implementing step 303, when the boundary-crossing time arrives, the server may determine, from the correspondence shown in Table 1, a camera code in the idle state corresponding to the next sub-region, as the unoccupied camera code corresponding to the next sub-region at the boundary-crossing time. The server can then issue the determined camera code to the robot.
Certainly, in actual application, the server may also determine and issue an unoccupied camera code corresponding to the next sub-region at the boundary crossing time within a preset time period before the boundary crossing time arrives. For example, the server may search the camera code in the idle state corresponding to the next sub-region in the correspondence shown in table 1 within a preset time period before the boundary crossing time arrives, and issue the camera code as the camera code corresponding to the next sub-region and not occupied at the boundary crossing time to the robot. Here, the timing of issuing the camera code corresponding to the next sub-region at the time of crossing the boundary and being unoccupied to the robot is only exemplarily described, and is not specifically limited.
In addition, in the embodiment of the application, if it is determined that the number of robots in the next sub-area has reached the preset threshold at the boundary-crossing time, admitting another robot would exceed the number of robots the next sub-area can carry, and the TOF cameras deployed on robots in that sub-area would interfere with each other. In this case, the server may control the robot to stop moving. The server then monitors the number of robots in the next sub-region in real time; when that number falls below the preset threshold, the server triggers the robot to move and allocates to it an unoccupied camera code corresponding to the next sub-region, so that after reaching the next sub-region the robot determines the anti-interference parameter corresponding to the camera code and controls the TOF camera deployed on it to work based on that parameter.
In addition, in the embodiment of the application, when the server detects that the robot enters the next sub-area, the server recovers the camera code of the robot in the sub-area.
In an alternative implementation, when the robot moves to the area boundary, it sends the server a notification of reaching the area boundary. Upon receiving this notification, the server can determine that the robot is entering the next sub-area and can then recover the camera code the robot held in the area it is leaving.
In an optional recycling manner, the server may set a camera code of the robot in the region to an idle state. The recovery method is only exemplified here, and is not particularly limited.
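The server-side bookkeeping described in steps 302 and 303, including recycling, can be sketched as follows; this is a simplified illustration under the assumption of a fixed code pool per sub-region, not the patent's implementation:

```python
class SubRegionCodeTable:
    """Server-side sketch: each sub-region owns a pool of camera codes;
    a code is marked occupied when issued and idle when recycled, and a
    robot is only admitted while the robot count is below the threshold."""

    def __init__(self, codes_by_region: dict[str, list[str]], max_robots: int):
        self.max_robots = max_robots          # N cameras supported without interference
        self.idle = {r: set(c) for r, c in codes_by_region.items()}
        self.assigned = {r: {} for r in codes_by_region}  # robot_id -> camera code

    def robots_in(self, region: str) -> int:
        return len(self.assigned[region])

    def allocate(self, region: str, robot_id: str) -> str | None:
        """Issue an unoccupied code, or None if the region is full, in which
        case the server halts the robot until a slot frees up."""
        if self.robots_in(region) >= self.max_robots or not self.idle[region]:
            return None
        code = self.idle[region].pop()
        self.assigned[region][robot_id] = code
        return code

    def release(self, region: str, robot_id: str) -> None:
        """Recycle the robot's code when it leaves the region."""
        code = self.assigned[region].pop(robot_id)
        self.idle[region].add(code)

table = SubRegionCodeTable({"00": ["0001", "0010", "0011"],
                            "11": ["1101", "1110", "1111"]}, max_robots=3)
code = table.allocate("11", "robot-7")  # e.g. '1101'
table.release("11", "robot-7")          # robot-7 crossed into another sub-region
```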
Referring to fig. 5, fig. 5 is a flowchart illustrating a robot control method according to an exemplary embodiment of the present application, where the method is applicable to a robot and may include the following steps:
step 501: the robot reports the current position information of the robot to a server, so that the server predicts the boundary crossing time when the robot reaches the zone boundary of the sub-zone and the next sub-zone based on the reported current position information and the obtained motion information of the robot, and predicts and issues the unoccupied camera codes corresponding to the next sub-zone at the boundary crossing time when the number of robots in the next sub-zone at the boundary crossing time is determined to be less than a preset threshold value.
In an optional implementation manner, the robot detects whether the distance between the current position of the robot and the area boundary of the next area is within a preset range, and if the distance between the current position of the robot and the area boundary of the next area is within the preset range, the robot reports the current position information of the robot to the central server. In this way, the central server executes the robot control method after receiving the current position information reported by the robot.
In another alternative implementation, the robot may periodically report its current position. In this way, when receiving the current position reported by the robot, the central server may detect whether the path from the current position of the robot to the area boundary is within a preset range. And if the path from the current position of the robot to the area boundary is in a preset range, executing the robot control method.
The reporting period can be preset, and when the reporting period is set to be small enough, the robot can report the current position information to the central server in a near real-time mode.
Step 502: and the robot receives the camera code issued by the server.
Step 503: and the robot determines anti-interference parameters corresponding to the camera codes and controls a TOF camera on the robot to work based on the anti-interference parameters after the robot reaches the next sub-region.
Several ways of implementing step 503 are described below.
The first mode is as follows: the working mode of the TOF camera on the robot is a time division multiplexing mode. The time division multiplexing mode is as follows: different TOF cameras are enabled to emit light at different moments, so that the light emitted by the different TOF cameras is distinguished, and anti-interference is achieved.
When the working mode of the TOF camera on the robot is the time division multiplexing mode, the anti-interference parameter is the light-emitting time delay. Because the time division multiplexing method requires accurate control of each TOF camera's light-emitting moment, the central server must be clock-synchronized with the TOF cameras deployed on the robots.
In implementing step 503, the robot may determine the light-emitting time delay corresponding to the camera code assigned to the robot. Wherein different camera codes correspond to different lighting delays.
The robot may then send the light-emitting time delay to the TOF camera deployed on it, and the TOF camera emits light according to that delay. For example, if the TOF camera emits light periodically, then at each scheduled emission moment the camera waits for the light-emitting time delay before actually emitting.
For example, as shown in fig. 4, area 1 and area 2 are adjacent. There are two robots in area 1, robot 1 and robot 2; the camera code assigned to robot 1 is 0100, and the camera code assigned to robot 2 is 0111. In area 2 there is one robot, robot 3, which is assigned camera code 1011.
Assume that 0100 corresponds to a light emission delay of 5s, 0111 corresponds to a light emission delay of 10s, and 1011 corresponds to a light emission delay of 15 s. Assuming that the TOF cameras on the three robots all emit light every 30 seconds, in the case of no time delay, the TOF cameras on the three robots all emit light at 0s, 30s, 60s, and so on, and since the three TOF cameras emit light at the same time, interference is caused.
In the present application, the TOF camera corresponding to 0100 can emit light at light emission timings of 5s, 35s, 65s, and the like. The TOF camera corresponding to 0111 can emit light at the light emission timings of 10s, 40s, 70s, and so on. The TOF camera corresponding to 1011 can emit light at the light emission timings of 15s, 45s, 75s, and the like. Therefore, each TOF camera emits light at different time through different light-emitting time delays, and the problem of interference of multiple TOF cameras is solved.
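The schedule in this example can be reproduced with a short sketch; the delay table simply restates the values above:

```python
def emission_times(light_delay_s: float, period_s: float, cycles: int) -> list[float]:
    """Under time division multiplexing, every camera shares the same emission
    period but each camera code maps to a distinct light-emitting delay, so no
    two cameras emit at the same instant."""
    return [light_delay_s + k * period_s for k in range(cycles)]

DELAY_BY_CODE = {"0100": 5.0, "0111": 10.0, "1011": 15.0}  # seconds, from the example

for code, delay in DELAY_BY_CODE.items():
    print(code, emission_times(delay, period_s=30.0, cycles=3))
# 0100 [5.0, 35.0, 65.0]
# 0111 [10.0, 40.0, 70.0]
# 1011 [15.0, 45.0, 75.0]
```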
The second way is: the working mode of the TOF camera on the robot is a frequency division multiplexing mode. The frequency division multiplexing mode is as follows: different TOF cameras modulate light to be emitted using different modulation frequencies and emit the modulated light. Since the modulation frequencies of the light emitted by the respective TOF cameras are different, each TOF camera can identify whether the received light is reflected light of the light emitted by itself or light emitted by other TOF cameras based on the modulation frequencies of the light, thereby overcoming the problem of interference between the TOF cameras.
When the working mode of the TOF camera on the robot is the frequency division multiplexing mode, the anti-interference parameter is the modulation frequency.
In implementing step 503, the robot may determine the modulation frequency corresponding to the camera code assigned to it. Different camera codes correspond to different modulation frequencies.
The robot may then send the determined modulation frequency to the TOF camera deployed on it, so that the TOF camera emits light modulated at that frequency. Specifically, before emitting, the TOF camera modulates the light to be emitted with the determined modulation frequency and emits the modulated light.
The third mode is that: the operation mode of the TOF camera on the robot is a code division multiplexing mode. The code division multiplexing mode is as follows: and different TOF cameras adopt different code words to encode the laser pulse to be transmitted and transmit the encoded laser pulse. Because the codes of the laser pulses emitted by the TOF cameras are different, the TOF cameras can distinguish whether locally received light is reflected light of light emitted by the TOF cameras or light emitted by other TOF cameras based on the codes of the laser pulses, so that the interference problem of multiple TOF cameras can be overcome.
When the working mode of the TOF camera on the robot is the code division multiplexing mode, the anti-interference parameter may be the camera code itself.
In implementing step 503, the robot may send the camera code to the TOF camera on the robot. The TOF camera uses the camera code as the codeword for laser pulse coding, encodes the laser pulses to be transmitted according to that codeword, and emits the encoded laser pulses.
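Putting the three modes together, a robot-side dispatch from camera code to anti-interference parameter might look like the sketch below; the delay and frequency tables are illustrative assumptions, not values from the patent:

```python
from enum import Enum

class TofMode(Enum):
    TIME_DIVISION = "tdm"        # parameter: light-emitting delay
    FREQUENCY_DIVISION = "fdm"   # parameter: modulation frequency
    CODE_DIVISION = "cdm"        # parameter: the camera code itself

def anti_interference_parameter(camera_code: str, mode: TofMode):
    """Map the issued camera code to the parameter expected by the locally
    configured anti-interference method."""
    if mode is TofMode.TIME_DIVISION:
        delays_s = {"0100": 5.0, "0111": 10.0, "1011": 15.0}   # hypothetical table
        return delays_s[camera_code]
    if mode is TofMode.FREQUENCY_DIVISION:
        freqs_hz = {"0100": 10e6, "0111": 12e6, "1011": 14e6}  # hypothetical table
        return freqs_hz[camera_code]
    return camera_code  # code division: the code is the pulse-coding codeword

print(anti_interference_parameter("0111", TofMode.TIME_DIVISION))  # 10.0
print(anti_interference_parameter("1011", TofMode.CODE_DIVISION))  # '1011'
```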
As can be seen from the above description, the motion area of the robot is divided into a plurality of sub-areas, and each sub-area adopts the existing anti-interference mode. On the one hand, the server side ensures that the number of TOF cameras in each sub-area is smaller than the maximum number of TOF cameras supported by the existing anti-interference mode by controlling the motion of the robot, and ensures that a plurality of TOF cameras in each sub-area do not interfere with each other. On the other hand, different camera codes are configured for adjacent sub-regions, so that the TOF cameras of the adjacent sub-regions do not interfere with each other. Based on the two aspects, the interference resistance of the largest number of TOF cameras supported by the existing interference resistance method and the interference resistance of the TOF cameras in the adjacent areas can be realized in each sub-area, so that the problem of the interference of a large number of TOF cameras is solved.
Referring to fig. 6, fig. 6 is a hardware structure diagram of an electronic device according to an exemplary embodiment of the present application;
the electronic device includes: a communication interface 601, a processor 602, a machine-readable storage medium 603, and a bus 604; wherein the communication interface 601, the processor 602, and the machine-readable storage medium 603 communicate with each other via a bus 604. The processor 602 may perform the robot control method described above by reading and executing machine executable instructions in the machine readable storage medium 603 corresponding to the robot control logic.
The machine-readable storage medium 603 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be volatile memory, non-volatile memory, or a similar storage medium. In particular, the machine-readable storage medium 603 may be a RAM (Random Access Memory), a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., a compact disc or DVD), or a similar storage medium, or a combination thereof.
The electronic device may be the server or the robot, and the electronic device is only described as an example and is not particularly limited.
Referring to fig. 7, fig. 7 is a block diagram of a robot control apparatus according to an exemplary embodiment of the present application.
The device is applied to a server, a motion area where a robot provided with a TOF camera is located is divided into at least one sub-area, each sub-area corresponds to at least one preset camera code, and the camera codes corresponding to adjacent sub-areas are different; each subarea has a different camera code assigned to the robot in the subarea, and the device comprises:
the prediction unit 701 is configured to predict, based on current position information reported by the robot and obtained motion information of the robot, a boundary crossing time when the robot reaches a boundary between a current sub-area and a next sub-area;
a determining unit 702, configured to determine whether the number of robots in the next sub-area reaches a preset threshold at the time of the boundary crossing;
and the issuing unit 703 is configured to issue, if not, the unoccupied camera code corresponding to the next sub-region at the time of crossing the boundary to the robot, so that after the robot reaches the next sub-region, the anti-interference parameter corresponding to the camera code is determined, and the TOF camera deployed on the robot is controlled to work based on the anti-interference parameter.
Optionally, the issuing unit 703 is further configured to control the robot to stop moving if the robot is in motion, and, when the number of robots in the next sub-area is monitored to fall below the preset threshold, to trigger the robot to resume moving and allocate to it an unoccupied camera code corresponding to the next sub-area, so that the robot, after reaching the next sub-area, determines the anti-interference parameter that corresponds to the camera code and matches the locally configured anti-interference method, and controls the TOF camera deployed on the robot to work based on the anti-interference parameter.
Optionally, the issuing unit 703 is further configured to, when it is detected that the robot has entered the next sub-area, reclaim the camera code assigned to the robot in the sub-area it has just left.
Optionally, the size of the sub-area is related to a distance threshold beyond which no interference occurs between any two TOF cameras;
when the working mode of the TOF camera deployed on the robot is a time division multiplexing mode, the anti-interference parameter includes a light-emitting time delay;
when the working mode of the TOF camera deployed on the robot is a frequency division multiplexing mode, the anti-interference parameter includes a modulation frequency;
and when the working mode of the TOF camera deployed on the robot is a code division multiplexing mode, the anti-interference parameter includes a camera code.
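To make the correspondence concrete, a small sketch follows that maps one camera code to the parameter used by each working mode; the 0.5 ms delay step, the 20 MHz base frequency with 1 MHz spacing, and the 8-bit code word are assumed values for illustration only.

from enum import Enum

class Mode(Enum):
    TDM = "time division multiplexing"       # parameter: light-emitting time delay
    FDM = "frequency division multiplexing"  # parameter: modulation frequency
    CDM = "code division multiplexing"       # parameter: the camera code itself

def anti_interference_parameter(camera_code: int, mode: Mode):
    """Map one camera code to the parameter used by the active mode."""
    if mode is Mode.TDM:
        return camera_code * 0.5e-3      # assumed 0.5 ms emission-delay step
    if mode is Mode.FDM:
        return 20e6 + camera_code * 1e6  # assumed 20 MHz base, 1 MHz spacing
    return format(camera_code, "08b")    # code word for laser pulse coding

for code in (0, 1, 2):
    print(code, anti_interference_parameter(code, Mode.FDM))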
Optionally, the divided sub-regions are arranged in a honeycomb shape.
Optionally, the prediction unit 701 is further configured such that, if the current position information is reported by the robot upon determining that the distance from its current position to the area boundary is within a preset range, the robot control method is executed when the reported current position information is received; or, if the current position information is reported periodically by the robot, the robot control method is executed when it is determined that the distance from the robot's current position to the area boundary is within the preset range.
Optionally, the motion information includes: path information of the robot to a task target and a motion speed of the robot;
the prediction unit 701, when predicting the boundary crossing time at which the robot reaches the boundary between the current sub-area and the next sub-area based on the current position information reported by the robot and the acquired motion information of the robot, is configured to: determine the next sub-area the robot will reach based on the path information to the task target, the current position information, and the division of the motion area, and determine the area boundary between the current sub-area and the next sub-area; determine the distance from the current position to the area boundary based on the path information and the current position information; and determine the boundary crossing time at which the robot reaches the area boundary based on the distance and the motion speed of the robot.
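A short sketch of these three prediction steps may help: locate, along the planned path, the first segment that enters a new sub-area, accumulate the path distance up to it, and divide by the motion speed. The waypoint representation, the strip-shaped area_of lookup, and all names below are illustrative assumptions, not the application's own implementation.

import math

def area_of(point) -> int:
    """Illustrative stand-in for the motion-area division: maps a point
    to a sub-area id; here, 10 m-wide vertical strips (an assumption)."""
    return int(point[0] // 10)

def predict_boundary_crossing(path, speed: float, now: float):
    """Walk the planned path; the first segment whose endpoint lies in a
    new sub-area marks the boundary. Returns (next_area, crossing_time)."""
    dist = 0.0
    current = area_of(path[0])
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if area_of((x1, y1)) != current:
            # Approximation: treat the whole segment as pre-boundary travel.
            return area_of((x1, y1)), now + (dist + seg) / speed
        dist += seg
    return None, None  # the path never leaves the current sub-area

# Example: the path crosses from strip 0 into strip 1 at x = 10.
path = [(8.0, 0.0), (9.5, 0.0), (11.0, 0.0)]
print(predict_boundary_crossing(path, speed=1.5, now=0.0))  # (1, 2.0)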
Referring to Fig. 8, Fig. 8 is a block diagram of another robot control apparatus according to an exemplary embodiment of the present application.
The apparatus is applied to a robot equipped with a TOF camera. The motion area of the robot is divided into at least one sub-area; each sub-area corresponds to at least one preset camera code, and the camera codes corresponding to adjacent sub-areas are different; each robot within a sub-area is assigned a different camera code. The apparatus comprises:
a reporting unit 801, configured to report current position information of the robot to a server, so that the server predicts, based on the reported current position information and acquired motion information of the robot, a boundary crossing time at which the robot reaches a boundary between the current sub-area and a next sub-area, and, upon determining that the number of robots in the next sub-area does not reach a preset threshold at the boundary crossing time, issues a camera code corresponding to the next sub-area that is unoccupied at the boundary crossing time;
a receiving unit 802, configured to receive a camera code issued by the server;
and the determining unit 803, configured to determine the anti-interference parameter corresponding to the camera code and, after the robot reaches the next sub-area, control the TOF camera on the robot to work based on the anti-interference parameter.
Optionally, when the working mode of the TOF camera on the robot is the time division multiplexing mode, the anti-interference parameter includes a light-emitting time delay, and the server is clock-synchronized with the TOF cameras deployed on the robots;
the determining unit 803 is configured to determine the light-emitting time delay corresponding to the camera code, wherein different camera codes correspond to different light-emitting time delays, and to send the light-emitting time delay to the TOF camera deployed on the robot so that the TOF camera emits light according to the light-emitting time delay.
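Because the time division case depends on the server being clock-synchronized with the cameras, a brief sketch of slotted emission may clarify it: all cameras share a frame start on the common clock and offset their emission by the issued delay, so windows for different codes never overlap. The frame period and slot width below are assumed values.

FRAME_PERIOD_US = 10_000   # assumed 10 ms depth-frame period
SLOT_WIDTH_US = 1_000      # assumed 1 ms emission slot per camera code

def emission_window(camera_code: int, frame_index: int):
    """Start/end of the emission window for a camera in a given frame,
    measured in microseconds on the shared (synchronized) clock."""
    delay = camera_code * SLOT_WIDTH_US   # the issued light-emitting delay
    start = frame_index * FRAME_PERIOD_US + delay
    return start, start + SLOT_WIDTH_US

# Two cameras with different codes never emit at the same time.
a = emission_window(camera_code=0, frame_index=5)   # (50000, 51000)
b = emission_window(camera_code=1, frame_index=5)   # (51000, 52000)
assert a[1] <= b[0]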
Optionally, when the working mode of the TOF camera on the robot is the frequency division multiplexing mode, the anti-interference parameter includes a modulation frequency;
the determining unit 803 is configured to determine the modulation frequency corresponding to the camera code, wherein different camera codes correspond to different modulation frequencies, and to send the modulation frequency to the TOF camera deployed on the robot so that the TOF camera emits light modulated at the modulation frequency.
Optionally, when the working mode of the TOF camera on the robot is the code division multiplexing mode, the anti-interference parameter includes the camera code;
the determining unit 803 is configured to send the camera code to the TOF camera on the robot, so that the TOF camera takes the camera code as the code word for laser pulse coding, codes the laser pulses to be transmitted according to the code word, and transmits the coded laser pulses.
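For the code division case, the sketch below models the coded laser pulses as on-off keying by the camera code's bits and shows a receiver rejecting a foreign camera's pulses by correlation; the 8-bit word length and this simplified channel model are assumptions for illustration.

def encode_pulses(camera_code: int, n_bits: int = 8):
    """On-off keying model of the coded pulse train: the camera code,
    read as a binary word, decides which slots carry a pulse."""
    word = format(camera_code, "0{}b".format(n_bits))
    return [int(b) for b in word]

def correlate(received, camera_code: int, n_bits: int = 8) -> int:
    """Receiver side: correlate the incoming train against the local
    code word; a high score means the echo is from this camera."""
    local = encode_pulses(camera_code, n_bits)
    return sum(r * l for r, l in zip(received, local))

own = encode_pulses(0b10110010)
other = encode_pulses(0b01001101)
print(correlate(own, 0b10110010))    # 4: strong self-correlation
print(correlate(other, 0b10110010))  # 0: foreign pulses rejected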
For the specific implementation of the functions and roles of the units in the above apparatus, refer to the implementation of the corresponding steps in the above method; details are not repeated here.
Since the apparatus embodiments substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative: units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across multiple network nodes. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present application. Persons of ordinary skill in the art can understand and implement the solution without inventive effort.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (14)

1. A robot control method, applied to a server, wherein a motion area where a robot equipped with a TOF camera is located is divided into at least one sub-area; each sub-area corresponds to at least one preset camera code, and the camera codes corresponding to adjacent sub-areas are different; each robot within a sub-area is assigned a different camera code, the method comprising:
predicting, based on current position information reported by the robot and acquired motion information of the robot, a boundary crossing time at which the robot reaches a boundary between a current sub-area and a next sub-area;
determining whether the number of robots in the next sub-area reaches a preset threshold at the boundary crossing time;
and if not, issuing to the robot a camera code corresponding to the next sub-area that is unoccupied at the boundary crossing time, so that the robot, after reaching the next sub-area, determines an anti-interference parameter corresponding to the camera code and controls the TOF camera deployed on the robot to work based on the anti-interference parameter.
2. The method of claim 1, further comprising:
if so, controlling the robot to stop moving;
and when the number of robots in the next sub-area is monitored to fall below the preset threshold, triggering the robot to resume moving and allocating to the robot an unoccupied camera code corresponding to the next sub-area, so that the robot, after reaching the next sub-area, determines an anti-interference parameter that corresponds to the camera code and matches a locally configured anti-interference method, and controls the TOF camera deployed on the robot to work based on the anti-interference parameter.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
and when it is detected that the robot has entered the next sub-area, reclaiming the camera code assigned to the robot in the sub-area it has left.
4. The method of claim 1, wherein the size of the sub-area is related to a distance threshold beyond which no interference occurs between any two TOF cameras;
when the working mode of the TOF camera deployed on the robot is a time division multiplexing mode, the anti-interference parameter comprises a light-emitting time delay;
when the working mode of the TOF camera deployed on the robot is a frequency division multiplexing mode, the anti-interference parameter comprises a modulation frequency;
and when the working mode of the TOF camera deployed on the robot is a code division multiplexing mode, the anti-interference parameter comprises a camera code.
5. The method of claim 1, wherein the partitioned sub-regions are arranged in a honeycomb pattern.
6. The method of claim 1, wherein:
if the current position information is reported by the robot upon determining that the distance from its current position to the area boundary is within a preset range, the robot control method is executed when the reported current position information is received;
or,
if the current position information is reported periodically by the robot, the robot control method is executed when it is determined that the distance from the robot's current position to the area boundary is within the preset range.
7. The method of claim 1, wherein the motion information comprises: path information of the robot to a task target and a motion speed of the robot;
and predicting the boundary crossing time at which the robot reaches the boundary between the current sub-area and the next sub-area based on the current position information reported by the robot and the acquired motion information of the robot comprises:
determining the next sub-area the robot will reach based on the path information of the robot to the task target, the current position information, and the division of the motion area, and determining the area boundary between the current sub-area and the next sub-area;
determining a distance from the current position to the area boundary based on the path information of the robot to the task target and the current position information;
and determining the boundary crossing time at which the robot reaches the area boundary based on the distance and the motion speed of the robot.
8. A robot control method, characterized in that the method is applied to a robot equipped with a TOF camera; a motion area of the robot is divided into at least one sub-area; each sub-area corresponds to at least one preset camera code, and the camera codes corresponding to adjacent sub-areas are different; each robot within a sub-area is assigned a different camera code, the method comprising:
reporting current position information of the robot to a server, so that the server predicts, based on the reported current position information and acquired motion information of the robot, a boundary crossing time at which the robot reaches a boundary between the current sub-area and a next sub-area, and, upon determining that the number of robots in the next sub-area does not reach a preset threshold at the boundary crossing time, issues a camera code corresponding to the next sub-area that is unoccupied at the boundary crossing time;
receiving a camera code issued by the server;
and determining an anti-interference parameter corresponding to the camera code, and controlling the TOF camera on the robot to work based on the anti-interference parameter after the robot reaches the next sub-region.
9. The method of claim 8, wherein:
when the working mode of the TOF camera on the robot is a time division multiplexing mode, the anti-interference parameter comprises a light-emitting time delay, and the server is clock-synchronized with the TOF cameras deployed on the robots;
and determining the anti-interference parameter corresponding to the camera code, and controlling the TOF camera on the robot to work based on the anti-interference parameter after reaching the next sub-area, comprises:
determining the light-emitting time delay corresponding to the camera code, wherein different camera codes correspond to different light-emitting time delays;
and sending the light-emitting time delay to the TOF camera deployed on the robot, so that the TOF camera emits light according to the light-emitting time delay.
10. The method of claim 8, wherein:
when the working mode of the TOF camera on the robot is a frequency division multiplexing mode, the anti-interference parameter comprises a modulation frequency;
and determining the anti-interference parameter corresponding to the camera code, and controlling the TOF camera on the robot to work based on the anti-interference parameter after reaching the next sub-area, comprises:
determining the modulation frequency corresponding to the camera code, wherein different camera codes correspond to different modulation frequencies;
and sending the modulation frequency to the TOF camera deployed on the robot, so that the TOF camera emits light modulated at the modulation frequency.
11. The method of claim 8, wherein:
when the working mode of the TOF camera on the robot is a code division multiplexing mode, the anti-interference parameter comprises the camera code;
and determining the anti-interference parameter that corresponds to the camera code and matches the locally configured anti-interference method, and controlling the TOF camera on the robot to work based on the anti-interference parameter, comprises:
sending the camera code to the TOF camera on the robot, so that the TOF camera takes the camera code as a code word for laser pulse coding, codes the laser pulses to be transmitted according to the code word, and transmits the coded laser pulses.
12. A robot control apparatus, applied to a server, wherein a motion area where a robot equipped with a TOF camera is located is divided into at least one sub-area; each sub-area corresponds to at least one preset camera code, and the camera codes corresponding to adjacent sub-areas are different; each robot within a sub-area is assigned a different camera code, the apparatus comprising:
a prediction unit, configured to predict, based on current position information reported by the robot and acquired motion information of the robot, a boundary crossing time at which the robot reaches a boundary between a current sub-area and a next sub-area;
a determining unit, configured to determine whether the number of robots in the next sub-area reaches a preset threshold at the boundary crossing time;
and an issuing unit, configured to, if not, issue to the robot a camera code corresponding to the next sub-area that is unoccupied at the boundary crossing time, so that the robot, after reaching the next sub-area, determines an anti-interference parameter corresponding to the camera code and controls the TOF camera deployed on the robot to work based on the anti-interference parameter.
13. A robot control apparatus, characterized in that the apparatus is applied to a robot equipped with a TOF camera; a motion area of the robot is divided into at least one sub-area; each sub-area corresponds to at least one preset camera code, and the camera codes corresponding to adjacent sub-areas are different; each robot within a sub-area is assigned a different camera code, the apparatus comprising:
a reporting unit, configured to report current position information of the robot to a server, so that the server predicts, based on the reported current position information and acquired motion information of the robot, a boundary crossing time at which the robot reaches a boundary between the current sub-area and a next sub-area, and, upon determining that the number of robots in the next sub-area does not reach a preset threshold at the boundary crossing time, issues a camera code corresponding to the next sub-area that is unoccupied at the boundary crossing time;
a receiving unit, configured to receive the camera code issued by the server;
and a determining unit, configured to determine an anti-interference parameter corresponding to the camera code, and control the TOF camera on the robot to work based on the anti-interference parameter after the robot reaches the next sub-area.
14. An electronic device, comprising a readable storage medium and a processor;
wherein the readable storage medium is configured to store machine executable instructions;
the processor is configured to read the machine executable instructions on the readable storage medium and execute the instructions to implement the steps of the method of any one of claims 1-11.
CN202210325814.7A 2022-03-29 2022-03-29 Robot control method, device and equipment Active CN114643580B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210325814.7A CN114643580B (en) 2022-03-29 2022-03-29 Robot control method, device and equipment

Publications (2)

Publication Number Publication Date
CN114643580A true CN114643580A (en) 2022-06-21
CN114643580B CN114643580B (en) 2023-10-27

Family

ID=81995523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210325814.7A Active CN114643580B (en) 2022-03-29 2022-03-29 Robot control method, device and equipment

Country Status (1)

Country Link
CN (1) CN114643580B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102638692A (en) * 2011-01-31 2012-08-15 微软公司 Reducing interference between multiple infra-red depth cameras
CN106461783A (en) * 2014-06-20 2017-02-22 高通股份有限公司 Automatic multiple depth cameras synchronization using time sharing
CN106683130A (en) * 2015-11-11 2017-05-17 杭州海康威视数字技术股份有限公司 Depth image acquisition method and device
CN109459738A (en) * 2018-06-06 2019-03-12 杭州艾芯智能科技有限公司 A kind of more TOF cameras mutually avoid the method and system of interference
CN108718453A (en) * 2018-06-15 2018-10-30 合肥工业大学 A kind of subregion network-building method under highly dense WLAN scenes
US20220091224A1 (en) * 2019-02-01 2022-03-24 Terabee Sas A Spatial Sensor Synchronization System Using a Time-Division Multiple Access Communication System

Also Published As

Publication number Publication date
CN114643580B (en) 2023-10-27

Similar Documents

Publication Publication Date Title
US20200293063A1 (en) Travel planning system, travel planning method, and non-transitory computer readable medium
US11906971B2 (en) Spatiotemporal robotic navigation
JP6756849B2 (en) Emergency stop control methods and devices for a large number of robots
EP2974553B1 (en) Systems and methods for self commissioning and locating lighting system
US10046458B2 (en) System of confining robot movement actions and a method thereof
US8346468B2 (en) Method and apparatus for collision avoidance
JP3910349B2 (en) Directional antenna control method and apparatus
EP3816888A2 (en) Travel control device, travel control method, travel control system and computer program
KR20180096703A (en) METHOD AND APPARATUS FOR RETURN TO ROBOT
CN105446343A (en) Robot scheduling method and apparatus
US11586221B2 (en) Travel control device, travel control method and computer program
JP2001199505A (en) Plural independent highly functional pickers having dynamic route assignment in automated data storage library
CN105376083A (en) Energy-saving control method, management server and network equipment
WO2021023377A1 (en) Technique for updating a positioning configuration
WO2019141219A1 (en) Method and system for scheduling multiple mobile robots
US11553452B2 (en) Positioning control method and device, positioning system and storage medium
CN106257561B (en) Parking lot sensor, control method thereof and parking system
EP3625739A1 Apparatus and method for warehouse zoning
CN109683556B (en) Cooperative work control method and device for self-moving equipment and storage medium
CN108108850A (en) A kind of telecontrol equipment and its pathfinding control method and the device with store function
Zhang et al. Increasing traffic flows with DSRC technology: Field trials and performance evaluation
CN114643580A (en) Robot control method, device and equipment
CN114348516B (en) Material box checking method, device, scheduling equipment, warehousing system and storage medium
CN112702431B (en) Industrial mobile equipment positioning system and method based on distributed edge calculation
CN111243327B (en) Vehicle guiding method and system based on computer monitoring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Hikvision Robot Co.,Ltd.

Address before: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: HANGZHOU HIKROBOT TECHNOLOGY Co.,Ltd.

GR01 Patent grant