CN114373148A - Cloud robot mapping method, system, equipment and storage medium - Google Patents


Info

Publication number: CN114373148A
Application number: CN202111602631.7A
Authority: CN (China)
Prior art keywords: robot, data, cloud server, map, real
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 陈莹, 马世奎, 董文锋
Current Assignee: Cloudminds Robotics Co Ltd
Original Assignee: Cloudminds Robotics Co Ltd
Application filed by Cloudminds Robotics Co Ltd
Priority application: CN202111602631.7A
Publication: CN114373148A
Related PCT application: PCT/CN2022/106943 (WO2023115927A1)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval of structured data, e.g. relational data
    • G06F 16/23: Updating
    • G06F 16/29: Geographical information databases
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/02: Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L 67/025: Protocols based on web technology [HTTP] for remote control or remote monitoring of applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Manipulator (AREA)

Abstract

The embodiments of the present application provide a cloud robot mapping method, system, device, and storage medium. In the cloud robot mapping system, the robot can interact with the terminal device through the cloud server, so that the terminal device can receive real-time video data sent by a robot deployed in the space to be mapped, output the real-time video data, and, in response to a robot control operation initiated according to the real-time video data, send a corresponding control instruction to the robot through the cloud server. The robot can then collect mapping data according to the control instruction and send the collected mapping data to the cloud server, so that the cloud server can create a map of the target space from the mapping data. In this way, the robot can be perceived and controlled remotely through the terminal device, without requiring personnel to travel to the space to be mapped to control the scanning, which saves labor cost.

Description

Cloud robot mapping method, system, equipment and storage medium
Technical Field
The embodiments of the present application relate to the technical field of robots, and in particular to a cloud robot mapping method, system, device, and storage medium.
Background
In the field of robotics, in order for a robot to move autonomously in an actual physical space, a map of the physical space needs to be created in advance by a map construction method. For example, a SLAM map of the physical space may be created with a SLAM (simultaneous localization and mapping) method for the robot to use.
In the prior art, after a robot is deployed to a service place (i.e., a physical space), the robot is usually controlled manually on site to perform the scanning and mapping operations. The labor cost of this approach is high, so a solution is urgently needed.
Disclosure of Invention
The embodiments of the present application provide a cloud robot mapping method, system, device, and storage medium, which are used to perceive and control the robot remotely and to reduce labor cost.
An embodiment of the present application provides a cloud robot mapping method applicable to a terminal device, comprising the following steps: determining a robot deployed in a target space to be mapped; receiving real-time video data of the target space sent in real time by the robot through a cloud server, and outputting the real-time video data; and, in response to a robot control operation initiated according to the real-time video data, sending a corresponding control instruction to the robot through the cloud server, so as to remotely control the actions by which the robot acquires mapping data in the target space.
Further optionally, sending, through the cloud server, a corresponding control instruction to the robot in response to a robot control operation initiated according to the real-time video data includes: in response to a motion control operation in any target direction initiated according to the real-time video data, sending a motion control instruction to the robot through the cloud server; the motion control instruction includes: a forward command, a reverse command, a turn command, or a stop command in the target direction.
Further optionally, the cloud robot mapping method further includes: acquiring a map of the target space from the cloud server, the map being generated by the cloud server according to the mapping data acquired by the robot; updating the map in response to an editing operation on the map; and sending the updated map to the cloud server.
Further optionally, updating the map in response to an editing operation on the map includes at least one of the following: updating obstacle information at the corresponding position on the map in response to an obstacle updating operation on the map, the obstacle updating operation including an obstacle deletion operation, an obstacle addition operation, or an obstacle movement operation; marking the identifier of a point of interest at the corresponding position on the map in response to a marking operation for any point of interest on the map; drawing a virtual wall at the corresponding position on the map in response to an operation of setting a virtual wall at any position on the map; and adding an area mark for an area on the map in response to a marking operation for that area.
Further optionally, the cloud robot mapping method further includes: acquiring mapping data of the target space from the cloud server, the mapping data being collected in the target space by the robot and uploaded to the cloud server; and displaying the mapping data on the same screen as the real-time video data of the target space, so that the robot can be controlled to move in the target space according to the comparison of the real-time video data and the mapping data.
Further optionally, the cloud robot mapping method further includes: receiving device monitoring data sent by the robot through the cloud server, the device monitoring data including at least one of battery data, network status data, and abnormal event data of the robot; and outputting an operation and maintenance prompt message when the device monitoring data indicates that the robot needs operation and maintenance processing.
An embodiment of the present application further provides a cloud robot mapping method applicable to the robot, comprising the following steps: sending collected real-time video data of a target space to a terminal device in real time through a cloud server; receiving, through the cloud server, a control instruction sent by the terminal device, the control instruction being sent according to the real-time video data; acquiring mapping data in the target space according to the control instruction; and sending the collected mapping data to the cloud server, so that the cloud server creates a map of the target space according to the mapping data.
Further optionally, the mapping data includes at least one of: pose data, ranging data, odometry data, laser point cloud data, collision data, image data, and fall detection data of the robot.
Further optionally, the cloud robot mapping method further includes: acquiring the robot's own device monitoring data, the device monitoring data including at least one of battery data, network status data, and abnormal event data of the robot; and sending the device monitoring data to the terminal device through the cloud server, so that operation and maintenance processing can be performed on the robot according to the device monitoring data.
Further optionally, the cloud robot mapping method further includes: during the collection of mapping data in the target space, sending the motion trajectory of the robot in the target space to the cloud server, so that the cloud server generates, according to the motion trajectory, the motion trajectory required by the robot to execute a task in the target space.
An embodiment of the present application further provides a cloud robot mapping system, including: a robot, a cloud server, and a terminal device, the terminal device establishing a communication connection with the robot through the cloud server. The terminal device is mainly used to: receive real-time video data of a target space sent in real time by the robot through the cloud server, and output the real-time video data; and, in response to a robot control operation initiated according to the real-time video data, send a corresponding control instruction to the robot through the cloud server. The robot is mainly used to: send the collected real-time video data of the target space to the terminal device in real time through the cloud server; receive, through the cloud server, the control instruction sent by the terminal device; acquire mapping data in the target space according to the control instruction; and send the collected mapping data to the cloud server, so that the cloud server creates a map of the target space according to the mapping data.
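To make the division of labour concrete, the following minimal sketch models the cloud server's role as a message forwarder keyed by terminal-robot bindings. All class, method, and field names are illustrative assumptions; the patent does not prescribe any concrete implementation.

```python
class CloudRelay:
    """Illustrative cloud-server core: hold terminal-robot bindings and
    forward messages in both directions (a sketch, not the patented design)."""

    def __init__(self):
        self.bindings = {}   # terminal_id -> robot_id
        self.outboxes = {}   # device_id -> queued messages awaiting delivery

    def bind(self, terminal_id, robot_id):
        # Establish the binding relationship described in the text.
        self.bindings[terminal_id] = robot_id

    def from_terminal(self, terminal_id, control_instruction):
        # Forward a control instruction to the bound robot.
        robot_id = self.bindings[terminal_id]
        self.outboxes.setdefault(robot_id, []).append(control_instruction)

    def from_robot(self, robot_id, message):
        # Forward video frames (or monitoring data) to every bound terminal;
        # mapping data would instead feed the server's map-creation step.
        for terminal_id, bound_robot in self.bindings.items():
            if bound_robot == robot_id:
                self.outboxes.setdefault(terminal_id, []).append(message)
```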
An embodiment of the present application further provides a terminal device, including: a memory, a processor, a communication component, and a display component; the memory is configured to store one or more computer instructions, and the processor is configured to execute the one or more computer instructions to perform the steps of the methods described in the embodiments of the present application.
An embodiment of the present application further provides a robot device, including: a memory, a processor, and a communication component; the memory is configured to store one or more computer instructions, and the processor is configured to execute the one or more computer instructions to perform the steps of the methods described in the embodiments of the present application.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of the methods described in the embodiments of the present application.
With the cloud robot mapping method, system, device, and storage medium, the robot can interact with the terminal device through the cloud server, so that the terminal device can receive real-time video data sent by a robot deployed in the space to be mapped, output the real-time video data, and, in response to a robot control operation initiated according to the real-time video data, send a corresponding control instruction to the robot through the cloud server. The robot can then collect mapping data according to the control instruction and send it to the cloud server, so that the cloud server can create a map of the target space from the mapping data. In this way, the robot can be perceived and controlled remotely through the terminal device, without requiring personnel to travel to the space to be mapped to control the scanning, which saves labor cost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a cloud robot mapping system according to an exemplary embodiment of the present application;
fig. 2 is a schematic flowchart of a terminal-device-side cloud robot mapping method according to an exemplary embodiment of the present application;
fig. 3 is a schematic flowchart of a robot-device-side cloud robot mapping method according to another exemplary embodiment of the present application;
fig. 4 is a schematic diagram of a terminal device according to an exemplary embodiment of the present application;
fig. 5 is a schematic diagram of a robot apparatus provided in an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the prior art, after a robot is deployed in a service place (i.e. a physical space), the robot is usually controlled manually to perform scanning and mapping operations, which has a high labor cost. In view of the above technical problems, in some embodiments of the present application, a solution is provided, and the technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a cloud robot mapping system according to an exemplary embodiment of the present disclosure, and as shown in fig. 1, a cloud robot mapping system 100 includes: terminal device 10, cloud server 20, and robot 30.
The terminal device 10 may be a device that is held by a robot administrator and is capable of performing remote communication, such as a smart phone, a smart watch, a tablet computer, a laptop computer, and an intelligent wearable device. The terminal device 10 may have a software program running thereon for managing the robot or a browser running thereon for accessing a robot management page.
The cloud server 20 may be implemented as a cloud host, a virtual data center in the cloud, an elastic compute instance in the cloud, or the like, which is not limited in this embodiment. The cloud server 20 mainly includes a processor, a hard disk, a memory, a system bus, and so on, similar to a general computer architecture, and is not described in detail here.
In the mapping system 100 of the cloud robot, wireless communication connections can be established between the terminal device 10 and the cloud server 20, and between the robot 30 and the cloud server 20, and the specific communication connection mode can be determined according to different application scenarios. In some embodiments, the wireless communication connection may be implemented based on a Virtual Private Network (VPN) to ensure communication security.
The robot 30 is deployed in a target space to be mapped, which may be any space in the real world in which the robot can provide services, such as a library, a restaurant, a hotel, or a residence. Before providing services, the robot 30 may perform a scanning operation in advance to construct a map of the target space.
After the robot 30 is deployed, the terminal device 10 may establish a correspondence with the robot 30 in order to facilitate remote control of the scanning operation of the robot 30. The robot manager may first obtain an identifier of the robot 30, such as an account name, an ID (identity document) number, an IP (Internet Protocol) address, or a MAC (Media Access Control) address; next, the identifier is input into the terminal device 10, and a request to establish a binding relationship with the robot 30 is sent to the cloud server 20 through the terminal device 10. After receiving the request, the cloud server 20 may establish a binding relationship between the terminal device 10 and the robot 30, and forward communication data between the terminal device 10 and the robot 30 in the subsequent communication process.
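As a minimal sketch of this binding flow, a terminal device might request the binding over HTTP as below. The endpoint path, field names, and token handling are hypothetical; the patent does not specify an API.

```python
import requests  # third-party HTTP client

CLOUD_SERVER = "https://cloud.example.com"  # hypothetical server address

def request_binding(terminal_id: str, robot_id: str, auth_token: str) -> bool:
    """Ask the cloud server to bind this terminal to a robot. robot_id may
    be an account name, ID number, IP address, or MAC address, as above."""
    resp = requests.post(
        f"{CLOUD_SERVER}/api/bindings",            # hypothetical endpoint
        json={"terminal_id": terminal_id, "robot_id": robot_id},
        headers={"Authorization": f"Bearer {auth_token}"},
        timeout=10,
    )
    return resp.status_code == 200  # once bound, the server relays traffic
```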
In the cloud robot mapping system 100, the robot 30 may collect real-time video data of a target space, and may send the collected real-time video data of the target space to the terminal device 10 in real time through the cloud server 20. Wherein the real-time video data may be captured by at least one image capture device on the robot 30. The real-time video data may comprise successive frame images captured by at least one image capture device.
Correspondingly, the terminal device 10 may receive real-time video data of the target space sent by the robot 30 in real time through the cloud server 20, and output the real-time video data through the display component and/or the audio component. By the embodiment, a robot manager can observe real-time video data collected by the robot 30 in real time by using the terminal device 10, and can remotely sense the environment where the robot is located so as to control the motion route of the robot in the target space more accurately.
Based on the above steps, the robot manager may initiate a control operation for the robot 30 according to the real-time video data output by the terminal device 10. In response to the control operation, the terminal device 10 may send a corresponding control instruction to the robot 30 through the cloud server 20, so as to remotely control the actions by which the robot acquires mapping data in the target space.
For example, suppose the robot needs to move to position A to collect data. The robot manager learns from the real-time video data output by the terminal device 10 that the robot is currently at position B, and can issue a series of control instructions to move the robot from position B to position A to collect data. The series of control instructions is forwarded to the robot 30 via the cloud server 20. In this way, the robot can be remotely perceived and controlled according to the real-time video data.
Correspondingly, the robot 30 may receive a control instruction sent by the terminal device 10 through the cloud server 20. After receiving the control instruction, the robot 30 may collect mapping data in the target space according to the control instruction, and send the collected mapping data to the cloud server 20.
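The robot-side behaviour described above amounts to a relay loop: stream video up, apply control instructions coming down, and upload mapping data as it is collected. A schematic sketch in which cloud_link, sensors, and drive are assumed interfaces standing in for real transport and hardware:

```python
import queue
import threading

def robot_main_loop(cloud_link, sensors, drive):
    """Schematic loop for the robot 30; all three arguments are
    hypothetical interfaces, not part of the patent."""
    commands = queue.Queue()

    # Background thread: enqueue control instructions forwarded by the cloud.
    def pump_commands():
        for msg in cloud_link.receive():
            commands.put(msg)
    threading.Thread(target=pump_commands, daemon=True).start()

    while True:
        # 1. Stream the latest camera frame to the terminal via the cloud.
        cloud_link.send_video(sensors.capture_frame())

        # 2. Execute any pending motion command from the terminal.
        try:
            drive.execute(commands.get_nowait())  # forward/reverse/turn/stop
        except queue.Empty:
            pass

        # 3. Upload freshly collected mapping data to the cloud server.
        cloud_link.send_mapping_data(sensors.collect_mapping_data())
```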
The mapping data refers to the data used to construct the map corresponding to the target space, and includes but is not limited to: at least one of pose data, ranging data, odometry data, laser point cloud data, collision data, image data, and fall detection data of the robot.
The laser point cloud data may be collected by a lidar on the robot 30; the pose data of the robot may be collected by a pose sensor on the robot 30; the ranging data may be collected by an ultrasonic sensor or an infrared sensor on the robot 30; the odometry data may be collected by an odometer on the robot 30; the collision data may be collected by an anti-collision sensor on the robot 30; and the fall detection data may be collected by a fall-protection sensor on the robot 30.
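One way to picture a mapping-data sample is as a record with one field per sensor listed above. The field names and types below are illustrative only:

```python
from dataclasses import dataclass, field
from typing import Optional, List, Tuple

@dataclass
class MappingData:
    """Illustrative container for one mapping-data sample."""
    pose: Optional[Tuple[float, float, float]] = None  # (x, y, theta), pose sensor
    ranging_m: Optional[float] = None        # ultrasonic / infrared distance
    odometry_m: Optional[float] = None       # accumulated distance, odometer
    laser_points: List[Tuple[float, float]] = field(default_factory=list)  # lidar
    collision: bool = False                  # anti-collision sensor triggered
    image: Optional[bytes] = None            # encoded camera frame
    fall_detected: bool = False              # fall-protection sensor triggered
```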
Correspondingly, after receiving the mapping data, the cloud server 20 may create a map of the target space according to the mapping data. The map may be implemented as a grid map or a topological map, and the like, which is not limited in this embodiment.
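For the grid-map case, map creation on the cloud server can be pictured as marking lidar returns into an occupancy grid. A deliberately simplified sketch reusing the illustrative MappingData fields above; it assumes poses are already globally consistent, which a real SLAM back end would have to estimate:

```python
import math

def build_grid_map(samples, resolution=0.05, size=1000):
    """Mark laser points from posed samples into a size x size occupancy
    grid at `resolution` metres per cell (sketch: no scan matching,
    no loop closure, fixed world bounds)."""
    grid = [[0] * size for _ in range(size)]
    origin = size // 2  # world (0, 0) sits at the grid centre
    for s in samples:
        x0, y0, theta = s.pose
        for px, py in s.laser_points:  # points in the robot frame
            # Transform each point into the world frame using the pose.
            wx = x0 + px * math.cos(theta) - py * math.sin(theta)
            wy = y0 + px * math.sin(theta) + py * math.cos(theta)
            row = origin + int(wy / resolution)
            col = origin + int(wx / resolution)
            if 0 <= row < size and 0 <= col < size:
                grid[row][col] = 1  # mark the cell as occupied
    return grid
```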
In this embodiment, the robot can interact with the terminal device through the cloud server, so that the terminal device can receive real-time video data sent by the robot deployed in the space to be mapped, output the real-time video data, and, in response to a robot control operation initiated according to the real-time video data, send a corresponding control instruction to the robot through the cloud server. The robot can then collect mapping data according to the control instruction and send it to the cloud server, so that the cloud server can create a map of the target space from the mapping data. In this way, the robot can be perceived and controlled remotely through the terminal device, without requiring personnel to travel to the space to be mapped to control the scanning, which saves labor cost.
In some optional embodiments, when sending a corresponding control instruction to the robot through the cloud server 20 in response to a robot control operation initiated according to the real-time video data, the terminal device 10 may, in response to a motion control operation in any target direction initiated according to the real-time video data, send a motion control instruction to the robot 30 through the cloud server 20, so that the robot 30 moves and collects mapping data according to the motion control instruction. Taking the robot's world coordinate system as an example (horizontal axis x, vertical axis y), the target direction may include at least one of the following: the positive x direction, the positive y direction, the negative x direction, the negative y direction, the 45° direction between positive x and positive y, the 45° direction between positive x and negative y, the 45° direction between negative x and negative y, and the 45° direction between negative x and positive y. The motion control instruction includes: a forward command, a reverse command, a turn command, or a stop command in the target direction.
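The motion-control vocabulary above maps naturally onto a small command type. The patent fixes the command kinds (forward, reverse, turn, stop) and the eight target directions, but not their encoding; the names and angle convention below are assumptions:

```python
from enum import Enum

class Command(Enum):
    FORWARD = "forward"
    REVERSE = "reverse"
    TURN = "turn"
    STOP = "stop"

# The eight target directions in the robot's world frame, as angles in
# degrees from the positive x-axis; the diagonals are the 45-degree
# directions between the adjacent axes listed in the text.
TARGET_DIRECTIONS = {
    "+x": 0, "+x+y": 45, "+y": 90, "-x+y": 135,
    "-x": 180, "-x-y": 225, "-y": 270, "+x-y": 315,
}

def make_motion_instruction(cmd: Command, direction: str) -> dict:
    """Build an illustrative motion-control message for the cloud server
    to forward to the robot."""
    return {"command": cmd.value, "heading_deg": TARGET_DIRECTIONS[direction]}
```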
In some optional embodiments, during the process of acquiring the mapping data in the target space by the robot 30, the robot 30 may send the motion trajectory of the robot 30 in the target space to the cloud server 20.
Correspondingly, after receiving the motion trajectories, the cloud server 20 may generate from them the motion trajectory required by the robot to execute a task in the target space. For example, when the cloud robot mapping system is applied to a hotel (the target space is the hotel), the cloud server may generate the motion trajectory J4 required by the robot 30 to execute a task in the hotel from the motion trajectory J1 from room F1 to room F2, the motion trajectory J2 from room F2 to room F3, and the motion trajectory J3 from room F1 to room F3. Based on this, in a subsequent task execution scene, a cleaning robot executing a cleaning task can clean rooms F1, F2, and F3 along the motion trajectory J4, and a meal delivery robot executing a meal delivery task can serve rooms F1, F2, and F3 along the motion trajectory J4.
In this way, the motion trajectory required by the robot to execute tasks in the target space is generated in advance, so that the robot can execute tasks along the trajectory, which improves the robot's task execution efficiency.
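The trajectory-generation step can be read as stitching recorded segments into a single task trajectory. A toy sketch under that reading, with the segment representation and joining rule assumed (a real system would also optimise the visiting order):

```python
def merge_trajectories(segments):
    """Concatenate recorded trajectory segments (lists of (x, y) waypoints)
    into one task trajectory, dropping duplicated junction points.
    Toy version of deriving J4 from J1, J2, J3."""
    merged = []
    for seg in segments:
        start = 1 if merged and merged[-1] == seg[0] else 0
        merged.extend(seg[start:])
    return merged

# Example: J1 goes from room F1 to F2, J2 from F2 to F3; the merged J4
# then visits F1, F2, and F3 in one pass.
j1 = [(0.0, 0.0), (5.0, 0.0)]
j2 = [(5.0, 0.0), (5.0, 4.0)]
j4 = merge_trajectories([j1, j2])  # [(0.0, 0.0), (5.0, 0.0), (5.0, 4.0)]
```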
In some optional embodiments, after the cloud server 20 creates the map of the target space according to the mapping data collected by the robot 30, the terminal device 10 may obtain the map of the target space from the cloud server 20. Thereafter, the relevant personnel can edit the map. Accordingly, the terminal device 10 can update the map in response to editing operations on the map and send the updated map to the cloud server 20.
Optionally, after the terminal device 10 acquires the map, the map may be displayed on the same screen as the real-time video data of the target space; for example, the real-time video data is shown on the left side and the map on the right side. The personnel editing the map can then edit it more intuitively by comparing the map against the real-time video data.
In some optional embodiments, the "updating the map in response to the editing operation on the map" described in the foregoing embodiments includes at least one of:
1) In response to an obstacle updating operation on the map, obstacle information is updated at the corresponding position on the map. The obstacle updating operation includes: an obstacle deletion operation, an obstacle addition operation, or an obstacle movement operation. Taking the obstacle addition operation as an example: since the robot may have sensor blind areas while collecting the mapping data, some obstacles in the target space may be missing from the map. A person editing the map can spot such missing obstacles from the real-time video data and add them, and the terminal device 10 generates the obstacle at the corresponding position on the map in response to the operation.
2) In response to a marking operation for any point of interest on the map, the identifier of the point of interest is marked at the corresponding position on the map. For example, in response to an entertainment point-of-interest marking operation, the positions on the map corresponding to the television and the gaming machine may be marked with an "entertainment" identifier.
3) In response to an operation of setting a virtual wall at any position on the map, the virtual wall is drawn at the corresponding position on the map. For example, when the target space is a factory, the generated map may contain unsafe areas or areas that must not be entered; the editor can then set a virtual wall around the unsafe area on the map according to the actual situation, and the terminal device 10 draws the virtual wall in that area in response to the operation.
4) In response to a marking operation for any area on the map, an area mark is added to that area on the map. For example, in a scenario where the target space is a large hotel, the terminal device 10 may add a "dining area" mark to area A in the map in response to a marking operation for area A, and add a "rest area" mark to area B in response to a marking operation for area B.
With this same-screen display, the personnel editing the map can compare the map against the actual conditions of the target space more intuitively and edit the map remotely in the various ways above to optimize it, which improves the accuracy and efficiency of map editing.
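The four edit types above can be modelled as a single dispatch over an edit record that the terminal applies locally before uploading the updated map. The map representation and edit schema here are assumptions for illustration:

```python
def apply_edit(map_doc: dict, edit: dict) -> dict:
    """Apply one terminal-side edit to an in-memory map document with
    illustrative keys: 'obstacles', 'pois', 'virtual_walls', 'area_marks'."""
    kind = edit["kind"]
    if kind == "obstacle_add":
        map_doc["obstacles"].append(edit["position"])
    elif kind == "obstacle_delete":
        map_doc["obstacles"].remove(edit["position"])
    elif kind == "obstacle_move":
        map_doc["obstacles"].remove(edit["from"])
        map_doc["obstacles"].append(edit["to"])
    elif kind == "poi_mark":
        map_doc["pois"][edit["name"]] = edit["position"]       # e.g. "entertainment"
    elif kind == "virtual_wall":
        map_doc["virtual_walls"].append(edit["polyline"])      # forbidden boundary
    elif kind == "area_mark":
        map_doc["area_marks"][edit["label"]] = edit["region"]  # e.g. "dining area"
    return map_doc  # the updated map is then sent back to the cloud server
```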
In some optional embodiments, in order to improve the mapping quality of the target space, the robot manager may control the robot to move in the target space according to the comparison result of the real-time video data and the mapping data. As will be further explained below.
The terminal device 10 may obtain the mapping data of the target space from the cloud server 20, and may display the mapping data on the same screen as the real-time video data of the target space. For example, real-time video data is shown on the left side and mapping data is shown on the right side. Furthermore, the robot manager can control the robot to move in the target space according to the comparison result of the real-time video data and the mapping data.
For example, suppose the real-time video data shows the robot moving straight along path L, and an obstacle A on the left side of path L lies in the blind area of the image capture device, so the robot cannot film it; that is, obstacle A cannot be perceived through the real-time video data and would be omitted from the generated map of the target space. The ranging data in the mapping data, however, may indicate that obstacle A exists on the robot's left side. By comparing the two kinds of data, the operator can steer the robot to turn left so that the image capture device can film obstacle A. In this way, comparing the real-time video data with the mapping data allows the robot's motion to be controlled more accurately and reduces the probability of the robot collecting erroneous mapping data.
In some scenarios, while collecting the mapping data, the robot 30 may run into insufficient power, downtime, network abnormality, and similar conditions that interrupt the mapping process. To ensure that mapping completes successfully, in some embodiments operation and maintenance can be performed on the robot 30 remotely, as exemplified below.
Optionally, the robot 30 may obtain its own device monitoring data, where the device monitoring data includes at least one of battery data, network status data, and abnormal event data of the robot 30. The abnormal event data may cover situations such as the robot falling, the robot colliding, damage to moving parts, and system crashes. The network status data may include network latency, network packet loss rate, network jitter, and the like. The battery data may include battery life, battery capacity, current charge, and the like.
After the device monitoring data is acquired, the device monitoring data may be sent to the terminal device 10 through the cloud server 20. Correspondingly, after the terminal device 10 receives the device monitoring data, the operation and maintenance prompting message may be output when the device monitoring data indicates that the robot 30 needs operation and maintenance processing.
Alternatively, the terminal device 10 may compare the device monitoring data with preset monitoring thresholds, or determine whether the robot 30 needs operation and maintenance processing according to other preset rules or a machine-learned prediction model.
Alternatively, when the device monitoring data indicates that the robot 30 needs operation and maintenance processing, the terminal device 10 may output an operation and maintenance prompting message in the form of a voice prompt, a vibration prompt, or a text prompt.
Taking the battery data as an example, suppose the battery level threshold is 10%. If the current battery level of the robot 30 is 9%, which is below the threshold, it can be determined that the robot 30 needs operation and maintenance processing, and the terminal device 10 may output the prompt "power is insufficient, please charge in time".
Taking the network status data as an example, suppose the network delay threshold is 30 ms. If the network delay of the robot 30 is 31 ms, which exceeds the network delay threshold, it can be determined that the robot 30 needs operation and maintenance processing, and the terminal device 10 may output the prompt "the network delay is high, please process in time".
Taking the abnormal event data as an example, suppose the preset rule is that operation and maintenance is requested whenever the system goes down. If the system of the robot 30 goes down, it can be determined that the robot 30 needs operation and maintenance processing, and the terminal device 10 may output the prompt "the system is down, please process in time".
In this way, operation and maintenance personnel do not need to inspect the robot 30 on site: the state of the robot 30 can be perceived in depth through its device monitoring data, and operation and maintenance can be performed on it in time, which simplifies the operation and maintenance process and improves its efficiency.
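The threshold checks in the examples above can be collected into one routine. The threshold values follow the worked examples; the field names and prompt strings are illustrative:

```python
def check_operation_maintenance(monitoring: dict) -> list:
    """Return operation and maintenance prompts for one device-monitoring
    sample; thresholds mirror the examples above (10% battery, 30 ms delay)."""
    prompts = []
    if monitoring.get("battery_percent", 100) < 10:
        prompts.append("Power is insufficient, please charge in time.")
    if monitoring.get("network_delay_ms", 0) > 30:
        prompts.append("The network delay is high, please process in time.")
    if monitoring.get("system_down", False):  # preset abnormal-event rule
        prompts.append("The system is down, please process in time.")
    return prompts
```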
In addition to the cloud robot mapping system provided in the foregoing embodiments, embodiments of the present application also provide cloud robot mapping methods, described below with reference to the accompanying drawings.
Fig. 2 is a flowchart of a cloud robot mapping method according to an exemplary embodiment of the present application; when executed on the terminal device side, the method may include the steps shown in fig. 2:
step 201, determining the robot deployed in the target space to be mapped.
Step 202, receiving real-time video data of the target space sent by the robot in real time through the cloud server, and outputting the real-time video data.
Step 203, in response to a robot control operation initiated according to the real-time video data, sending a corresponding control instruction to the robot through the cloud server, so as to remotely control the actions by which the robot acquires mapping data in the target space.
Further optionally, in response to a robot control operation initiated according to the real-time video data, sending a corresponding control instruction to the robot through the cloud server includes: in response to a motion control operation in any target direction initiated according to the real-time video data, sending a motion control instruction to the robot through the cloud server; the motion control instruction includes: a forward command, a reverse command, a turn command, or a stop command in the target direction.
Further optionally, the cloud robot mapping method further includes: acquiring a map of the target space from the cloud server, the map being generated by the cloud server according to the mapping data acquired by the robot; updating the map in response to an editing operation on the map; and sending the updated map to the cloud server.
Further optionally, updating the map in response to an editing operation on the map includes at least one of the following: updating obstacle information at the corresponding position on the map in response to an obstacle updating operation on the map, the obstacle updating operation including an obstacle deletion operation, an obstacle addition operation, or an obstacle movement operation; marking the identifier of a point of interest at the corresponding position on the map in response to a marking operation for any point of interest on the map; drawing a virtual wall at the corresponding position on the map in response to an operation of setting a virtual wall at any position on the map; and adding an area mark for an area on the map in response to a marking operation for that area.
Further optionally, the cloud robot mapping method further includes: acquiring mapping data of the target space from the cloud server, the mapping data being collected in the target space by the robot and uploaded to the cloud server; and displaying the mapping data on the same screen as the real-time video data of the target space, so that the robot can be controlled to move in the target space according to the comparison of the real-time video data and the mapping data.
Further optionally, the cloud robot mapping method further includes: receiving device monitoring data sent by the robot through the cloud server, the device monitoring data including at least one of battery data, network status data, and abnormal event data of the robot; and outputting an operation and maintenance prompt message when the device monitoring data indicates that the robot needs operation and maintenance processing.
In this embodiment, the robot can interact with the terminal device through the cloud server, so that the terminal device can receive real-time video data sent by the robot deployed in the space to be mapped, output the real-time video data, and, in response to a robot control operation initiated according to the real-time video data, send a corresponding control instruction to the robot through the cloud server. The robot can then collect mapping data according to the control instruction and send it to the cloud server, so that the cloud server can create a map of the target space from the mapping data. In this way, the robot can be perceived and controlled remotely through the terminal device, without requiring personnel to travel to the space to be mapped to control the scanning, which saves labor cost.
An embodiment of the present application further provides a cloud robot mapping method on the robot side, described below with reference to the accompanying drawings.
Fig. 3 is a flowchart of a cloud robot mapping method according to an exemplary embodiment of the present application; when executed on the robot side, the method may include the steps shown in fig. 3:
step 301, sending the acquired real-time video data of the target space to the terminal device in real time through the cloud server.
Step 302, receiving, through the cloud server, a control instruction sent by the terminal device; the control instruction is sent according to the real-time video data.
Step 303, acquiring mapping data in the target space according to the control instruction.
Step 304, sending the collected mapping data to the cloud server, so that the cloud server creates a map of the target space according to the mapping data.
Further optionally, the mapping data includes at least one of: pose data, ranging data, odometry data, laser point cloud data, collision data, image data, and fall detection data of the robot.
Further optionally, the cloud robot mapping method further includes: acquiring the robot's own device monitoring data, the device monitoring data including at least one of battery data, network status data, and abnormal event data of the robot; and sending the device monitoring data to the terminal device through the cloud server, so that operation and maintenance processing can be performed on the robot according to the device monitoring data.
Further optionally, the cloud robot mapping method further includes: during the collection of mapping data in the target space, sending the motion trajectory of the robot in the target space to the cloud server, so that the cloud server generates, according to the motion trajectory, the motion trajectory required by the robot to execute a task in the target space.
In this embodiment, the robot can interact with the terminal device through the cloud server, so that the terminal device can receive real-time video data sent by the robot deployed in the space to be mapped, output the real-time video data, and, in response to a robot control operation initiated according to the real-time video data, send a corresponding control instruction to the robot through the cloud server. The robot can then collect mapping data according to the control instruction and send it to the cloud server, so that the cloud server can create a map of the target space from the mapping data. In this way, the robot can be perceived and controlled remotely through the terminal device, without requiring personnel to travel to the space to be mapped to control the scanning, which saves labor cost.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of step 201 to step 203 may be device a; for another example, the execution subject of steps 201 and 202 may be device a, and the execution subject of step 203 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 201, 202, etc., are merely used for distinguishing different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
Fig. 4 is a schematic structural diagram of a terminal device according to an exemplary embodiment of the present application, and as shown in fig. 4, the terminal device includes: memory 401, processor 402, and communications component 403.
The memory 401 is used for storing a computer program and may be configured to store other various data to support operations on the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, contact data, phonebook data, messages, pictures, videos, etc.
A processor 402, coupled to the memory 401, is configured to execute the computer program in the memory 401 to: determine a robot deployed in a target space to be mapped; receive real-time video data of the target space sent in real time by the robot through a cloud server, and output the real-time video data; and, in response to a robot control operation initiated according to the real-time video data, send a corresponding control instruction to the robot through the cloud server, so as to remotely control the actions by which the robot acquires mapping data in the target space.
Further optionally, when sending a corresponding control instruction to the robot through the cloud server in response to a robot control operation initiated according to the real-time video data, the processor 402 is specifically configured to: in response to a motion control operation in any target direction initiated according to the real-time video data, send a motion control instruction to the robot through the cloud server; the motion control instruction includes: a forward command, a reverse command, a turn command, or a stop command in the target direction.
Further optionally, the processor 402 is further configured to: acquire a map of the target space from the cloud server, the map being generated by the cloud server according to the mapping data acquired by the robot; update the map in response to an editing operation on the map; and send the updated map to the cloud server.
Further optionally, when updating the map in response to an editing operation on the map, the processor 402 is specifically configured to: update obstacle information at the corresponding position on the map in response to an obstacle updating operation on the map, the obstacle updating operation including an obstacle deletion operation, an obstacle addition operation, or an obstacle movement operation; mark the identifier of a point of interest at the corresponding position on the map in response to a marking operation for any point of interest on the map; draw a virtual wall at the corresponding position on the map in response to an operation of setting a virtual wall at any position on the map; and add an area mark for an area on the map in response to a marking operation for that area.
Further optionally, the processor 402 is further configured to: acquire mapping data of the target space from the cloud server, the mapping data being collected in the target space by the robot and uploaded to the cloud server; and display the mapping data on the same screen as the real-time video data of the target space, so that the robot can be controlled to move in the target space according to the comparison of the real-time video data and the mapping data.
Further optionally, the processor 402 is further configured to: receive device monitoring data sent by the robot through the cloud server, the device monitoring data including at least one of battery data, network status data, and abnormal event data of the robot; and output an operation and maintenance prompt message when the device monitoring data indicates that the robot needs operation and maintenance processing.
Further, as shown in fig. 4, the terminal device further includes: display components 404, power components 405, audio components 406, and the like. Only some of the components are schematically shown in fig. 4, and it is not meant that the terminal device includes only the components shown in fig. 4.
In this embodiment, the terminal device can interact with the robot through the cloud server, so that the terminal device can receive real-time video data sent by the robot deployed in the space to be mapped, output the real-time video data, and, in response to a robot control operation initiated according to the real-time video data, send a corresponding control instruction to the robot through the cloud server. The robot can then collect mapping data according to the control instruction and send it to the cloud server, so that the cloud server can create a map of the target space from the mapping data. In this way, the robot can be perceived and controlled remotely through the terminal device, without requiring personnel to travel to the space to be mapped to control the scanning, which saves labor cost.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program can implement the steps that can be executed by the terminal device in the foregoing method embodiments when executed.
Fig. 5 is a schematic structural diagram of a robot device according to an exemplary embodiment of the present application; the robot device is suitable for the cloud robot mapping system of the foregoing embodiments. As shown in fig. 5, the robot device includes: a memory 501, a processor 502, and a communication component 503.
The memory 501 is used to store a computer program and may be configured to store various other data to support operations on the robot device. Examples of such data include instructions for any application or method operating on the robot device, contact data, phonebook data, messages, pictures, videos, and so forth.
A processor 502, coupled to the memory 501, is configured to execute the computer program in the memory 501 to: send collected real-time video data of a target space to the terminal device in real time through a cloud server; receive, through the cloud server, a control instruction sent by the terminal device, the control instruction being sent according to the real-time video data; acquire mapping data in the target space according to the control instruction; and send the collected mapping data to the cloud server, so that the cloud server creates a map of the target space according to the mapping data.
Further optionally, the processor 502 is further configured to: acquire the robot's own device monitoring data, the device monitoring data including at least one of battery data, network status data, and abnormal event data of the robot; and send the device monitoring data to the terminal device through the cloud server, so that operation and maintenance processing can be performed on the robot according to the device monitoring data.
Further optionally, the processor 502 is further configured to: during the collection of mapping data in the target space, send the motion trajectory of the robot in the target space to the cloud server, so that the cloud server generates, according to the motion trajectory, the motion trajectory required by the robot to execute a task in the target space.
Further, as shown in fig. 5, the robot apparatus further includes: display component 504, power component 505, audio component 506, and the like. Only some of the components are schematically shown in fig. 5, and it is not meant that the robot device comprises only the components shown in fig. 5.
In this embodiment, the robot can interact with the terminal device through the cloud server, so that the terminal device can receive real-time video data sent by the robot deployed in the space to be mapped, output the real-time video data, and, in response to a robot control operation initiated according to the real-time video data, send a corresponding control instruction to the robot through the cloud server. The robot can then collect mapping data according to the control instruction and send it to the cloud server, so that the cloud server can create a map of the target space from the mapping data. In this way, the robot can be perceived and controlled remotely through the terminal device, without requiring personnel to travel to the space to be mapped to control the scanning, which saves labor cost.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program is capable of implementing the steps that can be performed by the robot apparatus in the foregoing method embodiments when executed.
The memories of fig. 4 and 5 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The communication components of fig. 4 and 5 described above are configured to facilitate wired or wireless communication between the device in which the communication component is located and other devices. The device in which the communication component is located may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The display assembly of fig. 4 and 5 described above includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio components of fig. 4 and 5 above may be configured to output and/or input audio signals. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
The power supply components of fig. 4 and 5 described above provide power to the various components of the device in which the power supply components are located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (14)

1. A mapping method for a cloud robot, applied to a terminal device, characterized by comprising the following steps:
determining a robot deployed in a target space to be mapped;
receiving real-time video data of the target space sent by the robot in real time through a cloud server, and outputting the real-time video data;
and in response to a robot control operation initiated according to the real-time video data, sending a corresponding control instruction to the robot through the cloud server, so as to remotely control the robot's actions of acquiring mapping data in the target space.
2. The method of claim 1, wherein sending a corresponding control instruction to the robot through the cloud server in response to the robot control operation initiated according to the real-time video data comprises:
in response to a motion control operation initiated in any target direction according to the real-time video data, sending a motion control instruction to the robot through the cloud server, the motion control instruction comprising: a forward command, a reverse command, a turn command, or a stop command in the target direction.
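Purely as an illustration of the control flow recited in claims 1 and 2 — not an implementation the application discloses — the following Python sketch models the cloud server as a pair of in-memory queues. Every class name, field name, and the queue-based transport itself are assumptions invented for this example:

```python
import json
from dataclasses import dataclass, asdict
from queue import Empty, Queue

class CloudRelay:
    """Stand-in for the cloud server's relay role: video frames flow
    down to the terminal, control instructions flow up to the robot.
    In-memory queues keep the sketch self-contained and runnable; a
    real system would use a persistent network connection."""
    def __init__(self) -> None:
        self.video_downlink: Queue = Queue()   # robot -> terminal
        self.control_uplink: Queue = Queue()   # terminal -> robot

    def recv_frame(self, timeout: float = 0.1):
        try:
            return self.video_downlink.get(timeout=timeout)
        except Empty:
            return None

    def send_instruction(self, instruction: dict) -> None:
        self.control_uplink.put(json.dumps(instruction))

@dataclass
class MotionInstruction:
    action: str         # "forward" | "reverse" | "turn" | "stop" (claim 2)
    heading_deg: float  # the operator-chosen target direction

def on_control_operation(relay: CloudRelay, action: str,
                         heading_deg: float = 0.0) -> None:
    """Map a UI control operation onto a relayed motion instruction."""
    relay.send_instruction(asdict(MotionInstruction(action, heading_deg)))

if __name__ == "__main__":
    relay = CloudRelay()
    relay.video_downlink.put(b"\x00" * 16)    # stand-in for one video frame
    frame = relay.recv_frame()                # the operator views this stream
    print("received frame:", None if frame is None else len(frame), "bytes")
    on_control_operation(relay, "forward", heading_deg=90.0)
    print("queued instruction:", relay.control_uplink.get())
```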
3. The method of claim 1, further comprising:
acquiring a map of the target space from the cloud server, the map being generated by the cloud server according to the mapping data acquired by the robot;
and in response to an editing operation on the map, updating the map and sending the updated map to the cloud server.
4. The method of claim 3, wherein updating the map in response to the editing operation on the map comprises at least one of:
updating obstacle information at a corresponding position on the map in response to an obstacle updating operation on the map, the obstacle updating operation comprising: an obstacle deletion operation, an obstacle addition operation, or an obstacle movement operation;
in response to a marking operation for any point of interest on the map, marking an identifier of the point of interest at the corresponding position on the map;
in response to an operation of setting a virtual wall at any position on the map, drawing the virtual wall at the corresponding position on the map;
and in response to a marking operation for any area on the map, adding an area label for that area on the map.
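The four edit operations of claim 4 can be pictured with a toy data structure. The sketch below is a hedged example only — the occupancy grid, the method names, and the integer cell codes are invented here and are not the patent's map representation:

```python
FREE, OBSTACLE, VIRTUAL_WALL = 0, 1, 2

class EditableMap:
    """Toy occupancy grid supporting the four edit operations of claim 4."""
    def __init__(self, width: int, height: int) -> None:
        self.grid = [[FREE] * width for _ in range(height)]
        self.points_of_interest: dict = {}   # identifier -> (x, y)
        self.area_labels: dict = {}          # label -> (x0, y0, x1, y1)

    def update_obstacle(self, x: int, y: int, op: str) -> None:
        # op is "add" or "delete"; a move is a delete followed by an add.
        self.grid[y][x] = OBSTACLE if op == "add" else FREE

    def move_obstacle(self, src: tuple, dst: tuple) -> None:
        self.update_obstacle(*src, "delete")
        self.update_obstacle(*dst, "add")

    def mark_point_of_interest(self, identifier: str, x: int, y: int) -> None:
        self.points_of_interest[identifier] = (x, y)

    def draw_virtual_wall(self, x0: int, y0: int, x1: int, y1: int) -> None:
        # Axis-aligned wall for brevity; a real editor would rasterize
        # an arbitrary segment into the grid.
        for x in range(min(x0, x1), max(x0, x1) + 1):
            for y in range(min(y0, y1), max(y0, y1) + 1):
                self.grid[y][x] = VIRTUAL_WALL

    def label_area(self, label: str, x0: int, y0: int, x1: int, y1: int) -> None:
        self.area_labels[label] = (x0, y0, x1, y1)

m = EditableMap(10, 10)
m.update_obstacle(3, 4, "add")
m.move_obstacle((3, 4), (5, 5))
m.mark_point_of_interest("charging_dock", 0, 0)
m.draw_virtual_wall(7, 0, 7, 9)
m.label_area("lobby", 0, 0, 4, 4)
print(sum(cell == VIRTUAL_WALL for row in m.grid for cell in row))  # 10
```

After such edits the terminal would serialize the updated map and send it back to the cloud server, as claim 3 recites.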
5. The method of claim 1, further comprising:
acquiring mapping data of the target space from the cloud server, the mapping data being collected in the target space by the robot and uploaded to the cloud server;
and displaying the mapping data on the same screen as the real-time video data of the target space, so as to control the robot's movement in the target space according to a comparison of the real-time video data and the mapping data.
6. The method of any one of claims 1-5, further comprising:
receiving, through the cloud server, device monitoring data sent by the robot, the device monitoring data comprising at least one of: battery data, network status data, and abnormal event data of the robot;
and outputting an operation and maintenance prompt message when the device monitoring data indicates that the robot requires operation and maintenance processing.
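As a toy reading of claim 6, a terminal might derive operation and maintenance prompts from the monitoring data roughly as follows; every field name and threshold here is an assumption of the example, not something the application specifies:

```python
def operation_and_maintenance_prompts(monitoring: dict) -> list:
    """Derive O&M prompt messages from device monitoring data
    (battery / network status / abnormal events, per claim 6).
    Field names and thresholds are illustrative assumptions only."""
    prompts = []
    if monitoring.get("battery_pct", 100) < 15:
        prompts.append("battery low: send the robot to its charging dock")
    if monitoring.get("network_status", "ok") != "ok":
        prompts.append("network degraded: check on-site connectivity")
    for event in monitoring.get("abnormal_events", []):
        prompts.append(f"abnormal event reported: {event}")
    return prompts

print(operation_and_maintenance_prompts(
    {"battery_pct": 9, "network_status": "weak", "abnormal_events": ["fall"]}))
```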
7. A mapping method for a cloud robot, applied to the robot, characterized by comprising the following steps:
sending collected real-time video data of a target space to a terminal device in real time through a cloud server;
receiving, through the cloud server, a control instruction sent by the terminal device, the control instruction being sent according to the real-time video data;
acquiring mapping data in the target space according to the control instruction;
and sending the collected mapping data to the cloud server so that the cloud server creates a map of the target space according to the mapping data.
8. The method of claim 7, wherein the mapping data comprises at least one of: pose data, ranging data, mileage data, laser point cloud data, collision data, image data, and fall detection data of the robot.
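To make the robot-side steps of claims 7 and 8 concrete, here is a minimal sketch under stated assumptions: the sensor values are synthetic, the field names are invented, and a print statement stands in for the actual upload to the cloud server:

```python
import json
import random
import time
from dataclasses import asdict, dataclass, field

@dataclass
class MappingSample:
    """One mapping-data record; the channels follow claim 8, but the
    field names and units are assumptions made for this sketch."""
    timestamp: float
    pose: tuple                 # (x, y, heading) estimate
    mileage_m: float            # accumulated odometry
    ranges_m: list = field(default_factory=list)  # simplified laser scan
    collision: bool = False
    fall_detected: bool = False

def collect_sample(step: int) -> MappingSample:
    """Stand-in for reading the robot's real sensors while it executes
    the relayed control instructions."""
    return MappingSample(
        timestamp=time.time(),
        pose=(step * 0.1, 0.0, 0.0),
        mileage_m=step * 0.1,
        ranges_m=[round(random.uniform(0.2, 8.0), 2) for _ in range(8)],
    )

def upload_to_cloud(samples: list) -> None:
    # Placeholder for the real upload; the cloud server would fuse these
    # records (e.g. with a SLAM backend) into the map of claim 7.
    payload = json.dumps([asdict(s) for s in samples])
    print(f"uploading {len(samples)} samples ({len(payload)} bytes)")

upload_to_cloud([collect_sample(i) for i in range(5)])
```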
9. The method of claim 7, further comprising:
acquiring the robot's own device monitoring data, the device monitoring data comprising at least one of: battery data, network status data, and abnormal event data of the robot;
and sending the device monitoring data to the terminal device through the cloud server, so that operation and maintenance processing is performed on the robot according to the device monitoring data.
10. The method according to any one of claims 7-9, further comprising:
in the process of collecting mapping data in the target space, sending the robot's motion trajectory in the target space to the cloud server, so that the cloud server generates, according to that motion trajectory, the trajectory the robot needs in order to execute tasks in the target space.
11. A cloud robot mapping system, characterized by comprising:
a robot, a cloud server, and a terminal device, wherein the terminal device establishes a communication connection with the robot through the cloud server;
the terminal device is configured to: receive real-time video data of a target space sent by the robot in real time through the cloud server, and output the real-time video data; and, in response to a robot control operation initiated according to the real-time video data, send a corresponding control instruction to the robot through the cloud server;
and the robot is configured to: send collected real-time video data of the target space to the terminal device in real time through the cloud server; receive, through the cloud server, the control instruction sent by the terminal device; acquire mapping data in the target space according to the control instruction; and send the collected mapping data to the cloud server, so that the cloud server creates a map of the target space according to the mapping data.
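Read as an architecture, claim 11's three-party system centers on the cloud server relaying traffic between the terminal and the robot and ingesting mapping data. The sketch below is a minimal stand-in — queues for network links, a dict for the map store, invented method names throughout:

```python
import queue

class CloudServer:
    """Toy three-party relay mirroring claim 11: the terminal and robot
    never talk directly; video, commands, and mapping data all pass
    through the cloud server. Method names are invented for the sketch."""
    def __init__(self) -> None:
        self.robot_to_terminal: queue.Queue = queue.Queue()
        self.terminal_to_robot: queue.Queue = queue.Queue()
        self.maps: dict = {}   # space id -> accumulated mapping data

    def relay_video(self, frame: bytes) -> None:       # called by the robot
        self.robot_to_terminal.put(frame)

    def relay_instruction(self, instruction: dict) -> None:  # by the terminal
        self.terminal_to_robot.put(instruction)

    def ingest_mapping_data(self, space_id: str, samples: list) -> None:
        # Placeholder "map creation": a real server would run SLAM here.
        self.maps.setdefault(space_id, []).extend(samples)

server = CloudServer()
server.relay_video(b"frame-0")
server.relay_instruction({"action": "forward", "heading_deg": 90.0})
server.ingest_mapping_data("floor-1", [{"pose": (0.0, 0.0, 0.0)}])
print(server.terminal_to_robot.get(), len(server.maps["floor-1"]))
```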
12. A terminal device, comprising: a memory, a processor, a communication component, and a display component;
wherein the memory is configured to store one or more computer instructions;
and the processor is configured to execute the one or more computer instructions to perform the steps of the method of any one of claims 1-6.
13. A robotic device, comprising: a memory, a processor, and a communication component;
wherein the memory is configured to store one or more computer instructions;
and the processor is configured to execute the one or more computer instructions to perform the steps of the method of any one of claims 7-10.
14. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the method of any one of claims 1-10.
CN202111602631.7A 2021-12-24 2021-12-24 Cloud robot mapping method, system, equipment and storage medium Pending CN114373148A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111602631.7A CN114373148A (en) 2021-12-24 2021-12-24 Cloud robot mapping method, system, equipment and storage medium
PCT/CN2022/106943 WO2023115927A1 (en) 2021-12-24 2022-07-21 Cloud robot mapping method, system, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111602631.7A CN114373148A (en) 2021-12-24 2021-12-24 Cloud robot mapping method, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114373148A 2022-04-19

Family

ID=81142333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111602631.7A Pending CN114373148A (en) 2021-12-24 2021-12-24 Cloud robot mapping method, system, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114373148A (en)
WO (1) WO2023115927A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827120A (en) * 2022-05-05 2022-07-29 深圳市大道智创科技有限公司 Remote interaction method and device for robot and computer equipment
WO2023115927A1 (en) * 2021-12-24 2023-06-29 达闼机器人股份有限公司 Cloud robot mapping method, system, device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102253667A (en) * 2011-05-03 2011-11-23 湖南大学 System and method for remote monitoring of condenser cleaning robots
CN105241461A (en) * 2015-11-16 2016-01-13 曾彦平 Map creating and positioning method of robot and robot system
CN106534734A (en) * 2015-09-11 2017-03-22 腾讯科技(深圳)有限公司 Method and device for playing video and displaying map, and data processing method and system
CN109725580A (en) * 2019-01-17 2019-05-07 深圳市锐曼智能装备有限公司 The long-range control method of robot
WO2021143543A1 (en) * 2020-01-15 2021-07-22 科沃斯机器人股份有限公司 Robot and method for controlling same
CN113796778A (en) * 2021-08-03 2021-12-17 上海高仙自动化科技发展有限公司 Remote operation and maintenance method, device, system, robot, chip and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7437873B2 (en) * 2018-11-19 2024-02-26 株式会社日建設計総合研究所 Data measurement system and building equipment control system
CN114373148A (en) * 2021-12-24 2022-04-19 达闼机器人有限公司 Cloud robot mapping method, system, equipment and storage medium

Also Published As

Publication number Publication date
WO2023115927A1 (en) 2023-06-29

Similar Documents

Publication Publication Date Title
EP3032369B1 (en) Methods for clearing garbage and devices for the same
US11789447B2 (en) Remote control of an autonomous mobile robot
US10335949B2 (en) System for operating mobile robot based on complex map information and operating method thereof
EP3605314B1 (en) Display method and apparatus
WO2023115927A1 (en) Cloud robot mapping method, system, device and storage medium
US9829873B2 (en) Method and data presenting device for assisting a remote user to provide instructions
US9628772B2 (en) Method and video communication device for transmitting video to a remote user
US11089463B2 (en) Method and device for activating near field communication card
US10192414B2 (en) System and method for overlap detection in surveillance camera network
WO2015137740A1 (en) Home network system using robot and control method thereof
CN113542014A (en) Inspection method, inspection device, equipment management platform and storage medium
KR102439337B1 (en) Multilateral participation remote collaboration system based on Augmented reality sharing and method thereof
US10896513B2 (en) Method and apparatus for surveillance using location-tracking imaging devices
CN104992088A (en) Device security protection method and apparatus
CN102650883A (en) System and method for controlling unmanned aerial vehicle
US20230324906A1 (en) Systems and methods for remote viewing of self-driving vehicles
US11258939B2 (en) System, method and apparatus for networking independent synchronized generation of a series of images
CN111343696A (en) Communication method of self-moving equipment, self-moving equipment and storage medium
CN105488965A (en) Alarm method and device
JP2021013159A (en) Information processing apparatus, telepresence robot, site control system, remote control system, information processing method, and program
US20150002395A1 (en) Method of Interaction Between a Digital Object Representing at Least One Real or Virtual Object Located in a Distant Geographic Perimeter and a Local Pointing Device
CN113359705A (en) Path planning method, formation cooperative operation method and equipment
CN114089935A (en) Screen projection processing method, device, equipment and storage medium
CN109229097B (en) Cruise control method and device
CN114326721B (en) Picture construction method and device, cloud server and robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Applicant after: Dayu robot Co.,Ltd.

Address before: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Applicant before: Dalu Robot Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20220419