WO2019014929A1 - Robot operation method and robot - Google Patents

Robot operation method and robot

Info

Publication number
WO2019014929A1
WO2019014929A1 (PCT/CN2017/093891, CN2017093891W)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
target
information
dropped
moving speed
Prior art date
Application number
PCT/CN2017/093891
Other languages
English (en)
French (fr)
Inventor
徐斌
苏红
Original Assignee
深圳市萨斯智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市萨斯智能科技有限公司 filed Critical 深圳市萨斯智能科技有限公司
Priority to PCT/CN2017/093891 priority Critical patent/WO2019014929A1/zh
Publication of WO2019014929A1 publication Critical patent/WO2019014929A1/zh

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls

Definitions

  • the present invention relates to the field of robot technology, and in particular, to a robot operation method and a robot.
  • Robots are being used in more and more scenarios, for example to carry items, install items, or drill holes, and robots can now be controlled remotely.
  • In remote control, the robot establishes a communication connection with a communication terminal in advance, and the user controls the robot through the communication terminal to command the robot to complete the event indicated by the user.
  • However, because the user operates the robot remotely, the user is not fully aware of the robot's current working conditions. For example, if an unexpected situation arises in the robot's working scene, the user cannot learn of it in time, which causes the robot to operate incorrectly. It can be seen that the probability of erroneous robot operation is relatively high.
  • The embodiments of the present invention provide a robot operation method and a robot to solve the problem that the probability of erroneous robot operation is relatively high.
  • An embodiment of the present invention provides a method for operating a robot, including:
  • the robot grabs the target item and moves to the destination position of the target item
  • the robot captures video information of the current scene in real time through the camera, and transmits the video information to the pre-bound communication terminal in real time;
  • When the robot moves to within a preset distance of the target position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time.
  • Preferably, before the step of the robot grabbing the target item and moving toward the destination position of the target item, the method further includes:
  • the robot identifies tag information of the target item and acquires tag content in the tag information
  • According to a pre-acquired correspondence between label content and gripping arm force, the robot sets the gripping arm force for grasping the target item to the gripping arm force corresponding to the label content.
  • Preferably, before the step of the robot grabbing the target item and moving toward the destination position of the target item, the method further includes:
  • the robot identifies tag information of the target item and acquires tag content in the tag information
  • the robot measures humidity information of a current environment
  • the robot sets the moving speed of the robot to a moving speed corresponding to the label content and corresponding to the humidity information according to a correspondence relationship between the label content, the humidity information, and the moving speed acquired in advance.
  • Preferably, after the step in which, when the robot moves to within a preset distance of the target position, the robot reduces its moving speed to the target speed, raises the shooting frame rate of the camera to the target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time, the method further includes:
  • The robot determines in real time whether the captured image information contains an image feature indicating that an object has dropped;
  • If the captured image information contains an image feature indicating that an object has dropped, the robot identifies the number of dropped objects according to the image feature;
  • According to a pre-acquired correspondence between the number of dropped objects and moving speed, the robot raises its moving speed to the moving speed corresponding to the number of dropped objects.
  • Preferably, after the step in which, when the robot moves to within a preset distance of the target position, the robot reduces its moving speed to the target speed, raises the shooting frame rate of the camera to the target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time, the method further includes:
  • The robot determines in real time whether the captured image information contains an image feature indicating that an object has dropped;
  • If the captured image information contains an image feature indicating that an object has dropped, the robot identifies the number of dropped objects according to the image feature and determines whether that number is greater than a preset threshold;
  • If the number of dropped objects is greater than the preset threshold, the robot stops moving forward and returns backward at a preset speed.
  • An embodiment of the present invention further provides a robot, including:
  • a grabbing module for grabbing the target item and moving to a destination position of the target item
  • a transmission module configured to capture video information of a current scene in real time by using an imaging device, and transmit the video information to a pre-bound communication terminal in real time;
  • An adjustment module configured to reduce the moving speed to a target speed when the robot moves to within a preset distance of the target position, raise the shooting frame rate of the camera to a target frame rate, shoot at the target frame rate, and transmit the captured video information to the communication terminal in real time.
  • the robot further comprises:
  • a first identification module configured to identify tag information of the target item, and obtain tag content in the tag information
  • A first setting module configured to set, according to a pre-acquired correspondence between label content and gripping arm force, the gripping arm force for grasping the target item to the gripping arm force corresponding to the label content.
  • the robot further comprises:
  • a second identification module configured to identify tag information of the target item, and obtain tag content in the tag information
  • a measuring module for measuring humidity information of the current environment
  • the second setting module is configured to set the moving speed of the robot to be a moving speed corresponding to the label content and corresponding to the humidity information according to a correspondence relationship between the label content, the humidity information, and the moving speed acquired in advance.
  • the robot further comprises:
  • A first judging module configured to determine, in real time, whether the captured image information contains an image feature indicating that an object has dropped;
  • A third identification module configured to identify, if the captured image information contains an image feature indicating that an object has dropped, the number of dropped objects according to the image feature;
  • A speed-raising module configured to raise the moving speed to the moving speed corresponding to the number of dropped objects according to a pre-acquired correspondence between the number of dropped objects and moving speed.
  • the robot further comprises:
  • A second judging module configured to determine, in real time, whether the captured image information contains an image feature indicating that an object has dropped;
  • A fourth identification module configured to identify, if the captured image information contains an image feature indicating that an object has dropped, the number of dropped objects according to the image feature, and to determine whether that number is greater than a preset threshold;
  • A stop module configured to stop moving forward if the number of dropped objects is greater than the preset threshold, and to return backward at a preset speed.
  • In the embodiments of the present invention, the robot grabs the target item and moves toward the destination position of the target item; the robot captures video information of the current scene in real time through the camera and transmits the video information in real time to the pre-bound communication terminal; when the robot moves to within a preset distance of the target position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time. This allows the user to accurately understand the robot's current scene and reduces erroneous operations.
  • FIG. 1 is a schematic flow chart of a method for operating a robot according to an embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of a robot according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of another robot according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of another robot according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of another robot according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of another robot according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of another robot according to an embodiment of the present invention.
  • the robot may include: a chassis, a wheel, a crawler, a rechargeable battery, and a mechanical arm mounted on the chassis.
  • the robotic arm can include a boom, a telescopic arm, an arm, a mechanical jaw, and a lumbar rotation system.
  • a video system may also be included.
  • The video system may include cameras mounted in different locations, and the robot may also include a walking system.
  • The walking system may be a 6×6 all-wheel drive, and the boom lift may be driven by dual electric struts with a balance bar installed.
  • Control may be performed by a central control system and a control box, etc.
  • The robot and the control system may be connected by wire or wirelessly; the video information at the robot end may be transmitted wirelessly to a control hall or command vehicle, from which the robot is remotely controlled.
  • The robot may be any robot capable of grasping and transporting items, such as an explosive-ordnance-disposal robot or a warehouse handling robot, which is not limited in the embodiments of the present invention.
  • FIG. 1 is a schematic flowchart of a method for operating a robot according to an embodiment of the present invention. As shown in FIG. 1 , the method includes the following steps:
  • the robot grabs the target item and moves to the destination position of the target item.
  • The destination position may be a location pre-configured by the user, and the target item may be any item the robot can grasp, such as an explosive package, a commodity, or an electronic monitoring device.
  • the robot captures video information of the current scene in real time through the camera, and transmits the video information to the pre-bound communication terminal in real time.
  • the above camera device may be a VR camera device, or may be an ordinary two-dimensional camera device or the like.
  • When the robot moves to within a preset distance of the target position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time.
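The two-stage approach behaviour described above can be sketched as a simple distance check. All names and constants below (speeds, frame rates, the preset distance) are illustrative assumptions, not values taken from the patent:

```python
NORMAL_SPEED = 1.0     # m/s, assumed cruising speed far from the destination
TARGET_SPEED = 0.2     # m/s, assumed reduced speed near the destination
NORMAL_FPS = 15        # assumed normal shooting frame rate
TARGET_FPS = 60        # assumed raised frame rate near the destination
PRESET_DISTANCE = 2.0  # m, assumed preset distance threshold


def approach_settings(distance_to_target: float) -> tuple:
    """Return (moving_speed, frame_rate) for the given distance to the
    destination position, per the two-stage behaviour in the method."""
    if distance_to_target <= PRESET_DISTANCE:
        return TARGET_SPEED, TARGET_FPS
    return NORMAL_SPEED, NORMAL_FPS
```

A motion controller would call `approach_settings` on every control tick with the current distance estimate and apply the returned speed and frame rate to the drive and the camera.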
  • Preferably, before the step of the robot grabbing the target item and moving toward the destination position of the target item, the method further includes:
  • The robot identifies tag information of the target item and acquires the tag content in the tag information;
  • According to a pre-acquired correspondence between label content and gripping arm force, the robot sets the gripping arm force for grasping the target item to the gripping arm force corresponding to the label content.
  • Because the gripping arm force for grasping the target item is set to the gripping arm force corresponding to the label content, the robot's grasping performance can be improved and scratching of the target item avoided.
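The pre-acquired correspondence between label content and gripping arm force can be sketched as a lookup table. The label values and force figures below are invented for illustration:

```python
GRIP_FORCE_BY_LABEL = {  # pre-acquired correspondence (assumed values, newtons)
    "fragile": 5.0,
    "standard": 20.0,
    "heavy": 60.0,
}


def grip_force_for(label_content: str) -> float:
    """Return the gripping arm force corresponding to the recognized label content."""
    return GRIP_FORCE_BY_LABEL[label_content]
```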
  • the method before the step of the robot grabbing the target item and moving to the target position of the target item, the method further includes:
  • the robot identifies tag information of the target item and acquires tag content in the tag information
  • the robot measures humidity information of a current environment
  • the robot sets the moving speed of the robot to a moving speed corresponding to the label content and corresponding to the humidity information according to a correspondence relationship between the label content, the humidity information, and the moving speed acquired in advance.
  • The corresponding moving speed can be set according to the label content and the humidity information. For example, the higher the humidity, the faster the robot moves, so that the task is completed as soon as possible and the robot does not fail to complete the task because of the humid environment.
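One way to encode the joint correspondence between label content, humidity, and moving speed is a table keyed by label and a coarse humidity band; consistent with the example above, higher humidity maps to a higher speed. The thresholds and speeds are assumptions for illustration:

```python
SPEED_TABLE = {
    # (label_content, humidity_band) -> moving speed in m/s (assumed values)
    ("fragile", "low"): 0.5,
    ("fragile", "high"): 0.8,
    ("standard", "low"): 1.0,
    ("standard", "high"): 1.5,
}


def humidity_band(relative_humidity: float) -> str:
    """Bucket a measured relative humidity (0-100%) into a coarse band."""
    return "high" if relative_humidity >= 60.0 else "low"


def moving_speed_for(label_content: str, relative_humidity: float) -> float:
    """Return the moving speed corresponding to both label content and humidity."""
    return SPEED_TABLE[(label_content, humidity_band(relative_humidity))]
```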
  • Preferably, after the step in which, when the robot moves to within a preset distance of the target position, the robot reduces its moving speed to the target speed, raises the shooting frame rate of the camera to the target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time, the method further includes:
  • The robot determines in real time whether the captured image information contains an image feature indicating that an object has dropped;
  • If the captured image information contains an image feature indicating that an object has dropped, the robot identifies the number of dropped objects according to the image feature;
  • According to a pre-acquired correspondence between the number of dropped objects and moving speed, the robot raises its moving speed to the moving speed corresponding to the number of dropped objects.
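The speed adjustment can be sketched as a lookup over a pre-acquired table; per the text, the speed is raised to the entry corresponding to the dropped-object count. Detection and counting of dropped objects in the images is abstracted away here, and all numbers are assumptions:

```python
DROP_COUNT_SPEED = {1: 0.5, 2: 0.8, 3: 1.2}  # count -> moving speed in m/s (assumed)


def adjusted_speed(current_speed: float, dropped_count: int) -> float:
    """Raise the moving speed to the table entry for the dropped-object count;
    counts beyond the table fall back to its largest entry."""
    target = DROP_COUNT_SPEED.get(dropped_count, max(DROP_COUNT_SPEED.values()))
    return max(current_speed, target)
```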
  • Preferably, after the step in which, when the robot moves to within a preset distance of the target position, the robot reduces its moving speed to the target speed, raises the shooting frame rate of the camera to the target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time, the method further includes:
  • The robot determines in real time whether the captured image information contains an image feature indicating that an object has dropped;
  • If the captured image information contains an image feature indicating that an object has dropped, the robot identifies the number of dropped objects according to the image feature and determines whether that number is greater than a preset threshold;
  • If the number of dropped objects is greater than the preset threshold, the robot stops moving forward and returns backward at a preset speed.
  • The robot stops and reverses in this case because, if it continued to move forward, the robot and the target items might be damaged.
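The threshold branch above can be sketched as a single command function: when the dropped-object count exceeds a preset threshold, the robot reverses at a preset speed instead of continuing forward. The threshold and speeds are illustrative assumptions:

```python
PRESET_THRESHOLD = 3  # assumed maximum tolerable dropped-object count
REVERSE_SPEED = -0.5  # m/s; negative means moving backward (assumed preset speed)


def forward_command(current_speed: float, dropped_count: int) -> float:
    """Return the commanded speed: stop moving forward and reverse when too
    many objects have been seen dropping."""
    if dropped_count > PRESET_THRESHOLD:
        return REVERSE_SPEED
    return current_speed
```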
  • In the embodiments of the present invention, the robot grabs the target item and moves toward the destination position of the target item; the robot captures video information of the current scene in real time through the camera and transmits the video information in real time to the pre-bound communication terminal; when the robot moves to within a preset distance of the target position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time. This allows the user to accurately understand the robot's current scene and reduces erroneous operations.
  • FIG. 2 is a schematic structural diagram of a robot according to an embodiment of the present invention. As shown in FIG. 2, the robot includes:
  • the grabbing module 201 is configured to grab the target item and move to the target position of the target item;
  • the transmission module 202 is configured to capture video information of a current scene in real time by using an imaging device, and transmit the video information to a pre-bound communication terminal in real time;
  • The adjusting module 203 is configured to reduce the moving speed to a target speed when the robot moves to within a preset distance of the target position, raise the shooting frame rate of the camera to a target frame rate, shoot at the target frame rate, and transmit the captured video information to the communication terminal in real time.
  • the robot further includes:
  • A first identification module 204 configured to identify tag information of the target item and acquire the tag content in the tag information;
  • The first setting module 205 is configured to set, according to a pre-acquired correspondence between label content and gripping arm force, the gripping arm force for grasping the target item to the gripping arm force corresponding to the label content.
  • the robot further includes:
  • a second identification module 206 configured to identify tag information of the target item, and obtain tag content in the tag information
  • a measuring module 207 configured to measure humidity information of a current environment
  • the second setting module 208 is configured to set the moving speed of the robot to be a moving speed corresponding to the label content and corresponding to the humidity information according to a correspondence relationship between the label content, the humidity information, and the moving speed acquired in advance.
  • the robot further includes:
  • The first judging module 209 is configured to determine, in real time, whether the captured image information contains an image feature indicating that an object has dropped;
  • A third identification module 210 configured to identify, if the captured image information contains an image feature indicating that an object has dropped, the number of dropped objects according to the image feature;
  • The speed-raising module 211 is configured to raise the moving speed to the moving speed corresponding to the number of dropped objects according to a pre-acquired correspondence between the number of dropped objects and moving speed.
  • the robot further includes:
  • The second judging module 212 is configured to determine, in real time, whether the captured image information contains an image feature indicating that an object has dropped;
  • The fourth identification module 213 is configured to identify, if the captured image information contains an image feature indicating that an object has dropped, the number of dropped objects according to the image feature, and to determine whether that number is greater than a preset threshold;
  • The stop module 214 is configured to stop moving forward if the number of dropped objects is greater than the preset threshold, and to return backward at a preset speed.
  • In the embodiments of the present invention, the robot grabs the target item and moves toward the destination position of the target item; the robot captures video information of the current scene in real time through the camera and transmits the video information in real time to the pre-bound communication terminal; when the robot moves to within a preset distance of the target position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time. This allows the user to accurately understand the robot's current scene and reduces erroneous operations.
  • FIG. 7 is a structural diagram of another robot according to an embodiment of the present invention.
  • the robot includes a processor 701, a memory 702, a network interface 704, and a user interface 703.
  • the various components in the robot are coupled together by a bus system 705.
  • the bus system 705 includes a power bus, a control bus, and a status signal bus in addition to the data bus.
  • The various buses are labeled as the bus system 705 in FIG. 7.
  • the user interface 703 may include a display, a keyboard, or a pointing device (eg, a mouse, a track ball, a touch pad, or a touch screen, etc.).
  • the memory 702 in the embodiments of the present invention may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory can be a Random Access Memory (RAM) that acts as an external cache.
  • Many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM).
  • Memory 702 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
  • memory 702 stores elements, executable modules or data structures, or a subset thereof, or their extended set: operating system 7021 and application 7022.
  • the operating system 7021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks.
  • The application 7022 includes various applications, such as a media player (Media Player) and a browser (Browser), used to implement various application services.
  • a program implementing the method of the embodiment of the present invention may be included in the application 7022.
  • By calling a program or instructions stored in the memory 702, specifically a program or instructions stored in the application 7022, the processor 701 is configured to perform the following:
  • the robot grabs the target item and moves to the destination position of the target item
  • the robot captures video information of the current scene in real time through the camera, and transmits the video information to the pre-bound communication terminal in real time;
  • When the robot moves to within a preset distance of the target position, the robot reduces its moving speed to the target speed, raises the shooting frame rate of the camera to the target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time.
  • Processor 701 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 701 or an instruction in a form of software.
  • The processor 701 may be a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • The general-purpose processor may be a microprocessor, or any conventional processor.
  • the steps of the method disclosed in the embodiments of the present invention may be directly implemented by the hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory 702, and the processor 701 reads the information in the memory 702 and completes the steps of the above method in combination with its hardware.
  • the embodiments described herein can be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof.
  • The processing unit can be implemented in one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination thereof.
  • the techniques described herein can be implemented by modules (eg, procedures, functions, and so on) that perform the functions described herein.
  • the software code can be stored in memory and executed by the processor.
  • the memory can be implemented in the processor or external to the processor.
  • the processor 701 is further configured to:
  • the robot identifies tag information of the target item and acquires tag content in the tag information
  • the robot sets the gripping arm force of the target item to be the grabbing arm force corresponding to the label content according to the correspondence between the pre-acquired label content and the gripping arm force.
  • the processor 701 is further configured to:
  • the robot identifies tag information of the target item and acquires tag content in the tag information
  • the robot measures humidity information of a current environment
  • the robot sets the moving speed of the robot to a moving speed corresponding to the label content and corresponding to the humidity information according to a correspondence relationship between the label content, the humidity information, and the moving speed acquired in advance.
  • After the step in which, when the robot moves to within a preset distance of the target position, the robot reduces its moving speed to the target speed, raises the shooting frame rate of the camera to the target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time, the processor 701 is further configured to perform the following:
  • The robot determines in real time whether the captured image information contains an image feature indicating that an object has dropped;
  • If the captured image information contains an image feature indicating that an object has dropped, the robot identifies the number of dropped objects according to the image feature;
  • According to a pre-acquired correspondence between the number of dropped objects and moving speed, the robot raises its moving speed to the moving speed corresponding to the number of dropped objects.
  • the processor 701 is further configured to:
  • The robot determines in real time whether the captured image information contains an image feature indicating that an object has dropped;
  • If the captured image information contains an image feature indicating that an object has dropped, the robot identifies the number of dropped objects according to the image feature and determines whether that number is greater than a preset threshold;
  • If the number of dropped objects is greater than the preset threshold, the robot stops moving forward and returns backward at a preset speed.
  • The robot may be the robot in any of the method embodiments of the present invention, and any implementation manner of the robot in those method embodiments may be implemented by the above robot, achieving the same beneficial effects, which are not described again here.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • The division of units is only a logical function division; in actual implementation there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • A unit described as a separate component may or may not be physically separate, and a component displayed as a unit may or may not be a physical unit; that is, it may be located in one place or distributed across multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiments of the present invention.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the functions may be stored in a computer readable storage medium if implemented in the form of a software functional unit and sold or used as a standalone product.
  • The technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program codes, such as a USB flash drive, a mobile hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

A robot operation method and a robot. The robot operation method includes: a robot grabs a target item and moves toward the destination position of the target item; the robot captures video information of the current scene in real time through a camera and transmits the video information in real time to a pre-bound communication terminal; when the robot moves to within a preset distance of the destination position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time. This operation method allows the user to accurately understand the robot's current scene and reduces erroneous operations.

Description

一种机器人操作方法和机器人 技术领域
本发明涉及机器人技术领域,尤其涉及一种机器人操作方法和机器人。
背景技术
随着机器人技术和通信技术的发展,目前越来越多场景中会使用到机器人,例如:使用机器人搬运物品、安装物品或者打孔等等。且现在机器人均可以远程控制。其中,远程控制可以是机器人预先与通信终端建立通信连接,用户通过通信终端操控机器人,以命令机器人完成用户指示的事件。但在实际应用过程中,由于用户均是远程操控机器人,这样用户对机器人当前工作的情况并不是特定清楚,例如:机器人工作场景出现突发情况下,用户无法及时了解,从而导致机器人错误操作。可见,目前机器人错误操作的概率比较高。
Summary of the Invention

Embodiments of the present invention provide a robot operation method and a robot, to solve the problem that the probability of erroneous robot operation is relatively high.

An embodiment of the present invention provides a robot operation method, including:

a robot grasps a target item and moves toward the destination position of the target item;

the robot captures video information of the current scene in real time through a camera device and transmits the video information in real time to a pre-bound communication terminal;

when the robot has moved to within a preset distance of the destination position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera device to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time.
Preferably, before the step in which the robot grasps the target item and moves toward the destination position of the target item, the method further includes:

the robot recognizes the label information of the target item and obtains the label content from the label information;

according to a pre-acquired correspondence between label content and gripping force, the robot sets the gripping force for grasping the target item to the gripping force corresponding to the label content.
Preferably, before the step in which the robot grasps the target item and moves toward the destination position of the target item, the method further includes:

the robot recognizes the label information of the target item and obtains the label content from the label information;

the robot measures humidity information of the current environment;

according to a pre-acquired correspondence among label content, humidity information, and moving speed, the robot sets its moving speed to the moving speed corresponding to both the label content and the humidity information.
Preferably, after the step in which, when the robot has moved to within a preset distance of the destination position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera device to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time, the method further includes:

the robot determines in real time whether the captured image information contains image features of a falling object;

if the captured image information contains image features of a falling object, the robot identifies the number of dropped objects from the image features;

according to a pre-acquired correspondence between the number of dropped objects and moving speed, the robot raises its moving speed to the moving speed corresponding to the number of dropped objects.
Preferably, after the step in which, when the robot has moved to within a preset distance of the destination position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera device to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time, the method further includes:

the robot determines in real time whether the captured image information contains image features of a falling object;

if the captured image information contains image features of a falling object, the robot identifies the number of dropped objects from the image features and determines whether the number of dropped objects exceeds a preset threshold;

if the number of dropped objects exceeds the preset threshold, the robot stops moving forward and returns backward at a preset speed.
An embodiment of the present invention further provides a robot, including:

a grasping module, configured to grasp a target item and move toward the destination position of the target item;

a transmission module, configured to capture video information of the current scene in real time through a camera device and transmit the video information in real time to a pre-bound communication terminal;

an adjustment module, configured to, when the robot has moved to within a preset distance of the destination position, reduce the robot's moving speed to a target speed, raise the shooting frame rate of the camera device to a target frame rate, shoot at the target frame rate, and transmit the captured video information to the communication terminal in real time.

Preferably, the robot further includes:

a first recognition module, configured to recognize the label information of the target item and obtain the label content from the label information;

a first setting module, configured to set, according to a pre-acquired correspondence between label content and gripping force, the gripping force for grasping the target item to the gripping force corresponding to the label content.

Preferably, the robot further includes:

a second recognition module, configured to recognize the label information of the target item and obtain the label content from the label information;

a measurement module, configured to measure humidity information of the current environment;

a second setting module, configured to set, according to a pre-acquired correspondence among label content, humidity information, and moving speed, the robot's moving speed to the moving speed corresponding to both the label content and the humidity information.

Preferably, the robot further includes:

a first determination module, configured to determine in real time whether the captured image information contains image features of a falling object;

a third recognition module, configured to identify, if the captured image information contains image features of a falling object, the number of dropped objects from the image features;

a speed-raising module, configured to raise, according to a pre-acquired correspondence between the number of dropped objects and moving speed, the moving speed to the moving speed corresponding to the number of dropped objects.

Preferably, the robot further includes:

a second determination module, configured to determine in real time whether the captured image information contains image features of a falling object;

a fourth recognition module, configured to identify, if the captured image information contains image features of a falling object, the number of dropped objects from the image features, and determine whether the number of dropped objects exceeds a preset threshold;

a stop module, configured to stop moving forward and return backward at a preset speed if the number of dropped objects exceeds the preset threshold.
In the embodiments of the present invention, the robot grasps a target item and moves toward the destination position of the target item; the robot captures video information of the current scene in real time through a camera device and transmits the video information in real time to a pre-bound communication terminal; when the robot has moved to within a preset distance of the destination position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera device to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time. This lets the user accurately understand the robot's current scene, reducing erroneous operations.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.

Fig. 1 is a schematic flowchart of a robot operation method according to an embodiment of the present invention;

Fig. 2 is a schematic structural diagram of a robot according to an embodiment of the present invention;

Fig. 3 is a schematic structural diagram of another robot according to an embodiment of the present invention;

Fig. 4 is a schematic structural diagram of another robot according to an embodiment of the present invention;

Fig. 5 is a schematic structural diagram of another robot according to an embodiment of the present invention;

Fig. 6 is a schematic structural diagram of another robot according to an embodiment of the present invention;

Fig. 7 is a schematic structural diagram of another robot according to an embodiment of the present invention.
Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

The robot provided by the embodiments of the present invention may include a chassis, wheels, tracks, a rechargeable battery, and a robotic arm mounted on the chassis. The robotic arm may include an upper arm, a telescopic arm, a forearm, a mechanical gripper, and a waist-rotation system. The robot may further include a video system, which may be composed of cameras mounted at different positions, and a locomotion system, which may use 6x6 all-wheel drive; the raising and lowering of the upper arm may be driven by two electric struts fitted with a balance bar. The robot may further include a control system with central system control and an operator console. The robot and the control system may be connected by wire or wirelessly; the video information at the robot end may be transmitted wirelessly to a control hall or a command vehicle, and the robot may be operated remotely. In the embodiments of the present invention, the robot may be any robot capable of grasping and carrying objects, such as an explosive-disposal robot or a warehouse transfer robot, which is not limited by the embodiments of the present invention.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a robot operation method according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:

101. A robot grasps a target item and moves toward the destination position of the target item.

The destination position may be a position configured by the user in advance, and the target item may be any item the robot can grasp, for example an explosive package, a commodity, or an electronic monitoring device, which is not limited by the embodiments of the present invention.

102. The robot captures video information of the current scene in real time through a camera device and transmits the video information in real time to a pre-bound communication terminal.

The camera device may be a VR camera device, an ordinary two-dimensional camera device, or the like.

103. When the robot has moved to within a preset distance of the destination position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera device to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time.

Through the above steps, when the robot is relatively close to the destination position, its moving speed is reduced and the shooting frame rate is raised, so the user can see the robot's surroundings more clearly and operate the robot accurately.
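The slow-down and frame-rate-raise rule of step 103 can be sketched as follows. This is only an illustrative sketch: the preset distance, speeds, and frame rates are invented defaults, not values taken from the disclosure.

```python
class ApproachController:
    """Minimal sketch of step 103: once the robot is within a preset
    distance of the destination, lower the moving speed and raise the
    camera frame rate. All numeric defaults are assumptions, not
    values from the disclosure."""

    def __init__(self, preset_distance_m=2.0, normal_speed_mps=1.0,
                 target_speed_mps=0.2, normal_fps=30, target_fps=60):
        self.preset_distance_m = preset_distance_m
        self.speed_mps = normal_speed_mps
        self.fps = normal_fps
        self._target_speed_mps = target_speed_mps
        self._target_fps = target_fps

    def update(self, distance_to_destination_m):
        """Apply the step-103 rule and return (speed, frame rate)."""
        if distance_to_destination_m <= self.preset_distance_m:
            self.speed_mps = self._target_speed_mps  # slower, safer approach
            self.fps = self._target_fps              # sharper remote view
        return self.speed_mps, self.fps
```

In this sketch the captured frames would be streamed to the bound terminal at every update regardless of distance; only the speed and frame rate change near the destination.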
As an optional implementation, before the step in which the robot grasps the target item and moves toward the destination position of the target item, the method further includes:

the robot recognizes the label information of the target item and obtains the label content from the label information;

according to a pre-acquired correspondence between label content and gripping force, the robot sets the gripping force for grasping the target item to the gripping force corresponding to the label content.

In this implementation, the gripping force for grasping the target item is set to the gripping force corresponding to the label content, which improves the robot's grasping performance and avoids damaging the target item.
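The label-to-gripping-force correspondence described above might be sketched as a simple lookup table. The label names and force values below are hypothetical; the disclosure only states that a pre-acquired correspondence exists.

```python
# Hypothetical label-content -> gripping-force table (values invented).
GRIP_FORCE_BY_LABEL = {
    "fragile": 5.0,    # assumed force units (e.g. newtons)
    "standard": 15.0,
    "heavy": 40.0,
}
DEFAULT_GRIP_FORCE = 10.0  # assumed fallback for unlisted labels


def grip_force_for(label_content: str) -> float:
    """Pick the gripping force that matches the item's label content."""
    return GRIP_FORCE_BY_LABEL.get(label_content, DEFAULT_GRIP_FORCE)
```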
As an optional implementation, before the step in which the robot grasps the target item and moves toward the destination position of the target item, the method further includes:

the robot recognizes the label information of the target item and obtains the label content from the label information;

the robot measures humidity information of the current environment;

according to a pre-acquired correspondence among label content, humidity information, and moving speed, the robot sets its moving speed to the moving speed corresponding to both the label content and the humidity information.

In this implementation, the moving speed is set according to the label content and the humidity information; for example, the higher the humidity, the faster the robot moves, so that the task is completed as soon as possible and the humidity is prevented from making the robot unable to complete the task.
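The two-key (label content, humidity) speed lookup above can be sketched as follows. The label names, the 70% humidity band boundary, and the speeds are all invented for illustration, with higher humidity mapping to a higher speed as the text describes.

```python
# Illustrative (label content, humidity band) -> moving speed lookup.
# All labels, bands, and speeds are assumptions, not disclosed values.

def humidity_band(humidity_pct: float) -> str:
    """Coarsely bucket a humidity reading (assumed to be a percentage)."""
    return "high" if humidity_pct >= 70 else "normal"

# (label content, humidity band) -> moving speed in m/s (assumed)
SPEED_TABLE = {
    ("explosive", "normal"): 0.5,
    ("explosive", "high"): 0.8,   # higher humidity -> finish sooner
    ("commodity", "normal"): 1.0,
    ("commodity", "high"): 1.2,
}


def moving_speed_for(label_content: str, humidity_pct: float) -> float:
    """Look up the speed matching both the label and the humidity."""
    return SPEED_TABLE[(label_content, humidity_band(humidity_pct))]
```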
As an optional implementation, after the step in which, when the robot has moved to within a preset distance of the destination position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera device to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time, the method further includes:

the robot determines in real time whether the captured image information contains image features of a falling object;

if the captured image information contains image features of a falling object, the robot identifies the number of dropped objects from the image features;

according to a pre-acquired correspondence between the number of dropped objects and moving speed, the robot raises its moving speed to the moving speed corresponding to the number of dropped objects.

In this implementation, the moving speed is set according to the number of dropped objects; for example, the more objects have dropped, the faster the robot moves, so that the task is completed as soon as possible and too many dropped objects do not prevent the robot from completing it.
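The count-to-speed correspondence above might look like the following sketch; the break points and speeds are assumptions chosen only to show the monotonic "more drops, faster speed" shape.

```python
# Illustrative correspondence between the number of dropped objects and
# the raised moving speed: more drops -> move faster to finish the run.
# The break points and speeds below are assumed, not disclosed values.
SPEED_BY_DROP_COUNT = [(0, 0.2), (1, 0.4), (3, 0.7), (5, 1.0)]  # (min count, m/s)


def raised_speed(drop_count: int) -> float:
    """Return the moving speed matching the dropped-object count."""
    speed = SPEED_BY_DROP_COUNT[0][1]
    for min_count, mps in SPEED_BY_DROP_COUNT:
        if drop_count >= min_count:
            speed = mps  # keep the speed of the highest band reached
    return speed
```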
As an optional implementation, after the step in which, when the robot has moved to within a preset distance of the destination position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera device to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time, the method further includes:

the robot determines in real time whether the captured image information contains image features of a falling object;

if the captured image information contains image features of a falling object, the robot identifies the number of dropped objects from the image features and determines whether the number of dropped objects exceeds a preset threshold;

if the number of dropped objects exceeds the preset threshold, the robot stops moving forward and returns backward at a preset speed.

In this implementation, if the number of dropped objects exceeds the preset threshold, the robot stops moving forward and returns backward at a preset speed, because in that situation continuing forward could damage both the robot and the target item.
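The abort rule above can be sketched as follows; the threshold value and the preset backward speed are assumed for illustration.

```python
# Sketch of the abort rule: once too many objects have dropped, stop
# moving forward and back out at a preset speed. Both numeric values
# are assumptions, not values from the disclosure.
DROP_THRESHOLD = 3
RETURN_SPEED_MPS = 0.3  # assumed preset backward speed


def decide_motion(drop_count: int):
    """Return (move_forward, backward_speed) for the current drop count."""
    if drop_count > DROP_THRESHOLD:
        return (False, RETURN_SPEED_MPS)  # stop and return backward
    return (True, None)                   # keep approaching the destination
```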
In the embodiments of the present invention, the robot grasps a target item and moves toward the destination position of the target item; the robot captures video information of the current scene in real time through a camera device and transmits the video information in real time to a pre-bound communication terminal; when the robot has moved to within a preset distance of the destination position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera device to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time. This lets the user accurately understand the robot's current scene, reducing erroneous operations.
Referring to Fig. 2, Fig. 2 is a schematic structural diagram of a robot according to an embodiment of the present invention. As shown in Fig. 2, the robot includes:

a grasping module 201, configured to grasp a target item and move toward the destination position of the target item;

a transmission module 202, configured to capture video information of the current scene in real time through a camera device and transmit the video information in real time to a pre-bound communication terminal;

an adjustment module 203, configured to, when the robot has moved to within a preset distance of the destination position, reduce the robot's moving speed to a target speed, raise the shooting frame rate of the camera device to a target frame rate, shoot at the target frame rate, and transmit the captured video information to the communication terminal in real time.
As an optional implementation, as shown in Fig. 3, the robot further includes:

a first recognition module 204, configured to recognize the label information of the target item and obtain the label content from the label information;

a first setting module 205, configured to set, according to a pre-acquired correspondence between label content and gripping force, the gripping force for grasping the target item to the gripping force corresponding to the label content.

As an optional implementation, as shown in Fig. 4, the robot further includes:

a second recognition module 206, configured to recognize the label information of the target item and obtain the label content from the label information;

a measurement module 207, configured to measure humidity information of the current environment;

a second setting module 208, configured to set, according to a pre-acquired correspondence among label content, humidity information, and moving speed, the robot's moving speed to the moving speed corresponding to both the label content and the humidity information.
As an optional implementation, as shown in Fig. 5, the robot further includes:

a first determination module 209, configured to determine in real time whether the captured image information contains image features of a falling object;

a third recognition module 2010, configured to identify, if the captured image information contains image features of a falling object, the number of dropped objects from the image features;

a speed-raising module 2011, configured to raise, according to a pre-acquired correspondence between the number of dropped objects and moving speed, the moving speed to the moving speed corresponding to the number of dropped objects.

As an optional implementation, as shown in Fig. 6, the robot further includes:

a second determination module 2012, configured to determine in real time whether the captured image information contains image features of a falling object;

a fourth recognition module 2013, configured to identify, if the captured image information contains image features of a falling object, the number of dropped objects from the image features, and determine whether the number of dropped objects exceeds a preset threshold;

a stop module 2014, configured to stop moving forward and return backward at a preset speed if the number of dropped objects exceeds the preset threshold.

In the embodiments of the present invention, the robot grasps a target item and moves toward the destination position of the target item; the robot captures video information of the current scene in real time through a camera device and transmits the video information in real time to a pre-bound communication terminal; when the robot has moved to within a preset distance of the destination position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera device to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time. This lets the user accurately understand the robot's current scene, reducing erroneous operations.
Referring to Fig. 7, Fig. 7 is a structural diagram of another robot according to an embodiment of the present invention. As shown in Fig. 7, the robot includes a processor 701, a memory 702, a network interface 704, and a user interface 703. The components of the robot are coupled together through a bus system 705. In addition to a data bus, the bus system 705 includes a power bus, a control bus, and a status signal bus. For clarity, however, the various buses are all labeled as the bus system 705 in Fig. 7.

The user interface 703 may include a display, a keyboard, or a pointing device (for example, a mouse, a track ball, a touch pad, or a touch screen).

It can be understood that the memory 702 in the embodiments of the present invention may be a volatile memory or a non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory 702 of the systems and methods described herein is intended to include, without limitation, these and any other suitable types of memory.

In some implementations, the memory 702 stores the following elements, executable modules or data structures, or a subset or an extended set thereof: an operating system 7021 and application programs 7022.

The operating system 7021 contains various system programs, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and handling hardware-based tasks. The application programs 7022 contain various applications, such as a media player and a browser, for implementing various application services. A program implementing the method of the embodiments of the present invention may be contained in the application programs 7022.
In the embodiments of the present invention, by calling a program or instructions stored in the memory 702, specifically a program or instructions stored in the application programs 7022, the processor 701 is configured to:

grasp a target item and move toward the destination position of the target item;

capture video information of the current scene in real time through a camera device and transmit the video information in real time to a pre-bound communication terminal;

when the robot has moved to within a preset distance of the destination position, reduce the moving speed to a target speed, raise the shooting frame rate of the camera device to a target frame rate, shoot at the target frame rate, and transmit the captured video information to the communication terminal in real time.

The method disclosed in the above embodiments of the present invention may be applied to, or implemented by, the processor 701. The processor 701 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method may be completed by an integrated logic circuit of hardware in the processor 701 or by instructions in the form of software. The processor 701 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in the embodiments of the present invention may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 702, and the processor 701 reads the information in the memory 702 and completes the steps of the above method in combination with its hardware.

It can be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), DSP devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this application, or a combination thereof.

For a software implementation, the techniques described herein may be implemented by modules (for example, procedures and functions) that perform the functions described herein. The software code may be stored in a memory and executed by a processor. The memory may be implemented in the processor or external to the processor.
As an optional implementation, before the step in which the robot grasps the target item and moves toward the destination position of the target item, the processor 701 is further configured to:

recognize the label information of the target item and obtain the label content from the label information;

set, according to a pre-acquired correspondence between label content and gripping force, the gripping force for grasping the target item to the gripping force corresponding to the label content.

As an optional implementation, before the step in which the robot grasps the target item and moves toward the destination position of the target item, the processor 701 is further configured to:

recognize the label information of the target item and obtain the label content from the label information;

measure humidity information of the current environment;

set, according to a pre-acquired correspondence among label content, humidity information, and moving speed, the robot's moving speed to the moving speed corresponding to both the label content and the humidity information.

As an optional implementation, after the step in which, when the robot has moved to within a preset distance of the destination position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera device to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time, the processor 701 is further configured to:

determine in real time whether the captured image information contains image features of a falling object;

if the captured image information contains image features of a falling object, identify the number of dropped objects from the image features;

raise, according to a pre-acquired correspondence between the number of dropped objects and moving speed, the moving speed to the moving speed corresponding to the number of dropped objects.

As an optional implementation, after the step in which, when the robot has moved to within a preset distance of the destination position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera device to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time, the processor 701 is further configured to:

determine in real time whether the captured image information contains image features of a falling object;

if the captured image information contains image features of a falling object, identify the number of dropped objects from the image features and determine whether the number of dropped objects exceeds a preset threshold;

if the number of dropped objects exceeds the preset threshold, stop moving forward and return backward at a preset speed.
It should be noted that the robot in this embodiment may be the robot of any implementation of the method embodiments of the present invention, and any implementation of the robot in the method embodiments may be implemented by the robot in this embodiment and achieve the same beneficial effects, which are not repeated here.

A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.

A person skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.

In the embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present invention.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit.

If the functions are implemented in the form of a software functional unit and sold or used as a standalone product, they may be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solution of the present invention that is essential or that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.

The above are only specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and these shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

  1. A robot operation method, comprising:
    a robot grasping a target item and moving toward the destination position of the target item;
    the robot capturing video information of the current scene in real time through a camera device and transmitting the video information in real time to a pre-bound communication terminal;
    when the robot has moved to within a preset distance of the destination position, the robot reducing its moving speed to a target speed, raising the shooting frame rate of the camera device to a target frame rate, shooting at the target frame rate, and transmitting the captured video information to the communication terminal in real time.
  2. The method according to claim 1, wherein before the step in which the robot grasps the target item and moves toward the destination position of the target item, the method further comprises:
    the robot recognizing the label information of the target item and obtaining the label content from the label information;
    the robot setting, according to a pre-acquired correspondence between label content and gripping force, the gripping force for grasping the target item to the gripping force corresponding to the label content.
  3. The method according to claim 1, wherein before the step in which the robot grasps the target item and moves toward the destination position of the target item, the method further comprises:
    the robot recognizing the label information of the target item and obtaining the label content from the label information;
    the robot measuring humidity information of the current environment;
    the robot setting, according to a pre-acquired correspondence among label content, humidity information, and moving speed, its moving speed to the moving speed corresponding to both the label content and the humidity information.
  4. The method according to claim 1, wherein after the step in which, when the robot has moved to within a preset distance of the destination position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera device to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time, the method further comprises:
    the robot determining in real time whether the captured image information contains image features of a falling object;
    if the captured image information contains image features of a falling object, the robot identifying the number of dropped objects from the image features;
    the robot raising, according to a pre-acquired correspondence between the number of dropped objects and moving speed, its moving speed to the moving speed corresponding to the number of dropped objects.
  5. The method according to claim 1, wherein after the step in which, when the robot has moved to within a preset distance of the destination position, the robot reduces its moving speed to a target speed, raises the shooting frame rate of the camera device to a target frame rate, shoots at the target frame rate, and transmits the captured video information to the communication terminal in real time, the method further comprises:
    the robot determining in real time whether the captured image information contains image features of a falling object;
    if the captured image information contains image features of a falling object, the robot identifying the number of dropped objects from the image features and determining whether the number of dropped objects exceeds a preset threshold;
    if the number of dropped objects exceeds the preset threshold, the robot stopping its forward movement and returning backward at a preset speed.
  6. A robot, comprising:
    a grasping module, configured to grasp a target item and move toward the destination position of the target item;
    a transmission module, configured to capture video information of the current scene in real time through a camera device and transmit the video information in real time to a pre-bound communication terminal;
    an adjustment module, configured to, when the robot has moved to within a preset distance of the destination position, reduce the robot's moving speed to a target speed, raise the shooting frame rate of the camera device to a target frame rate, shoot at the target frame rate, and transmit the captured video information to the communication terminal in real time.
  7. The robot according to claim 6, further comprising:
    a first recognition module, configured to recognize the label information of the target item and obtain the label content from the label information;
    a first setting module, configured to set, according to a pre-acquired correspondence between label content and gripping force, the gripping force for grasping the target item to the gripping force corresponding to the label content.
  8. The robot according to claim 6, further comprising:
    a second recognition module, configured to recognize the label information of the target item and obtain the label content from the label information;
    a measurement module, configured to measure humidity information of the current environment;
    a second setting module, configured to set, according to a pre-acquired correspondence among label content, humidity information, and moving speed, the robot's moving speed to the moving speed corresponding to both the label content and the humidity information.
  9. The robot according to claim 6, further comprising:
    a first determination module, configured to determine in real time whether the captured image information contains image features of a falling object;
    a third recognition module, configured to identify, if the captured image information contains image features of a falling object, the number of dropped objects from the image features;
    a speed-raising module, configured to raise, according to a pre-acquired correspondence between the number of dropped objects and moving speed, the moving speed to the moving speed corresponding to the number of dropped objects.
  10. The robot according to claim 6, further comprising:
    a second determination module, configured to determine in real time whether the captured image information contains image features of a falling object;
    a fourth recognition module, configured to identify, if the captured image information contains image features of a falling object, the number of dropped objects from the image features, and determine whether the number of dropped objects exceeds a preset threshold;
    a stop module, configured to stop moving forward and return backward at a preset speed if the number of dropped objects exceeds the preset threshold.
PCT/CN2017/093891 2017-07-21 2017-07-21 Robot operation method and robot WO2019014929A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/093891 WO2019014929A1 (zh) 2017-07-21 2017-07-21 Robot operation method and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/093891 WO2019014929A1 (zh) 2017-07-21 2017-07-21 Robot operation method and robot

Publications (1)

Publication Number Publication Date
WO2019014929A1 true WO2019014929A1 (zh) 2019-01-24

Family

ID=65014977

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/093891 WO2019014929A1 (zh) 2017-07-21 2017-07-21 Robot operation method and robot

Country Status (1)

Country Link
WO (1) WO2019014929A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110519681A (zh) * 2019-07-18 2019-11-29 北京无线体育俱乐部有限公司 Activity item recognition system, method, device, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103522305A (zh) * 2013-10-29 2014-01-22 中国科学院自动化研究所 Method for making a mobile manipulator approach and grasp a target object
US20150261218A1 (en) * 2013-03-15 2015-09-17 Hitachi, Ltd. Remote operation system
CN105049804A (zh) * 2015-07-14 2015-11-11 广州广日电气设备有限公司 Remote operation system for an explosive-disposal robot
CN105450905A (zh) * 2014-09-22 2016-03-30 卡西欧计算机株式会社 Imaging device, information transmission device, imaging control method, and information transmission method
CN105611053A (zh) * 2015-12-23 2016-05-25 北京工业大学 Smartphone-based mobile robot control system
CN106393049A (zh) * 2015-08-12 2017-02-15 哈尔滨理工大学 Robot for high-risk operations
CN106927079A (zh) * 2017-03-21 2017-07-07 长春理工大学 Machine-vision-based industrial explosive grasping and boxing system and method
CN107263480A (zh) * 2017-07-21 2017-10-20 深圳市萨斯智能科技有限公司 Robot operation method and robot

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150261218A1 (en) * 2013-03-15 2015-09-17 Hitachi, Ltd. Remote operation system
CN103522305A (zh) * 2013-10-29 2014-01-22 中国科学院自动化研究所 Method for making a mobile manipulator approach and grasp a target object
CN105450905A (zh) * 2014-09-22 2016-03-30 卡西欧计算机株式会社 Imaging device, information transmission device, imaging control method, and information transmission method
CN105049804A (zh) * 2015-07-14 2015-11-11 广州广日电气设备有限公司 Remote operation system for an explosive-disposal robot
CN106393049A (zh) * 2015-08-12 2017-02-15 哈尔滨理工大学 Robot for high-risk operations
CN105611053A (zh) * 2015-12-23 2016-05-25 北京工业大学 Smartphone-based mobile robot control system
CN106927079A (zh) * 2017-03-21 2017-07-07 长春理工大学 Machine-vision-based industrial explosive grasping and boxing system and method
CN107263480A (zh) * 2017-07-21 2017-10-20 深圳市萨斯智能科技有限公司 Robot operation method and robot

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110519681A (zh) * 2019-07-18 2019-11-29 北京无线体育俱乐部有限公司 Activity item recognition system, method, device, and storage medium

Similar Documents

Publication Publication Date Title
CN107263480A (zh) Robot operation method and robot
JP6782046B1 (ja) Object detection system and method based on image data
US10632616B1 (en) Smart robot part
TW202105326A (zh) Target tracking method, smart mobile device, and storage medium
US11833690B2 (en) Robotic system with dynamic motion adjustment mechanism and methods of operating same
CN107241438A (zh) Information transmission method for a robot, and robot
US20230007157A1 (en) Systems and Methods for Sampling Images
JP2014188617A (ja) Robot control system, robot, robot control method, and program
US10349035B2 (en) Automatically scanning and representing an environment having a plurality of features
WO2019014929A1 (zh) Robot operation method and robot
WO2021085429A1 (ja) Remotely operated device, remote operation system, and remote operation device
JP2021016939A (ja) System for changing the tool of a gripper device
WO2019014951A1 (zh) Information transmission method for a robot, and robot
US20230330858A1 (en) Fine-grained industrial robotic assemblies
WO2019018958A1 (zh) Method for a robot to process remote instructions, and robot
WO2019018961A1 (zh) Method for a robot to detect an object, and robot
WO2019018963A1 (zh) Robot moving speed control method and robot
WO2019014950A1 (zh) Robot control method and robot
WO2019014919A1 (zh) Robot carrying method and robot
WO2019014945A1 (zh) Robot management method and robot
WO2019018962A1 (zh) Robot verification method and robot
JP2022148261A (ja) Article collection system, article collection robot, article collection method, and article collection program
WO2019014940A1 (zh) Robot safety determination method and robot
WO2019014952A1 (zh) Method for a robot to process a target item to be grasped, and robot
CN114952769A (zh) Method and device for monitoring cabinet status

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17918345

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17918345

Country of ref document: EP

Kind code of ref document: A1