CN114571450A - Robot control method, device and storage medium

Robot control method, device and storage medium

Info

Publication number
CN114571450A
CN114571450A
Authority
CN
China
Prior art keywords
robot
preset
scene type
visual image
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210167851.XA
Other languages
Chinese (zh)
Inventor
祝丰年
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Shanghai Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Shanghai Robotics Co Ltd filed Critical Cloudminds Shanghai Robotics Co Ltd
Priority to CN202210167851.XA
Publication of CN114571450A
Legal status: Pending (current)

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1602 Programme controls characterised by the control system, structure, architecture

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The present disclosure relates to a robot control method, apparatus, device, and storage medium. The method comprises: acquiring an environment visual image collected by the robot; analyzing the scene type represented by the environment visual image; determining a target working mode corresponding to the scene type from a plurality of preset working modes, wherein different working modes are used for controlling a detection device of the robot according to different output powers and/or detection frequencies; and controlling the detection device according to the target working mode.

Description

Robot control method, device and storage medium
Technical Field
The present disclosure relates to the field of robotics, and in particular, to a robot control method, apparatus, device, and storage medium.
Background
With the rapid development of robot technology, robots with different functions keep emerging, and they are ever more widely applied in daily life, bringing great convenience to people's work and life.
In the related art, a robot may generate positioning and scene map information for its own position and posture from the various sensor data it collects (i.e., robot scanning), and may plan a path to a target position based on the sensor data (i.e., robot navigation). During scanning or navigation, the robot usually meets its working requirements by transmitting signals at rated power, and this working mode wastes power.
Disclosure of Invention
An object of the present disclosure is to provide a robot control method, apparatus, device, and storage medium to solve the problems in the related art.
In order to achieve the above object, a first aspect of embodiments of the present disclosure provides a robot control method, including:
acquiring an environment visual image collected by the robot;
analyzing the scene type represented by the environment visual image;
determining a target working mode corresponding to the scene type from a plurality of preset working modes, wherein different working modes are used for controlling a detection device of the robot according to different output power and/or detection frequency;
and controlling the detection device according to the target working mode.
Optionally, the analyzing the scene type characterized by the environment visual image includes:
and determining the scene type according to the number of the objects in the environment visual image and/or whether a target obstacle exists on a preset path of the robot.
Optionally, the determining the scene type according to the number of objects in the environment visual image includes:
determining that the environment visual image represents a first preset scene type under the condition that the number of the objects is greater than or equal to a preset number;
and under the condition that the number of the objects is smaller than the preset number, determining that the environment visual image represents a second preset scene type, wherein the complexity of the first preset scene type is larger than that of the second preset scene type.
Optionally, the determining the scene type according to whether a target obstacle exists on a preset path of the robot includes:
determining that the environment visual image represents a first preset scene type under the condition that a target obstacle exists on a preset path of the robot;
and under the condition that the target obstacle does not exist on the preset path of the robot, determining that the environment visual image represents a second preset scene type, wherein the complexity of the first preset scene type is greater than that of the second preset scene type.
Optionally, the method further comprises:
determining a first working mode as the target working mode under the condition that the environment visual image is determined to represent a first preset scene type;
accordingly, in case it is determined that the environment visual image represents a second preset scene type, a second working mode is determined as the target working mode, wherein the output power in the first working mode is greater than the output power in the second working mode.
Optionally, the acquiring an environmental visual image collected by the robot includes:
acquiring the environment visual image according to a preset period;
the method further comprises the following steps:
reducing the preset period under the condition that the environment visual image is determined to represent the first preset scene type;
increasing the preset period if it is determined that the environmental visual image represents the second preset scene type.
Optionally, the method further comprises:
determining that the target obstacle exists on a preset path of the robot under the conditions that the obstacle exists on the preset path of the robot, the obstacle is a static obstacle, and the size of the obstacle is larger than a preset threshold value; or,
and determining that the target obstacle exists on a preset path of the robot under the condition that the obstacle is a dynamic obstacle.
Optionally, the magnitude of the output power in the first operation mode is positively correlated to the magnitude of the number of the objects.
Optionally, the detection means comprises a lidar.
A second aspect of the embodiments of the present disclosure provides a robot control apparatus, the apparatus including:
the acquisition module is used for acquiring an environment visual image acquired by the robot;
the analysis module is used for analyzing the scene type represented by the environment visual image;
the determining module is used for determining a target working mode corresponding to the scene type from a plurality of preset working modes, and different working modes are used for controlling the detecting device of the robot according to different output powers and/or detecting frequencies;
and the control module is used for controlling the detection device according to the target working mode.
A third aspect of the embodiments of the present disclosure provides a robot control device including:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method in the first aspect.
A fourth aspect of the embodiments of the present disclosure provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the method of the first aspect.
Through the above technical solution, the scene type represented by the environment visual image collected by the robot can be analyzed, and the target working mode corresponding to the scene type can be determined from a plurality of preset working modes. Since different working modes control the detection device of the robot with different output powers and/or detection frequencies, the robot can be controlled to work at the output power and/or detection frequency corresponding to each scene type. The robot can therefore automatically adjust its working mode according to the scene type, avoiding the power consumption waste that a fixed working mode causes across different scenes. That is, with the method of the present disclosure, power consumption waste during robot operation can be reduced.
In addition, in the related art the robot is controlled in a fixed working mode, which not only wastes power consumption but also causes wear on the sensor. With the method of the present disclosure, the working power consumption of the robot is reduced and the service life of the sensor is prolonged.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
fig. 1 is a flowchart illustrating a robot control method according to an exemplary embodiment of the present disclosure.
Fig. 2 is a block diagram illustrating a robot control device according to an exemplary embodiment of the present disclosure.
Fig. 3 is a block diagram illustrating another robot control device according to an exemplary embodiment of the present disclosure.
Detailed Description
The following detailed description of the embodiments of the disclosure refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
In the related art, when a robot scans and navigates, a detection device (i.e., a detection-type sensor, such as a lidar) transmits a signal, and the time taken by a target object (i.e., an object that receives the transmitted signal) to reflect the signal, together with the direction of the reflection, is used to determine the position and other information of the target object. In this process, the detection device is usually kept in a fixed working mode (e.g., transmitting the signal at rated power). However, in simpler scenes the robot can work normally without maintaining rated power, so operating in a simple scene at rated power wastes power.
In view of this, the embodiments of the present disclosure provide a robot control method, which may be applied to a robot control device that may be used to control a robot during scanning and navigation of the robot to reduce power consumption during operation of the robot.
The following provides a detailed description of embodiments of the present disclosure.
Fig. 1 is a flowchart illustrating a robot control method according to an exemplary embodiment of the present disclosure. As shown in fig. 1, the robot control method includes the steps of:
and S101, acquiring an environment visual image acquired by the robot.
It should be noted that the robot is usually configured with a camera device (e.g., a camera), and the environment visual image captured by the robot can be acquired through the camera device disposed on the robot. The environment visual image may be one or more images representing the environment around the robot within the detectable range of the robot's camera.
And S102, analyzing the scene type represented by the environment visual image.
It is understood that the acquired environment visual image may be analyzed by an image processing module disposed in the robot, and the type of the scene characterized by the environment visual image may be determined according to the analysis result. The number of objects in the environment visual image, as well as the size of any obstacle on the preset path of the robot and whether that obstacle is moving or static, can be determined through image recognition techniques. The preset path of the robot may be a path the robot has planned toward a certain target position, and this path may be continuously adjusted according to the sensor data acquired by the robot during scanning or navigation. On this basis, the type of the scene represented by the environment visual image can be determined according to one or more of the number of objects, the presence or absence of obstacles, the size of the obstacles, and whether the obstacles are dynamic or static. The scene types may be preset, and different scene types have different degrees of complexity.
S103, determining a target working mode corresponding to the scene type from a plurality of preset working modes, wherein different working modes are used for controlling the detection device of the robot according to different output powers and/or detection frequencies.
It should be noted that a plurality of working modes corresponding to different scene types may be preset, so that the robot can automatically adjust its working mode according to the scene type. Because different working modes control the detection device of the robot with different output powers and/or detection frequencies, the robot can be controlled to work at the output power and/or detection frequency corresponding to each scene type, which reduces the power consumption wasted by a working mode that does not match the scene.
Wherein, the output power may refer to the output power of the detecting device, and the detecting frequency may refer to the frequency of the signal transmitted by the detecting device.
And S104, controlling the detection device according to the target working mode.
In the embodiment of the disclosure, after the scene type is determined according to the environment visual image, the detection device can be controlled according to the target working mode corresponding to the scene type, so that the robot can work in the working mode suitable for the scene type, and the power consumption waste of the robot during working is reduced.
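For illustration only, the following Python sketch shows one way steps S101 to S104 might be organized; it is not the patent's implementation, and the mode values, the scene-type keys, and the detector driver methods (set_output_power, set_frequency) are all hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkingMode:
    output_power: float      # detector output power (unit assumed, e.g. watts)
    detect_frequency: float  # detection/emission frequency (unit assumed, e.g. Hz)

# Hypothetical preset working modes keyed by scene type.
PRESET_MODES = {
    "first": WorkingMode(output_power=10.0, detect_frequency=20.0),   # complex scene
    "second": WorkingMode(output_power=4.0, detect_frequency=5.0),    # simple scene
}

def select_target_mode(scene_type: str) -> WorkingMode:
    """S103: map the analyzed scene type to one of the preset working modes."""
    return PRESET_MODES[scene_type]

def control_detector(detector, mode: WorkingMode) -> None:
    """S104: apply the target working mode; `detector` stands for a hypothetical driver object."""
    detector.set_output_power(mode.output_power)
    detector.set_frequency(mode.detect_frequency)
```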
Through the above technical solution, the scene type represented by the environment visual image collected by the robot can be analyzed, and the target working mode corresponding to the scene type can be determined from a plurality of preset working modes. Since different working modes control the detection device of the robot with different output powers and/or detection frequencies, the robot can be controlled to work at the output power and/or detection frequency corresponding to each scene type. The robot can therefore automatically adjust its working mode according to the scene type, avoiding the power consumption waste that a fixed working mode causes across different scenes. That is, with the method of the present disclosure, power consumption waste during robot operation can be reduced.
In addition, in the related art the robot is controlled in a fixed working mode, which not only wastes power consumption but also causes wear on the sensor. With the method of the present disclosure, the working power consumption of the robot is reduced and the service life of the sensor is prolonged.
Optionally, analyzing the type of scene characterized by the environmental visual image may include:
and determining the scene type according to the number of the objects in the environment visual image and/or whether a target obstacle exists on a preset path of the robot.
Determining the scene type according to the number of the objects in the environment visual image may include:
determining that the environment visual image represents a first preset scene type under the condition that the number of the objects is greater than or equal to the preset number;
and determining that the environment visual image represents a second preset scene type under the condition that the number of the objects is smaller than the preset number.
The complexity of the first preset scene type is greater than that of the second preset scene type.
It should be noted that the number of objects in the environment visual image can be determined by image recognition techniques. It can be understood that, in the case that the number of objects in the environment visual image is greater than or equal to the preset number (the preset number may be 20, which is not specifically limited by this disclosure), there are many objects around the robot and the scene is complex. In this case, it may be determined that the environment visual image represents the first preset scene type.
Correspondingly, under the condition that the number of objects is smaller than the preset number, there are few objects around the robot and the scene is simple. In this case, it may be determined that the environment visual image represents the second preset scene type.
It is understood that the first preset scene type is more complex than the second preset scene type. Generally, when the robot is in the first preset scene type, the output power and the detection frequency required to maintain normal operation are greater than those required in the second preset scene type, which helps the robot operate efficiently in a complex scene. Correspondingly, when the robot is in the second preset scene type, the robot can work normally without operating at rated power, so it can work at an output power lower than the rated power, which reduces the working power consumption of the robot and prolongs the service life of the sensor.
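As a minimal sketch of the object-count rule described above (not part of the patent; the threshold value follows the example of 20 given in the text, and the function name is hypothetical):

```python
PRESET_OBJECT_COUNT = 20  # example preset number from the text; not a fixed requirement

def scene_type_by_object_count(num_objects: int) -> str:
    """First (complex) preset scene type when there are many objects, else second (simple)."""
    return "first" if num_objects >= PRESET_OBJECT_COUNT else "second"
```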
Optionally, determining the scene type according to whether the target obstacle exists on the preset path of the robot may include:
determining that the environment visual image represents a first preset scene type under the condition that a target obstacle exists on the preset path of the robot;
and under the condition that no target obstacle exists on the preset path of the robot, determining that the environment visual image represents a second preset scene type.
The complexity of the first preset scene type is greater than that of the second preset scene type.
It should be noted that the size of an obstacle on the preset path of the robot and whether it is moving or static may be determined by image recognition techniques, where an obstacle is an object on the preset path of the robot. For example, the size of the obstacle may be determined from the object contour identified by image recognition, and whether the object is a dynamic obstacle may be determined by checking whether the position of the same object has moved across multiple successive environment visual images.
It can be understood that if no obstacle exists on the preset path, no obstacle affecting the passage of the robot exists on that path. In this case, it may be determined that the environment visual image represents the second preset scene type. If an obstacle exists on the preset path, it can further be judged whether the obstacle is a target obstacle affecting the passage of the robot.
For example, a risk value for the robot passing along the preset path may be evaluated according to the number of obstacles, the type of obstacles (dynamic or static), and the size of the obstacles in the current scene. When the risk value is greater than or equal to a risk threshold, it is determined that a target obstacle affecting the passage of the robot exists in the current scene and that the robot cannot pass safely. When the risk value is smaller than the risk threshold, it is determined that no target obstacle affecting the passage of the robot exists in the current scene and that the robot can pass safely.
Optionally, determining that a target obstacle exists on a preset path of the robot under the conditions that the obstacle exists on the preset path of the robot, the obstacle is a static obstacle, and the size of the obstacle is larger than a preset threshold; or,
and determining that the target obstacle exists on the preset path of the robot under the condition that the obstacle is a dynamic obstacle.
It should be noted that, when the obstacle is a static obstacle, if the estimated risk value is greater than or equal to the risk threshold, a target obstacle affecting the passage of the robot exists on the preset path of the robot. In this case, it may be determined that the environment visual image represents the first preset scene type. When the obstacle is a static obstacle and the estimated risk value is smaller than the risk threshold, no target obstacle affecting the passage of the robot exists on the preset path. In this case, it may be determined that the environment visual image represents the second preset scene type.
In addition, when the obstacle is a dynamic obstacle, the robot can analyze information such as the moving trajectory of the dynamic obstacle from the collected sensor data and determine, based on this information, whether it can pass along the preset path. If the preset path cannot be passed, the robot may re-plan it. Therefore, when the obstacle is a dynamic obstacle, the scene where the robot is located is complex. In this case, the estimated risk value is generally greater than or equal to the risk threshold (i.e., a target obstacle affecting the passage of the robot exists on the preset path), and it may be determined that the environment visual image represents the first preset scene type.
It is understood that the first preset scene type is more complex than the second preset scene type. Generally, when the robot is in the first preset scene type, the output power and the detection frequency required to maintain normal operation are greater than those required in the second preset scene type, which helps the robot operate efficiently in a complex scene, effectively analyze whether an obstacle prevents it from passing along the preset path, and avoid the obstacle. Correspondingly, when the robot is in the second preset scene type, the robot can work normally without operating at rated power, so it can work at an output power lower than the rated power, which reduces the working power consumption of the robot and prolongs the service life of the sensor.
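The obstacle judgment above can be sketched as follows. This is an illustrative assumption rather than the patent's implementation: the Obstacle fields, the size threshold value, and the simple any() rule standing in for the risk-value evaluation are all hypothetical.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass(frozen=True)
class Obstacle:
    size: float        # e.g. largest contour dimension (unit assumed)
    is_dynamic: bool   # True if the object moved across successive images

SIZE_THRESHOLD = 0.5   # hypothetical preset size threshold

def has_target_obstacle(obstacles_on_path: Iterable[Obstacle]) -> bool:
    """A target obstacle exists if any obstacle on the preset path is dynamic,
    or is static but larger than the preset size threshold."""
    return any(o.is_dynamic or o.size > SIZE_THRESHOLD for o in obstacles_on_path)

def scene_type_by_path(obstacles_on_path: Iterable[Obstacle]) -> str:
    """First (complex) preset scene type if a target obstacle exists, else second (simple)."""
    return "first" if has_target_obstacle(obstacles_on_path) else "second"
```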
Optionally, determining the scene type according to the number of the objects in the environment visual image and whether the target obstacle exists on the preset path of the robot may include:
determining that the environment visual image represents a first preset scene type under the condition that the number of the objects is smaller than the preset number and a target obstacle exists on the preset path of the robot; or,
and determining that the environment visual image represents a second preset scene type under the conditions that the number of the objects is smaller than the preset number and no target obstacle exists on the preset path of the robot.
It can be understood that, if the target obstacle exists on the preset path of the robot, even if the number of the objects is smaller than the preset number, the scene where the robot is located is still complex. In such a case, the assessed risk value is typically greater than or equal to a risk threshold, and it may be determined that the environmental visual image characterizes the first preset scene type.
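Combining the two cues might look like the sketch below; this is hypothetical, reuses PRESET_OBJECT_COUNT from the earlier sketch, and takes the obstacle judgment as a boolean input.

```python
def analyze_scene_type(num_objects: int, target_obstacle_on_path: bool) -> str:
    """S102: first (complex) preset scene type if there are many objects or a target
    obstacle blocks the preset path; otherwise the second (simple) type."""
    if num_objects >= PRESET_OBJECT_COUNT or target_obstacle_on_path:
        return "first"
    return "second"
```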
Optionally, the technical solution provided by the embodiment of the present disclosure may further include:
under the condition that the environment visual image is determined to represent the first preset scene type, determining the first working mode as a target working mode;
accordingly, in case it is determined that the environment visual image represents a second preset scene type, the second working mode is determined as the target working mode.
Wherein the output power in the first operating mode is greater than the output power in the second operating mode.
Because the complexity of the first preset scene type is greater than that of the second preset scene type, the output power of the first working mode corresponding to the first preset scene type is greater than that of the second working mode.
It should be noted that both the output power and the detection frequency of the robot when operating in a complex scene (i.e., the first preset scene type) may be greater than those when operating in a simple scene (i.e., the second preset scene type). On this basis, the detection frequency of the preset first working mode may be greater than that of the second working mode, or both the output power and the detection frequency of the first working mode may be greater than those of the second working mode.
Optionally, the magnitude of the output power in the first operation mode is positively correlated with the magnitude of the number of the objects.
It should be noted that the larger the number of objects in the environment visual image, the more complex the scene in which the robot is located may be. After the scene type is determined to be the first preset scene type according to the number of objects in the environment visual image, the target working mode can be determined to be the first working mode. The output power and the detection frequency of the first working mode can be dynamically adjusted according to the number of objects. Illustratively, the output power and the detection frequency in the first working mode are positively correlated with the number of objects.
Accordingly, the output power and the detection frequency of the second working mode can also be dynamically adjusted according to the number of objects. Illustratively, the output power and the detection frequency in the second working mode are positively correlated with the number of objects.
It will be appreciated that the output power in the first working mode may also be related to whether the obstacle is static or dynamic and to the size of the obstacle. In one embodiment, the more dynamic obstacles there are in the scene where the robot is located, the greater the output power and detection frequency of the robot. In another embodiment, the larger the obstacles in the scene where the robot is located, the greater the output power and detection frequency of the robot. It is understood that the greater the output power and the detection frequency of the robot, the longer the robot's working time.
It should be noted that the output power and the detection frequency of the robot are not increased without limit: the output power stops increasing once it reaches a power threshold, and likewise the detection frequency stops increasing once it reaches a frequency threshold. When either condition is met, a prompt tone may be played to remind the user that the robot cannot work normally in the current scene.
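One way to realize the positive correlation with a hard cap is a clamped linear mapping, as sketched below; the bounds and the divisor are invented for illustration (the text only requires that power and frequency grow with the object count and stop at their thresholds), and the sketch reuses WorkingMode and PRESET_OBJECT_COUNT from the earlier sketches.

```python
# Hypothetical bounds; the upper values play the role of the power/frequency thresholds.
FIRST_MODE_POWER_RANGE = (8.0, 12.0)   # min/max output power in the first working mode
FIRST_MODE_FREQ_RANGE = (15.0, 25.0)   # min/max detection frequency in the first working mode

def first_mode_for(num_objects: int) -> WorkingMode:
    """Scale output power and detection frequency with the object count, clamped at
    the thresholds so that neither grows without limit."""
    p_lo, p_hi = FIRST_MODE_POWER_RANGE
    f_lo, f_hi = FIRST_MODE_FREQ_RANGE
    # Linear growth beyond the preset object count, clamped to [0, 1].
    scale = max(0.0, min(1.0, (num_objects - PRESET_OBJECT_COUNT) / 30.0))
    return WorkingMode(output_power=p_lo + scale * (p_hi - p_lo),
                       detect_frequency=f_lo + scale * (f_hi - f_lo))
```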
Optionally, acquiring the environment visual image collected by the robot includes:
acquiring the environment visual image according to a preset period.
On this basis, the technical solution provided by the embodiment of the present disclosure may further include:
reducing a preset period under the condition that the environment visual image represents a first preset scene type;
and under the condition that the environment visual image is determined to represent the second preset scene type, increasing the preset period.
It should be understood that the environment visual image may be acquired according to a preset period, and the preset period may be inversely proportional to the complexity of the scene where the robot is located.
Illustratively, when it is determined that the environment visual image represents the first preset scene type, the robot is in a complex scene. In this case, the preset period can be shortened so that the robot acquires more environment visual images in a short time, analyzes the current scene sufficiently, and determines the target working mode in time according to the scene type.
When it is determined that the environment visual image represents the second preset scene type, the robot is in a simpler scene. In this case, the preset period can be increased, thereby reducing how often the robot uses the camera device and lowering its operating power consumption.
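A sketch of the period adjustment follows; the bounds and the halving/doubling factors are assumptions, since the text only specifies the direction of the change.

```python
def adjust_capture_period(current_period: float, scene_type: str,
                          min_period: float = 0.1, max_period: float = 2.0) -> float:
    """Shorten the image-capture period in a complex scene and lengthen it in a
    simple one, within assumed bounds (seconds)."""
    if scene_type == "first":
        return max(min_period, current_period * 0.5)  # capture more often
    return min(max_period, current_period * 2.0)      # capture less often
```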
Optionally, the detection means comprises a lidar.
It should be noted that, because a lidar has high accuracy, its detection results are relatively precise. However, a lidar is also costly, and controlling the robot in a fixed working mode, as in the related art, not only wastes power consumption but also causes wear on the lidar, which is not conducive to cost control. With the method of the present disclosure, the working power consumption of the robot is reduced, the wear on the lidar is effectively reduced, and the service life of the lidar is prolonged.
Through the above technical solution, the scene type represented by the environment visual image collected by the robot can be analyzed, and the target working mode corresponding to the scene type can be determined from a plurality of preset working modes. Since different working modes control the detection device of the robot with different output powers and/or detection frequencies, the robot can be controlled to work at the output power and/or detection frequency corresponding to each scene type. The robot can therefore automatically adjust its working mode according to the scene type, avoiding the power consumption waste that a fixed working mode causes across different scenes. That is, with the method of the present disclosure, power consumption waste during robot operation can be reduced.
In addition, in the related art the robot is controlled in a fixed working mode, which not only wastes power consumption but also causes wear on the sensor. With the method of the present disclosure, the working power consumption of the robot is reduced and the service life of the sensor is prolonged.
Based on the same inventive concept, an embodiment of the present disclosure further provides a robot control device 100. Referring to fig. 2, fig. 2 is a block diagram illustrating a robot control device 100 according to an exemplary embodiment of the present disclosure. The robot control device 100 includes:
an obtaining module 101, configured to obtain an environmental visual image acquired by the robot;
an analysis module 102, configured to analyze a scene type represented by the environmental visual image;
a first determining module 103, configured to determine a target working mode corresponding to the scene type from a plurality of preset working modes, where different working modes are used to control a detecting device of the robot according to different output powers and/or detection frequencies;
and a control module 104, configured to control the detection device according to the target operating mode.
With this device, the scene type represented by the environment visual image collected by the robot can be analyzed, and the target working mode corresponding to the scene type can be determined from a plurality of preset working modes. Since different working modes control the detection device of the robot with different output powers and/or detection frequencies, the robot can be controlled to work at the output power and/or detection frequency corresponding to each scene type. The robot can therefore automatically adjust its working mode according to the scene type, avoiding the power consumption waste that a fixed working mode causes across different scenes. That is, with the device of the present disclosure, power consumption waste during robot operation can be reduced.
In addition, in the related art the robot is controlled in a fixed working mode, which not only wastes power consumption but also causes wear on the sensor. With the device of the present disclosure, the working power consumption of the robot is reduced and the service life of the sensor is prolonged.
Optionally, the analysis module 102 is further configured to:
and determining the scene type according to the number of the objects in the environment visual image and/or whether a target obstacle exists on a preset path of the robot.
Optionally, the analysis module 102 is further configured to:
determining that the environment visual image represents a first preset scene type under the condition that the number of the objects is greater than or equal to the preset number;
and under the condition that the number of the objects is smaller than the preset number, determining that the environment visual image represents a second preset scene type, wherein the complexity of the first preset scene type is larger than that of the second preset scene type.
Optionally, the analysis module 102 is further configured to:
determining that the environment visual image represents a first preset scene type under the condition that a target obstacle exists on the preset path of the robot;
and under the condition that no target obstacle exists on the preset path of the robot, determining that the environment visual image represents a second preset scene type, wherein the complexity of the first preset scene type is greater than that of the second preset scene type.
Optionally, the robot controller 100 further comprises a second determining module for:
determining that a target obstacle exists on a preset path of the robot under the conditions that the obstacle exists on the preset path of the robot, the obstacle is a static obstacle, and the size of the obstacle is larger than a preset threshold value; or,
and determining that the target obstacle exists on the preset path of the robot under the condition that the obstacle is a dynamic obstacle.
Optionally, the robot controller 100 further comprises a third determining module configured to:
under the condition that the environment visual image is determined to represent the first preset scene type, determining the first working mode as a target working mode;
accordingly, in case it is determined that the environment visual image represents a second preset scene type, the second working mode is determined as the target working mode, wherein the output power in the first working mode is greater than the output power in the second working mode.
Optionally, the magnitude of the output power in the first operation mode is positively correlated with the magnitude of the number of the objects.
Optionally, the obtaining module 101 is further configured to:
acquiring an environment visual image according to a preset period;
the robot controller 100 further comprises a fourth determining means for:
reducing a preset period under the condition that the environment visual image represents the first preset scene type;
and under the condition that the environment visual image is determined to represent the second preset scene type, increasing the preset period.
Optionally, the detection means comprises a lidar.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Based on the same inventive concept, an embodiment of the present disclosure further provides a robot control device 200. Referring to fig. 3, fig. 3 is a block diagram illustrating another robot control device 200 according to an exemplary embodiment of the present disclosure. The robot control device 200 may include a processor 201 and a memory 202, and may also include one or more of a multimedia component 203, an input/output (I/O) interface 204, and a communication component 205.
The processor 201 is configured to control the overall operation of the robot control device 200 so as to complete all or part of the steps in the robot control method. The memory 202 is used to store various types of data to support operation of the robot control device 200, for example instructions for any application or method operating on the robot control device 200 as well as application-related data, such as contact data, messages, pictures, audio, and video. The memory 202 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk. The multimedia component 203 may include screen and audio components. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signals may further be stored in the memory 202 or transmitted through the communication component 205. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 204 provides an interface between the processor 201 and other interface modules, such as a keyboard, a mouse, or buttons, where the buttons may be virtual or physical. The communication component 205 is used for wired or wireless communication between the robot control device 200 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or a combination of one or more of them, which is not limited herein. Accordingly, the communication component 205 may comprise a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
In an exemplary embodiment, the robot controller 200 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described robot control method.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the robot control method described above is also provided. For example, the computer readable storage medium may be the above-mentioned memory 202 comprising program instructions executable by the processor 201 of the robot control device 200 to perform the above-mentioned robot control method.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the robot control method described above when executed by the programmable apparatus.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, various possible combinations will not be separately described in this disclosure.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (12)

1. A robot control method, characterized in that the method comprises:
acquiring an environment visual image acquired by the robot;
analyzing the scene type represented by the environment visual image;
determining a target working mode corresponding to the scene type from a plurality of preset working modes, wherein different working modes are used for controlling a detection device of the robot according to different output power and/or detection frequency;
and controlling the detection device according to the target working mode.
2. The method of claim 1, wherein analyzing the type of scene characterized by the environmental visual image comprises:
and determining the scene type according to the number of the objects in the environment visual image and/or whether a target obstacle exists on a preset path of the robot.
3. The method of claim 2, wherein determining the scene type according to the number of objects in the ambient visual image comprises:
determining that the environment visual image represents a first preset scene type under the condition that the number of the objects is greater than or equal to a preset number;
and under the condition that the number of the objects is smaller than the preset number, determining that the environment visual image represents a second preset scene type, wherein the complexity of the first preset scene type is larger than that of the second preset scene type.
4. The method of claim 2, wherein determining the scene type based on whether a target obstacle is present on the preset path of the robot comprises:
determining that the environment visual image represents a first preset scene type under the condition that a target obstacle exists on a preset path of the robot;
and under the condition that the target obstacle does not exist on the preset path of the robot, determining that the environment visual image represents a second preset scene type, wherein the complexity of the first preset scene type is greater than that of the second preset scene type.
5. The method according to claim 3 or 4, characterized in that the method further comprises:
determining a first working mode as the target working mode under the condition that the environment visual image is determined to represent a first preset scene type;
accordingly, in case it is determined that the environment visual image represents a second preset scene type, a second working mode is determined as the target working mode, wherein the output power in the first working mode is greater than the output power in the second working mode.
6. The method of claim 3 or 4, wherein said acquiring the robot-captured environmental visual image comprises:
acquiring the environment visual image according to a preset period;
the method further comprises the following steps:
reducing the preset period under the condition that the environment visual image is determined to represent the first preset scene type;
increasing the preset period if it is determined that the environmental visual image represents the second preset scene type.
7. The method of claim 4, further comprising:
determining that the target obstacle exists on a preset path of the robot under the conditions that the obstacle exists on the preset path of the robot, the obstacle is a static obstacle, and the size of the obstacle is larger than a preset threshold value; or,
and determining that the target obstacle exists on a preset path of the robot under the condition that the obstacle is a dynamic obstacle.
8. The method of claim 5, wherein the magnitude of the output power in the first mode of operation is positively correlated to the magnitude of the number of objects.
9. The method of claim 1, wherein the detection device comprises a lidar.
10. A robot control apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring an environment visual image acquired by the robot;
the analysis module is used for analyzing the scene type represented by the environment visual image;
the determining module is used for determining a target working mode corresponding to the scene type from a plurality of preset working modes, and different working modes are used for controlling the detecting device of the robot according to different output powers and/or detecting frequencies;
and the control module is used for controlling the detection device according to the target working mode.
11. A robot control apparatus, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 9.
CN202210167851.XA 2022-02-23 2022-02-23 Robot control method, device and storage medium Pending CN114571450A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210167851.XA CN114571450A (en) 2022-02-23 2022-02-23 Robot control method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210167851.XA CN114571450A (en) 2022-02-23 2022-02-23 Robot control method, device and storage medium

Publications (1)

Publication Number Publication Date
CN114571450A true CN114571450A (en) 2022-06-03

Family

ID=81770498

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210167851.XA Pending CN114571450A (en) 2022-02-23 2022-02-23 Robot control method, device and storage medium

Country Status (1)

Country Link
CN (1) CN114571450A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108375775A (en) * 2018-01-17 2018-08-07 上海禾赛光电科技有限公司 The method of adjustment of vehicle-mounted detection equipment and its parameter, medium, detection system
CN109017802A (en) * 2018-06-05 2018-12-18 长沙智能驾驶研究院有限公司 Intelligent driving environment perception method, device, computer equipment and storage medium
CN111624597A (en) * 2020-06-03 2020-09-04 中国科学院沈阳自动化研究所 Ice structure detection robot and detection method
WO2021248857A1 (en) * 2020-06-08 2021-12-16 特斯联科技集团有限公司 Obstacle attribute discrimination method and system, and intelligent robot
CN111999720A (en) * 2020-07-08 2020-11-27 深圳市速腾聚创科技有限公司 Laser radar parameter adjustment method, laser radar system, and computer storage medium
CN112859873A (en) * 2021-01-25 2021-05-28 山东亚历山大智能科技有限公司 Semantic laser-based mobile robot multi-stage obstacle avoidance system and method
CN114018236A (en) * 2021-09-30 2022-02-08 哈尔滨工程大学 Laser vision strong coupling SLAM method based on adaptive factor graph

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Yaonan et al., "Perception, Planning and Control of Mobile Operation Robots" (《移动作业机器人感知、规划与控制》), National Defense Industry Press, pages 257-258 *

Similar Documents

Publication Publication Date Title
US9757861B2 (en) User interface device of remote control system for robot device and method using the same
US20190084161A1 (en) Robot control apparatus, system and method
CN110245567B (en) Obstacle avoidance method and device, storage medium and electronic equipment
CN112135553B (en) Method and apparatus for performing cleaning operations
CN105208215A (en) Locating control method, device and terminal
CN111988524A (en) Unmanned aerial vehicle and camera collaborative obstacle avoidance method, server and storage medium
CN112600994B (en) Object detection device, method, storage medium, and electronic apparatus
US11004317B2 (en) Moving devices and controlling methods, remote controlling systems and computer products thereof
JP2016224547A (en) Image processing apparatus, image processing system, and image processing method
JP2020513627A (en) Intelligent guidance method and device
CN112584015B (en) Object detection method, device, storage medium and electronic equipment
CN114199268A (en) Robot navigation and guidance method and device based on voice prompt and guidance robot
JP6789905B2 (en) Information processing equipment, information processing methods, programs and communication systems
CN114571450A (en) Robot control method, device and storage medium
CN105682021A (en) Information processing method and electronic device
CN115428043A (en) Image recognition device and image recognition method
CN114659450B (en) Robot following method, device, robot and storage medium
US20210264638A1 (en) Image processing device, movable device, method, and program
KR20200094451A (en) Apparatus and method for processing image and service robot
WO2021002465A1 (en) Information processing device, robot system, and information processing method
JP7476563B2 (en) OBJECT TRACKING DEVICE, OBJECT TRACKING METHOD, AND OBJECT TRACKING PROGRAM
US10580144B2 (en) Method and system for tracking holographic object
CN112598645A (en) Contour detection method, apparatus, device and storage medium
KR20160123888A (en) System and method for work instruction
CN106604289B (en) Wireless signal transmission system and control method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination