CN116869408A - Interaction method and electronic equipment


Info

Publication number: CN116869408A
Application number: CN202310485920.6A
Authority: CN (China)
Legal status: Pending
Prior art keywords: task, tasks, cleaning robot, voice, cleaning
Original language: Chinese (zh)
Inventors: 张少华, 劳鹏飞, 叶力荣
Applicant/Assignee: Shenzhen Silver Star Intelligent Group Co Ltd
Priority: CN202310485920.6A
Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24: Floor-sweeping machines, motor-driven
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4011: Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • A47L2201/00: Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/06: Control of the cleaning action for autonomous devices; Automatic detection of the surface condition before, during or after cleaning

Abstract

An embodiment of the present application relates to the technical field of robots and discloses an interaction method and an electronic device. A voice instruction is acquired in response to a wake-up signal; a task set is identified from the voice instruction, and the tasks in the task set are prioritized to obtain a task list. Finally, based on the task list, the cleaning robot is controlled to execute the tasks in sequence. In this embodiment, when the voice instruction contains a single task, the cleaning robot is controlled to execute that task. When the voice instruction contains a plurality of tasks, the tasks are prioritized so that high-priority tasks are executed first and low-priority tasks later. The cleaning robot can thus execute tasks intelligently, achieving a more intelligent human-computer interaction effect.

Description

Interaction method and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of robots, in particular to an interaction method and electronic equipment.
Background
As an important device in the smart home, the cleaning robot is developing steadily; for example, cleaning robots increasingly interact with people and pets. Meanwhile, with the development of natural language processing technology, cleaning robots with a voice control function are gradually emerging. A cleaning robot with a voice control function can automatically recognize a voice command uttered by a user and then perform the corresponding operation in response to the recognition result of the voice command.
However, when a voice command contains a plurality of tasks, or several voice commands with different tasks are issued within a period of time, the cleaning robot's response is not intelligent enough.
Disclosure of Invention
In view of this, the interaction method and electronic device provided by some embodiments of the present application can respond to a voice instruction, execute tasks intelligently, and achieve a more intelligent human-computer interaction effect.
In a first aspect, some embodiments of the present application provide an interaction method, including:
acquiring a voice instruction in response to a wake-up signal;
identifying a task set from the voice instruction, and prioritizing the tasks in the task set to obtain a task list;
and controlling the cleaning robot to execute the tasks in sequence based on the task list.
In some embodiments, the prioritizing the tasks in the task set to obtain a task list includes:
and sorting the tasks in the task set that indicate cleaning from near to far according to the distance of the cleaning target, to obtain the task list.
In some embodiments, prioritizing tasks in a task set to obtain a task list includes:
and sorting the tasks in the task set according to their default levels, and sorting tasks with the same default level according to the order indicated by the voice instruction, to obtain the task list.
In some embodiments, the method further comprises:
during execution of a target task, if a new voice command is received, identifying a newly added task from the new voice command;
and controlling the cleaning robot to execute tasks according to the newly added task and the target task.
In some embodiments, the foregoing controlling the cleaning robot to perform the task according to the newly added task and the target task includes:
if there is an execution conflict between the newly added task and the target task, issuing a voice query, where the voice query is used to ask about the execution order between the target task and the newly added task;
and in response to a voice reply, controlling the cleaning robot to execute the tasks in the order indicated by the voice reply.
In some embodiments, the foregoing controlling the cleaning robot to perform the task according to the newly added task and the target task includes:
if there is no execution conflict between the newly added task and the target task, controlling the cleaning robot to execute the newly added task while executing the target task.
In some embodiments, the method further comprises: responding to the wake-up signal according to a timbre priority, where the timbre priority is preset.
In some embodiments, the method further comprises:
acquiring image information and/or voice information;
and if a dangerous event is identified from the image information and/or the voice information, sending sharing information reflecting the dangerous event to a terminal, so that the terminal issues reminding information to alert the user.
In a second aspect, some embodiments of the present application provide an electronic device, including:
at least one processor;
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the interaction method as in the first aspect.
In a third aspect, some embodiments of the present application provide a computer-readable storage medium storing computer-executable instructions for causing a computer device to perform the interaction method of the first aspect.
The embodiments of the present application have the following beneficial effects. In contrast to the prior art, the interaction method applied to a cleaning robot provided by the embodiments of the present application acquires a voice instruction in response to a wake-up signal, identifies a task set from the voice instruction, and prioritizes the tasks in the task set to obtain a task list; finally, based on the task list, the cleaning robot is controlled to execute the tasks in sequence. When the voice instruction contains a single task, the cleaning robot is controlled to execute that task. When it contains a plurality of tasks, the tasks are prioritized so that high-priority tasks are executed first and low-priority tasks later, enabling the cleaning robot to execute tasks intelligently and achieving a more intelligent human-computer interaction effect.
Drawings
One or more embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals indicate similar elements; the figures are not to be taken as limiting unless otherwise indicated.
FIG. 1 is a schematic diagram of an application scenario of an interaction method according to some embodiments of the present application;
FIG. 2 is a flow chart of an interaction method according to some embodiments of the application;
FIG. 3 is a flow chart of an interaction method according to other embodiments of the present application;
FIG. 4 is a flow chart of an interaction method according to other embodiments of the present application;
FIG. 5 is a flow chart of an interaction method according to other embodiments of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to some embodiments of the application.
Detailed Description
The present application will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present application, but are not intended to limit the application in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present application.
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It should be noted that, if not in conflict, the features of the embodiments of the present application may be combined with each other, which is within the protection scope of the present application. In addition, while functional block division is performed in a device diagram and logical order is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. Moreover, the words "first," "second," "third," and the like as used herein do not limit the data and order of execution, but merely distinguish between identical or similar items that have substantially the same function and effect.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. The term "and/or" as used in this specification includes any and all combinations of one or more of the associated listed items.
In addition, the technical features of the embodiments of the present application described below may be combined with each other as long as they do not conflict.
Fig. 1 is a schematic diagram of an application scenario of an interaction method according to an embodiment of the present application. The application scene comprises a cleaning robot 10, a server 20 and a terminal 30, wherein the cleaning robot 10 is in communication connection with the server 20, and the server 20 is in communication connection with the terminal 30.
The server 20 may be a physical server, a cloud server, or the like. The terminal 30 may be a mobile terminal such as a smart phone, tablet computer, etc.
In some embodiments, the terminal 30 downloads an application program (app) related to the cleaning robot 10, and the back-end module of the application is provided in the server 20. The user can thus configure, for example, usage rights for the cleaning robot 10 by operating the application on the terminal 30. The cleaning robot 10 may send shared information to the server 20, and the server 20 may forward it to the terminal 30.
In this application scenario, the user can control the cleaning robot 10 by voice, i.e., interact with the cleaning robot through speech.
The interaction method of the embodiments of the present application may be applied to the cleaning robot 10. The technical principle of the voice-based interaction method is described below.
The cleaning robot 10 is provided with a microphone (not shown) which is communicatively connected to a controller of the cleaning robot 10, so that the microphone can transmit the collected voice signal to the controller. The controller invokes the speech recognition system to perform semantic recognition and speaker recognition on the speech signal. Then, based on the recognition result, the cleaning robot 10 is controlled to perform a corresponding task such as cleaning, self-cleaning, recharging, or the like. The voice recognition system may be stored in a memory of the cleaning robot 10, or in another device (e.g., the server 20) communicatively coupled to the cleaning robot 10.
The speech recognition system may include one or more machine learning models, such as semantic recognition models, speaker verification models, speaker recognition models, speaker separation models, or sound source separation models, etc., based on deep learning training.
The semantic recognition model can perform semantic recognition on the voice signal. For example, for "I want to clean the living room at 4 pm", the semantic recognition model can recognize the semantic information: person "I", time "4 pm", and task "clean the living room"; the corresponding task is then executed based on this semantic information. It will be appreciated that the semantic recognition model is obtained by training a neural network on a large number of datasets, which may be collected by those skilled in the art or be public speech datasets, without limitation. In some embodiments, a long short-term memory network (Long Short-Term Memory, LSTM) is trained with a dataset to obtain the semantic recognition model.
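For illustration only, the following minimal Python sketch shows the kind of structured output such a semantic recognition model might produce for the example command above. It is a toy rule-based parser, not the trained LSTM model described in this embodiment; the field names and patterns are illustrative assumptions.

```python
# Toy illustration of the target data shape for "I want to clean the
# living room at 4 pm". A real system would use a trained model
# (e.g., an LSTM); this rule-based sketch is for illustration only.
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class SemanticFrame:
    person: str            # who issued the command
    task: str              # e.g., "clean"
    target: Optional[str]  # e.g., "living room"
    time: Optional[str]    # e.g., "4 pm"

def parse_command(text: str) -> SemanticFrame:
    time_match = re.search(r"\b\d{1,2}\s*(?:am|pm)\b", text, re.I)
    target_match = re.search(r"clean (?:the )?([\w ]+?)(?: at |$)", text, re.I)
    return SemanticFrame(
        person="I" if re.search(r"\bI\b", text) else "unknown",
        task="clean" if "clean" in text.lower() else "unknown",
        target=target_match.group(1).strip() if target_match else None,
        time=time_match.group(0) if time_match else None,
    )

print(parse_command("I want to clean the living room at 4 pm"))
# SemanticFrame(person='I', task='clean', target='living room', time='4 pm')
```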
The speaker verification model can determine whether the current voice signal passes verification. For example, suppose usage rights for the cleaning robot 10 are granted to four persons, and each person records a registration voice when the rights are granted. The registered voices of these four persons constitute a voiceprint library. In an actual application scenario, when the microphone of the cleaning robot 10 collects a voice signal, the signal is sent to the controller, and the controller calls the speaker verification model to identify whether the voice signal belongs to the voiceprint library. If so, the robot responds to the voice signal; if not, it does not respond, and the cleaning robot 10 cannot be controlled by that voice signal. For example, if the voiceprint library includes the registered voices of the female owner and the male owner but not that of a child, then for a command issued to the cleaning robot 10 by the child, speaker verification fails and the cleaning robot 10 does not respond. For a command issued by the female owner or the male owner, speaker verification passes, and the cleaning robot 10 performs the corresponding task based on the semantic information identified by the semantic recognition model.
In some embodiments, the speaker verification model includes a feature extraction module with a plurality of convolution layers and a similarity calculation module. The feature extraction module performs feature extraction on the voice signal and converts it into a high-dimensional feature vector, and the similarity calculation module calculates the similarity between the feature vector and each registered voice in the voiceprint library. In some embodiments, the similarity is calculated by comparing the cosine values between vectors. If the maximum similarity is greater than a set threshold, the voice signal is determined to belong to the voiceprint library, and speaker verification passes.
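For illustration only, the following is a minimal Python sketch of the verification step just described: a query embedding is compared against each registered voiceprint by cosine similarity, and verification passes if the best match clears a set threshold. The feature extraction network that produces the embeddings is assumed and not shown; the library entries, vector dimension, and the 0.7 threshold are illustrative.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_speaker(query_vec, voiceprint_library, threshold=0.7):
    """voiceprint_library: dict mapping identity -> enrolled embedding.
    Returns (accepted, best_identity)."""
    scores = {name: cosine_similarity(query_vec, vec)
              for name, vec in voiceprint_library.items()}
    best_identity = max(scores, key=scores.get)
    return scores[best_identity] >= threshold, best_identity

# Toy four-entry library as in the example above (random vectors stand
# in for real embeddings).
rng = np.random.default_rng(0)
library = {name: rng.normal(size=128) for name in
           ("female_owner", "male_owner", "guest_a", "guest_b")}
query = library["female_owner"] + 0.05 * rng.normal(size=128)
print(verify_speaker(query, library))  # (True, 'female_owner')
```

The identity with the highest similarity also serves the speaker recognition extension described below.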
The speaker recognition model can identify who uttered the current voice signal. In some embodiments, the speaker recognition model may be a further extension of the speaker verification model: for example, after speaker verification passes, the identity corresponding to the registered voice with the highest similarity is taken as the speaker.
The speaker separation model can separate, from a segment of non-overlapping speech, the signals corresponding to different speakers based on timbre. It will be appreciated that the speaker separation model distinguishes speakers by recognizing the different voiceprints in the voice signal. The speaker separation model is likewise obtained by training a neural network on a large number of datasets, which may be collected by those skilled in the art or be public speech datasets, such as the Mozilla Common Voice dataset, without limitation. In some implementations, the signals belonging to the aforementioned voiceprint library are extracted, and a response or preferential response is made to that portion of the signals. For example, in a dialogue scenario, the speaker separation model can identify the different speakers and extract from the voice signal the speech segments uttered by a registrant (a person corresponding to a registered voice in the voiceprint library). The semantic recognition model recognizes semantic information based on these segments, and the cleaning robot 10 performs the corresponding task based on the semantic information.
The sound source separation model can separate the voices of different persons from an aliased (overlapping) speech signal. For example, in a crowded multi-person environment, the voice signals corresponding to household members are recognized, so that the semantic recognition model recognizes semantic information based on those signals, and the controller controls the cleaning robot 10 to perform the corresponding task. In some embodiments, the sound source separation model may be obtained by training a neural network, such as a deep learning network or a U-Net, on a large number of datasets. In some embodiments, the sound source separation model may also be implemented using audio decomposition techniques, such as NMF (Non-negative Matrix Factorization), Sparse Coding, DICT, or the like.
As can be seen from the above, the controller in the cleaning robot 10 invokes the voice recognition system to perform semantic recognition and speaker recognition on the voice signal, and controls the cleaning robot 10 to perform the corresponding task based on the recognition result. Thus, voice interaction between the person and the cleaning robot 10 is achieved.
The cleaning robot 10 may be configured in any suitable shape to perform the cleaning operation. The cleaning robot 10 includes, but is not limited to, a sweeping robot, a dust collection robot, a mopping robot, a washing robot, or the like. In some embodiments, the cleaning robot 10 may be a cleaning robot that moves by itself based on a SLAM system.
In some embodiments, the cleaning robot 10 includes a main body, a drive wheel assembly, a camera, sensors, a lidar, and a controller. The main body may be generally oval, triangular, D-shaped, or of another shape. The controller is arranged on the main body, which is the main structure of the cleaning robot 10; the main body can adopt a shape, structure, and material (such as hard plastic or a metal such as aluminum or iron) according to the actual requirements of the cleaning robot 10, for example, the flat cylinder common to cleaning robots.
The drive wheel assembly is mounted on the main body and drives the cleaning robot 10 over the surface to be cleaned. In some embodiments, the drive wheel assembly includes a left drive wheel, a right drive wheel, and an omni wheel, the left and right drive wheels being mounted on opposite sides of the main body. The omni wheel is mounted at a forward position on the bottom of the main body; it is a movable caster that can rotate 360 degrees horizontally, so that the cleaning robot 10 can turn flexibly. The left drive wheel, the right drive wheel, and the omni wheel are installed in a triangular arrangement to improve the walking stability of the cleaning robot 10.
In some embodiments, the sensors are used to collect motion parameters and environmental space data of the cleaning robot 10 and include suitable sensors such as gyroscopes, infrared sensors, odometers, magnetometers, accelerometers, or speedometers.
In some embodiments, the lidar is provided on the body of the cleaning robot 10, for example, on the moving chassis of the body. The lidar senses obstacles in the surrounding environment of the cleaning robot 10, obtains the distances of surrounding objects, and sends them to the controller so that the controller controls the robot's movement based on those distances. In some embodiments, the lidar is a pulsed lidar, a continuous-wave lidar, or the like, and the moving chassis is a robot moving chassis such as a universal chassis, a vaulted moving chassis, or the like.
In some embodiments, the controller is disposed inside the main body and is the electronic computing core built into the robot main body, configured to perform logic operations to implement intelligent control of the robot. The controller is electrically connected to the left drive wheel, the right drive wheel, and the omni wheel, respectively. As the control core of the robot, the controller controls the robot's walking and retreating and handles some business logic. For example, the controller is configured to receive a voice command sent by the microphone and control the cleaning robot 10 to perform the corresponding task based on the voice command.
It is to be appreciated that the controller may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a single-chip, ARM (Acorn RISC Machine) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. The controller may also be any conventional processor, controller, microcontroller, or state machine. A controller may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP and/or any other such configuration, or one or more of a micro-control unit (Microcontroller Unit, MCU), a Field-programmable gate array (Field-Programmable Gate Array, FPGA), a System on Chip (SoC).
It will be appreciated that the memory of the cleaning robot 10 in embodiments of the present application includes, but is not limited to: FLASH memory, NAND FLASH memory, vertical NAND FLASH memory (VNAND), NOR FLASH memory, resistive random access memory (RRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), spin-transfer torque random access memory (STT-RAM), and the like.
It should be noted that, according to the task to be completed, in addition to the above functional modules, one or more other different functional modules (such as a water tank, a cleaning device, etc.) may be mounted on the main body of the cleaning robot 10, and cooperate with each other to perform the corresponding task.
Some interaction methods applied to cleaning robots known to the inventors can only recognize simple voice instructions. When a voice instruction contains a plurality of tasks, or voice instructions of different task categories are issued within a period of time, the cleaning robot cannot respond intelligently enough: for example, it may execute only the last of the plurality of tasks, fail to execute them as instructed, or execute them in a disordered order, which affects execution efficiency.
In view of the above problems, an embodiment of the present application provides an interaction method applied to a cleaning robot. The method acquires a voice instruction in response to a wake-up signal, identifies a task set from the voice instruction, and prioritizes the tasks in the task set to obtain a task list; finally, based on the task list, the cleaning robot is controlled to execute the tasks in sequence. When the voice instruction contains a single task, the cleaning robot is controlled to execute that task. When it contains a plurality of tasks, the tasks are prioritized so that high-priority tasks are executed first and low-priority tasks later, enabling the cleaning robot to execute tasks intelligently and achieving a more intelligent human-computer interaction effect.
It will be appreciated from the foregoing that the interaction method provided by embodiments of the present application may be implemented by a cleaning robot including a microphone, e.g., by the controller or processor of the cleaning robot, or by another device with computing capability, such as a server communicatively coupled to the cleaning robot.
The following describes an interaction method provided by the embodiment of the present application in connection with exemplary applications and implementations of the cleaning robot provided by the embodiment of the present application. Referring to fig. 2, fig. 2 is a flow chart of an interaction method according to an embodiment of the application. It will be appreciated that the subject of execution of the interaction method may be one or more controllers of the cleaning robot.
As shown in fig. 2, the method S100 may specifically include the following steps:
s10: and responding to the wake-up signal, and acquiring a voice instruction.
The wake-up signal is a specific voice used to wake up the cleaning robot; the specific voice is the pronunciation corresponding to a wake-up word. It will be appreciated that before interacting with the user, the cleaning robot requires the user to issue the wake-up signal by speaking the wake-up word, which wakes up the robot's interactive function. The wake-up word may be the name of the cleaning robot, such as "Star". The interactive function means that the cleaning robot interacts with the user through voice, i.e., the cleaning robot gives feedback on the recognized voice.
It can be understood that after the cleaning robot is started, the controller calls the voice recognition system to perform real-time voice detection, and responds if a wake-up signal is detected. In some embodiments, voice feedback may be issued when the cleaning robot wakes up; for example, the controller controls the player in the cleaning robot to reply "Master, how can I help you?". This voice feedback prompts the user to issue a voice instruction.
In some embodiments, the wake-up signal is responded to if its duration reaches a time threshold and its volume reaches a volume threshold. In some embodiments, if the volume of the wake-up signal reaches the volume threshold but the duration does not reach the time threshold, the controller controls the cleaning robot to actively ask back with voice feedback such as "Master, are you calling me?".
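For illustration only, the following minimal Python sketch captures the wake-up gating just described: the robot responds only when both the duration and volume thresholds are met, and asks back when the signal is loud enough but too brief. The threshold values are illustrative assumptions.

```python
# Sketch of the wake-up gating logic; thresholds are made-up values.
def handle_wake_signal(duration_s: float, volume_db: float,
                       time_threshold_s: float = 0.4,
                       volume_threshold_db: float = 45.0) -> str:
    if volume_db < volume_threshold_db:
        return "ignore"      # too quiet: no response
    if duration_s >= time_threshold_s:
        return "wake"        # normal wake-up, await the voice instruction
    return "ask_back"        # loud but brief: "Master, are you calling me?"

print(handle_wake_signal(0.6, 50.0))  # wake
print(handle_wake_signal(0.2, 50.0))  # ask_back
```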
In some embodiments, the collected voice signal may be pre-processed, for example by analog-to-digital conversion, endpoint detection, noise reduction, or enhancement, before the voice recognition system performs real-time voice detection. In some embodiments, the pre-processed voice signal undergoes secondary pre-processing such as audio framing, acoustic feature extraction, and vectorization, so as to convert the voice signal into a data format that facilitates detection by the voice recognition system, thereby helping to improve detection accuracy.
In some embodiments, the speech recognition system for performing real-time speech detection includes a semantic recognition model, a speaker verification model, a speaker recognition model, a speaker separation model, or a sound source separation model, etc., and the training and working mechanisms of these models may be referred to the above description, and the detailed description is not repeated here.
In some scenarios, when the cleaning robot is in a sleep state, the controller wakes up the robot's interactive function after detecting the wake-up signal; after the user issues a voice instruction, the microphone collects it and sends it to the controller, and the controller calls the voice recognition system to identify the task. In some scenarios, when the cleaning robot is in a working state, i.e., performing a task (e.g., a cleaning task), the controller likewise wakes up the interactive function after detecting the wake-up signal; the microphone collects the subsequently issued voice instruction and sends it to the controller, which calls the voice recognition system to identify the new task.
It can be understood that a voice instruction is a task command issued by the user to the cleaning robot in voice form, for example, a voice instruction containing a single task, such as "clean the living room at 4 pm", "please perform self-cleaning", or "please recharge", or a voice instruction containing a plurality of tasks, such as "clean the living room and the master bedroom", "recharge after cleaning the living room", or "clean the living room, then the master bedroom, then recharge".
The microphone collects the voice instruction and sends it to the controller, so that the controller can acquire the voice instruction for subsequent recognition.
S20: identifying a task set from the voice instruction, and prioritizing the tasks in the task set to obtain a task list.
After the voice instruction is acquired, the controller calls the semantic recognition model in the voice recognition system to identify one or more tasks from the voice instruction; these one or more tasks constitute a task set. For example, a task set may be { "clean living room", "clean master bedroom" }, or { "clean living room", "clean secondary bedroom", "recharge" }. It is understood that there is no order among the tasks in a task set; for example, there is no execution precedence between "clean living room" and "clean master bedroom".
The tasks in the task set are then prioritized to obtain a task list. In the task list, the tasks have an execution order and are sorted from high priority to low. For example, for the task set { "clean living room", "clean secondary bedroom", "recharge" }, the corresponding task list may be [ "clean secondary bedroom", "clean living room", "recharge" ], i.e., clean the secondary bedroom first, then the living room, and finally return to the base station to recharge.
In some embodiments, the above "prioritizing the tasks in the task set to obtain a task list" includes:
S21: sorting the tasks in the task set that indicate cleaning from near to far according to the distance of the cleaning target, to obtain the task list.
It is understood that a task indicating cleaning is a task that requires the cleaning robot to perform a cleaning operation. For example, in the task set { "clean living room", "clean secondary bedroom", "recharge" }, "clean living room" and "clean secondary bedroom" are tasks indicating cleaning.
When the task set contains only one task indicating cleaning, no sorting among cleaning tasks is needed. When it contains two or more, the tasks indicating cleaning are sorted from near to far according to the distance of their cleaning targets to obtain the task list. A cleaning target is the destination in a task indicating cleaning, such as the living room, the master bedroom, or the secondary bedroom. The distance of a cleaning target is the distance between it and the current position of the cleaning robot, for example, the distance between the master bedroom and the robot's current position.
In some embodiments, the controller may calculate the distance of each cleaning target based on the current position and a map pre-stored in memory. Those skilled in the art will appreciate that the current position may be obtained and the map constructed by SLAM technology; positioning and mapping with SLAM are existing techniques in the art and are not described in detail here.
After the distances of all cleaning targets are obtained, the tasks indicating cleaning are sorted from near to far by distance to obtain the task list, and the controller controls the cleaning robot to execute the tasks in that order, so that nearby cleaning targets are cleaned preferentially. For example, consider the voice instruction "clean the master bedroom and the living room" when the cleaning robot is currently in the living room, i.e., close to the living room and far from the master bedroom. After the microphone collects the voice instruction, the controller calls the semantic recognition model and identifies two tasks indicating cleaning, "clean master bedroom" and "clean living room", giving the task set { "clean master bedroom", "clean living room" }; sorting from near to far by distance then yields the task list [ "clean living room", "clean master bedroom" ]. When the controller subsequently controls the cleaning robot to execute tasks based on this list, the robot cleans the living room first and then the master bedroom.
In this embodiment, sorting the tasks indicating cleaning from near to far makes the task list more reasonable: executing tasks in this order effectively shortens the travel required for the cleaning robot to switch between cleaning targets, optimizes the path, and improves cleaning efficiency.
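For illustration only, the following minimal Python sketch shows the near-to-far ordering of S21, assuming room coordinates and the robot pose are available from the SLAM map; the coordinates and room names here are made up.

```python
import math

# Illustrative room coordinates; in practice these come from the SLAM map.
ROOM_POSITIONS = {"living room": (0.5, 1.0),
                  "master bedroom": (6.0, 4.0),
                  "secondary bedroom": (3.0, 5.0)}

def order_cleaning_tasks(cleaning_tasks, robot_pos):
    """Sort tasks of the form 'clean <room>' by straight-line distance
    from the robot's current position, nearest first."""
    def distance(task):
        room = task.removeprefix("clean ").strip()
        rx, ry = ROOM_POSITIONS[room]
        return math.hypot(rx - robot_pos[0], ry - robot_pos[1])
    return sorted(cleaning_tasks, key=distance)

task_list = order_cleaning_tasks(
    ["clean master bedroom", "clean living room"], robot_pos=(1.0, 1.0))
print(task_list)  # ['clean living room', 'clean master bedroom']
```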
It will be appreciated that for tasks in the task set that indicate cleaning, the priority is determined by the distance of the cleaning target. In some embodiments, other tasks in the task set that do not indicate cleaning, such as recharging or self-cleaning, may be sorted by their default level. The default level is preset in memory, and the controller can call it when sorting the identified tasks. The default level is described in detail below.
In some embodiments, the above "prioritizing the tasks in the task set to obtain a task list" includes:
S22: sorting the tasks in the task set according to their default levels, and sorting tasks with the same default level according to the order indicated by the voice instruction, to obtain the task list.
The default level is a level weight preset for each task. It is to be understood that the memory stores a level weight table in advance, which reflects the correspondence between each task and its default level; for example, the default level of cleaning is 3, that of self-cleaning is 3, and that of recharging is 2.
That is, the controller can determine the default level corresponding to any task in the set by calling the level weight table and looking it up. The tasks in the task set can thus be ordered by default level, and tasks of the same default level are ordered in the order indicated by the voice instruction. For example, for the voice instruction "clean the living room first and then self-clean", the default levels of cleaning and self-cleaning are both 3, and the voice instruction indicates the order between the two tasks; the task list is therefore ordered according to that indicated order, which better matches the user's intention.
In this embodiment, sorting the tasks in the task set by default level makes the resulting task list conform to the operating rules of the cleaning robot and thus more intelligent. Tasks with the same default level are sorted in the order indicated by the voice instruction, so that the task list better matches the user's intention.
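For illustration only, the following minimal Python sketch shows the ordering of S22. Because Python's sort is stable, passing the tasks in the order they were spoken and sorting on the default level alone preserves the spoken order among tasks of equal level; the level weights are illustrative.

```python
# Illustrative level weight table (higher level runs first).
DEFAULT_LEVEL = {"clean": 3, "self-clean": 3, "recharge": 2}

def build_task_list(tasks_in_spoken_order):
    """tasks_in_spoken_order: task names in the order the user said them.
    A stable sort keeps the spoken order within each level."""
    return sorted(tasks_in_spoken_order,
                  key=lambda t: DEFAULT_LEVEL.get(t, 0),
                  reverse=True)

# "Clean the living room first and then self-clean": both level 3, so
# the spoken order is kept; "recharge" (level 2) drops to the end.
print(build_task_list(["clean", "self-clean", "recharge"]))
# ['clean', 'self-clean', 'recharge']
```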
S30: and controlling the cleaning robot to sequentially execute the tasks based on the task list.
It can be appreciated that in the task list, the tasks have an execution order and are sorted from high priority to low. The controller therefore controls the cleaning robot to execute the tasks in the order given by the task list. For example, for the task list [ "clean secondary bedroom", "clean living room", "recharge" ], the controller controls the cleaning robot to clean the secondary bedroom first, then the living room, and finally to return to the base station to charge.
In this embodiment, when there is a task in the voice command, the cleaning robot is controlled to execute the corresponding task. When a plurality of tasks exist in the voice instruction, the tasks are subjected to priority ranking, the tasks with high priority are executed first, and the tasks with low priority are executed later, so that the cleaning robot can intelligently execute the tasks, and a more intelligent human-computer interaction effect is realized.
In some embodiments, referring to fig. 3, the method S100 further includes:
s40: during execution of a target task, if a new voice command is received, an added task is identified from the new voice command.
S50: and controlling the cleaning robot to execute the task according to the newly added task and the target task.
The target task is any task in the task list, for example, "clean living room". In some scenarios, while the cleaning robot is cleaning the living room, if the user calls "Star" (the wake-up word), the robot wakes up its interactive function while continuing to clean and can reply "Master, how can I help you?". If the user then issues a new voice command, such as "please clean the bedroom", the microphone collects the new command and sends it to the controller. The controller calls the semantic recognition model in the voice recognition system to identify the newly added task from the new voice command. The newly added task may be a cleaning or recharging type task, a question-answer type task, or the like.
In some scenarios, the user has just issued a voice command instructing the robot to clean the living room, and the cleaning robot identifies the target task "clean living room" from the command and begins preparing to execute it. The user then issues a new voice command within a predetermined short interval (e.g., within 5 s), instructing the robot to clean the bedroom. The cleaning robot identifies the newly added task "clean bedroom" from the new voice command.
And after receiving the new task, controlling the cleaning robot to execute the task based on the new task and the target task. In some embodiments, the cleaning robot is controlled to continue to execute the target task being executed or ready to execute, and after the target task is completed, the new task is executed. In some embodiments, the cleaning robot is controlled to stop executing the target task and execute the new task.
In this embodiment, a newly added task can be taken into account during execution of the target task, so that the cleaning robot can respond to the user's needs in time. For example, when the user wants to change or add a task, the cleaning robot can respond promptly to the new voice command without the user having to wait for the current target task to finish.
In some embodiments, the step S50 specifically includes:
s51: if there is execution conflict between the newly added task and the target task, a voice inquiry is sent out. The voice query is used to query the order of execution between the target task and the newly added task.
S52: and responding to the voice reply, and controlling the cleaning robot to sequentially execute tasks according to the voice reply.
An execution conflict between the newly added task and the target task means that the cleaning robot cannot execute both at the same time; for example, cleaning the living room conflicts with cleaning the bedroom, and cleaning the study conflicts with recharging.
For example, if the target task is cleaning the living room and the newly added task is cleaning the bedroom, there is an execution conflict between them, and the controller controls the player to issue a voice query such as "Master, should I clean the living room first and then the bedroom?". It will be appreciated that the voice query asks about the execution order between the target task and the newly added task.
The voice reply is the reply the user makes to the voice query issued by the cleaning robot. For example, if the voice reply is "yes", the cleaning robot receives a positive reply and executes the tasks in that order, i.e., cleaning the living room first and then the bedroom. If the cleaning robot receives a negative reply, it takes the newly added task as primary, stops executing the target task, and starts executing the newly added task.
In this embodiment, when there is an execution conflict between the newly added task and the target task, the user's intention is further queried by voice, and the tasks are executed as instructed by the voice reply, making the cleaning robot more intelligent.
In some embodiments, the step S50 specifically includes:
s53: if no execution conflict exists between the newly added task and the target task, the cleaning robot is controlled to execute the target task and execute the newly added task.
It can be understood that the absence of an execution conflict between the newly added task and the target task means that the cleaning robot can execute both at the same time; for example, the current target task is a cleaning task and the newly added task is a question-answer task, and the two can be executed simultaneously.
When there is no execution conflict between the newly added task and the target task, the controller controls the cleaning robot to execute the newly added task while executing the target task. In some scenarios, while the cleaning robot is cleaning the living room, the newly added task is a question-answer task, such as asking about the weather or the time; the controller may control the cleaning robot to continue the target task while answering the question, i.e., executing the newly added task.
In this embodiment, when there is no execution conflict between the newly added task and the target task, the newly added task is executed alongside the target task, so the newly added task does not affect the target task's priority, the target task need not be interrupted, and both tasks are executed simultaneously.
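For illustration only, the following minimal Python sketch combines S51 to S53: if the newly added task conflicts with the target task, the user is asked about the execution order and the reply is followed; if there is no conflict, both tasks proceed together. The conflict rule and helper names are illustrative assumptions.

```python
# Tasks that need the robot's body cannot run at the same time;
# question-answer tasks can run alongside them. Illustrative rule.
MUTUALLY_EXCLUSIVE = {"clean", "recharge"}

def conflicts(target_task: str, new_task: str) -> bool:
    kind = lambda t: t.split()[0]   # "clean living room" -> "clean"
    return (kind(target_task) in MUTUALLY_EXCLUSIVE
            and kind(new_task) in MUTUALLY_EXCLUSIVE)

def handle_new_task(target_task: str, new_task: str, ask_user) -> list:
    """Returns the tasks in execution order; ask_user(question) -> bool
    stands in for the voice query and voice reply."""
    if not conflicts(target_task, new_task):
        return [target_task, new_task]   # executed concurrently
    if ask_user(f"Should I finish '{target_task}' before '{new_task}'?"):
        return [target_task, new_task]   # positive reply: target first
    return [new_task, target_task]       # negative reply: new task takes over

order = handle_new_task("clean living room", "clean bedroom",
                        ask_user=lambda q: (print(q) or True))
print(order)  # ['clean living room', 'clean bedroom']
```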
In some embodiments, referring to fig. 4, the method S100 further includes:
S60: responding to the wake-up signal according to the timbre priority.
The timbre priority is preset and stored in the memory of the cleaning robot. It will be appreciated that timbre is one of the attributes of sound and reflects the characteristic quality of the sound made by each source; timbre can therefore reflect a person's identity.
In some embodiments, the timbre priority may be set through the application software (app) associated with the cleaning robot, e.g., by ranking the registered voices in the aforementioned voiceprint library to obtain a timbre priority list. For example, the female owner's voice takes priority over the male owner's, and the male owner's voice takes priority over the child's.
It will be appreciated that the speaker recognition model in the voice recognition system can identify who uttered the current voice signal, i.e., can recognize its timbre. The speaker separation model can separate, from a segment of non-overlapping speech, the signals corresponding to different speakers based on timbre. The sound source separation model can separate the voices of different persons from an aliased speech signal; for example, in a crowded multi-person environment, the voice signals corresponding to household members are recognized.
In some scenarios, a plurality of users issue wake-up signals within a short period. For example, when the female owner and a child wake up the cleaning robot at the same time, the robot recognizes both the female owner's voice and the child's voice through the voice recognition system. Because the female owner's voice has a higher preset timbre priority than the child's, the cleaning robot responds preferentially to the wake-up signal issued by the female owner, and then to her voice instruction. In this embodiment, setting a timbre priority and responding to the highest-priority timbre effectively prevents children from disturbing task scheduling in noisy surroundings, and high-priority users can be responded to accurately.
In some scenarios, in a crowded environment, the voice signals corresponding to household members are recognized, i.e., the timbres corresponding to household members are identified. The cleaning robot therefore responds preferentially to wake-up signals issued by household members, and then to their voice instructions. In this embodiment, setting a timbre priority and responding to high-priority timbres effectively prevents the cleaning robot from being controlled by unregistered persons, and high-priority users can be responded to accurately.
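For illustration only, the following minimal Python sketch shows the timbre-priority selection just described: among the speakers recognized in overlapping wake-up signals, the one with the highest preset priority is selected, and unregistered voices are ignored. The priority table values are illustrative.

```python
# Illustrative priority table, as configured in the companion app.
TIMBRE_PRIORITY = {"female_owner": 3, "male_owner": 2, "child": 1}

def select_speaker(detected_speakers):
    """detected_speakers: identities returned by the speaker
    recognition/separation models for overlapping wake-up signals.
    Unregistered voices (priority 0) are never selected."""
    registered = [s for s in detected_speakers if TIMBRE_PRIORITY.get(s, 0) > 0]
    return max(registered, key=TIMBRE_PRIORITY.get, default=None)

print(select_speaker(["child", "female_owner"]))  # female_owner
print(select_speaker(["stranger"]))               # None
```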
In some embodiments, when a pet makes a sound, the cleaning robot responds to it, localizes the sound, and controls the camera to photograph the pet and upload the photo to the server, so that the user can view the pet's picture in the associated application software (app) and check on the pet.
In some scenarios, a person's timbre takes precedence over a pet's. If a person issues a voice command while a pet calls, and a task in the voice command conflicts with the photographing task, the person's voice command is responded to preferentially, i.e., it has higher priority than photographing the pet. When the task in the voice command does not conflict with the photographing task, the pet can be photographed while the task in the voice command is executed. In this embodiment, the cleaning robot can interact with both people and pets without the interaction affecting its working efficiency.
In some embodiments, referring to fig. 5, the method S100 further includes:
s70: image information and/or voice information is acquired.
S80: if the dangerous event is identified from the image information and/or the voice information, the sharing information reflecting the dangerous event is sent to the terminal, so that the terminal sends reminding information for reminding the user.
The image information is collected by the camera arranged on the cleaning robot, and the camera sends the collected image information to the controller, so that the controller obtains the image information. The voice information is collected by the microphone, which sends it to the controller, so that the controller obtains the voice information.
A dangerous event may be an event that endangers life or safety, such as an elderly person falling, standing water, or an open flame. It will be appreciated that a set of dangerous events is stored in the cleaning robot or in another device communicating with it. When an identified event A belongs to the set of dangerous events, event A is determined to be a dangerous event.
In some embodiments, the controller detects the image information by using an existing image recognition model, and after the dangerous event is recognized, the controller sends sharing information to the terminal through the server, wherein the sharing information reflects the dangerous event. After receiving the sharing information, the terminal can remind the user in a popup window mode.
In some embodiments, the microphone continuously collects voice information and performs semantic recognition through the semantic recognition model in the voice recognition system; if a specific keyword, such as "medicine", is recognized, sharing information is sent to the terminal through the server, the sharing information reflecting the need to take medicine on time. After receiving the sharing information, the terminal can remind the user to take medicine promptly via a pop-up window.
It will be appreciated that in some embodiments, monitoring the hazardous event by voice and monitoring the hazardous event by image may be performed simultaneously, without limitation.
In this embodiment, the cleaning robot can identify dangerous events by monitoring voice information and/or image information and, once a dangerous event is identified, feed it back to the user promptly, so that dangerous events are reported in time and harm is reduced.
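For illustration only, the following minimal Python sketch shows one monitoring cycle of S70 and S80: recognized events from the image and voice models are checked against the stored danger-event set, and matches are pushed to the terminal through the server. The model and messaging interfaces are assumed placeholders, not an actual API.

```python
# Illustrative danger-event set stored on the robot or a connected device.
DANGER_EVENTS = {"person_fallen", "standing_water", "open_flame"}

def monitor_step(image_frame, audio_clip, image_model, voice_model, server):
    """One monitoring cycle. image_model.detect and voice_model.keywords
    are assumed interfaces returning recognized event labels."""
    events = set()
    if image_frame is not None:
        events.update(image_model.detect(image_frame))
    if audio_clip is not None:
        events.update(voice_model.keywords(audio_clip))
    for event in events & DANGER_EVENTS:
        # The server relays the sharing information to the user's
        # terminal (app), which then shows a pop-up reminder.
        server.push_to_terminal({"type": "danger_event", "event": event})
```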
In summary, the interaction method applied to a cleaning robot provided by the embodiments of the present application acquires a voice instruction in response to a wake-up signal, identifies a task set from the voice instruction, and prioritizes the tasks in the task set to obtain a task list; finally, based on the task list, the cleaning robot is controlled to execute the tasks in sequence. When the voice instruction contains a single task, the cleaning robot is controlled to execute that task. When it contains a plurality of tasks, the tasks are prioritized so that high-priority tasks are executed first and low-priority tasks later, enabling the cleaning robot to execute tasks intelligently and achieving a more intelligent human-computer interaction effect.
An embodiment of the present application also provides an electronic device. Referring to fig. 6, fig. 6 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the application. In some embodiments, the electronic device is a cleaning robot. In some embodiments, the electronic device may also be a smart device, such as a server, communicatively coupled to the cleaning robot.
As shown in fig. 6, the electronic device 300 includes at least one processor 301 and a memory 302 in communication connection (connected via a bus, with one processor taken as an example in fig. 6).
The processor 301 provides computing and control capabilities and is configured to control the electronic device 300 to perform corresponding tasks, for example, to perform the interaction method in any of the above method embodiments, which includes: acquiring a voice instruction in response to the wake-up signal; identifying a task set from the voice instruction and prioritizing the tasks in the task set to obtain a task list; and finally, based on the task list, controlling the cleaning robot to execute the tasks in sequence.
In this embodiment, when the voice instruction contains a single task, the cleaning robot is controlled to execute that task. When it contains a plurality of tasks, the tasks are prioritized so that high-priority tasks are executed first and low-priority tasks later, enabling the cleaning robot to execute tasks intelligently and achieving a more intelligent human-computer interaction effect.
The processor 301 may be a general purpose processor including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), a hardware chip, or any combination thereof; it may also be a digital signal processor (Digital Signal Processing, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), programmable logic device (programmable logic device, PLD), or a combination thereof. The PLD may be a complex programmable logic device (complex programmable logic device, CPLD), a field-programmable gate array (field-programmable gate array, FPGA), general-purpose array logic (generic array logic, GAL), or any combination thereof.
The memory 302 serves as a non-transitory computer readable storage medium, and may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the interaction methods in embodiments of the present application. The processor 301 may implement the interaction method in any of the above method embodiments by running non-transitory software programs, instructions and modules stored in the memory 302, and will not be described herein again to avoid repetition.
In particular, the memory 302 may include a volatile memory (Volatile Memory, VM), such as a random access memory (Random Access Memory, RAM); the memory 302 may also include a non-volatile memory (Non-Volatile Memory, NVM), such as a read-only memory (Read-Only Memory, ROM), a flash memory, a hard disk drive (Hard Disk Drive, HDD), a solid-state drive (Solid State Drive, SSD), or other non-transitory solid-state storage devices; the memory 302 may also include a combination of the above types of memory.
In an embodiment of the application, the memory 302 may also include memory located remotely from the processor, which may be connected to the processor via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In the embodiment of the present application, the electronic device 300 may further have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing the functions of the device, which are not described herein.
Embodiments of the present application also provide a computer-readable storage medium, such as a memory including program code executable by a processor to perform the interaction method of the above embodiments. For example, the computer-readable storage medium may be a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
Embodiments of the present application also provide a computer program product comprising one or more pieces of program code stored in a computer-readable storage medium. A processor of the electronic device reads the program code from the computer-readable storage medium and executes it to implement the method steps of the interaction method provided in the above embodiments.
It should be noted that the above-described apparatus embodiments are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by software plus a general-purpose hardware platform, or by hardware alone. All or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the present application, not to limit it. Within the idea of the application, the technical features of the above embodiments, or of different embodiments, may be combined, and the steps may be implemented in any order; many other variations of the different aspects of the application exist that are not provided in detail for the sake of brevity. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.

Claims (10)

1. An interaction method applied to a cleaning robot, comprising the following steps:
responding to a wake-up signal, and acquiring a voice instruction;
identifying a task set from the voice instruction, and sorting the tasks in the task set by priority to obtain a task list;
and controlling the cleaning robot to sequentially execute tasks based on the task list.
2. The method of claim 1, wherein sorting the tasks in the task set by priority to obtain a task list comprises:
sorting the tasks in the task set that indicate cleaning in order of the distance to the cleaning target, from nearest to farthest, to obtain the task list.
3. The method of claim 1, wherein sorting the tasks in the task set by priority to obtain a task list comprises:
sorting the tasks in the task set according to a default grade, and sorting tasks with the same default grade according to the order indicated by the voice instruction, to obtain the task list.
4. The method according to claim 1, wherein the method further comprises:
during execution of a target task, if a new voice instruction is received, identifying a newly added task from the new voice instruction;
and controlling the cleaning robot to execute tasks according to the newly added task and the target task.
5. The method of claim 4, wherein controlling the cleaning robot to execute tasks according to the newly added task and the target task comprises:
if an execution conflict exists between the newly added task and the target task, issuing a voice inquiry, wherein the voice inquiry is used to ask about the execution order of the target task and the newly added task;
and in response to a voice reply, controlling the cleaning robot to execute the tasks in sequence according to the voice reply.
6. The method of claim 4, wherein controlling the cleaning robot to execute tasks according to the newly added task and the target task comprises:
if no execution conflict exists between the newly added task and the target task, controlling the cleaning robot to execute the newly added task while executing the target task.
7. The method according to any one of claims 1-6, further comprising:
responding to the wake-up signal according to a tone priority, wherein the tone priority is preset.
8. The method according to any one of claims 1-6, further comprising:
acquiring image information and/or voice information;
and if a dangerous event is identified from the image information and/or the voice information, sending sharing information reflecting the dangerous event to a terminal, so that the terminal issues reminder information for reminding a user.
9. An electronic device, comprising:
at least one processor;
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the interaction method of any of claims 1-8.
10. A computer readable storage medium storing computer executable instructions for causing a computer device to perform the interaction method of any of claims 1-8.
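Purely as an illustration of claims 4 to 6 above, and not as part of the claims: one way the conflict handling could be sketched. The conflict rule, the helper names, and the keyboard-simulated voice reply are all assumptions; the application fixes none of them:

```python
from concurrent.futures import ThreadPoolExecutor

def area_of(task: str) -> str:
    return task.split()[-1]        # toy parse: the last word names the area

def conflicts(a: str, b: str) -> bool:
    # Assumed conflict rule: two tasks conflict when they target the same area.
    return area_of(a) == area_of(b)

def handle_new_task(target_task: str, new_task: str) -> None:
    if conflicts(target_task, new_task):
        # Claim 5: issue a voice inquiry about the execution order and follow
        # the order given in the (here keyboard-simulated) voice reply.
        reply = input(f"Run '{new_task}' before '{target_task}'? (y/n) ")
        if reply.strip().lower() == "y":
            ordered = [new_task, target_task]
        else:
            ordered = [target_task, new_task]
        for task in ordered:
            print("executing:", task)
    else:
        # Claim 6: no execution conflict, so run both tasks at the same time.
        with ThreadPoolExecutor(max_workers=2) as pool:
            list(pool.map(lambda t: print("executing:", t),
                          [target_task, new_task]))

# e.g. handle_new_task("mop the kitchen", "sweep the kitchen")  # conflict path
```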
CN202310485920.6A 2023-04-28 2023-04-28 Interaction method and electronic equipment Pending CN116869408A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310485920.6A CN116869408A (en) 2023-04-28 2023-04-28 Interaction method and electronic equipment

Publications (1)

Publication Number Publication Date
CN116869408A true CN116869408A (en) 2023-10-13

Family

ID=88266772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310485920.6A Pending CN116869408A (en) 2023-04-28 2023-04-28 Interaction method and electronic equipment

Country Status (1)

Country Link
CN (1) CN116869408A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117114249A (en) * 2023-10-24 2023-11-24 广州知韫科技有限公司 Task planning and response system based on language model
CN117114249B (en) * 2023-10-24 2024-01-26 广州知韫科技有限公司 Task planning and response system based on language model

Similar Documents

Publication Publication Date Title
JP6816767B2 (en) Information processing equipment and programs
US8209179B2 (en) Speech communication system and method, and robot apparatus
KR102577785B1 (en) Cleaning robot and Method of performing task thereof
CN106406119B (en) Service robot based on interactive voice, cloud and integrated intelligent Household monitor
US11501794B1 (en) Multimodal sentiment detection
US20200097012A1 (en) Cleaning robot and method for performing task thereof
US20180231653A1 (en) Entity-tracking computing system
US20180082682A1 (en) Aerial drone companion device and a method of operating an aerial drone companion device
KR20200084449A (en) Cleaning robot and Method of performing task thereof
WO2002099545A1 (en) Man-machine interface unit control method, robot apparatus, and its action control method
US11654554B2 (en) Artificial intelligence cleaning robot and method thereof
WO2021136131A1 (en) Information recommendation method and related device
WO2020015682A1 (en) System and method for controlling unmanned aerial vehicle
CN116869408A (en) Interaction method and electronic equipment
US11531789B1 (en) Floor plan generation for device visualization and use
US20190259384A1 (en) Systems and methods for universal always-on multimodal identification of people and things
JP2020151070A (en) Robot and control method of robot
KR102612822B1 (en) Controlling method for Artificial intelligence Moving robot
Shandu et al. AI based pilot system for visually impaired people
KR20230134109A (en) Cleaning robot and Method of performing task thereof
US11986959B2 (en) Information processing device, action decision method and program
CN211484452U (en) Self-moving cleaning robot
WO2023020269A1 (en) Self-moving robot control method and apparatus, device, and readable storage medium
CN111950431B (en) Object searching method and device
JP7501523B2 (en) Information processing device, information processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination