CN111475018B - Control method and device for a sweeping robot, sweeping robot, and electronic device - Google Patents

Control method and device for a sweeping robot, sweeping robot, and electronic device

Info

Publication number
CN111475018B
Authority
CN
China
Prior art keywords
user
sweeping robot
gesture
current area
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010214459.7A
Other languages
Chinese (zh)
Other versions
CN111475018A (en)
Inventor
孙秀丹
陈高
仲丽君
刘坤
陈功
马雅奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai, Zhuhai Lianyun Technology Co Ltd filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN202010214459.7A
Publication of CN111475018A
Application granted
Publication of CN111475018B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/35Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/36Indoor scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides a control method and device for a sweeping robot, a sweeping robot, and an electronic device, and belongs to the technical field of smart household appliances. The method comprises the following steps: detecting a user gesture during map construction; if the user gesture is a preset control gesture, determining a first movement operation corresponding to the user gesture according to a pre-stored correspondence between preset control gestures and movement operations; and executing the first movement operation to perform map construction processing on the current area. The technical solution provided by the present application can solve the problem of poor map construction accuracy in sweeping robots.

Description

Control method and device for a sweeping robot, sweeping robot, and electronic device
Technical Field
The present application relates to the technical field of smart household appliances, and in particular to a control method and device for a sweeping robot, a sweeping robot, and an electronic device.
Background
Before cleaning a room for the first time, a sweeping robot needs to map the room to be cleaned, that is, obtain a map of the room; the robot can then plan a path based on the obtained map and clean the room along the planned path.
In the related art, the sweeping robot maps the current area while moving. The specific processing is as follows: the robot acquires multiple images containing the current area and performs image recognition on them to obtain the distribution of objects in the current area. Based on that distribution, the robot determines a movement direction within the current area and then moves in that direction.
However, during mapping the sweeping robot may enter a complex area of the room, where a complex area may be an edge or corner of the room or an area where multiple obstacles are placed. Because of the limited accuracy of image recognition technology, the robot cannot determine the distribution of objects in such a complex area and therefore cannot determine a movement direction within it. The robot then marks the complex area as impassable and does not map it, so the mapping of the room to be cleaned cannot be completed accurately.
Disclosure of Invention
An object of the embodiments of the present application is to provide a control method and apparatus for a sweeping robot, an electronic device, and a storage medium, so as to solve the problem of poor map construction accuracy in sweeping robots.
The specific technical solutions are as follows:
In a first aspect, a control method for a sweeping robot is provided, the method comprising:
detecting a user gesture during map construction;
if the user gesture is a preset control gesture, determining a first movement operation corresponding to the user gesture according to a pre-stored correspondence between preset control gestures and movement operations;
and executing the first movement operation to perform map construction processing on the current area.
Optionally, after executing the first movement operation to perform map construction processing on the current area, the method further comprises:
if a voice instruction is received, determining a second movement operation indicated by the voice instruction;
and executing the second movement operation to perform map construction processing on the current area.
Optionally, before detecting the user gesture, the method further comprises:
acquiring an image containing the current area;
determining, by performing image recognition on the image, whether the current area satisfies a complex-area determination condition;
and if the current area satisfies the complex-area determination condition, outputting control request information to prompt the user to make a user gesture.
Optionally, the determining, by performing image recognition on the image, whether the current area satisfies the complex-area determination condition comprises:
performing image recognition on the image to obtain a recognition result for the current area;
and if the recognition result indicates that the current area is a complex area, determining that the current area satisfies the complex-area determination condition.
Optionally, the determining, by performing image recognition on the image, whether the current area satisfies the complex-area determination condition comprises:
and if the recognition time required to perform image recognition on the image exceeds a preset recognition time threshold, determining that the current area satisfies the complex-area determination condition.
In a second aspect, a sweeping robot is provided, comprising a processing component, a camera component, and a map construction component, wherein:
the map construction component is configured to perform map construction processing on the current area;
the camera component is configured to photograph the user to obtain a user image including a user gesture;
the processing component is configured to detect the user gesture based on the user image during map construction; if the user gesture is a preset control gesture, determine a first movement operation corresponding to the user gesture according to a pre-stored correspondence between preset control gestures and movement operations; and execute the first movement operation so that the map construction component performs map construction processing on the current area.
Optionally, the camera component is a wide-angle camera.
Optionally, the processing component is further configured to determine, when a voice instruction is received, a second movement operation indicated by the voice instruction, and execute the second movement operation to perform map construction processing on the current area.
Optionally, the processing component is further configured to acquire an image containing the current area; determine, by performing image recognition on the image, whether the current area satisfies a complex-area determination condition; and, if the current area satisfies the complex-area determination condition, output control request information to prompt the user to make a user gesture.
Optionally, the processing component is specifically configured to perform image recognition on the image to obtain a recognition result for the current area, and, if the recognition result indicates that the current area is a complex area, determine that the current area satisfies the complex-area determination condition.
Optionally, the processing component is specifically configured to determine that the current area satisfies the complex-area determination condition when the recognition time required to perform image recognition on the image exceeds a preset recognition time threshold.
In a third aspect, a control apparatus for a sweeping robot is provided, the apparatus comprising:
a detection module, configured to detect a user gesture during map construction;
a first determining module, configured to determine, when the user gesture is a preset control gesture, a first movement operation corresponding to the user gesture according to a pre-stored correspondence between preset control gestures and movement operations;
and an execution module, configured to execute the first movement operation to perform map construction processing on the current area.
Optionally, the apparatus further comprises:
a second determining module, configured to determine, when a voice instruction is received, a second movement operation indicated by the voice instruction;
the execution module is further configured to execute the second movement operation to perform map construction processing on the current area.
Optionally, the apparatus further comprises:
an acquisition module, configured to acquire an image containing the current area;
a third determining module, configured to determine, by performing image recognition on the image, whether the current area satisfies a complex-area determination condition;
and an output module, configured to output control request information to prompt the user to make a user gesture when the current area satisfies the complex-area determination condition.
Optionally, the third determining module comprises:
an image recognition submodule, configured to perform image recognition on the image to obtain a recognition result for the current area;
and a determination submodule, configured to determine that the current area satisfies the complex-area determination condition when the recognition result indicates that the current area is a complex area.
Optionally, the third determining module is specifically configured to determine that the current area satisfies the complex-area determination condition when the recognition time required to perform image recognition on the image exceeds a preset recognition time threshold.
In a fourth aspect, the present application provides an electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the communication bus;
the memory is used for storing a computer program;
the processor is configured to, when executing the computer program, implement the method steps of the first aspect.
In a fifth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method steps of the first aspect described above.
In a sixth aspect, embodiments of the present application further provide a computer program product containing instructions which, when run on a computer, cause the computer to perform the method steps of the first aspect.
The embodiment of the application has the following beneficial effects:
the embodiment of the application provides a control method and device of a sweeping robot, the sweeping robot and electronic equipment, and the user gesture is detected in the map building process; if the user gesture is a preset control gesture, determining a first moving operation corresponding to the user gesture according to a pre-stored corresponding relation between the preset control gesture and the moving operation; a first move operation is performed to map the current area. Because the first moving operation corresponding to the preset control gesture can be executed under the condition that the user gesture is the preset control gesture, the map construction processing can be carried out on the current area, and the map construction accuracy can be improved.
Of course, not all of the above advantages need to be achieved in practicing any one product or method of the present application.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below; it is apparent that those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a control method of a sweeping robot according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a control method of a sweeping robot according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a sweeping robot provided in the embodiment of the present application;
fig. 4 is a schematic structural diagram of a control device of a sweeping robot according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
An embodiment of the present application provides a control method for a sweeping robot, which can be applied to a sweeping robot. Before cleaning a room for the first time, the sweeping robot may map the room, i.e., obtain a map of the room.
With the technical solution provided by the embodiments of the present application, while mapping a room the sweeping robot can map each area quickly and accurately, particularly the complex areas that are difficult to recognize and prone to recognition errors in the related art. A complex area may be, for example, an edge or corner of the room or an area where multiple obstacles are placed.
The sweeping robot can then plan a cleaning path based on the acquired map and clean the room along that path, thereby avoiding cleaning blind spots and improving the cleaning effect.
The control method of the sweeping robot provided in the embodiments of the present application is described in detail below with reference to specific embodiments. As shown in fig. 1, the specific steps are as follows:
Step 101: detecting a user gesture during map construction.
In implementation, during map construction the sweeping robot may photograph the user to obtain a user image and then perform image recognition on the user image to obtain the user gesture.
In this embodiment of the application, the timing with which the sweeping robot detects the user gesture can vary. In one possible implementation, the robot detects user gestures in real time. In another possible implementation, the robot detects the user gesture only after the current area satisfies the complex-area determination condition; this processing is described in detail later.
Step 102: if the user gesture is a preset control gesture, determining a first movement operation corresponding to the user gesture according to a pre-stored correspondence between preset control gestures and movement operations.
The sweeping robot may pre-store a correspondence between preset control gestures and movement operations, where a movement operation may be "move left", "move east", "turn left 45 degrees", "move forward", "move backward", "stop", and the like. The preset control gestures may take various forms; for example, a preset control gesture may be one of a left palm slide, a right palm slide, a fist, a raised palm, and an open palm. The pre-stored correspondence may be, for example: the preset control gesture "left palm slide" corresponds to the movement operation "move left", and the preset control gesture "raised palm" corresponds to the movement operation "move forward".
In implementation, the sweeping robot determines whether the user gesture is a preset control gesture. If it is, the robot searches the pre-stored correspondence for the movement operation corresponding to the user gesture to obtain the first movement operation. If the user gesture is not a preset control gesture, the robot performs no subsequent processing.
In the description of the present application, the directions indicated by "left", "right", "forward", and "backward" are relative to the current pose of the sweeping robot, while the directions indicated by "east", "south", "west", and "north" are relative to its geographic position. These terms are used only for convenience of description; they do not require the application to operate in a specific direction and should not be construed as limiting.
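Purely as an illustrative sketch (the patent does not specify a concrete data structure or API), the pre-stored correspondence and the step-102 lookup might look as follows in Python; the gesture labels and operation names are assumptions chosen to match the examples above:

```python
# Assumed encoding of the pre-stored correspondence between preset
# control gestures and movement operations (all names are illustrative).
GESTURE_TO_MOVEMENT = {
    "left_palm_slide": "move_left",
    "raised_palm": "move_forward",
    "fist": "stop",
    "open_palm": "move_backward",
}

def lookup_first_movement(user_gesture):
    """Step 102: return the movement operation for a preset control
    gesture, or None when the gesture is not a preset control gesture
    (in which case the robot performs no subsequent processing)."""
    return GESTURE_TO_MOVEMENT.get(user_gesture)
```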
Step 103: executing the first movement operation to perform map construction processing on the current area.
In implementation, the sweeping robot executes the first movement operation, and while executing it the robot performs map construction processing on the current area.
For example, where the current area is a gap between two pieces of furniture, the sweeping robot may detect a user gesture. After determining that the gesture is the preset control gesture "raised palm", the robot executes the corresponding first movement operation "move forward", and while moving forward it performs map construction processing on the current area.
In this embodiment of the application, the ways in which the sweeping robot performs map construction processing include, but are not limited to, image acquisition, sonar acquisition, infrared acquisition, radar acquisition, and the like; the specific processing is not repeated in this application.
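Tying steps 101 to 103 together, a minimal control-loop sketch under the same assumptions might read as follows; detect_gesture, execute_movement, map_current_area, mapping_done, and the gesture_to_movement dict are hypothetical hooks standing in for the robot's camera, drive, and map construction components:

```python
def gesture_guided_mapping(detect_gesture, execute_movement,
                           map_current_area, mapping_done,
                           gesture_to_movement):
    """Steps 101-103 in one loop: while the map is being built, watch
    for preset control gestures and let them steer the robot through
    areas it cannot resolve on its own. Every callable and the dict
    are assumed interfaces, not part of the patent text."""
    while not mapping_done():
        user_gesture = detect_gesture()                         # step 101
        first_movement = gesture_to_movement.get(user_gesture)  # step 102
        if first_movement is not None:
            execute_movement(first_movement)                    # step 103
        map_current_area()  # mapping continues during the movement
```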
In this embodiment of the application, the sweeping robot can detect a user gesture during map construction; if the user gesture is a preset control gesture, it determines the first movement operation corresponding to the user gesture according to the pre-stored correspondence between preset control gestures and movement operations; and it executes the first movement operation to perform map construction processing on the current area. Because the first movement operation corresponding to the preset control gesture is executed whenever the user gesture is a preset control gesture, map construction processing can be performed on the current area, and map construction accuracy can be improved.
Furthermore, with the technical solution provided by this embodiment, the sweeping robot can be guided to turn and move in areas crowded with furniture, so map construction can be completed more quickly and accurately.
Optionally, after making a user gesture, the user can also change the movement operation of the sweeping robot by issuing a voice instruction. As shown in fig. 2, the specific processing comprises:
Step 201: if a voice instruction is received, determining a second movement operation indicated by the voice instruction.
In implementation, while the sweeping robot is executing the first movement operation, the user may issue a voice instruction if the first movement operation differs from the movement the user expected when making the gesture, or if the user simply wants to change the robot's movement. The sweeping robot receives the voice instruction and performs semantic analysis on it to obtain the second movement operation it indicates.
For example, if the sweeping robot receives the voice instruction "turn 45 degrees to the left" while executing the first movement operation "move left", it determines that the second movement operation indicated by the voice instruction is "turn 45 degrees to the left".
Step 202: executing the second movement operation to perform map construction processing on the current area.
In implementation, the sweeping robot executes the second movement operation, and while executing it the robot performs map construction processing on the current area.
In this embodiment of the application, the sweeping robot determines, upon receiving a voice instruction, the second movement operation indicated by that instruction, and then executes the second movement operation to perform map construction processing on the current area. The robot can thus change its movement operation by receiving a voice instruction, realizing a voice control function. Furthermore, because the control priority of a voice instruction is set above that of a user gesture, an error can be corrected quickly when the first movement operation differs from what the user expected when making the gesture, i.e., after a gesture recognition error occurs. The user can therefore control the running state of the sweeping robot more quickly and accurately, which optimizes the gesture control function.
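A hedged sketch of the voice override in steps 201 and 202 (parse_voice_command is an assumed stand-in for the robot's semantic analysis of speech; the patent requires only that a received voice instruction take priority over the gesture-derived operation):

```python
def resolve_movement(first_movement, voice_command, parse_voice_command):
    """Steps 201-202: a voice instruction, when present, overrides the
    movement operation derived from the user gesture, so that a
    mis-recognized gesture can be corrected immediately."""
    if voice_command is not None:
        # Semantic analysis of the instruction, e.g. "turn 45 degrees
        # to the left" -> "turn_left_45" (labels are illustrative).
        second_movement = parse_voice_command(voice_command)
        if second_movement is not None:
            return second_movement  # voice has priority over the gesture
    return first_movement
```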
Optionally, the sweeping robot may determine whether the current area satisfies a complex-area determination condition and detect the user gesture only when it does. The specific processing comprises the following steps:
Step 1: acquiring an image containing the current area.
In implementation, the sweeping robot may photograph the current area through a preset monocular camera, binocular camera, or wide-angle camera to obtain an image containing the current area.
Step 2: performing image recognition on the image to determine whether the current area satisfies the complex-area determination condition.
Here, the complex-area determination condition relates to the recognition time and/or the recognition result of the image recognition.
In implementation, the sweeping robot performs image recognition on the image containing the current area. It may then determine whether the current area satisfies the complex-area determination condition based on the recognition result, as described in detail later.
Alternatively, the sweeping robot may determine whether the current area satisfies the complex-area determination condition based on the recognition time required to perform image recognition on the image, as described in detail later.
Alternatively, the sweeping robot may combine the recognition time required for image recognition with the recognition result to determine whether the current area satisfies the complex-area determination condition, as described in detail later.
If the current area satisfies the complex-area determination condition, the sweeping robot performs step 3; if it does not, the robot performs step 4.
Step 3: outputting control request information to prompt the user to make a user gesture.
In implementation, the sweeping robot can output the control request information in several ways. In one possible implementation, an audio playback component is preset in the robot, and the robot plays the control request information through it to prompt the user to make a gesture.
In another possible implementation, the sweeping robot maintains a connection to the user's terminal. The robot sends the control request information to the user terminal, which displays it, thereby prompting the user to make a user gesture.
Step 4: performing no subsequent processing.
In this embodiment of the application, the sweeping robot acquires an image containing the current area and determines, by performing image recognition on the image, whether the current area satisfies the complex-area determination condition. If it does, the robot outputs control request information to prompt the user to make a user gesture; if it does not, the robot performs no subsequent processing. In this way, when the current area is a complex area, the sweeping robot can complete map construction of the complex area under the user's guidance, improving map construction accuracy.
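The check-and-prompt flow of steps 1 to 4 could be sketched as below; capture_area_image, recognize_area, is_complex_area, and notify_user (audio playback or a message sent to the user terminal) are hypothetical interfaces, not APIs named in the patent:

```python
def prompt_if_complex(capture_area_image, recognize_area,
                      is_complex_area, notify_user):
    """Steps 1-4: photograph the current area, decide whether it meets
    the complex-area determination condition, and if so ask the user
    for a control gesture; otherwise perform no subsequent processing."""
    image = capture_area_image()                # step 1
    if is_complex_area(recognize_area, image):  # step 2
        # Step 3: the prompt text is illustrative only.
        notify_user("Please make a control gesture to guide me.")
        return True
    return False                                # step 4
```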
Optionally, an embodiment of the present application provides an implementation in which the sweeping robot determines whether the current area satisfies the complex-area determination condition based on the recognition result obtained by performing image recognition on the image containing the current area. The specific determination process comprises:
Step one: performing image recognition on the image to obtain a recognition result for the current area.
The recognition result may be a label indicating an area type such as a kitchen, a living room, a bedroom, or a complex area; alternatively, it may be a positive label indicating that the area is a complex area, or a negative label indicating that it is not.
In implementation, the sweeping robot performs image recognition on the image containing the current area to obtain the recognition result for the current area.
Step two: if the recognition result indicates that the current area is a complex area, determining that the current area satisfies the complex-area determination condition.
In this embodiment of the application, the sweeping robot performs image recognition on the image containing the current area to obtain a recognition result, and if the recognition result indicates that the current area is a complex area, it determines that the complex-area determination condition is satisfied. The sweeping robot can thus accurately determine whether the current area is a complex area.
Optionally, a complex area contains many objects, or objects placed in a disorderly way, which makes image recognition of the area difficult and means a long time is needed to obtain a recognition result. The sweeping robot can therefore determine whether the current area satisfies the complex-area determination condition based on the recognition time required to perform image recognition on the image containing the current area. The specific determination process is:
if the recognition time required to perform image recognition on the image exceeds a preset recognition time threshold, determining that the current area satisfies the complex-area determination condition.
The sweeping robot may store a preset recognition time threshold, which may be, for example, 1 s.
In implementation, while performing image recognition on the image containing the current area, the sweeping robot may determine that the current area satisfies the complex-area determination condition once the time consumed by the recognition reaches the preset recognition time threshold.
If the recognition result is obtained before the recognition time reaches the preset recognition time threshold, the sweeping robot may determine that the current area does not satisfy the complex-area determination condition; alternatively, it may judge whether the condition is satisfied according to the recognition result for the current area.
In this embodiment of the application, the sweeping robot determines that the current area satisfies the complex-area determination condition when the recognition time required for image recognition of the current area exceeds the preset recognition time threshold. The robot can thus identify a complex area conveniently and quickly, avoiding spending too long recognizing and judging whether the current area is a complex area.
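One possible realization of this timeout criterion, and of the is_complex_area hook sketched earlier, is given below under the assumption that recognition runs in a worker thread; the patent prescribes only the threshold (e.g., 1 s), not a threading model, and the "complex_area" label is likewise an assumption:

```python
import concurrent.futures

RECOGNITION_TIME_THRESHOLD_S = 1.0  # preset recognition time threshold

def is_complex_by_timeout(recognize_area, image):
    """Determine the current area to be complex when image recognition
    fails to finish within the preset threshold; otherwise fall back
    to the recognition result itself."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(recognize_area, image)
    try:
        result = future.result(timeout=RECOGNITION_TIME_THRESHOLD_S)
    except concurrent.futures.TimeoutError:
        return True  # recognition exceeded the threshold: complex area
    finally:
        pool.shutdown(wait=False)
    return result == "complex_area"
```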
Optionally, a wide-angle camera can be preset in the sweeping robot, through which the robot photographs the user. In the related art, the sweeping robot is equipped with the monocular or binocular camera required for map construction, and because of the limited shooting angle of such a camera, the robot can capture a user image containing the user only when the user stands at a preset position. Compared with the related art, photographing the user through a wide-angle camera means the user does not need to stand at a preset position, which improves the user experience.
Optionally, before map construction the sweeping robot may be communicatively connected to a user terminal used by the user. The robot can photograph the preset control gestures made by the user and store them. The user can set, through the user terminal, the movement operation corresponding to each preset control gesture, and the terminal sends these settings to the robot. The robot then stores each preset control gesture together with its movement operation, obtaining the correspondence between preset control gestures and movement operations.
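As an illustrative sketch only, the registration flow described in the preceding paragraph (photographing a gesture, then binding it to a movement operation chosen on the user terminal) might be modeled like this; every interface shown is hypothetical:

```python
def register_control_gesture(photograph_gesture,
                             receive_operation_from_terminal,
                             gesture_to_movement):
    """Bind a newly photographed preset control gesture to the movement
    operation the user selected on the paired terminal, extending the
    stored correspondence used in step 102."""
    gesture_label = photograph_gesture()          # e.g. "raised_palm"
    movement = receive_operation_from_terminal()  # e.g. "move_forward"
    gesture_to_movement[gesture_label] = movement
    return gesture_label, movement
```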
Optionally, an image recognition model for recognizing a user gesture from an image containing the gesture may be preset in the sweeping robot, and the robot may train this model to improve gesture recognition accuracy. For the specific training process, reference may be made to the related-art process of training a deep learning network model on images, which is not repeated in this application.
Based on the same technical concept, an embodiment of the present application further provides a sweeping robot. As shown in fig. 3, the sweeping robot comprises a processing component (not shown in the figure), a camera component 1, and a map construction component (not shown in the figure), wherein:
the map construction component is configured to perform map construction processing on the current area;
the camera component 1 is configured to photograph the user to obtain a user image including a user gesture;
the processing component is configured to detect the user gesture based on the user image during map construction; if the user gesture is a preset control gesture, determine a first movement operation corresponding to the user gesture according to a pre-stored correspondence between preset control gestures and movement operations; and execute the first movement operation so that the map construction component performs map construction processing on the current area.
Optionally, the camera component is a wide-angle camera.
A wide-angle camera can be preset in the sweeping robot, through which the robot photographs the user. In the related art, the sweeping robot is equipped with the monocular or binocular camera required for map construction, and because of the limited shooting angle of such a camera, the robot can capture a user image containing the user only when the user stands at a preset position. Compared with the related art, photographing the user through a wide-angle camera means the user does not need to stand at a preset position, which improves the user experience.
Optionally, the processing component is further configured to determine, when a voice instruction is received, a second movement operation indicated by the voice instruction, and execute the second movement operation to perform map construction processing on the current area.
Optionally, the processing component is further configured to acquire an image containing the current area; determine, by performing image recognition on the image, whether the current area satisfies a complex-area determination condition; and, if the current area satisfies the complex-area determination condition, output control request information to prompt the user to make a user gesture.
Optionally, the processing component is specifically configured to perform image recognition on the image to obtain a recognition result for the current area, and, if the recognition result indicates that the current area is a complex area, determine that the current area satisfies the complex-area determination condition.
Optionally, the processing component is specifically configured to determine that the current area satisfies the complex-area determination condition when the recognition time required to perform image recognition on the image exceeds a preset recognition time threshold.
The embodiment of the application has the following beneficial effects:
the embodiment of the application provides a sweeping robot, which detects a user gesture in a map construction process; if the user gesture is a preset control gesture, determining a first moving operation corresponding to the user gesture according to a pre-stored corresponding relation between the preset control gesture and the moving operation; a first move operation is performed to map the current area. Because the first moving operation corresponding to the preset control gesture can be executed under the condition that the user gesture is the preset control gesture, the map construction processing can be carried out on the current area, and the map construction accuracy can be improved.
Based on the same technical concept, an embodiment of the present application further provides a control device for a sweeping robot. As shown in fig. 4, the control device comprises:
a detection module 410, configured to detect a user gesture during map construction;
a first determining module 420, configured to determine, when the user gesture is a preset control gesture, a first movement operation corresponding to the user gesture according to a pre-stored correspondence between preset control gestures and movement operations;
and an execution module 430, configured to execute the first movement operation to perform map construction processing on the current area.
Optionally, the device further comprises:
a second determining module, configured to determine, when a voice instruction is received, a second movement operation indicated by the voice instruction;
the execution module is further configured to execute the second movement operation to perform map construction processing on the current area.
Optionally, the device further comprises:
an acquisition module, configured to acquire an image containing the current area;
a third determining module, configured to determine, by performing image recognition on the image, whether the current area satisfies a complex-area determination condition;
and an output module, configured to output control request information to prompt the user to make a user gesture when the current area satisfies the complex-area determination condition.
Optionally, the third determining module comprises:
an image recognition submodule, configured to perform image recognition on the image to obtain a recognition result for the current area;
and a determination submodule, configured to determine that the current area satisfies the complex-area determination condition when the recognition result indicates that the current area is a complex area.
Optionally, the third determining module is specifically configured to determine that the current area satisfies the complex-area determination condition when the recognition time required to perform image recognition on the image exceeds a preset recognition time threshold.
The embodiment of the application has the following beneficial effects:
the embodiment of the application provides a control device of a sweeping robot, which detects a user gesture in the process of map construction; if the user gesture is a preset control gesture, determining a first moving operation corresponding to the user gesture according to a pre-stored corresponding relation between the preset control gesture and the moving operation; a first move operation is performed to map the current area. Because the first moving operation corresponding to the preset control gesture can be executed under the condition that the user gesture is the preset control gesture, the map construction processing can be carried out on the current area, and the map construction accuracy can be improved.
Based on the same technical concept, an embodiment of the present application further provides an electronic device. As shown in fig. 5, the electronic device comprises a processor 501, a communication interface 502, a memory 503, and a communication bus 504, where the processor 501, the communication interface 502, and the memory 503 communicate with one another via the communication bus 504.
a memory 503 for storing a computer program;
the processor 501, when executing the program stored in the memory 503, implements the following steps:
detecting a user gesture in the process of constructing the map;
if the user gesture is a preset control gesture, determining a first movement operation corresponding to the user gesture according to a pre-stored corresponding relation between the preset control gesture and the movement operation;
and executing the first moving operation to map the current area.
Optionally, after the executing the first moving operation to perform the mapping process on the current area, the method further includes:
if a voice instruction is received, determining a second moving operation indicated by the voice instruction;
and executing the second moving operation to map the current area.
Optionally, before detecting the user gesture, the method further includes:
acquiring an image containing a current area;
determining whether the current region meets a complex region judgment condition by performing image recognition on the image;
and if the current area meets the complex area judgment condition, outputting control request information to remind a user of making a user gesture.
Optionally, the determining whether the current region meets the complex region determination condition by performing image recognition on the image includes:
carrying out image recognition on the image to obtain a recognition result of the current area;
and if the identification result shows that the current area is a complex area, judging that the current area meets the complex area judgment condition.
Optionally, the determining whether the current region meets the complex region determination condition by performing image recognition on the image includes:
and if the recognition time required for image recognition of the image exceeds a preset recognition time threshold, judging that the current region meets the complex region judgment condition.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The embodiment of the application has the following beneficial effects:
the embodiment of the application provides electronic equipment, which detects user gestures in the process of map construction; if the user gesture is a preset control gesture, determining a first moving operation corresponding to the user gesture according to a pre-stored corresponding relation between the preset control gesture and the moving operation; a first move operation is performed to map the current area. Because the first moving operation corresponding to the preset control gesture can be executed under the condition that the user gesture is the preset control gesture, the map construction processing can be carried out on the current area, and the map construction accuracy can be improved.
In another embodiment provided by the present application, a computer-readable storage medium is further provided, in which a computer program is stored; when executed by a processor, the computer program implements the steps of any one of the above control methods for a sweeping robot.
In another embodiment provided by the present application, a computer program product containing instructions is further provided which, when run on a computer, causes the computer to execute the control method of any one of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another via a wired link (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or a wireless link (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between them. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A control method of a sweeping robot, characterized by comprising the following steps:
during map construction, acquiring an image containing a current area;
determining, by performing image recognition on the image, whether the current area satisfies a complex-area determination condition;
if the current area satisfies the complex-area determination condition, outputting control request information to prompt a user to make a user gesture;
detecting the user gesture;
if the user gesture is a preset control gesture, determining a first movement operation corresponding to the user gesture according to a pre-stored correspondence between preset control gestures and movement operations;
and executing the first movement operation to perform map construction processing on the current area.
2. The method of claim 1, further comprising, after executing the first movement operation to perform map construction processing on the current area:
if a voice instruction is received, determining a second movement operation indicated by the voice instruction;
and executing the second movement operation to perform map construction processing on the current area.
3. The method according to claim 1, wherein the determining, by performing image recognition on the image, whether the current area satisfies a complex-area determination condition comprises:
performing image recognition on the image to obtain a recognition result for the current area;
and if the recognition result indicates that the current area is a complex area, determining that the current area satisfies the complex-area determination condition.
4. The method according to claim 3, wherein the determining, by performing image recognition on the image, whether the current area satisfies a complex-area determination condition comprises:
and if the recognition time required to perform image recognition on the image exceeds a preset recognition time threshold, determining that the current area satisfies the complex-area determination condition.
5. A sweeping robot, characterized by comprising a processing component, a camera component, and a map construction component, wherein:
the map construction component is configured to perform map construction processing on a current area;
the processing component is configured to acquire an image containing the current area, determine, by performing image recognition on the image, whether the current area satisfies a complex-area determination condition, and, if it does, output control request information to prompt a user to make a user gesture; the camera component is configured to photograph the user to obtain a user image including the user gesture;
the processing component is further configured to detect the user gesture based on the user image during map construction; if the user gesture is a preset control gesture, determine a first movement operation corresponding to the user gesture according to a pre-stored correspondence between preset control gestures and movement operations; and execute the first movement operation so that the map construction component performs map construction processing on the current area.
6. The sweeping robot of claim 5, wherein the camera component is a wide-angle camera.
7. A control device of a sweeping robot, characterized in that the device comprises:
an acquisition module, configured to acquire an image containing a current area;
a third determining module, configured to determine, by performing image recognition on the image, whether the current area satisfies a complex-area determination condition;
an output module, configured to output control request information to prompt a user to make a user gesture when the current area satisfies the complex-area determination condition;
a detection module, configured to detect the user gesture during map construction;
a first determining module, configured to determine, when the user gesture is a preset control gesture, a first movement operation corresponding to the user gesture according to a pre-stored correspondence between preset control gestures and movement operations;
and an execution module, configured to execute the first movement operation to perform map construction processing on the current area.
8. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 4 when executing a program stored in the memory.
9. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 4.
CN202010214459.7A 2020-03-24 2020-03-24 Control method and device for a sweeping robot, sweeping robot, and electronic device Active CN111475018B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010214459.7A CN111475018B (en) Control method and device for a sweeping robot, sweeping robot, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010214459.7A CN111475018B (en) Control method and device for a sweeping robot, sweeping robot, and electronic device

Publications (2)

Publication Number Publication Date
CN111475018A CN111475018A (en) 2020-07-31
CN111475018B true CN111475018B (en) 2021-08-17

Family

ID=71748372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010214459.7A Active CN111475018B (en) Control method and device for a sweeping robot, sweeping robot, and electronic device

Country Status (1)

Country Link
CN (1) CN111475018B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105395144A (en) * 2015-12-21 2016-03-16 美的集团股份有限公司 Control method, system and cloud server of sweeping robot and sweeping robot
CN105739500A (en) * 2016-03-29 2016-07-06 海尔优家智能科技(北京)有限公司 Interaction control method and device of intelligent sweeping robot
CN110338708A (en) * 2019-06-21 2019-10-18 华为技术有限公司 A kind of the cleaning control method and equipment of sweeping robot
CN110353573A (en) * 2019-06-05 2019-10-22 深圳市杉川机器人有限公司 The method of getting rid of poverty of sweeping robot, calculates equipment and storage medium at sweeping robot

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102124509B1 (en) * 2013-06-13 2020-06-19 삼성전자주식회사 Cleaning robot and method for controlling the same
KR20160065574A (en) * 2014-12-01 2016-06-09 엘지전자 주식회사 Robot cleaner and method for controlling the same
US10335949B2 (en) * 2016-01-20 2019-07-02 Yujin Robot Co., Ltd. System for operating mobile robot based on complex map information and operating method thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105395144A (en) * 2015-12-21 2016-03-16 美的集团股份有限公司 Control method, system and cloud server of sweeping robot and sweeping robot
CN105739500A (en) * 2016-03-29 2016-07-06 海尔优家智能科技(北京)有限公司 Interaction control method and device of intelligent sweeping robot
CN110353573A (en) * 2019-06-05 2019-10-22 深圳市杉川机器人有限公司 The method of getting rid of poverty of sweeping robot, calculates equipment and storage medium at sweeping robot
CN110338708A (en) * 2019-06-21 2019-10-18 华为技术有限公司 A kind of the cleaning control method and equipment of sweeping robot

Also Published As

Publication number Publication date
CN111475018A (en) 2020-07-31

Similar Documents

Publication Publication Date Title
JP2021509215A (en) Navigation methods, devices, devices, and storage media based on ground texture images
WO2021174889A1 (en) Method and apparatus for recognizing target region, terminal, and computer readable medium
US20200218424A1 (en) Touch detection method and computer-readable storage medium
TWI431538B (en) Image based motion gesture recognition method and system thereof
CN107607542A (en) notebook appearance quality detection method and device
CN114416244B (en) Information display method and device, electronic equipment and storage medium
WO2022267795A1 (en) Regional map processing method and apparatus, storage medium, and electronic device
CN111126268B (en) Key point detection model training method and device, electronic equipment and storage medium
CN109118811A (en) Method, equipment and the computer readable storage medium of vehicle positioning stop position
TWI706309B (en) Method and device for mouse pointer to automatically follow cursor
CN112790669A (en) Sweeping method and device of sweeper and storage medium
CN111678522A (en) Cleaning method and device for target object, readable medium and electronic equipment
CN113741446B (en) Robot autonomous exploration method, terminal equipment and storage medium
CN110084187B (en) Position identification method, device, equipment and storage medium based on computer vision
CN110858814B (en) Control method and device for intelligent household equipment
CN106569716B (en) Single-hand control method and control system
CN111475018B (en) Control method and device of sweeping robot, sweeping robot and electronic equipment
WO2022028110A1 (en) Map creation method and apparatus for self-moving device, and device and storage medium
CN110181504B (en) Method and device for controlling mechanical arm to move and control equipment
CN110147198A (en) A kind of gesture identification method, gesture identifying device and vehicle
CN104063041A (en) Information processing method and electronic equipment
CN114074321A (en) Robot calibration method and device
CN110765926A (en) Drawing book identification method and device, electronic equipment and storage medium
CN111160174A (en) Network training method, locomotive orientation identification method and device and terminal equipment
CN115290066A (en) Error correction method and device and mobile equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant