CN110580426A - human-computer interaction method of robot and robot - Google Patents


Info

Publication number
CN110580426A
Authority
CN
China
Prior art keywords
user
projection
unit
robot
projection pattern
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810583982.XA
Other languages
Chinese (zh)
Inventor
刘章林
张一茗
陈震
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Quick Sense Technology (beijing) Co Ltd
Qfeeltech Beijing Co Ltd
Original Assignee
Quick Sense Technology (beijing) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Quick Sense Technology (beijing) Co Ltd
Priority to CN201810583982.XA
Publication of CN110580426A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor
    • H04N9/3188Scale or resolution adjustment

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a human-computer interaction method for a robot, and a robot. The method comprises: acquiring visual information containing operable items; controlling a projection unit to project the visual information into a projection area to form a projection pattern; acquiring an operation image, captured by a shooting unit, of the user's operation on the projection pattern; and identifying the user's operation instruction on the operable item according to the operation image, and controlling a corresponding execution unit to execute the operation instruction. The method offers the user a more convenient way to interact with the robot: the user can silently input instructions with a limb, or with a tool that extends the limb's reach, which enriches the robot's human-computer interaction modes and makes the interaction more flexible. It is also convenient for users with hearing or speech impairments, is not limited by signal transmission as remote control is, and is not affected by environmental noise as voice control is, so its accuracy is higher.

Description

Human-computer interaction method of robot and robot
Technical Field
The invention relates to the technical field of communication, and in particular to a human-computer interaction method for a robot and to a robot.
Background
At present, autonomous robots of various types have increasingly entered people's daily lives, for example intelligent floor sweepers, intelligent mopping machines and window-cleaning robots for cleaning tasks, caregiver robots for accompanying tasks, children's entertainment and education robots, and guard robots for safety tasks such as fire and theft prevention. For all of these autonomous robots, the way they interact with human users, and the interaction effect that this determines, are important, because the interaction mode shapes the user's experience of the robot's reliability, intelligence and convenience.
The currently common human-computer interaction modes are mainly the following: the user sends instructions to the robot through a remote controller, a mobile phone, a touch screen on the robot body, voice, and so on; the robot outputs results through a display screen on its body or on a wireless terminal (such as a mobile phone or tablet computer), gives voice feedback through a loudspeaker, or simply executes the instruction without feedback. Each of these interaction modes has certain problems. Remote control through a remote controller or mobile phone depends on whether a network is available, on the network speed, and on the distance and relative position between the robot and the user, so the control effect is often not ideal. Voice control and robot voice feedback can be disturbed by other sounds (such as song lyrics being played, or dialogue in a broadcast or television program), which may cause the robot to mishear and misjudge; the noise generated by the robot itself (for example while performing a cleaning task) also reduces the accuracy with which the robot picks up the user's voice commands.
Disclosure of Invention
The invention provides a human-computer interaction method for a robot, and a robot, offering the user a more convenient way to interact with the robot.
One aspect of the present invention provides a human-computer interaction method for a robot, comprising:
Acquiring visual information containing operable items;
Controlling a projection unit to project the visual information into a projection area to form a projection pattern;
Acquiring an operation image, captured by a shooting unit, of the user's operation on the projection pattern;
Identifying the user's operation instruction on the operable item according to the operation image, and controlling a corresponding execution unit to execute the operation instruction.
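For illustration only (this is not part of the patent text), the following minimal sketch shows how the four steps above might be organized in software; every class name and method signature here is an assumption introduced for the example, not an interface disclosed by the patent.

```python
# Hypothetical sketch of the four-step interaction loop described above.
from dataclasses import dataclass

@dataclass
class OperableItem:
    label: str        # e.g. "A. silent mode"
    action: str       # identifier of the command an execution unit understands

def interaction_cycle(control, projection_unit, shooting_unit, execution_units):
    # S100: acquire visual information containing operable items
    visual_info = control.get_visual_information()      # e.g. a list of OperableItem
    # S200: project the visual information into the projection area
    pattern = projection_unit.project(visual_info)
    # S300: capture the user's operation on the projected pattern
    operation_image = shooting_unit.capture(region=pattern.region)
    # S400: recognize the operation instruction and dispatch it
    instruction = control.recognize_instruction(operation_image, pattern)
    if instruction is not None:
        execution_units[instruction.target].execute(instruction)
```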
Further, before controlling the projection unit to project the visual information into the projection area to form the projection pattern, the method may further include:
Acquiring, as the projection area, an area capable of bearing the projection pattern within a first predetermined range.
Further, the area capable of bearing the projection pattern may include the surface of an object that forms the projection pattern by diffuse reflection, or the surface of a translucent object that forms the projection pattern by transmission;
Further, the acquiring, as the projection area, an area capable of bearing the projection pattern within a first predetermined range may specifically include:
Acquiring, as the projection area, a continuous and regular plane capable of bearing the projection pattern within the first predetermined range, according to the surrounding-environment image captured by the shooting unit and/or a pre-stored environment layout.
Further, the controlling the projection unit to project the visual information into a projection area to form a projection pattern may specifically include: controlling the projection unit to scale the projection pattern and/or adjust its sharpness, according to the surrounding-environment image captured by the shooting unit and/or the size of the projection area and the distance from the projection unit to the projection area measured by a distance-measuring device, so as to form a clear projection pattern in the projection area.
Further, the acquiring the operation image, captured by the shooting unit, of the user's operation on the projection pattern may include:
Acquiring a click-operation image and/or a slide-operation image, captured by the shooting unit, of the user's limb, the shadow of the limb, a tool that extends the limb's reach, and/or the shadow of such a tool acting on the projection pattern;
Further, the identifying, according to the operation image, the user's operation instruction on the operable item may include:
Identifying the operable item clicked by the user according to the click-operation image, or identifying the user's sliding track over the operable items according to the slide-operation image, so as to obtain the user's operation instruction on the corresponding operable item.
Further, before controlling the projection unit to project the visual information into the projection area to form the projection pattern, the method may further include:
Judging whether the visual information urgently requires user operation;
If so, acquiring the user's position, and controlling the robot to move to an area near the user's position according to that position;
If not, judging whether the user's position is within a second predetermined range; if it is not, continuing to execute the current task; and if it is, projecting the visual information into the projection area to form the projection pattern.
Further, the user search unit may include a passive thermal-infrared device and/or a sound pickup device.
Further, before the acquiring the visual information containing the operable items, the method may further include:
Receiving a user call instruction, acquiring the user's position, and controlling the robot to move to an area near the user's position according to that position.
Another aspect of the present invention provides a robot, comprising a projection unit, a shooting unit, a control unit and an execution unit, where the projection unit, the shooting unit and the execution unit are each electrically connected to the control unit;
The control unit may be configured to acquire visual information containing operable items and send the visual information to the projection unit;
The projection unit may be configured to project the visual information into a projection area to form a projection pattern;
The shooting unit may be configured to capture an operation image of the user's operation on the projection pattern and send the operation image to the control unit;
The control unit may further be configured to identify the user's operation instruction on the operable item according to the operation image and control the corresponding execution unit to execute the operation instruction.
Further, the control unit may be further configured to:
Acquire, as the projection area, an area capable of bearing the projection pattern within a first predetermined range, where the area capable of bearing the projection pattern includes the surface of an object that forms the projection pattern by diffuse reflection or the surface of a translucent object that forms the projection pattern by transmission;
Further, the shooting unit may be configured to capture images of the surrounding environment and send them to the control unit;
Further, the control unit may be further configured to obtain, as the projection area, a continuous and regular plane capable of bearing the projection pattern within the first predetermined range, according to the environment image and/or a pre-stored environment layout.
Further, the robot may further include:
A distance-measuring device, configured to acquire the size of the projection area and the distance from the projection unit to the projection area, and send them to the control unit;
Further, the control unit may be further configured to send an adjustment instruction to the projection unit according to the environment image and/or the size of the projection area and the distance from the projection unit to the projection area;
Further, the projection unit may be further configured to scale the projection pattern and/or adjust its sharpness according to the adjustment instruction, so as to form a clear projection pattern in the projection area;
Further, the shooting unit may be specifically configured to capture a click-operation image and/or a slide-operation image of the user's limb, the shadow of the limb, a tool that extends the limb's reach, and/or the shadow of such a tool acting on the projection pattern, and send it to the control unit;
Further, the control unit may be specifically configured to identify the operable item clicked by the user according to the click-operation image, or identify the user's sliding track over the operable items according to the slide-operation image, so as to obtain the user's operation instruction on the corresponding operable item.
Further, the robot may further include:
A motion unit, configured to move the robot;
Further, the robot may further include a user search unit, configured to acquire the user's position and send it to the control unit;
Further, the control unit may be further configured to judge whether the visual information urgently requires user operation; if so, to acquire the user's position through the user search unit and control the robot, through the motion unit, to move to an area near the user's position; if not, to judge, through the user search unit, whether the user's position is within a second predetermined range; if it is not, to continue executing the current task; and if it is, to control the projection unit to project the visual information into the projection area to form the projection pattern.
Further, the user search unit may include a passive thermal-infrared device and/or a sound pickup device.
Further, the control unit may be further configured to receive a user call instruction, acquire the user's position through the user search unit, and control the robot, through the motion unit, to move to an area near the user's position.
According to the human-computer interaction method of the robot and the robot provided by the invention, visual information containing operable items is acquired; a projection unit is controlled to project the visual information into a projection area to form a projection pattern; an operation image, captured by a shooting unit, of the user's operation on the projection pattern is acquired; and the user's operation instruction on the operable item is identified according to the operation image, and a corresponding execution unit is controlled to execute the instruction. The method offers the user a more convenient way to interact with the robot: the user can silently input instructions with a limb, or with a tool that extends the limb's reach, which enriches the robot's human-computer interaction modes and makes the interaction more flexible. It is also convenient for users with hearing or speech impairments, is not limited by signal transmission or network speed as remote control is, and is not affected by environmental noise as voice control is, so its accuracy is higher.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a human-computer interaction method of a robot according to an embodiment of the present invention;
Fig. 2 is a schematic illustration of the method of the embodiment shown in fig. 1;
Fig. 3 is a flowchart of a human-computer interaction method of a robot according to another embodiment of the present invention;
Fig. 4 is a flowchart of a human-computer interaction method of a robot according to another embodiment of the present invention;
Fig. 5 is a block diagram of a robot provided in an embodiment of the present invention;
Fig. 6 is a structural diagram of a robot according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 is a flowchart of a human-computer interaction method of a robot according to an embodiment of the present invention. This embodiment provides a human-computer interaction method for a robot. The robot may be any of various types of autonomous robots, such as an intelligent sweeper, a babysitter robot or a guard robot, and, as shown in fig. 5, may include a projection unit, a shooting unit, a control unit and an execution unit, where the projection unit, the shooting unit and the execution unit are each electrically connected to the control unit. The execution body of the human-computer interaction method is the control unit, and the method specifically includes the following steps:
S100: acquiring visual information containing operable items.
In this embodiment, the visual information may include selectable options, such as "A. silent mode", "B. full mode" and "C. repeat cleaning mode"; it may include both a question to be decided and selectable options, such as "Which room should be cleaned?" with the options "A. bedroom, B. living room, C. kitchen, D. balcony"; or the operable items may not be options at all, for example patterns and/or characters for the user to fill in or modify, or an area for the user to sign or draw in.
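Purely as an illustration (not part of the patent), one way to represent such visual information in software is as an optional question plus a list of operable items; the class and field names below are assumptions made for the example.

```python
# Hypothetical representation of visual information containing operable items.
from dataclasses import dataclass, field

@dataclass
class Option:
    key: str           # e.g. "A"
    label: str         # e.g. "bedroom"

@dataclass
class VisualInformation:
    question: str | None                      # e.g. "Which room should be cleaned?"
    options: list[Option] = field(default_factory=list)
    free_area: bool = False                   # True if the user may draw or sign instead

cleaning_prompt = VisualInformation(
    question="Which room should be cleaned?",
    options=[Option("A", "bedroom"), Option("B", "living room"),
             Option("C", "kitchen"), Option("D", "balcony")],
)
```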
S200: controlling a projection unit to project the visual information into a projection area to form a projection pattern.
In this embodiment, after the control unit obtains the visual information containing the operable items, it may control the projection unit to project the visual information into a projection area to form a projection pattern. The projection area may be a fixed area, such as a projection screen or a wall at a fixed position, or any area around the robot that can bear the projection pattern, such as the surface of the floor or of a table. The projection area is not limited to a plane: it may also be a curved surface (such as the spherical surface of an exercise ball) or a combination of several surfaces (such as a composite surface consisting of the floor, a wall and the boundary between them). It may also be the surface of a translucent object; for example, the projection pattern may be projected from below a frosted-glass table onto the lower surface of the glass, and the user views the pattern on its upper surface. Nor is the projection area limited to translucent objects; it may be the surface of an opaque object, for example an opaque screen onto which the visual information is projected so that a certain projection pattern can still be transmitted to the other side of the screen.
In this embodiment, different projection patterns may be used depending on the visual information and the type of projection unit. For example, if the projection unit is a set of LED lamps, the projection pattern may be LED spots whose colors represent different options (for example, a red spot for "no" and a green spot for "yes", or a yellow spot for "kitchen", a white spot for "living room" and a purple spot for "balcony"); the colored spots or shapes projected by the LED lamps are then the projection pattern of this embodiment. If the projection unit is a projector, patterns or characters representing different options can be formed in the projection area; such patterns, characters, or combinations of the two all belong to the projection patterns of the invention. If the projection unit is a holographic projector, the projection area may be a 3D space in which a holographic projection is presented; in this case the projection pattern may be a holographic projection representing different options, which may be 3D holographic images, text, or a combination of the two. The projector need not be a complete commercial projector; it may also be the core components of a projector, such as a projector bulb, a filter and/or a color filter. Furthermore, the projected pattern may be images and/or characters that are displayed in sequence or scrolled at a certain frequency, such as "bedroom", "living room", "kitchen", "balcony", and so on.
S300: acquiring an operation image, captured by the shooting unit, of the user's operation on the projection pattern.
In this embodiment, after the projection unit projects the visual information onto the projection area to form the projection pattern, the user may operate on the projection pattern with a limb (a body part such as a hand, foot or elbow) or with a tool that extends the limb's reach (such as a crutch, a laser pen, or an object dropped onto the projection pattern). For example, the user may click or slide on the projection pattern with a hand or foot, touch the surface of the projection area with a limb, sweep a limb across the pattern or hold it over the pattern for a period of time, or hold a foot in the projection light path of the projection unit so that the shadow of the foot overlaps or intersects the projection pattern in the projection area (for example on the ground). Different projection areas invite different operation modes: if the projection area is a table top or a wall, the user can click or slide on the pattern with a finger; if it is the floor, the user can step on, click or sweep the corresponding operable item with a foot, a toe or the footwear worn on the foot; and if the projection area is a curved surface or a combination of several planes, for example partly floor and partly wall, the user can operate the pattern either with fingers or with a foot, toe or footwear. A click may be a single click, a double click, and so on, and a slide or sweep may follow an arbitrary trajectory. The user may also perform other natural interactions on the projected image, such as drawing or writing in a blank frame of the projection pattern, touching an "x" symbol meaning "delete" with a limb to delete the corresponding content, or selecting a region of the pattern with a limb. Based on these operations of the user's limb on the projection pattern, the shooting unit captures the corresponding operation images, so that the user's operations on the operable items are fixed in the form of images; in particular, several consecutive operation actions of the limb on the pattern can be captured continuously and thus recorded. In other embodiments of the invention, the user may perform the same operations with a tool that extends the limb's reach, for example clicking, sliding or sweeping with a crutch, a prosthetic limb, a laser pen or a thrown object (such as a small ball, a shoe, a sock or a key); the user may also hold such a tool in the projection light path of the projection unit so that its shadow overlaps or intersects the projection pattern in the projection area (for example on the ground).
Therefore, in this embodiment, the user's operation on the projection pattern means operating on the pattern with the user's limbs and/or with a tool that extends the limb's reach, and the operation image of the projection pattern acquired by the shooting unit is an image, captured by the shooting unit, that contains the user's operation instruction on the operable item.
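As an illustrative sketch only (not the recognition method defined by the patent), deciding which operable item was touched can be reduced to comparing the detected limb or shadow position in the camera image with the known regions of the projected options; the limb detection step itself is assumed to happen upstream and is not shown.

```python
# Naive hit test between a detected limb/shadow point and projected option regions.
from typing import Optional

Rect = tuple[int, int, int, int]   # x, y, width, height in image coordinates

def hit_test(point: tuple[int, int], regions: dict[str, Rect]) -> Optional[str]:
    """Return the key of the option whose projected region contains `point`."""
    px, py = point
    for key, (x, y, w, h) in regions.items():
        if x <= px < x + w and y <= py < y + h:
            return key
    return None

# `regions` would be derived from where each option was projected, e.g.:
regions = {"A": (100, 400, 120, 60), "B": (240, 400, 120, 60)}
print(hit_test((150, 420), regions))   # -> "A"
```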
Fig. 2 is a schematic diagram in which the visual information is projected onto a projection area on the ground to form a projection pattern. The robot 50 projects the visual information into the projection area through the projection unit 501 to form the projection pattern 600, which contains several operable items. The user may click one of the operable items by touching the surface of the projection pattern 600 with a foot; the control unit (not shown) acquires the operation image of the projection pattern 600 captured by the shooting unit 502, then identifies the user's operation instruction on the operable item from the operation image, and controls the corresponding execution unit to execute the instruction.
The shooting unit may be a fisheye monocular camera, a wide-angle monocular camera, a binocular camera, a multi-camera module, a depth camera, or another camera module; it is not limited here. The shooting unit may contain only one camera module used solely to capture the user's operations on the projection pattern and not used elsewhere in the robot's work (for example, a simple stationary voice-input companion or education robot does not need to capture surrounding-environment images or the user's face, and only needs to capture the operation images). Alternatively, the same camera module may be used both to capture images of the environment around the robot (for example for SLAM real-time mapping and localization, navigation, face recognition and/or object recognition) and to capture the user's operation images on the projection pattern. The shooting unit is also not limited to a single camera module; it may, for example, contain two, where the first captures the user's operation images and the second captures the surrounding environment (for example, a cleaning robot that must perform coverage cleaning in a room needs the second module to capture surrounding-environment images to assist the motion unit with localization and navigation). In another optional embodiment, the two camera modules each provide both functions and serve as redundant backups for each other: when one module fails, the other, normal module can take over, which guarantees the stability and continuity of the robot's operation. Those skilled in the art can easily conceive of other types of camera modules, or combinations of several identical or different camera modules, as the shooting unit of the invention; such variations also belong to specific forms of the shooting unit of the invention and fall within its scope.
S400: identifying an operation instruction of the user on the operable item according to the operation image, and controlling a corresponding execution unit to execute the operation instruction.
In this embodiment, the control unit identifies the user's operation instruction on the operable item from the operation image captured by the shooting unit. Specifically, it may identify the operable item clicked by the user from a click-operation image, or identify the user's sliding track over the operable items from a slide-operation image, so as to obtain the user's operation instruction on the corresponding item. After recognizing the instruction, the control unit controls the corresponding execution unit to execute it. For example, if the instruction is to move to a position that satisfies some condition and perform a specific task, such as cleaning a certain area, finding a certain family member, or monitoring doors and windows, the execution unit may correspondingly include a motion unit together with a cleaning unit, a user search unit, a shooting unit, and so on. In other words, the content of the operable items, the question to be decided and its options, and therefore the user's operation instruction, differ with the type of robot and the task to be executed, and the execution unit may be separate from, or coincide with, the projection unit, the shooting unit, the motion unit, and so on. Taking cleaning robots as an example (an intelligent sweeper, an intelligent mopping machine, a window-cleaning robot), the user's instruction may select a cleaning mode (such as "silent mode", "full mode" or "repeat cleaning mode") or a designated cleaning area (such as "bedroom", "living room", "kitchen" or "balcony"), so the component executing the instruction may be a dedicated execution unit, such as the main brush and side brush of the intelligent sweeper, or the motion unit (when the instruction is "go to a certain area or place"), such as the chassis with its wheel assemblies or tracks. If the user's instruction is to play music, the corresponding execution unit is a music player, which then plays the music. If the robot is a caregiver robot and the user's instruction is "tell Xiao Ming to come and eat", the corresponding execution units include the user search unit, the motion unit and a loudspeaker: the user search unit (such as a passive thermal-infrared device or a face recognition device) finds the position of the family member "Xiao Ming", the motion unit moves the robot to Xiao Ming's vicinity, and the loudspeaker plays the voice message "Xiao Ming, time to eat". If the user's instruction is to play an online video, the corresponding execution units include a communication unit and the projection unit: the communication unit connects to and communicates with a terminal and/or a remote server to obtain the video, and the projection unit projects and plays it. Further implementations are not described in detail here.
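As a hedged sketch (the unit names and methods below are assumptions for illustration, not the patent's implementation), dispatching a recognized instruction to the appropriate execution unit could look like this:

```python
# Dispatching a recognized operation instruction to different execution units.
def dispatch(instruction, units):
    """`instruction` is a (kind, argument) pair produced by recognition;
    `units` maps unit names to objects exposing the listed methods."""
    kind, arg = instruction
    if kind == "clean_mode":                      # e.g. "silent", "full"
        units["cleaning"].set_mode(arg)
    elif kind == "clean_area":                    # e.g. "kitchen"
        units["motion"].go_to(arg)
        units["cleaning"].start()
    elif kind == "play_music":
        units["music_player"].play(arg)
    elif kind == "call_member":                   # e.g. "Xiao Ming"
        position = units["user_search"].locate(arg)
        units["motion"].go_to(position)
        units["speaker"].say(f"{arg}, time to eat")
    else:
        raise ValueError(f"unknown instruction kind: {kind}")
```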
According to the human-computer interaction method of the robot provided by this embodiment, visual information containing operable items is acquired; a projection unit is controlled to project the visual information into a projection area to form a projection pattern; an operation image, captured by a shooting unit, of the user's operation on the projection pattern is acquired; and the user's operation instruction on the operable item is identified according to the operation image, and a corresponding execution unit is controlled to execute the instruction. The method of this embodiment offers the user a more convenient way to interact with the robot: the user can silently input instructions with a limb, or with a tool that extends the limb's reach, which enriches the robot's human-computer interaction modes and makes the interaction more flexible. It is also convenient for users with hearing or speech impairments, is not limited by signal transmission or network speed as remote control is, and is not affected by environmental noise as voice control is, so its accuracy is higher.
Fig. 3 is a flowchart of a human-computer interaction method of a robot according to another embodiment of the present invention. On the basis of the foregoing embodiment, before controlling the projection unit to project the visual information into the projection area to form the projection pattern in S200, the method may further include:
Acquiring, as the projection area, an area capable of bearing the projection pattern within a first predetermined range, where the area capable of bearing the projection pattern includes the surface of an object that forms the projection pattern by diffuse reflection or the surface of a translucent object that forms the projection pattern by transmission.
In this embodiment, the control unit may acquire, as the projection area, an area capable of bearing the projection pattern within a first predetermined range. Such an area includes a surface that forms the projection pattern by diffuse reflection, such as a table top, a wall or the floor, onto which the projection unit can project the pattern directly; it may also include the surface of a translucent object such as frosted glass. For example, a cleaning robot may project the pattern from below a frosted-glass table onto the underside of the glass; the light passes through the glass and forms the pattern on its upper surface, the user operates on the pattern on the upper surface, and the robot's camera module captures the operation image from below the glass.
Further, as shown in fig. 3, the acquiring, as a projection area, an area capable of bearing the projection pattern within a first predetermined range may specifically include:
S150: acquiring, as the projection area, a continuous and regular plane capable of bearing the projection pattern within the first predetermined range, according to the surrounding-environment image captured by the shooting unit and/or a pre-stored environment layout.
In this embodiment, the robot preferentially selects a continuous, regular plane as the projection area. "Continuous" means that the robot prefers a surface without a sharp interface formed by two planes of different orientations (such as the right-angle interface commonly formed between a wall and the floor) and without an interface formed by planes of different heights (such as the step between the floor and a carpet, or a raised or recessed threshold between rooms): such sharp interfaces or steps would crease or distort the projection pattern and degrade the user experience. This should not, however, be interpreted absolutely. Two planes of different orientations connected by a gently curved or sloping transition, even if the planes themselves are perpendicular, produce no obvious crease at the interface and no image distortion, and should therefore also be considered "continuous". "Regular" means that the robot prefers a rectangular, circular or elliptical region as the projection area, because such a regular region can display the projection pattern at maximum size. The robot also prefers a flat surface over a curved or irregular surface, to avoid distorting the projection pattern. In this embodiment, a continuous and regular plane capable of bearing the projection pattern within the first predetermined range around the robot may be acquired as the projection area from the surrounding-environment image captured by the shooting unit and/or from a pre-stored environment layout. The environment layout may include the plane layout and 3D layout of the rooms, as well as information such as the positions of pieces of furniture, from which the control unit can identify which areas can serve as projection areas.
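A simplified sketch (assumptions only; the patent does not specify an algorithm) of how candidate planes obtained from the environment image or a stored layout might be filtered and ranked:

```python
# Choosing a projection area from candidate planes within the first predetermined range.
from dataclasses import dataclass

@dataclass
class PlaneCandidate:
    center: tuple[float, float]   # position in the robot's map frame (meters)
    area: float                   # usable area in m^2
    is_continuous: bool           # no sharp creases or steps inside it
    is_regular: bool              # roughly rectangular / circular / elliptical
    is_flat: bool                 # a plane rather than a curved surface

def choose_projection_area(candidates, robot_pos, first_range_m=2.0):
    def dist(p):
        return ((p[0] - robot_pos[0]) ** 2 + (p[1] - robot_pos[1]) ** 2) ** 0.5
    eligible = [c for c in candidates
                if dist(c.center) <= first_range_m
                and c.is_continuous and c.is_regular and c.is_flat]
    # prefer the largest eligible plane so the pattern can be shown at full size
    return max(eligible, key=lambda c: c.area, default=None)
```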
Further, the controlling the projection unit in step S200 to project the visual information into a projection area to form a projection pattern may further include:
S210: controlling the projection unit to scale the projection pattern and/or adjust its sharpness, according to the surrounding-environment image captured by the shooting unit and/or the size of the projection area and the distance from the projection unit to the projection area measured by the distance-measuring device, so as to form a clear projection pattern in the projection area.
In this embodiment, the projection pattern may be scaled according to the size of the projection area, the focus may be adjusted according to the distance from the projection unit to the projection area, and the projection brightness may be adjusted according to the brightness of the surface of the projection area, so that a clear projection pattern is formed in the projection area and the user can see it without glare or blur. The shooting unit can capture an image containing the sharp boundaries between planes of different orientations or the step boundaries between planes of different heights; the control unit extracts these boundary features to identify the boundary and calculates the actual size of the continuous regular plane, including its extent in both directions and its position. For a rectangle, the actual size includes the center position and/or corner positions together with the length, width and/or diagonal; for a circle, the center position and/or diameter; for an ellipse, the center position, major axis, minor axis and/or the ratio of the two. The control unit then scales the visual information appropriately according to the actual size and projects it into the continuous regular plane, so that the projected pattern matches the actual size as closely as possible, makes the most of the usable area of the plane, avoids distortion, and leaves the user enough room to operate. Of course, the size of the projection area and the distance from the projection unit to it may also be obtained with a distance-measuring device such as a TOF (time-of-flight) range finder or a laser radar, after which the projection pattern is scaled and its sharpness adjusted; this is not repeated here.
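As a rough, hedged illustration (not taken from the patent), the scaling step could amount to something like the following, assuming a simple projector model in which the projected frame width at distance d equals d divided by the throw ratio:

```python
# Estimating how much of the projector's native frame to use for a measured area.
def projection_scale(area_w_m, area_h_m, distance_m, throw_ratio=1.2,
                     native_aspect=16 / 9, margin=0.9):
    """Return the fraction of the projector's native frame to use (0..1]."""
    full_w = distance_m / throw_ratio      # width of the full frame at this distance
    full_h = full_w / native_aspect
    scale = min(area_w_m / full_w, area_h_m / full_h, 1.0)
    return scale * margin                  # keep a small safety margin

# e.g. a 1.0 m x 0.6 m patch of floor, projector 1.5 m away:
print(round(projection_scale(1.0, 0.6, 1.5), 2))   # -> 0.72
```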
Steps S300 and S400 in this embodiment are similar to those in the foregoing embodiments, apart from adaptations to the specific situation of this embodiment, and are not described again here.
Further, as shown in fig. 4, in another embodiment, before controlling the projection unit to project the visual information into the projection area to form the projection pattern in step S200, the method may further include:
S110: judging whether the visual information is in urgent need of user operation; if yes, go to step S120; if not, go to step S130;
s120: acquiring a user position (such as by a user searching unit), and controlling the robot to move (such as by moving units such as wheels, tracks and the like) to an area near the user position according to the user position; then, step S200 (or the step S210 of refining step S200) is performed, or in other embodiments, step S150 may be performed first, and then step S200 may be performed, and finally the visible information is projected into the projection area to form the projection pattern;
s130: it is determined whether the user location is within a second predetermined range (e.g., by the user seeking unit). If the user position is determined to be within the second predetermined range, performing step S200 (or the step S210 of refining step S200) or in other embodiments, performing step S150 and then performing step S200, and finally projecting the visual information into the projection area to form a projection pattern; if the user position is determined to be beyond the second predetermined range, executing step S140;
S140: the current task continues to be executed until the user is determined to be within the second predetermined range by the user search unit, and then step S200 is performed (or the step S210 of refining step S200 is performed) or in other embodiments, step S150 may be performed first and then step S200 may be performed, and finally the visual information is projected into the projection area to form the projection pattern.
In this embodiment, different actions may be taken depending on whether the visual information urgently requires the user's decision. Consider first visual information that does not require an immediate decision. For example, suppose the preset condition under which the air conditioner must be turned on is "current temperature > 32 °C", while "28 °C ≤ current temperature ≤ 32 °C" is a range in which the air conditioner may or may not be turned on. When a room temperature of 30 °C is detected, "whether to turn on the air conditioner" is visual information that does not require an immediate decision, so the robot may temporarily hold the prompt and continue its current task. If, while performing the current task, the robot happens to encounter the user (for example, the sound pickup device, such as a microphone array, picks up nearby speech, singing or footsteps and recognizes that the sound comes from the registered user, or the passive thermal-infrared device senses the user's body heat within a certain range), that is, if the user search unit determines that the user has come within the second predetermined range, the robot performs step S200 (or its refinement S210), or, in other embodiments, performs step S150 first and then step S200, and finally projects the visual information into the projection area to form the projection pattern. For visual information that does require an immediate decision by the user, the robot needs to actively search for the user's position, move to the area near the user, and then perform step S200 (or S210), or S150 followed by S200, and finally project the visual information into the projection area to form the projection pattern. The user search unit may actively locate the user by picking up, with the sound pickup device, sounds made by the user (such as speech or the sound of walking), or by sensing, with the passive thermal-infrared device, the thermal-infrared characteristics emitted by the user (adults and children have different body temperatures and therefore different colors in the thermal image, and different heights and thermal-image areas, which makes it possible to distinguish them). In addition, if the robot is equipped with a positioning device and is wirelessly connected over a network to the user's mobile device, such as a mobile phone or tablet computer, the user's position may also be obtained through the positioning system of the mobile device. This embodiment does not limit the way in which the robot seeks the user's position.
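The decision flow just described can be sketched as follows (illustrative only; the unit objects and their methods are assumptions, and the temperature rule merely restates the example thresholds above):

```python
# Sketch of the decision flow S110-S140 described above.
def handle_visual_info(info, user_search, motion, project):
    urgent = info.requires_immediate_decision()            # S110
    if urgent:                                             # S120
        position = user_search.locate_user()
        motion.go_to_near(position)
        project(info)                                      # S200 / S210
    elif user_search.user_within_second_range():           # S130
        project(info)
    else:                                                  # S140: keep working and
        pass                                               # project later, when the
                                                           # user comes into range

def air_conditioner_urgency(temperature_c):
    """Example urgency rule: above 32 degC must be acted on, 28-32 degC may wait."""
    return temperature_c > 32
```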
The motion unit may specifically include a chassis with wheel assemblies or tracks mounted on it, and may further include a code wheel (encoder), a gyroscope, an accelerometer and the like for calculating motion parameters such as mileage, speed, acceleration, angle and angular acceleration and for assisting the robot's localization; the motion unit operates on commands received from the control unit (for example, move a certain distance in a certain direction, or, if a map already exists, move to a certain coordinate position). It should be noted that moving to the area near the user is not an essential step: a non-mobile autonomous robot, for example, may have no motion unit at all, and since the user is generally near such a robot, it can simply project the visual information to a suitable position near the user after locating the user's position.
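For illustration (this is an assumption, not the patent's localization method), the encoder and gyroscope readings mentioned above are typically combined by simple dead reckoning:

```python
# Minimal dead-reckoning sketch: integrating wheel-encoder distance and
# gyroscope heading to track the pose used when the control unit tells the
# motion unit to move to a coordinate.
import math

def update_pose(x, y, heading_rad, encoder_delta_m, gyro_rate_rad_s, dt_s):
    """Advance the estimated pose by one control period."""
    heading_rad += gyro_rate_rad_s * dt_s             # heading from the gyroscope
    x += encoder_delta_m * math.cos(heading_rad)       # distance from the code wheel
    y += encoder_delta_m * math.sin(heading_rad)
    return x, y, heading_rad

# one 50 ms step: travelled 2 cm while turning at 0.1 rad/s
print(update_pose(0.0, 0.0, 0.0, 0.02, 0.1, 0.05))
```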
Further, before acquiring the visual information including the operable item, the method may further include:
S010: receiving a user calling instruction, acquiring a user position (such as through a user searching unit), and controlling the robot to move (such as through a wheel, a crawler belt and other moving units) to an area near the user position according to the user position.
In this embodiment, the user may call the robot in different ways: for example, the user may send a call instruction to the robot over a wireless network from a mobile terminal (such as a mobile phone or tablet computer), or issue the call by voice, in which case a sound pickup device of the autonomous robot recognizes the user's voice (for example against a preset voiceprint) and matches it with a pre-stored call instruction. The robot then searches for the user's position, moves to an area near the user, and executes steps S100 to S400; a non-mobile autonomous robot may omit the movement.
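A hedged sketch of how such a call instruction might be handled; the voiceprint check, keyword list and unit methods are placeholders introduced for the example, not an API defined by the patent:

```python
# Handling a user call instruction (S010) arriving over the network or by voice.
def on_call_instruction(source, voiceprint_ok, recognized_text,
                        user_search, motion, call_keywords=("come here",)):
    if source == "network":                     # call sent from a phone / tablet app
        accepted = True
    elif source == "voice":                     # spoken call picked up by the microphone
        accepted = voiceprint_ok and any(k in recognized_text for k in call_keywords)
    else:
        accepted = False
    if accepted:
        position = user_search.locate_user()    # sound pickup and/or thermal infrared
        motion.go_to_near(position)             # a non-mobile robot skips this move
    return accepted
```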
In addition, the control unit may also acquire the robot's temperature through a temperature sensor and, when the temperature is found to be too high (for example above 50 °C), trigger an alarm unit (for example to emit voice and/or light); the control unit may also trigger the alarm unit after a thermal-infrared sensor detects signs of an open flame.
In one embodiment of the present invention, the robot further includes a storage unit for storing various types of data and information, including but not limited to maps, a visual-information database, a user-instruction database and various image databases used for comparison. The storage unit may be a local storage unit, or a cloud database connected through a wired or wireless network.
In an embodiment of the present invention, the robot can also receive user instructions in other forms through an input unit, such as touch-screen input or voice input, so that multiple input modes are available to the user and the user can instruct the robot in whichever way is most convenient.
An embodiment of the present invention provides an example of a human-computer interaction process using the robot described above. Suppose the robot is a cleaning robot with music-playing and dance-machine functions, as shown in fig. 4. When the user calls the cleaning robot (for example by voice or from a mobile terminal), the robot receives the call instruction, acquires the user's position (for example through the user search unit), and moves to an area near the user through the motion unit according to the user's position (S010). The user may actively request the music-playing or dance-machine function by voice or from the mobile terminal, or the cleaning robot may, through its control unit, fetch from the visual-information database stored in the storage unit visual information containing operable items, such as the question "What kind of service is needed?" with options such as "cleaning", "dancing" and "IOT" (S100), and then judge whether this visual information urgently requires the user's operation (S110). If the visual information matches the "urgently requires user operation" setting in the control unit, the user's position is acquired and the robot moves to the area near the user (S120); otherwise the user search unit judges whether the user's position is within the second predetermined range (S130), and if it is not, the robot continues to execute its current task (S140).
After step S130, or after step S120, once the user's position is determined to be within the second predetermined range, a continuous regular plane capable of bearing the projection pattern within the first predetermined range is optionally acquired as the projection area, according to the surrounding-environment image captured by the shooting unit and/or the environment layout pre-stored in the storage unit (S150). Then, optionally, the projection unit is controlled to scale the projection pattern and/or adjust its sharpness according to the surrounding image captured by the shooting unit and/or the size of the projection area and the distance from the projection unit to the projection area measured by a distance-measuring device (such as a laser radar, a TOF range finder or an infrared range finder), so as to form a clear projection pattern in the projection area (S210, the refinement of S200). The user makes a selection or performs another operation on the operable items of the visual information shown in the projection pattern (for example, clicking the option "dancing" in the projection area with a foot; such operations by the user constitute the operation instruction), the cleaning robot captures the user's operation image through the shooting unit (S300), and the control unit identifies the user's operation instruction from the operation image, learning that the user wishes to turn on the dance-machine function (S400). Under the dance-machine function, the corresponding execution units include the loudspeaker, the projection unit and the shooting unit: the loudspeaker plays dance music; the projection unit projects step-point patterns onto the ground in time with the rhythm of the music; the shooting unit captures the user's operation images on the pattern projected on the ground; the control unit identifies the user's step points from those images, compares them with the correct, rhythm-linked step points stored in the storage unit, and feeds the result back to the user so that the user knows whether the steps were correct; the control unit may also calculate the user's step accuracy and feed that back as well (that is, the corresponding execution units execute the operation instruction). Although the processes of this embodiment are described as being implemented by specific hardware modules, those skilled in the art will understand that the technical solution of the invention can be implemented by devices other than the shooting unit, storage unit, control unit, motion unit and execution unit described above, or by a single device implementing several or all of these functions; the scope of the invention should therefore not be limited by these hardware modules.
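For the dance-machine comparison described above, a minimal scoring sketch might look like this (the timing and distance tolerances are illustrative assumptions, not values from the patent):

```python
# Scoring detected user step points against the stored, rhythm-linked reference steps.
def step_accuracy(user_steps, reference_steps, time_tol_s=0.3, dist_tol=0.5):
    """Each step is (time_s, x, y); returns the fraction of reference steps hit."""
    hits = 0
    for rt, rx, ry in reference_steps:
        for ut, ux, uy in user_steps:
            close_in_time = abs(ut - rt) <= time_tol_s
            close_in_space = ((ux - rx) ** 2 + (uy - ry) ** 2) ** 0.5 <= dist_tol
            if close_in_time and close_in_space:
                hits += 1
                break
    return hits / len(reference_steps) if reference_steps else 0.0

reference = [(1.0, 0.0, 0.0), (2.0, 0.5, 0.0), (3.0, 0.5, 0.5)]
user      = [(1.1, 0.1, 0.0), (2.2, 0.5, 0.1), (3.6, 0.0, 0.0)]
print(step_accuracy(user, reference))   # -> 0.666...
```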
In addition, this embodiment is only a preferred embodiment of the technical solution of the present invention and is used to illustrate its various aspects, so the details of this embodiment should not be used to limit the protection scope of the present invention; variations that those skilled in the art can derive from the teaching of the above embodiments also fall within the scope of the present invention.
According to the human-computer interaction method of the robot provided by this embodiment, visual information containing operable items is acquired; a projection unit is controlled to project the visual information into a projection area to form a projection pattern; an operation image, captured by a shooting unit, of the user's operation on the projection pattern is acquired; and the user's operation instruction on the operable item is identified according to the operation image, and a corresponding execution unit is controlled to execute the instruction. The method of this embodiment offers the user a more convenient way to interact with the robot: the user can silently input instructions with a limb, or with a tool that extends the limb's reach, which enriches the robot's human-computer interaction modes and makes the interaction more flexible. It is also convenient for users with hearing or speech impairments, is not limited by signal transmission or network speed as remote control is, and is not affected by environmental noise as voice control is, so its accuracy is higher.
Fig. 5 is a structural diagram of a robot according to an embodiment of the present invention. As shown in fig. 5, the robot 50 of this embodiment includes a projection unit 501, a shooting unit 502, a control unit 500 and an execution unit 503. The projection unit 501, the shooting unit 502 and the execution unit 503 are each electrically connected to the control unit 500; the connection may be wired or wireless.
The control unit 500 is configured to obtain visual information including an operable item, and send the visual information to the projection unit 501;
The projection unit 501 is configured to project the visual information into a projection area to form a projection pattern;
The shooting unit 502 is configured to collect an operation image of the user operating the projection pattern with a limb (e.g., a hand, a foot, an elbow or another body part) or with a tool that extends the limb's control distance (e.g., a crutch, a laser pointer used to click on the projection pattern, or an article thrown onto the projection pattern), and send the operation image to the control unit 500;
The control unit 500 is further configured to identify an operation instruction of the user on the operable item according to the operation image, and control the corresponding execution unit 503 to execute the operation instruction.
Further, the control unit 500 is also configured to acquire an area capable of bearing the projection pattern within a first predetermined range as the projection area. The area capable of bearing the projection pattern may include an object surface on which the projection pattern is formed by diffuse reflection, or the surface of a semitransparent object on which the projection pattern is formed by transmission.
Further, the shooting unit 502 may also be configured to collect an image of the surrounding environment and send it to the control unit 500.
The control unit 500 may be further configured to acquire, as the projection area, a continuous, regular plane capable of bearing the projection pattern within the first predetermined range, according to the environment image and/or a pre-stored environment layout.
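As a rough illustration of this selection step (with an assumed Plane description; the embodiment does not specify the criteria at this level of detail), the control unit could filter candidate planes and keep the nearest one that is continuous, regular and large enough for the pattern:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Plane:
    distance_m: float    # distance from the robot to the candidate plane
    width_m: float
    height_m: float
    is_continuous: bool  # no gaps or edges crossing the candidate region
    is_regular: bool     # roughly flat and rectangular

def pick_projection_area(planes: List[Plane], first_range_m: float,
                         pattern_w_m: float, pattern_h_m: float) -> Optional[Plane]:
    """Nearest continuous, regular plane inside the first predetermined
    range that is large enough to bear the projection pattern."""
    candidates = [p for p in planes
                  if p.distance_m <= first_range_m
                  and p.is_continuous and p.is_regular
                  and p.width_m >= pattern_w_m and p.height_m >= pattern_h_m]
    return min(candidates, key=lambda p: p.distance_m, default=None)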
Further, the robot 50 may also include a distance measuring device configured to acquire the size of the projection area and the distance from the projection unit 501 to the projection area, and send them to the control unit 500.
The control unit 500 may be further configured to send an adjustment instruction to the projection unit 501 according to the environment image, and/or the size of the projection area and the distance from the projection unit 501 to the projection area.
The projection unit 501 may be further configured to perform scaling and/or definition adjustment on the projection pattern according to the adjustment instruction, so as to form a clear projection pattern in the projection area.
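One way to picture this adjustment (the formulas below are assumptions for illustration, not taken from the embodiment) is to derive a zoom factor from the measured area width and the throw distance, and to focus the projection unit on the measured distance:

def compute_adjustment(native_width_at_1m: float, area_width_m: float,
                       distance_m: float, margin: float = 0.9):
    """Return (scale, focus_distance_m) for the projection unit.

    native_width_at_1m: width the unscaled pattern would have at 1 m (throw property).
    margin: keep the pattern slightly smaller than the usable area.
    """
    projected_width = native_width_at_1m * distance_m   # pinhole-style approximation
    scale = min(1.0, margin * area_width_m / projected_width)
    focus_distance_m = distance_m                        # focus on the projection plane
    return scale, focus_distance_m

# Example: a 0.5 m-per-metre throw, a 0.8 m wide area, 2.5 m away.
print(compute_adjustment(0.5, 0.8, 2.5))   # -> (~0.576, 2.5)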
Further, the shooting unit 502 may be specifically configured to collect a click operation image and/or a sliding operation image of the user's limb, the shadow of the user's limb, a tool for extending the limb control distance, and/or the shadow of such a tool on the projection pattern, and send the click operation image and/or the sliding operation image to the control unit 500;
The control unit 500 may be specifically configured to identify the operable item clicked by the user according to the click operation image (i.e., an image containing a click operation of the user), or to identify the sliding track of the user on the operable item according to the sliding operation image (i.e., an image containing a sliding operation of the user), so as to obtain the corresponding operation instruction of the user on the operable item.
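Once the click position (a fingertip, a cane tip, a laser dot or a thrown object found in the operation image) has been mapped into the coordinates of the projected pattern, identifying the clicked operable item reduces to a hit test against the item layout. The sketch below assumes a hypothetical rectangular menu layout and is not the recognition algorithm of the embodiment:

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Item:
    name: str
    x0: float
    y0: float
    x1: float
    y1: float           # bounding box in pattern coordinates (e.g. pixels)

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def clicked_item(click_xy: Tuple[float, float],
                 items: List[Item]) -> Optional[Item]:
    """Return the operable item whose region contains the click, if any."""
    for item in items:
        if item.contains(*click_xy):
            return item
    return None

menu = [Item("dancing", 0, 0, 200, 100), Item("cleaning", 0, 120, 200, 220)]
print(clicked_item((50, 60), menu).name)   # -> dancing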
Further, as shown in fig. 6, the robot 50 may further include:
A movement unit 505 for moving the position of the robot 50;
A user search unit (not shown in the figure), configured to acquire the user position and send it to the control unit 500;
The control unit 500 may be further configured to determine whether the visual information urgently requires user operation; if so, to acquire the user position through the user search unit and, according to that position, move the robot to an area near the user through the motion unit 505; if not, to determine through the user search unit whether the user position is within a second predetermined range; if the user position is not within the second predetermined range, to continue executing the current task; and if it is within the second predetermined range, to control the projection unit 501 to project the visual information.
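The decision just described is essentially a small branch. Below is a self-contained sketch with the second predetermined range and the unit interfaces reduced to plain parameters; these names are illustrative, not the embodiment's API:

from typing import Optional, Tuple
import math

def decide_action(is_urgent: bool,
                  user_pos: Optional[Tuple[float, float]],
                  robot_pos: Tuple[float, float],
                  second_range_m: float) -> str:
    """Which branch the control unit takes: 'approach_then_project',
    'project', or 'continue_task'."""
    if is_urgent:
        return "approach_then_project"   # move near the user before projecting
    if user_pos is None:
        return "continue_task"           # user not found: do not interrupt the task
    dist = math.hypot(user_pos[0] - robot_pos[0], user_pos[1] - robot_pos[1])
    return "project" if dist <= second_range_m else "continue_task"

print(decide_action(False, (1.0, 1.0), (0.0, 0.0), 3.0))   # -> project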
Further, the user search unit may include a passive thermal infrared device and/or a sound pickup device.
Further, the control unit 500 may be further configured to receive a user call instruction, acquire the user location through the user search unit, and move to an area near the user location through the motion unit 505 according to the user location.
Further, the robot 50 may further include a storage unit 504 for storing various types of data and information, including but not limited to a map, a visual information database, a user instruction database, and various image databases used for comparison. The storage unit 504 may be a local storage unit, or a cloud database connected through a wired or wireless network.
The robot 50 may further include an input unit 506 that receives other types of user instructions, such as touch-screen input and voice input. Providing several input modes allows the user to instruct the robot to execute an operation instruction in whichever mode is most convenient.
The robot 50 may further include an alarm unit 508 for raising an alarm when the temperature of the robot is too high or an open flame is detected; the alarm unit 508 may emit a voice and/or a light signal.
The robot 50 may also include a communication unit 507 for connecting and communicating with a terminal and/or a remote server.
It should be noted that, depending on the type of the robot and the task being executed, the content of the operable items differs, the questions to be decided and the corresponding selectable options differ, and therefore the operation instructions of the user differ; the execution unit 503 may likewise differ, and may coincide with the projection unit 501, the shooting unit 502, the motion unit 505, and so on.
The robot 50 provided in the embodiment of the present invention may be specifically configured to execute the method embodiments provided in fig. 1, fig. 3, and fig. 4, and specific functions are not described herein again.
The robot provided by this embodiment acquires visual information containing operable items; controls a projection unit to project the visual information into a projection area to form a projection pattern; acquires an operation image of the user on the projection pattern collected by a shooting unit; and identifies an operation instruction of the user on the operable item from the operation image, controlling a corresponding execution unit to execute the operation instruction. The robot of this embodiment offers the user a more convenient way to interact: instructions can be input silently with a limb or with a tool that extends the limb's control distance, which increases the variety of the robot's human-computer interaction modes and makes interaction more flexible. It is convenient for deaf-mute users to operate, is not limited by signal transmission or network speed as remote control is, is not affected by environmental noise as voice control is, and has higher accuracy.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a logical division, and other divisions are possible in practice; for instance, several units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices or units, and may be electrical, mechanical or of another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions that enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It is obvious to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiment, which is not described herein again.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A human-computer interaction method of a robot, characterized by comprising the following steps:
acquiring visual information containing operable items;
Controlling a projection unit to project the visual information into a projection area to form a projection pattern;
Acquiring an operation image of the user on the projection pattern, which is acquired by a shooting unit;
And identifying an operation instruction of the user on the operable item according to the operation image, and controlling a corresponding execution unit to execute the operation instruction.
2. The method of claim 1, wherein before controlling the projection unit to project the visual information into the projection area to form the projection pattern, further comprising:
And acquiring an area capable of bearing the projection pattern in a first preset range as a projection area.
3. The method according to claim 2, wherein the acquiring, as the projection area, an area capable of carrying the projection pattern within a first predetermined range specifically comprises:
Acquiring a continuous and regular plane capable of bearing the projection pattern in a first preset range as the projection area according to the surrounding environment image acquired by the shooting unit and/or a pre-stored environment layout; and/or the area capable of bearing the projection pattern comprises an object surface capable of forming the projection pattern through diffuse reflection or an object surface capable of forming the projection pattern through transmission;
And/or
The controlling a projection unit to project the visual information into a projection area to form a projection pattern specifically comprises:
And controlling the projection unit to carry out scaling and/or definition adjustment on the projection pattern according to the surrounding environment image acquired by the shooting unit, and/or the size of the projection area acquired by the distance measuring equipment and the distance from the projection unit to the projection area, so as to form a clear projection pattern in the projection area.
4. The method of claim 1, wherein the acquiring of the operation image of the user on the projection pattern, collected by the shooting unit, comprises:
acquiring a click operation image and/or a sliding operation image, collected by the shooting unit, of the user's limb, the shadow of the user's limb, a tool for extending the limb control distance and/or the shadow of the tool for extending the limb control distance on the projection pattern;
and/or
The identifying an operation instruction of the user on the operable item according to the operation image comprises:
And identifying the operable item clicked by the user according to the click operation image or identifying the sliding track of the user on the operable item according to the sliding operation image so as to obtain the corresponding operation instruction of the user on the operable item.
5. The method of claim 1, wherein before controlling the projection unit to project the visual information into the projection area to form the projection pattern, further comprising:
judging whether the visual information is in urgent need of user operation;
if yes, obtaining the position of the user, and controlling the robot to move to an area near the position of the user according to the position of the user;
If not, judging whether the user position is in a second preset range; if the user position is not within the second preset range, continuing to execute the current task; and if the user position is within the second preset range, projecting the visual information into a projection area to form a projection pattern.
6. The method according to any one of claims 1 to 5, wherein before acquiring the visual information containing the operable items, the method further comprises:
receiving a user calling instruction, acquiring a user position, and controlling the robot to move to an area near the user position according to the user position.
7. A robot, comprising: a projection unit, a shooting unit, a control unit and an execution unit, wherein the projection unit, the shooting unit and the execution unit are each electrically connected with the control unit;
the control unit is used for acquiring visual information containing operable items and sending the visual information to the projection unit;
the projection unit is used for projecting the visual information into a projection area to form a projection pattern;
the shooting unit is used for collecting an operation image of the user on the projection pattern and sending the operation image to the control unit;
the control unit is further used for identifying an operation instruction of the user on the operable item according to the operation image and controlling the corresponding execution unit to execute the operation instruction.
8. The robot of claim 7, wherein the control unit is further configured to:
Acquiring an area capable of bearing the projection pattern in a first preset range as a projection area, wherein the area capable of bearing the projection pattern comprises an object surface capable of forming the projection pattern through diffuse reflection or an object surface capable of forming the projection pattern through transmission;
and/or
The shooting unit is also used for collecting images of the surrounding environment and sending the images to the control unit;
and/or
the control unit is further configured to acquire a continuous and regular plane capable of bearing the projection pattern within a first predetermined range as the projection area according to the environment image and/or a pre-stored environment layout;
And/or
Further comprising: the distance measuring equipment is used for acquiring the size of the projection area and the distance from the projection unit to the projection area and sending the size and the distance to the control unit;
and/or
the control unit is further used for sending an adjusting instruction to the projection unit according to the environment image, and/or the size of the projection area and the distance from the projection unit to the projection area;
And/or
The projection unit is further used for carrying out zooming and/or definition adjustment on the projection pattern according to the adjustment instruction so as to form a clear projection pattern in the projection area;
and/or
The shooting unit is specifically used for collecting a click operation image and/or a sliding operation image of the user's limb, the shadow of the user's limb, the tool for extending the limb control distance and/or the shadow of the tool for extending the limb control distance on the projection pattern, and sending the click operation image and/or the sliding operation image to the control unit;
And/or
The control unit is specifically configured to identify an operable item clicked by a user according to the click operation image, or identify a sliding track of the user for the operable item according to the sliding operation image, so as to obtain an operation instruction of the corresponding user for the operable item.
9. The robot of claim 7, further comprising:
A movement unit for moving a position of the robot;
And/or
a user searching unit, used for acquiring the user position and sending the user position to the control unit;
and/or
The control unit is also used for judging whether the visual information needs to be operated by a user urgently; if so, acquiring the position of the user through a user searching unit, and controlling the robot to move to an area near the position of the user through the motion unit according to the position of the user; if not, judging whether the user position is in a second preset range through the user searching unit; if the user position is not within the second preset range, continuing to execute the current task; and if the user position is within the second preset range, controlling the projection unit to project the visual information into a projection area to form a projection pattern.
10. The robot of claim 9,
the control unit is further configured to receive a user call instruction, acquire the user position through the user search unit, and move to an area near the user position through the motion unit according to the user position.
CN201810583982.XA 2018-06-08 2018-06-08 human-computer interaction method of robot and robot Pending CN110580426A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810583982.XA CN110580426A (en) 2018-06-08 2018-06-08 human-computer interaction method of robot and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810583982.XA CN110580426A (en) 2018-06-08 2018-06-08 human-computer interaction method of robot and robot

Publications (1)

Publication Number Publication Date
CN110580426A true CN110580426A (en) 2019-12-17

Family

ID=68809180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810583982.XA Pending CN110580426A (en) 2018-06-08 2018-06-08 human-computer interaction method of robot and robot

Country Status (1)

Country Link
CN (1) CN110580426A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104052950A (en) * 2013-03-11 2014-09-17 日立麦克赛尔株式会社 Manipulation detection apparatus and manipulation detection method
CN104303102A (en) * 2012-05-25 2015-01-21 法国圣戈班玻璃厂 Method for projection or back-projection onto glass comprising a transparent layered element having diffuse reflection properties
CN105301876A (en) * 2015-08-24 2016-02-03 俞茂学 Projection method for intelligent projection robot, and robot employing projection method
CN105657304A (en) * 2014-11-12 2016-06-08 中兴通讯股份有限公司 Method and apparatus for controlling projection display
CN106228982A (en) * 2016-07-27 2016-12-14 华南理工大学 A kind of interactive learning system based on education services robot and exchange method
CN106303476A (en) * 2016-08-03 2017-01-04 纳恩博(北京)科技有限公司 The control method of robot and device
CN106297083A (en) * 2016-07-29 2017-01-04 广州市沃希信息科技有限公司 A kind of market shopping method, shopping server and shopping robot

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113741680A (en) * 2020-05-27 2021-12-03 北京字节跳动网络技术有限公司 Information interaction method and device
WO2022094739A1 (en) * 2020-11-03 2022-05-12 谢建军 Projection system, method, and apparatus, and computer device
CN113318410A (en) * 2021-05-31 2021-08-31 集美大学 Running training method
CN113478456A (en) * 2021-07-19 2021-10-08 北京云迹科技有限公司 Wheeled type sports accompanying robot and using method thereof
CN114274184A (en) * 2021-12-17 2022-04-05 重庆特斯联智慧科技股份有限公司 Logistics robot man-machine interaction method and system based on projection guidance
CN114274184B (en) * 2021-12-17 2024-05-24 重庆特斯联智慧科技股份有限公司 Logistics robot man-machine interaction method and system based on projection guidance
CN115120064A (en) * 2022-07-14 2022-09-30 慕思健康睡眠股份有限公司 Learning method based on intelligent mattress, intelligent mattress and storage medium

Similar Documents

Publication Publication Date Title
CN110580426A (en) human-computer interaction method of robot and robot
US11029767B2 (en) System and method for determining 3D orientation of a pointing device
Waldherr et al. A gesture based interface for human-robot interaction
Wilson et al. XWand: UI for intelligent spaces
US20180314329A1 (en) Gaze detection in a 3D mapping environment
KR20190100957A (en) Automatic control of wearable display device based on external conditions
CN103348305B (en) Controlled attitude system uses proprioception to create absolute reference system
US20090251559A1 (en) User interface system based on pointing device
JP5318623B2 (en) Remote control device and remote control program
US11449150B2 (en) Gesture control systems with logical states
US9874977B1 (en) Gesture based virtual devices
CN113116224B (en) Robot and control method thereof
CN109145847B (en) Identification method and device, wearable device and storage medium
US11188145B2 (en) Gesture control systems
KR20210136043A (en) Interacting with smart devices using pointing controllers
CN109241900B (en) Wearable device control method and device, storage medium and wearable device
CN113419634A (en) Display screen-based tourism interaction method
JP3792907B2 (en) Hand pointing device
KR20220047637A (en) Interactive attraction system and method for associating an object with a user
CN113342176A (en) Immersive tourism interactive system
CN111919250A (en) Intelligent assistant device for conveying non-language prompt
US11863963B2 (en) Augmented reality spatial audio experience
US20230218984A1 (en) Methods and systems for interactive gaming platform scene generation utilizing captured visual data and artificial intelligence-generated environment
KR20210116838A (en) Electronic device and operating method for processing a voice input based on a gesture
Sorger Alternative User Interfaces

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191217