CN110007765A - Human-computer interaction method, device and equipment - Google Patents

Human-computer interaction method, device and equipment

Info

Publication number
CN110007765A
Authority
CN
China
Prior art keywords
interaction
user
posture
standard
round
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910290169.8A
Other languages
Chinese (zh)
Inventor
施雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Star Vision Technology Co Ltd
Original Assignee
Shanghai Star Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Star Vision Technology Co Ltd filed Critical Shanghai Star Vision Technology Co Ltd
Priority to CN201910290169.8A priority Critical patent/CN110007765A/en
Publication of CN110007765A publication Critical patent/CN110007765A/en
Pending legal-status Critical Current

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/23: Recognition of whole body movements, e.g. for sport training
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01: Indexing scheme relating to G06F3/01
    • G06F2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a human-computer interaction method, device and equipment. The method comprises: when a user is detected entering an interaction area, generating at least one interaction round according to a preset interaction type; in each interaction round, generating and displaying a standard pose image, which is associated with a standard user pose, and obtaining, from interaction images acquired in real time, user images of at least one user matching the interaction type; recognizing the user's current pose in each user image, matching the current user pose against the standard user pose, and obtaining an interaction result corresponding to the interaction round; and generating and displaying result feedback information according to the interaction results corresponding to the at least one interaction round. Embodiments of the invention can interact with users through pose recognition, improve the interactive appeal of intelligent interaction equipment, provide multiple interaction modes, and enhance user experience.

Description

Human-computer interaction method, device and equipment
Technical field
Embodiments of the present invention relate to intelligent interaction technology, and in particular to a human-computer interaction method, device and equipment.
Background art
" human-computer interaction " is exactly exchanging between people and machine and operation.With the development of intelligent interaction technology, Yong Huke To carry out human-computer interaction, behaviour by the various ways such as voice, arm action, head tracking, vision tracking and intelligent interaction equipment Make intelligent interaction equipment and completes specified instruction task.
Currently, intelligent interaction technology is usually touch screen interaction, speech recognition interaction, recognition of face interaction, gesture identification friendship Mutually etc..
In the implementation of the present invention, the discovery prior art is unable to satisfy people to intelligent interaction equipment increasingly to inventor The problem of demand of the interest and appeal of growth.
Summary of the invention
Embodiments of the present invention provide a human-computer interaction method, device and equipment, which can improve the novelty and interest of intelligent interaction equipment and enhance user experience.
In a first aspect, an embodiment of the present invention provides a human-computer interaction method, comprising:
when a user is detected entering an interaction area, generating at least one interaction round according to a preset interaction type;
in each interaction round, generating and displaying a standard pose image, which is associated with a standard user pose, and obtaining, from interaction images acquired in real time, user images of at least one user matching the interaction type;
recognizing the user's current pose in each user image, matching the current user pose against the standard user pose, and obtaining an interaction result corresponding to the interaction round; and
generating and displaying result feedback information according to the interaction results corresponding to the at least one interaction round.
In a second aspect, an embodiment of the present invention further provides a human-computer interaction device, comprising:
an interaction round generation module, configured to generate at least one interaction round according to a preset interaction type when a user is detected entering an interaction area;
a user image acquisition module, configured to, in each interaction round, generate and display a standard pose image, which is associated with a standard user pose, and obtain, from interaction images acquired in real time, user images of at least one user matching the interaction type;
an interaction result acquisition module, configured to recognize the user's current pose in each user image, match the current user pose against the standard user pose, and obtain an interaction result corresponding to the interaction round; and
a result display module, configured to generate and display result feedback information according to the interaction results corresponding to the at least one interaction round.
In a third aspect, an embodiment of the present invention further provides a terminal device, comprising:
one or more processors; and
a memory for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the human-computer interaction method according to any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the human-computer interaction method according to any embodiment of the present invention.
In the embodiments of the present invention, when a user is detected entering an interaction area, at least one interaction round is generated according to a preset interaction type; in each interaction round, a standard pose image, which is associated with a standard user pose, is generated and displayed, and user images of at least one user matching the interaction type are obtained from the interaction images acquired in real time; the user's current pose is then recognized in each user image and matched against the standard user pose to obtain an interaction result corresponding to the interaction round; and result feedback information is generated and displayed according to the interaction results corresponding to the at least one interaction round. This solves the problem that the prior art cannot satisfy users' growing demand for interest and appeal in intelligent interaction equipment; the embodiments can interact with users through pose recognition, improve the interactive appeal of intelligent interaction equipment, provide multiple interaction modes, and enhance user experience.
Brief description of the drawings
Fig. 1a is a flowchart of a human-computer interaction method provided in Embodiment 1 of the present invention;
Fig. 1b is a schematic diagram of a current user pose provided in Embodiment 1 of the present invention;
Fig. 1c is a schematic diagram of a standard user pose provided in Embodiment 1 of the present invention;
Fig. 2 is a flowchart of a human-computer interaction method provided in Embodiment 2 of the present invention;
Fig. 3 is a flowchart of a human-computer interaction method provided in Embodiment 3 of the present invention;
Fig. 4 is a flowchart of a human-computer interaction method provided in Embodiment 4 of the present invention;
Fig. 5 is a structural schematic diagram of a human-computer interaction device provided in Embodiment 5 of the present invention;
Fig. 6 is a structural schematic diagram of a computer device provided in Embodiment 6 of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are used only to explain the present invention and are not intended to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
Embodiment one
Fig. 1a is a flowchart of a human-computer interaction method provided in Embodiment 1 of the present invention. This embodiment is applicable to scenarios of human-computer interaction. The method can be executed by a human-computer interaction device, which can be implemented in software and/or hardware and configured in a computer device, for example a terminal device. As shown in Fig. 1a, the method specifically comprises the following steps:
Step 101: when a user is detected entering the interaction area, generate at least one interaction round according to a preset interaction type.
The terminal device may be a computer device used for interacting with users, for example an all-in-one intelligent interaction machine. The interaction area may be a region within a set distance (for example, 5 m) in front of the terminal device screen. Within this set distance, the terminal device can recognize the user. When a user enters the recognizable interaction area, the terminal device performs recognition; if recognition succeeds, the terminal device determines that the user has entered the interaction area and generates at least one interaction round according to the preset interaction type.
Optionally, the terminal device may perform face recognition and skeleton keypoint recognition. Specifically, when a user enters the recognizable interaction area, the terminal device collects images of the interaction area to obtain captured images, performs face detection on them, and detects whether a person is present. If a person is detected, face recognition succeeds and skeleton keypoint recognition proceeds; once the skeleton keypoints of one or more recognized users are displayed, it is determined that the one or more users have entered the interaction area and the interactive state, and the terminal device can generate at least one interaction round according to the preset interaction type and interact with the users.
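A minimal sketch of this detection-then-rounds flow is given below in Python. The use of an OpenCV Haar cascade face detector, the helper names and the default of three rounds are illustrative assumptions; the patent does not specify a concrete detector, skeleton model or API.

```python
import cv2

# Assumed detector: an OpenCV Haar cascade bundled with opencv-python.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def user_entered_interaction_area(frame) -> bool:
    """Return True when at least one face is detected in the captured frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

def start_interaction(frame, interaction_type, rounds=3):
    """If a user is present, generate the interaction rounds for the preset type."""
    if not user_entered_interaction_area(frame):
        return None
    # Each round will later display a standard pose image and collect user images.
    return [{"round": i + 1, "type": interaction_type} for i in range(rounds)]
```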
The preset interaction type may include a multi-player battle mode and a single-player challenge mode. An interaction round is one turn of interaction. For example, three interaction rounds are generated for the multi-player battle mode: the several users participating in the multi-player battle mode need to complete three rounds of interaction, and the final interaction result is obtained from the per-round interaction results. Similarly, three interaction rounds are generated for the single-player challenge mode: the user participating in the single-player challenge mode needs to complete three rounds of interaction, and the final interaction result is obtained from the per-round interaction results.
Step 102: in each interaction round, generate and display a standard pose image, and obtain, from the interaction images acquired in real time, user images of at least one user matching the interaction type; the standard pose image is associated with a standard user pose.
The standard pose image is associated with a standard user pose and is used to prompt the user to strike that pose. Optionally, the standard pose image may be a picture containing the standard user pose, or it may contain the name of the standard user pose the user is prompted to perform, so that the user can strike the pose according to the name shown on the picture; for example, a standard user pose named "raise the left hand, lift the right foot".
In each interaction round, the standard pose image is generated and displayed, images of the users in the interaction area are captured to obtain interaction images, and the terminal device obtains, from the interaction images acquired in real time, user images of at least one user matching the interaction type.
Specifically, face detection may be performed on the interaction image based on the Open Source Computer Vision Library (OpenCV) to detect whether a person is present. If a person is detected in the interaction image, the person portions of the interaction image are cropped out by region-of-interest (ROI) image cropping, obtaining user images of at least one user matching the interaction type. Optionally, the user image is resized to the same size as the standard pose image. If no person is detected in the interaction image, images of the users in the interaction area are captured again to obtain a new interaction image.
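A sketch of this OpenCV face detection plus ROI cropping step follows, under the assumption of a Haar cascade detector, a (width, height) tuple for the standard pose image size, and a rough face-to-body box expansion; the function names and the heuristic are illustrative, not from the patent.

```python
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_user_images(interaction_image, standard_pose_size):
    """Detect persons in the interaction image and crop ROI user images, resized
    to the standard pose image size. One face is assumed to locate one user."""
    gray = cv2.cvtColor(interaction_image, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    img_h, img_w = interaction_image.shape[:2]
    user_images = []
    for (x, y, w, h) in faces:
        # Expand the face box sideways and downwards to cover the whole body.
        x0, x1 = max(0, x - w), min(img_w, x + 2 * w)
        y0 = max(0, y - h // 2)
        roi = interaction_image[y0:img_h, x0:x1]
        user_images.append(cv2.resize(roi, standard_pose_size))
    return user_images  # an empty list means: capture a new interaction image
```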
In a specific example, the interaction type is the multi-player battle mode. In each interaction round, a standard pose image and at least two image-capture prompt boxes matching the number of participants of the multi-player battle mode are generated and displayed. The image-capture prompt boxes prompt the users where to stand during the multi-player battle. According to the image reference positions corresponding to the image-capture prompt boxes in the interaction image, user images of at least two users matching the multi-player battle mode are obtained from the interaction images acquired in real time.
In another specific example, the interaction type is the single-player challenge mode. In each interaction round, a standard pose image and one image-capture prompt box are generated and displayed; the image-capture prompt box prompts the user where to stand during the single-player challenge. According to the image reference position corresponding to the image-capture prompt box in the interaction image, the user image of the one user matching the single-player challenge mode is obtained from the interaction images acquired in real time.
Step 103: recognize the user's current pose in each user image, match the current user pose against the standard user pose, and obtain an interaction result corresponding to the interaction round.
The user's current pose is recognized in each user image by obtaining skeleton keypoint data corresponding to the user image. Optionally, the skeleton keypoints contained in the person image are recognized; they may include at least two of the following: top of head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle and right ankle. A coordinate system is established with the image center of the user image as the origin, and the coordinate positions of all skeleton keypoints in this coordinate system are obtained. At least two set skeleton keypoints are connected by lines, and the angle between two keypoint connecting lines matching at least one set skeleton keypoint is obtained; for example, the angle between the left shoulder-left elbow line and the left elbow-left wrist line, or the angle between the left hip-left knee line and the left knee-left ankle line. The obtained coordinate positions and angles are taken as the skeleton keypoint data. The skeleton keypoints and their connecting lines are then identified according to the skeleton keypoint data, giving the user's current pose.
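The skeleton keypoint data described above can be illustrated with a short sketch: keypoint coordinates re-expressed in a coordinate system whose origin is the image center, plus joint angles between selected connecting lines. The keypoint names, the two example joints and the dictionary layout are assumptions; the patent lists 14 keypoints but prescribes no data format.

```python
import math

# Illustrative joints: the angle at the left elbow and at the left knee.
JOINT_TRIPLES = [
    ("left_shoulder", "left_elbow", "left_wrist"),
    ("left_hip", "left_knee", "left_ankle"),
]

def to_centered_coords(keypoints_px, image_width, image_height):
    """Shift pixel keypoints so the image center becomes the origin."""
    cx, cy = image_width / 2.0, image_height / 2.0
    return {name: (x - cx, y - cy) for name, (x, y) in keypoints_px.items()}

def joint_angle(a, b, c):
    """Angle (degrees) at point b between line b-a and line b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    cos_a = max(-1.0, min(1.0, (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)))
    return math.degrees(math.acos(cos_a))

def build_keypoint_data(keypoints_px, image_width, image_height):
    """Assemble coordinate positions and joint angles as the keypoint data."""
    coords = to_centered_coords(keypoints_px, image_width, image_height)
    angles = {f"{a}-{b}-{c}": joint_angle(coords[a], coords[b], coords[c])
              for a, b, c in JOINT_TRIPLES if all(k in coords for k in (a, b, c))}
    return {"coordinates": coords, "angles": angles}
```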
Fig. 1b is a schematic diagram of a current user pose provided in Embodiment 1 of the present invention. The skeleton keypoints include: top of head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle and right ankle. The 14 skeleton keypoints and the connecting lines between them are identified according to the skeleton keypoint data, giving the user's current pose.
Optionally, the standard skeleton keypoints and the connecting lines between them are identified according to standard skeleton keypoint data, giving the standard user pose. The standard skeleton keypoint data include the coordinate positions of the standard skeleton keypoints in the coordinate system, and the angle between two standard-keypoint connecting lines obtained by connecting at least two set skeleton keypoints among the standard skeleton keypoints and matching at least one set skeleton keypoint. The standard skeleton keypoints may include at least two of the following: top of head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle and right ankle.
Fig. 1c is a schematic diagram of a standard user pose provided in Embodiment 1 of the present invention. The standard skeleton keypoints include: top of head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle and right ankle. The 14 standard skeleton keypoints and the connecting lines between them are identified according to the standard skeleton keypoint data, giving the standard user pose.
The current user pose is matched against the standard user pose to obtain an interaction result corresponding to the interaction round. Specifically, the coordinate positions of the skeleton keypoints in the skeleton keypoint data, and the angles between two keypoint connecting lines matching at least one set skeleton keypoint, are respectively matched against the coordinate positions of the standard skeleton keypoints in the corresponding standard skeleton keypoint data and the angles between the corresponding standard-keypoint connecting lines.
Optionally, the user image is scored according to, in the matching results, the position deviation between the coordinate position of each skeleton keypoint and the coordinate position of the corresponding standard skeleton keypoint, and the angular deviation between the angle of each pair of connecting lines matching a set skeleton keypoint and the angle of the corresponding pair of standard connecting lines, obtaining the user's interaction score. Then, based on the interaction score, it is judged whether the current user pose is consistent with the standard user pose, and the interaction result corresponding to the interaction round is determined.
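The scoring rule is only described qualitatively (position deviation of each keypoint plus angular deviation of each connecting-line pair), so the sketch below uses an assumed linear penalty on a 0-100 scale; the normalisation constants and the threshold are illustrative, not taken from the patent.

```python
import math

def score_pose(user_data, standard_data, pos_scale=100.0, ang_scale=45.0):
    """Score a user image against the standard pose from keypoint position
    deviations and joint angular deviations (illustrative weighting)."""
    pos_devs = [math.dist(user_data["coordinates"][k], standard_data["coordinates"][k])
                for k in standard_data["coordinates"] if k in user_data["coordinates"]]
    ang_devs = [abs(user_data["angles"][j] - standard_data["angles"][j])
                for j in standard_data["angles"] if j in user_data["angles"]]
    pos_term = sum(pos_devs) / (len(pos_devs) * pos_scale) if pos_devs else 1.0
    ang_term = sum(ang_devs) / (len(ang_devs) * ang_scale) if ang_devs else 1.0
    # Smaller deviations -> higher score, clipped to the [0, 100] range.
    return max(0.0, 100.0 * (1.0 - 0.5 * (pos_term + ang_term)))

def pose_consistent(score, threshold=60.0):
    """Judge whether the current pose is consistent with the standard pose."""
    return score > threshold
```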
In a specific example, the interaction type is the multi-player battle mode. The current pose of each user is recognized in each user image and matched against the standard user pose; according to the matching results, each user image is scored to obtain each user's interaction score; the interaction result corresponding to the interaction round is then determined according to the user with the highest interaction score, or according to the numerical relationship between each user's interaction score and a preset score threshold.
In another specific example, the interaction type is the single-player challenge mode. The user's current pose is recognized in the user image and matched against the standard user pose; according to the matching result, the user image is scored to obtain the user's interaction score; according to the numerical relationship between the user's interaction score and a preset score threshold, the interaction result corresponding to the interaction round is determined as challenge success or challenge failure.
Step 104: generate and display result feedback information according to the interaction results corresponding to the at least one interaction round.
The result feedback information may be a QR code. The terminal device may generate a QR code according to the interaction results corresponding to the at least one interaction round and display the QR code on the screen. By scanning the QR code shown on the screen, the user can retrieve the photos taken during the interaction and the corresponding rewards.
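As a sketch of this result feedback step, the per-round results and a photo link could be packed into a QR code. The use of the third-party Python "qrcode" package, the JSON payload and the example URL are assumptions; the patent does not say how the code is generated.

```python
import json
import qrcode  # third-party "qrcode" package; its use here is an assumption

def make_result_qr(round_results, photo_url):
    """Encode the per-round interaction results and a (hypothetical) photo link
    into a QR code image that the terminal can show on its screen."""
    payload = json.dumps({"results": round_results, "photos": photo_url})
    return qrcode.make(payload)  # returns a PIL image

# Example usage (illustrative values):
# img = make_result_qr(["success", "success", "failure"], "https://example.com/p/123")
# img.save("result_qr.png")
```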
In a specific example, the interaction type is a two-player battle mode and three interaction rounds are generated. The two users participating in the two-player battle mode need to complete three rounds of interaction, and the final interaction result is obtained from the per-round interaction results: one of the two users is the winning user of the battle. The terminal device displays the final interaction result on the screen, generates a QR code according to the interaction result and displays it on the screen, so that the winning user can retrieve the photos taken during the interaction and the corresponding rewards by scanning the QR code.
In another specific example, the interaction type is the single-player challenge mode and three interaction rounds are generated. The user participating in the single-player challenge mode needs to complete three rounds of interaction, and the final interaction result is obtained from the per-round interaction results. If the interaction results of all three rounds are challenge success, the final interaction result is that the user succeeds in the challenge. The terminal device displays the challenge success on the screen and generates a QR code, which is displayed on the screen so that the user can retrieve the photos taken during the interaction and the corresponding rewards by scanning it.
This embodiment of the present invention provides a human-computer interaction method which, when a user is detected entering the interaction area, generates at least one interaction round according to a preset interaction type; in each interaction round, generates and displays a standard pose image, which is associated with a standard user pose, and obtains, from the interaction images acquired in real time, user images of at least one user matching the interaction type; then recognizes the user's current pose in each user image, matches the current user pose against the standard user pose, and obtains an interaction result corresponding to the interaction round; and generates and displays result feedback information according to the interaction results corresponding to the at least one interaction round. This solves the problem that the prior art cannot satisfy users' growing demand for interest and appeal in intelligent interaction equipment; the method can interact with users through pose recognition, improves the interactive appeal of intelligent interaction equipment, provides multiple interaction modes, and enhances user experience.
Embodiment two
Fig. 2 is a flowchart of a human-computer interaction method provided in Embodiment 2 of the present invention. This embodiment may be combined with the optional solutions of one or more of the above embodiments. In this embodiment, the interaction type may include a multi-player battle mode.
Furthermore, in each interaction round, generating and displaying a standard pose image and obtaining, from the interaction images acquired in real time, user images of at least one user matching the interaction type may include: in each interaction round, generating and displaying a standard pose image and at least two image-capture prompt boxes matching the number of participants of the multi-player battle mode, where the image-capture prompt boxes prompt the users where to stand during the multi-player battle; and, according to the image reference positions corresponding to the image-capture prompt boxes in the interaction image, obtaining user images of at least two users matching the multi-player battle mode from the interaction images acquired in real time.
Furthermore, recognizing the user's current pose in each user image, matching the current user pose against the standard user pose, and obtaining an interaction result corresponding to the interaction round may include: recognizing each user's current pose in each user image and matching each current user pose against the standard user pose respectively; scoring each user image according to the matching results to obtain each user's interaction score; and determining the interaction result corresponding to the interaction round according to the user with the highest interaction score, or according to the numerical relationship between each user's interaction score and a preset score threshold.
As shown in Fig. 2, this method specifically comprises the following steps:
Step 201: when a user is detected entering the interaction area, generate at least one interaction round according to the preset interaction type.
Step 202: in each interaction round, generate and display a standard pose image and at least two image-capture prompt boxes matching the number of participants of the multi-player battle mode; the image-capture prompt boxes prompt the users where to stand during the multi-player battle, and the standard pose image is associated with a standard user pose.
The image-capture prompt boxes prompt the users where to stand during the multi-player battle. For example, in a two-player battle mode, two image-capture prompt boxes are generated and displayed; the two participating users each select an image-capture prompt box and stand at the image-capture position it indicates. Images of the two users in the interaction area are then captured to obtain the interaction image.
Step 203: according to the image reference positions corresponding to the image-capture prompt boxes in the interaction image, obtain user images of at least two users matching the multi-player battle mode from the interaction images acquired in real time.
The image reference position is the image region in the interaction image corresponding to a user standing at the image-capture position indicated by an image-capture prompt box. When a user stands at the corresponding image-capture position according to the image-capture prompt box and image capture is performed, the person image of that user is located at the image reference position in the interaction image. The interaction image is cropped according to the image reference positions corresponding to the image-capture prompt boxes, obtaining user images of at least two users matching the interaction type, as sketched below.
Optionally, the image-capture prompt boxes and the corresponding image reference positions are determined in advance according to the correspondence between the image-capture positions of the terminal device and the image regions in the interaction image.
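A small sketch of cropping user images at the image reference positions tied to the prompt boxes. Representing each reference position as an (x, y, w, h) pixel rectangle is an assumption, as is the resize to the standard pose image size.

```python
import cv2

def crop_user_images(interaction_image, reference_boxes, standard_pose_size):
    """Crop the interaction image at the image reference positions corresponding
    to the image-capture prompt boxes and resize each crop to the standard pose
    image size. reference_boxes: list of (x, y, w, h) pixel rectangles (assumed)."""
    crops = []
    for (x, y, w, h) in reference_boxes:
        roi = interaction_image[y:y + h, x:x + w]
        crops.append(cv2.resize(roi, standard_pose_size))
    return crops
```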
Step 204: recognize each user's current pose in each user image, and match each current user pose against the standard user pose respectively.
Step 205: score each user image according to the matching results to obtain each user's interaction score.
Step 206: determine the interaction result corresponding to the interaction round according to the user with the highest interaction score, or according to the numerical relationship between each user's interaction score and a preset score threshold.
The users may be ranked by interaction score from high to low, and the user ranked first, i.e., the user with the highest interaction score, is determined as the winner of the current interaction round. The interaction result corresponding to the interaction round is that this user wins.
Optionally, the interaction result corresponding to the interaction round may also be determined according to the numerical relationship between each user's interaction score and a preset score threshold. For example, in a two-player battle mode, the interaction scores of the two users are each compared with the preset score threshold. An interaction score greater than the preset score threshold indicates that the current user pose is consistent with the standard user pose and the challenge succeeds; an interaction score less than or equal to the preset score threshold indicates that the current user pose is inconsistent with the standard user pose and the challenge fails.
If the interaction scores of both users are greater than the preset score threshold, both users succeed and the interaction result corresponding to the interaction round is a draw; if both interaction scores are less than or equal to the preset score threshold, both users fail and the interaction result is also a draw; if one user's interaction score is greater than the preset score threshold and the other's is less than or equal to it, only the user whose score exceeds the threshold succeeds, and the interaction result corresponding to the interaction round is that this user wins.
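The two-player round outcome described above reduces to a simple comparison against the threshold, sketched below; the function and label names are illustrative, and the highest-score variant from Step 206 is included for contrast.

```python
def battle_round_result(score_a, score_b, threshold):
    """Per-round outcome of the two-player battle mode: both users above, or both
    at/below, the preset score threshold is a draw; otherwise the user whose
    score exceeds the threshold wins the round."""
    a_passes, b_passes = score_a > threshold, score_b > threshold
    if a_passes == b_passes:
        return "draw"
    return "user_a_wins" if a_passes else "user_b_wins"

def battle_round_winner(scores):
    """Alternative rule from Step 206: the user with the highest interaction
    score wins the round. scores: mapping of user id -> interaction score."""
    return max(scores, key=scores.get)
```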
Step 207: generate and display result feedback information according to the interaction results corresponding to the at least one interaction round.
This embodiment of the present invention provides a human-computer interaction method in which, in each interaction round, at least two image-capture prompt boxes matching the number of participants of the multi-player battle mode are generated and displayed, prompting the users where to stand during the multi-player battle; user images of at least two users matching the multi-player battle mode are then obtained from the interaction images acquired in real time according to the image reference positions corresponding to the image-capture prompt boxes; each user's current pose is recognized in each user image and matched against the standard user pose; each user image is scored according to the matching results to obtain each user's interaction score; and the interaction result corresponding to the interaction round is determined according to the user with the highest interaction score, or according to the numerical relationship between each user's interaction score and the preset score threshold. User images can thus be obtained effectively for pose recognition, multiple users can interact simultaneously in the multi-player battle mode, and the interactive appeal of the intelligent interaction equipment is improved.
Embodiment three
Fig. 3 is a flowchart of a human-computer interaction method provided in Embodiment 3 of the present invention. This embodiment may be combined with the optional solutions of one or more of the above embodiments. In this embodiment, the interaction type may include a single-player challenge mode.
Furthermore, in each interaction round, generating and displaying a standard pose image and obtaining, from the interaction images acquired in real time, user images of at least one user matching the interaction type may include: in each interaction round, generating and displaying a standard pose image and one image-capture prompt box, where the image-capture prompt box prompts the user where to stand during the single-player challenge; and, according to the image reference position corresponding to the image-capture prompt box in the interaction image, obtaining the user image of the one user matching the single-player challenge mode from the interaction images acquired in real time.
Furthermore, recognizing the user's current pose in each user image, matching the current user pose against the standard user pose, and obtaining an interaction result corresponding to the interaction round may include: recognizing the user's current pose in the user image and matching the current user pose against the standard user pose; scoring the user image according to the matching result to obtain the user's interaction score; and determining, according to the numerical relationship between the user's interaction score and a preset score threshold, that the interaction result corresponding to the interaction round is challenge success or challenge failure.
As shown in Fig. 3, the method specifically comprises the following steps:
Step 301: when a user is detected entering the interaction area, generate at least one interaction round according to the preset interaction type.
Step 302: in each interaction round, generate and display a standard pose image and one image-capture prompt box; the image-capture prompt box prompts the user where to stand during the single-player challenge, and the standard pose image is associated with a standard user pose.
The image-capture prompt box prompts the user where to stand during the single-player challenge. One image-capture prompt box is generated and displayed so that the user participating in the single-player challenge stands at the image-capture position it indicates. Images of the user in the interaction area are then captured to obtain the interaction image.
Step 303: according to the image reference position corresponding to the image-capture prompt box in the interaction image, obtain the user image of the one user matching the single-player challenge mode from the interaction images acquired in real time.
The image reference position is the image region in the interaction image corresponding to a user standing at the image-capture position indicated by the image-capture prompt box. When the user stands at the corresponding image-capture position according to the prompt box and image capture is performed, the person image of that user is located at the image reference position in the interaction image. The interaction image is cropped according to the image reference position corresponding to the image-capture prompt box, obtaining the user image of the one user matching the interaction type.
Optionally, the image-capture prompt box and the corresponding image reference position are determined in advance according to the correspondence between the image-capture positions of the terminal device and the image regions in the interaction image.
Step 304: recognize the user's current pose in the user image, and match the current user pose against the standard user pose.
Step 305: score the user image according to the matching result to obtain the user's interaction score.
Step 306: according to the numerical relationship between the user's interaction score and the preset score threshold, determine that the interaction result corresponding to the interaction round is challenge success or challenge failure.
The user's interaction score is compared with the preset score threshold. An interaction score greater than the preset score threshold indicates that the current user pose is consistent with the standard user pose and the challenge succeeds; an interaction score less than or equal to the preset score threshold indicates that the current user pose is inconsistent with the standard user pose and the challenge fails.
If the user's interaction score is greater than the preset score threshold, the interaction result corresponding to the interaction round is challenge success; if the user's interaction score is less than or equal to the preset score threshold, the interaction result corresponding to the interaction round is challenge failure.
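A sketch of the single-player per-round decision and of aggregating per-round results into a final challenge result. Treating "success only if every round succeeds" as the general rule is an assumption generalised from the three-round example in Embodiment 1; names are illustrative.

```python
def challenge_round_result(score, threshold):
    """Per-round result of the single-player challenge mode: success only when
    the interaction score exceeds the preset score threshold."""
    return "success" if score > threshold else "failure"

def final_challenge_result(round_results):
    """Final result over all rounds (assumed rule: every round must succeed)."""
    return "success" if all(r == "success" for r in round_results) else "failure"
```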
Step 307: generate and display result feedback information according to the interaction results corresponding to the at least one interaction round.
This embodiment of the present invention provides a human-computer interaction method in which, in each interaction round, one image-capture prompt box is generated and displayed, prompting the user where to stand during the single-player challenge; the user image of the one user matching the single-player challenge mode is then obtained from the interaction images acquired in real time according to the image reference position corresponding to the image-capture prompt box; the current user pose is matched against the standard user pose; the user image is scored according to the matching result to obtain the user's interaction score; and, according to the numerical relationship between the user's interaction score and the preset score threshold, the interaction result corresponding to the interaction round is determined as challenge success or challenge failure. User images can thus be obtained effectively for pose recognition, the equipment can interact with a user in the single-player challenge mode, and the interactive appeal of the intelligent interaction equipment is improved.
Embodiment four
Fig. 4 is a flowchart of a human-computer interaction method provided in Embodiment 4 of the present invention. This embodiment may be combined with the optional solutions of one or more of the above embodiments. In this embodiment, recognizing the user's current pose in the user image may include: obtaining skeleton keypoint data corresponding to the user image; and determining the current user pose according to the skeleton keypoint data.
As shown in Fig. 4, the method specifically comprises the following steps:
Step 401: when a user is detected entering the interaction area, generate at least one interaction round according to the preset interaction type.
Step 402: in each interaction round, generate and display a standard pose image, and obtain, from the interaction images acquired in real time, user images of at least one user matching the interaction type; the standard pose image is associated with a standard user pose.
Step 403: obtain skeleton keypoint data corresponding to the user image.
Optionally, obtaining the skeleton keypoint data corresponding to the person image may include: recognizing the skeleton keypoints contained in the person image, the skeleton keypoints including at least two of the following: top of head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle and right ankle; establishing a coordinate system with the image center of the user image as the origin and obtaining the coordinate positions of all skeleton keypoints in this coordinate system; connecting at least two set skeleton keypoints by lines and obtaining the angle between two keypoint connecting lines matching at least one set skeleton keypoint, for example the angle between the left shoulder-left elbow line and the left elbow-left wrist line, or the angle between the left hip-left knee line and the left knee-left ankle line; and taking the obtained coordinate positions and angles as the skeleton keypoint data.
Optionally, the skeleton keypoints include: top of head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle and right ankle.
Step 404: determine the current user pose according to the skeleton keypoint data, match the current user pose against the standard user pose, and obtain the interaction result corresponding to the interaction round.
The obtained coordinate positions and angles are taken as the skeleton keypoint data, and the skeleton keypoints and their connecting lines are identified according to the skeleton keypoint data, giving the user's current pose.
Optionally, the 14 skeleton keypoints and the connecting lines between them are identified according to the skeleton keypoint data, giving the user's current pose.
The standard skeleton keypoints and the connecting lines between them are identified according to standard skeleton keypoint data, giving the standard user pose. The standard skeleton keypoint data include the coordinate positions of the standard skeleton keypoints in the coordinate system, and the angle between two standard-keypoint connecting lines obtained by connecting at least two set skeleton keypoints among the standard skeleton keypoints and matching at least one set skeleton keypoint. The standard skeleton keypoints may include at least two of the following: top of head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle and right ankle.
Optionally, the standard skeleton keypoints include: top of head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle and right ankle. The 14 standard skeleton keypoints and the connecting lines between them are identified according to the standard skeleton keypoint data, giving the standard user pose.
The current user pose is matched against the standard user pose to obtain the interaction result corresponding to the interaction round. Specifically, the coordinate positions of the skeleton keypoints in the skeleton keypoint data, and the angles between two keypoint connecting lines matching at least one set skeleton keypoint, are respectively matched against the coordinate positions of the standard skeleton keypoints in the corresponding standard skeleton keypoint data and the angles between the corresponding standard-keypoint connecting lines.
Optionally, the user image is scored according to, in the matching results, the position deviation between the coordinate position of each skeleton keypoint and the coordinate position of the corresponding standard skeleton keypoint, and the angular deviation between the angle of each pair of connecting lines matching a set skeleton keypoint and the angle of the corresponding pair of standard connecting lines, obtaining the user's interaction score. Then, based on the interaction score, it is judged whether the current user pose is consistent with the standard user pose, and the interaction result corresponding to the interaction round is determined.
Step 405: generate and display result feedback information according to the interaction results corresponding to the at least one interaction round.
This embodiment of the present invention provides a human-computer interaction method in which skeleton keypoint data corresponding to the user image are obtained and the current user pose is determined from the skeleton keypoint data. By recognizing the user's skeleton keypoints, the current user pose can be determined, and whether the current user pose is consistent with the standard user pose can be judged from the skeleton keypoint data, determining the interaction result corresponding to the interaction round.
Embodiment five
Fig. 5 is a structural schematic diagram of a human-computer interaction device provided in Embodiment 5 of the present invention. As shown in Fig. 5, the device is configured in a terminal device and comprises: an interaction round generation module 501, a user image acquisition module 502, an interaction result acquisition module 503, and a result display module 504.
The interaction round generation module 501 is configured to generate at least one interaction round according to a preset interaction type when a user is detected entering the interaction area. The user image acquisition module 502 is configured to, in each interaction round, generate and display a standard pose image, which is associated with a standard user pose, and obtain, from the interaction images acquired in real time, user images of at least one user matching the interaction type. The interaction result acquisition module 503 is configured to recognize the user's current pose in each user image, match the current user pose against the standard user pose, and obtain an interaction result corresponding to the interaction round. The result display module 504 is configured to generate and display result feedback information according to the interaction results corresponding to the at least one interaction round.
This embodiment of the present invention provides a human-computer interaction device which, when a user is detected entering the interaction area, generates at least one interaction round according to a preset interaction type; in each interaction round, generates and displays a standard pose image, which is associated with a standard user pose, and obtains, from the interaction images acquired in real time, user images of at least one user matching the interaction type; recognizes the user's current pose in each user image, matches the current user pose against the standard user pose, and obtains an interaction result corresponding to the interaction round; and generates and displays result feedback information according to the interaction results corresponding to the at least one interaction round. This solves the problem that the prior art cannot satisfy users' growing demand for interest and appeal in intelligent interaction equipment; the device can interact with users through pose recognition, improves the interactive appeal of intelligent interaction equipment, provides multiple interaction modes, and enhances user experience.
Optionally, on the basis of the above technical solution, the standard pose image may be a picture containing the standard user pose, or a picture containing the name of the standard user pose the user is prompted to perform.
Optionally, on the basis of the above technical solution, the interaction type may include a multi-player battle mode, and the user image acquisition module 502 may include: a first display unit, configured to, in each interaction round, generate and display a standard pose image and at least two image-capture prompt boxes matching the number of participants of the multi-player battle mode, where the image-capture prompt boxes prompt the users where to stand during the multi-player battle; and a first acquisition unit, configured to obtain, according to the image reference positions corresponding to the image-capture prompt boxes in the interaction image, user images of at least two users matching the multi-player battle mode from the interaction images acquired in real time.
Optionally, on the basis of the above technical solution, the interaction type may include a single-player challenge mode, and the user image acquisition module 502 may include: a second display unit, configured to, in each interaction round, generate and display a standard pose image and one image-capture prompt box, where the image-capture prompt box prompts the user where to stand during the single-player challenge; and a second acquisition unit, configured to obtain, according to the image reference position corresponding to the image-capture prompt box in the interaction image, the user image of the one user matching the single-player challenge mode from the interaction images acquired in real time.
Optionally, on the basis of the above technical solution, the interaction result acquisition module 503 may include: a first matching unit, configured to recognize each user's current pose in each user image and match each current user pose against the standard user pose respectively; a first scoring unit, configured to score each user image according to the matching results to obtain each user's interaction score; and a first determination unit, configured to determine the interaction result corresponding to the interaction round according to the user with the highest interaction score, or according to the numerical relationship between each user's interaction score and a preset score threshold.
Optionally, on the basis of the above technical solution, the interaction result acquisition module 503 may include: a second matching unit, configured to recognize the user's current pose in the user image and match the current user pose against the standard user pose; a second scoring unit, configured to score the user image according to the matching result to obtain the user's interaction score; and a second determination unit, configured to determine, according to the numerical relationship between the user's interaction score and a preset score threshold, that the interaction result corresponding to the interaction round is challenge success or challenge failure.
Optionally, on the basis of the above technical solution, the user image acquisition module 502 may include: a data acquisition unit, configured to obtain skeleton keypoint data corresponding to the user image; and a pose determination unit, configured to determine the current user pose according to the skeleton keypoint data.
Optionally, on the basis of the above technical solution, the data acquisition unit may include: a recognition subunit, configured to recognize the skeleton keypoints contained in the person image, the skeleton keypoints including at least two of the following: top of head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle and right ankle; a position acquisition subunit, configured to establish a coordinate system with the image center of the user image as the origin and obtain the coordinate positions of all skeleton keypoints in this coordinate system; an angle acquisition subunit, configured to connect at least two set skeleton keypoints by lines and obtain the angle between two keypoint connecting lines matching at least one set skeleton keypoint; and a data determination subunit, configured to take the obtained coordinate positions and angles as the skeleton keypoint data.
The human-computer interaction device provided by this embodiment of the present invention can execute the human-computer interaction method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects for executing the method.
Embodiment six
Fig. 6 is a structural schematic diagram of a computer device provided in Embodiment 6 of the present invention. Fig. 6 shows a block diagram of an exemplary computer device 612 suitable for implementing embodiments of the present invention. The computer device 612 shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention. The computer device 612 can serve as a terminal device.
As shown in Fig. 6, the computer device 612 takes the form of a general-purpose computing device. The components of the computer device 612 may include, but are not limited to: one or more processors or processing units 616, a system memory 628, and a bus 618 connecting the different system components (including the system memory 628 and the processing unit 616).
The bus 618 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The computer device 612 typically comprises a variety of computer-system-readable media. These media may be any available media that can be accessed by the computer device 612, including volatile and non-volatile media, and removable and non-removable media.
The system memory 628 may include computer-system-readable media in the form of volatile memory, such as a random access memory (RAM) 630 and/or a cache memory 632. The computer device 612 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, the storage system 634 may be used to read from and write to non-removable, non-volatile magnetic media (not shown in Fig. 6, commonly referred to as a "hard disk drive"). Although not shown in Fig. 6, a disk drive for reading from and writing to a removable non-volatile magnetic disk (such as a "floppy disk") and an optical disk drive for reading from and writing to a removable non-volatile optical disk (such as a CD-ROM, DVD-ROM or other optical media) may be provided. In these cases, each drive may be connected to the bus 618 through one or more data media interfaces. The system memory 628 may include at least one program product having a set of (for example, at least one) program modules configured to perform the functions of the embodiments of the present invention.
A program/utility 640 having a set of (at least one) program modules 642 may be stored, for example, in the system memory 628. Such program modules 642 include, but are not limited to, an operating system, one or more application programs, other program modules and program data, and each or some combination of these examples may include an implementation of a network environment. The program modules 642 generally perform the functions and/or methods in the embodiments described in the present invention.
The computer device 612 may also communicate with one or more external devices 614 (such as a keyboard, a pointing device, a display 624, etc.), with one or more devices that enable a user to interact with the computer device 612, and/or with any device (such as a network card, a modem, etc.) that enables the computer device 612 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 622. Moreover, the computer device 612 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through a network adapter 620. As shown, the network adapter 620 communicates with the other modules of the computer device 612 through the bus 618. It should be understood that, although not shown in Fig. 6, other hardware and/or software modules may be used in conjunction with the computer device 612, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems and the like.
The computer device 612 may be a terminal device. The processing unit 616 of the computer device 612 runs the programs stored in the system memory 628 to execute various functional applications and data processing, for example to implement the human-computer interaction method provided by the embodiments of the present invention. That is: when it is detected that a user enters the interaction area, at least one interaction round is generated according to a preset interaction mode; in each interaction round, a standard posture image is generated and displayed, and user images of at least one user matching the interaction mode are obtained from the interaction image collected in real time, where the standard posture image is associated with a standard user posture; the current user posture of the user is identified in each user image, and the current user posture is matched against the standard user posture to obtain an interaction result corresponding to the interaction round; and result feedback information is generated and displayed according to the interaction results corresponding to the at least one interaction round.
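As one illustration of the matching and scoring step just described, the sketch below compares the joint angles of a current user posture (in the bone-key-point form shown earlier) against those of the standard user posture and turns the comparison into a per-round interaction result in the spirit of claims 5 and 6. The tolerance, the score threshold and the similarity metric are assumptions made for the example; the embodiment does not prescribe a particular metric.

```python
def pose_similarity(current, standard, tolerance_deg=20.0):
    """Fraction of compared joint angles that fall within tolerance of the
    standard posture; 1.0 means every compared angle matches."""
    joints = set(current["angles"]) & set(standard["angles"])
    if not joints:
        return 0.0
    hits = sum(1 for j in joints
               if abs(current["angles"][j] - standard["angles"][j]) <= tolerance_deg)
    return hits / len(joints)

def round_result(scores, mode="single", score_threshold=0.6):
    """Per-round interaction result: the highest score wins in the multi-player
    battle mode, while the single-player mode compares the score to a threshold."""
    if mode == "battle":
        winner = max(scores, key=scores.get)
        return {"winner": winner, "scores": scores}
    user, score = next(iter(scores.items()))
    return {"user": user, "passed": score >= score_threshold, "score": score}
```

A real device might instead weight joints differently or compare coordinate positions directly; the shape of the per-round result, not the specific metric, is what this sketch is meant to show.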
Embodiment seven
Embodiment Seven of the present invention further provides a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, the human-computer interaction method provided by the embodiments of the present invention is implemented. The method may specifically include: when it is detected that a user enters the interaction area, generating at least one interaction round according to a preset interaction mode; in each interaction round, generating and displaying a standard posture image, and obtaining, from the interaction image collected in real time, user images of at least one user matching the interaction mode, where the standard posture image is associated with a standard user posture; identifying the current user posture of the user in each user image, and matching the current user posture against the standard user posture to obtain an interaction result corresponding to the interaction round; and generating and displaying result feedback information according to the interaction results corresponding to the at least one interaction round.
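For orientation, the following skeleton ties the sketches above into the overall flow that Embodiments Six and Seven both describe: detect the user, run the rounds, show the standard posture image, capture and match, then display feedback. The camera, display, standard-posture library, pose estimator and matching function are hypothetical stand-ins passed in as parameters, and the round count, hold time and threshold are arbitrary example values.

```python
import random
import time

def run_interaction(camera, display, standard_poses, estimate_pose, match_fn,
                    rounds=3, hold_seconds=3.0, score_threshold=0.6):
    """Skeleton of the claimed flow. camera.user_in_area(), camera.capture(),
    display.show(), display.show_text(), the standard_poses entries and
    estimate_pose()/match_fn are all assumed interfaces, not defined by the patent."""
    if not camera.user_in_area():                 # user detected in the interaction area?
        return None

    results = []
    for _ in range(rounds):                       # at least one interaction round
        standard = random.choice(standard_poses)  # a standard posture image plus its pose data
        display.show(standard["image"])           # display the standard posture image
        time.sleep(hold_seconds)                  # give the user time to imitate the posture

        frame = camera.capture()                  # interaction image collected in real time
        current = estimate_pose(frame)            # current user posture (bone key point data)
        score = match_fn(current, standard["pose"])
        results.append({"score": score, "passed": score >= score_threshold})

    passed = sum(1 for r in results if r["passed"])
    display.show_text(f"Rounds passed: {passed}/{rounds}")   # result feedback information
    return results
```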
The computer storage medium of this embodiment of the present invention may use any combination of one or more computer-readable media. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and may send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device.
The program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical cable, RF and the like, or any suitable combination of the above.
Computer program code for carrying out the operations of the present invention may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk, C++, Ruby and Go, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described herein, and that various obvious changes, readjustments and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, the present invention is not limited to the above embodiments; without departing from the inventive concept, it may also include more other equivalent embodiments, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A human-computer interaction method, characterized by comprising:
when it is detected that a user enters an interaction area, generating at least one interaction round according to a preset interaction mode;
in each interaction round, generating and displaying a standard posture image, and obtaining, from an interaction image collected in real time, user images of at least one user matching the interaction mode, wherein the standard posture image is associated with a standard user posture;
identifying a current user posture of the user in each user image, and matching the current user posture against the standard user posture to obtain an interaction result corresponding to the interaction round; and
generating and displaying result feedback information according to the interaction results corresponding to the at least one interaction round.
2. The method according to claim 1, wherein the standard posture image is a picture containing the standard user posture, or a picture containing the name of the standard user posture that the user is prompted to perform.
3. The method according to claim 1, wherein the interaction mode comprises a multi-player battle mode;
and wherein, in each interaction round, generating and displaying the standard posture image, and obtaining, from the interaction image collected in real time, the user images of the at least one user matching the interaction mode comprises:
in each interaction round, generating and displaying the standard posture image and at least two image-acquisition prompt boxes matching the number of participants in the multi-player battle mode, wherein the image-acquisition prompt boxes are used to prompt the users about the image acquisition positions they need to occupy during the multi-player battle; and
obtaining, from the interaction image collected in real time, the user images of at least two users matching the multi-player battle mode according to image reference positions in the interaction image corresponding to the image-acquisition prompt boxes.
4. The method according to claim 1, wherein the interaction mode comprises a single-player challenge mode;
and wherein, in each interaction round, generating and displaying the standard posture image, and obtaining, from the interaction image collected in real time, the user images of the at least one user matching the interaction mode comprises:
in each interaction round, generating and displaying the standard posture image and one image-acquisition prompt box, wherein the image-acquisition prompt box is used to prompt the user about the image acquisition position that needs to be occupied during the single-player challenge; and
obtaining, from the interaction image collected in real time, the user image of one user matching the single-player challenge mode according to an image reference position in the interaction image corresponding to the image-acquisition prompt box.
5. The method according to claim 3, wherein identifying the current user posture of the user in each user image, and matching the current user posture against the standard user posture to obtain the interaction result corresponding to the interaction round comprises:
identifying the current user posture of the user in each user image, and matching each current user posture against the standard user posture respectively;
scoring each user image according to the matching result to obtain an interaction score of each user; and
determining the interaction result corresponding to the interaction round according to the user with the highest interaction score, or determining the interaction result corresponding to the interaction round according to a numerical relationship between the interaction score of each user and a preset score threshold.
6. The method according to claim 4, wherein identifying the current user posture of the user in each user image, and matching the current user posture against the standard user posture to obtain the interaction result corresponding to the interaction round comprises:
identifying the current user posture of the user in the user image, and matching the current user posture against the standard user posture;
scoring the user image according to the matching result to obtain an interaction score of the user; and
determining, according to a numerical relationship between the interaction score of the user and a preset score threshold, that the interaction result corresponding to the interaction round is a challenge success or a challenge failure.
7. The method according to claim 1, wherein identifying the current user posture of the user in the user image comprises:
obtaining bone key point data corresponding to the user image; and
determining the current user posture according to the bone key point data.
8. The method according to claim 7, wherein obtaining the bone key point data corresponding to the user image comprises:
identifying bone key points included in the user image, wherein the bone key points include at least two of the following: the crown of the head, the neck, the left shoulder, the right shoulder, the left elbow, the right elbow, the left wrist, the right wrist, the left hip, the right hip, the left knee, the right knee, the left ankle and the right ankle;
establishing a coordinate system with the image center of the user image as the origin, and obtaining the coordinate positions of all the bone key points in the coordinate system;
connecting at least two set bone key points with lines, and obtaining the angle between two bone-key-point lines matched with at least one set bone key point; and
using the obtained coordinate positions and angles as the bone key point data.
9. A human-computer interaction apparatus, characterized by comprising:
an interaction round generation module, configured to generate at least one interaction round according to a preset interaction mode when it is detected that a user enters an interaction area;
a user image acquisition module, configured to, in each interaction round, generate and display a standard posture image, and obtain, from an interaction image collected in real time, user images of at least one user matching the interaction mode, wherein the standard posture image is associated with a standard user posture;
an interaction result acquisition module, configured to identify a current user posture of the user in each user image, and match the current user posture against the standard user posture to obtain an interaction result corresponding to the interaction round; and
a result display module, configured to generate and display result feedback information according to the interaction results corresponding to the at least one interaction round.
10. A terminal device, characterized by comprising:
one or more processors; and
a memory configured to store one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the human-computer interaction method according to any one of claims 1 to 8.
CN201910290169.8A 2019-04-11 2019-04-11 A kind of man-machine interaction method, device and equipment Pending CN110007765A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910290169.8A CN110007765A (en) 2019-04-11 2019-04-11 A kind of man-machine interaction method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910290169.8A CN110007765A (en) 2019-04-11 2019-04-11 A kind of man-machine interaction method, device and equipment

Publications (1)

Publication Number Publication Date
CN110007765A true CN110007765A (en) 2019-07-12

Family

ID=67171161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910290169.8A Pending CN110007765A (en) 2019-04-11 2019-04-11 A kind of man-machine interaction method, device and equipment

Country Status (1)

Country Link
CN (1) CN110007765A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111627115A (en) * 2020-05-26 2020-09-04 浙江商汤科技开发有限公司 Interactive group photo method and device, interactive device and computer storage medium
CN111666844A (en) * 2020-05-26 2020-09-15 电子科技大学 Badminton player motion posture assessment method
CN117193534A (en) * 2023-09-13 2023-12-08 北京小米机器人技术有限公司 Motion interaction method and device, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107615288A (en) * 2015-03-28 2018-01-19 英特尔公司 Attitude matching mechanism
CN108304762A (en) * 2017-11-30 2018-07-20 腾讯科技(深圳)有限公司 A kind of human body attitude matching process and its equipment, storage medium, terminal
CN108351707A (en) * 2017-12-22 2018-07-31 深圳前海达闼云端智能科技有限公司 Man-machine interaction method and device, terminal equipment and computer readable storage medium
CN108371814A (en) * 2018-01-04 2018-08-07 乐蜜有限公司 Implementation method, device, electronic equipment and the storage medium of more human body sense dancings
CN108509047A (en) * 2018-03-29 2018-09-07 北京微播视界科技有限公司 Act matching result determining device, method, readable storage medium storing program for executing and interactive device
CN108519822A (en) * 2018-03-29 2018-09-11 北京微播视界科技有限公司 Action matching system, method, storage medium and interactive device based on human-computer interaction
CN108983956A (en) * 2017-11-30 2018-12-11 成都通甲优博科技有限责任公司 Body feeling interaction method and device

Similar Documents

Publication Publication Date Title
CN108776773B (en) Three-dimensional gesture recognition method and interaction system based on depth image
CN110349081B (en) Image generation method and device, storage medium and electronic equipment
EP4002290A1 (en) Three-dimensional facial model generation method and apparatus, computer device and storage medium
CN110007765A (en) A kind of man-machine interaction method, device and equipment
CN106652590B (en) Teaching method, teaching identifier and tutoring system
CN107680019A (en) A kind of implementation method of Examination Scheme, device, equipment and storage medium
CN111598051B (en) Face verification method, device, equipment and readable storage medium
CN110363129B (en) Early autism screening system based on smiling paradigm and audio-video behavior analysis
CN109308438A (en) Method for building up, electronic equipment, the storage medium in action recognition library
CN105447480A (en) Face recognition game interactive system
CN105022470A (en) Method and device of terminal operation based on lip reading
CN109828660B (en) Method and device for controlling application operation based on augmented reality
CN105975072A (en) Method, device and system for identifying gesture movement
CN110298220A (en) Action video live broadcasting method, system, electronic equipment, storage medium
WO2020252918A1 (en) Human body-based gesture recognition method and apparatus, device, and storage medium
CN112749684A (en) Cardiopulmonary resuscitation training and evaluating method, device, equipment and storage medium
CN109272473A (en) A kind of image processing method and mobile terminal
CN114998983A (en) Limb rehabilitation method based on augmented reality technology and posture recognition technology
CN111383642A (en) Voice response method based on neural network, storage medium and terminal equipment
CN115227234A (en) Cardiopulmonary resuscitation pressing action evaluation method and system based on camera
CN108815845B (en) The information processing method and device of human-computer interaction, computer equipment and readable medium
CN111223549A (en) Mobile end system and method for disease prevention based on posture correction
CN114187651A (en) Taijiquan training method and system based on mixed reality, equipment and storage medium
CN117037277A (en) Assessment method, device and system for AED emergency training students and storage medium
CN112230777A (en) Cognitive training system based on non-contact interaction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20190712)