CN111045510A - Man-machine interaction method and system based on augmented reality - Google Patents
Man-machine interaction method and system based on augmented reality
- Publication number
- CN111045510A CN111045510A CN201811194949.4A CN201811194949A CN111045510A CN 111045510 A CN111045510 A CN 111045510A CN 201811194949 A CN201811194949 A CN 201811194949A CN 111045510 A CN111045510 A CN 111045510A
- Authority
- CN
- China
- Prior art keywords
- user
- information
- body language
- voice
- character
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 49
- 230000003993 interaction Effects 0.000 title claims abstract description 48
- 230000003190 augmentative effect Effects 0.000 title claims abstract description 36
- 230000004044 response Effects 0.000 claims abstract description 58
- 230000009471 action Effects 0.000 claims description 20
- 238000004590 computer program Methods 0.000 claims description 6
- 238000004891 communication Methods 0.000 description 9
- 230000001360 synchronised effect Effects 0.000 description 6
- 238000010586 diagram Methods 0.000 description 5
- 230000002452 interceptive effect Effects 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 230000008921 facial expression Effects 0.000 description 2
- 210000003128 head Anatomy 0.000 description 2
- 230000033001 locomotion Effects 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 230000006399 behavior Effects 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
An embodiment of the invention provides a man-machine interaction method and system based on augmented reality. The method comprises the following steps: acquiring voice information and/or body language information of a user; and outputting first response information through a preset virtual 3D character according to the voice information and/or the body language information, the first response information being used to respond to the user's voice information and/or body language information. In the method and system, the virtual 3D character simulates a real person communicating with the user, making the user experience more realistic and thereby increasing the utilization rate of the human-computer interaction system.
Description
Technical Field
The embodiment of the invention relates to the technical field of human-computer interaction, in particular to a human-computer interaction method and system based on augmented reality.
Background
The business hall is the main service window through which users transact business, and as business continues to expand, the tension between limited service personnel and growing user demand keeps deepening. During the peak transaction periods at the end and beginning of each month, large numbers of users queue up; the long waits put great pressure on service personnel and cause great inconvenience to users. With intelligent machine equipment, users can transact part of their business on the machines themselves, relieving some of the service personnel's workload.
In the prior art, an intelligent robot modeled on the appearance of a real person is placed at the entrance of a business hall and issues a greeting when it detects human-body infrared information in real time. It performs face recognition on a customer and compares the result with sample information in a database: if the customer is visiting for the first time, the customer must manually enter personal information, and the robot memorizes the face together with that information; on subsequent visits, the robot displays the customer's personal information and the time of the last visit. It queries the customer's needs by voice, recommends services accordingly, and guides the user to a service window.
In this prior-art man-machine interaction mode, the user faces an intelligent robot whose means of interaction are limited: it can only output voice, change its mouth shape, and perform simple mechanical actions.
Disclosure of Invention
It is an object of embodiments of the present invention to provide an augmented-reality-based human-computer interaction method and system that overcome, or at least partially solve, the problems described above.
In order to solve the above technical problem, in one aspect, an embodiment of the present invention provides a human-computer interaction method based on augmented reality, including:
acquiring voice information and/or body language information of a user;
and outputting first response information through a preset virtual 3D character according to the voice information and/or the body language information of the user, wherein the first response information is used for responding to the voice information and/or the body language information of the user.
In another aspect, an embodiment of the present invention provides a human-computer interaction system based on augmented reality, including:
the acquisition device is used for acquiring voice information and/or body language information of a user;
and the output device is used for outputting first response information through a preset virtual 3D character according to the voice information and/or the body language information of the user, and the first response information is used for responding to the voice information and/or the body language information of the user.
In another aspect, an embodiment of the present invention provides an electronic device, including:
the processor and the memory communicate with each other through a bus; the memory stores program instructions executable by the processor, and the processor calls the program instructions to perform the method described above.
In yet another aspect, the present invention provides a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the method described above.
According to the augmented-reality-based human-computer interaction method and system of the embodiments of the invention, the virtual 3D character simulates a real person communicating with the user, making the user experience more realistic and thereby increasing the utilization rate of the human-computer interaction system.
Drawings
Fig. 1 is a schematic diagram of a human-computer interaction method based on augmented reality according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a human-computer interaction system based on augmented reality according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of a human-computer interaction method based on augmented reality according to an embodiment of the present invention, and as shown in fig. 1, an embodiment of the present invention provides a human-computer interaction method based on augmented reality, an execution subject of the human-computer interaction method is a human-computer interaction system based on augmented reality (referred to as "the system" for short), and the method includes:
Step S101, acquiring voice information and/or body language information of a user;
Step S102, outputting first response information through a preset virtual 3D character according to the voice information and/or the body language information of the user, wherein the first response information is used for responding to the voice information and/or the body language information of the user.
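Steps S101 and S102 can be sketched as a minimal dispatch loop. All names here (`UserInput`, `respond`, the gesture and reply strings) are hypothetical illustrations, not part of the patent; a real system would plug in actual speech recognition and gesture recognition:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserInput:
    """Information acquired in step S101; either channel may be absent."""
    speech: Optional[str] = None         # text transcribed from the microphone
    body_language: Optional[str] = None  # gesture label recognized from the camera

@dataclass
class Response:
    """First response information output in step S102."""
    speech: Optional[str]
    character_action: str  # animation played by the virtual 3D character

def respond(user_input: UserInput) -> Response:
    """Map the acquired voice and/or body language information to a response."""
    if user_input.body_language == "wave_goodbye":
        return Response("Thank you, goodbye, welcome back next time", "wave_goodbye")
    if user_input.speech and "balance" in user_input.speech:
        return Response("Your current balance is 60 yuan", "lip_sync")
    return Response("How can I help you?", "idle")
```

When both channels are present at once, the gesture takes priority in this sketch; the patent leaves the combination policy open.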
Specifically, the augmented-reality-based human-computer interaction system according to the embodiment of the invention includes an acquisition device and an output device. The acquisition device comprises a voice collector and a video collector; the output device comprises a display and a voice output device. The voice collector may be a microphone, the video collector a camera, the display screen a touch screen, and the voice output device a loudspeaker. The system can be applied in many scenarios, such as business halls, shopping malls, hotel lobbies, and company front desks.
First, the acquisition device acquires the user's voice information and/or body language information. For example, a sentence spoken by the user is collected through the microphone, or the user's body language is collected through the camera. Body language includes motions of body parts such as the head, eyes, neck, hands, elbows, arms, torso, hips, and feet, and also includes the user's facial expressions.
Then, the output device outputs the first response information through the preset virtual 3D character according to the user's voice information and/or body language information; the first response information is used to respond to the user's voice information and/or body language information.
For example, suppose the system is deployed in a telecom operator's business hall, with the preset virtual 3D character displayed on the system's display. A user in conversation with the system wants to query the account balance of his mobile phone and says 'I want to check my mobile-phone account balance' to the system. The system collects the sentence through the microphone; if it finds, based on the sentence, that the user's current balance is 60 yuan, it outputs the voice 'Your current balance is 60 yuan' through the loudspeaker. While the voice is output, the body language of the virtual 3D character in the display is synchronized with it, for example with the mouth shape matching the voice, so as to respond to the user's query.
When the user has finished transacting business and wants to leave the business hall, the user waves goodbye to the system. The system collects this body language through the camera, and in response the virtual 3D character in the display also waves goodbye. In addition, while the virtual 3D character waves goodbye, the voice 'Thank you, goodbye, welcome back next time' can be output through the loudspeaker.
In the augmented-reality-based human-computer interaction method provided above, the virtual 3D character simulates a real person communicating with the user, making the user experience more realistic and thereby increasing the utilization rate of the human-computer interaction system.
On the basis of the foregoing embodiment, further, before the acquiring the voice information and/or the body language information of the user, the method further includes:
according to the face image information of the user, the identity of the user is identified, and the identity information of the user is obtained;
acquiring a consumption record and/or a business handling record of the user according to the identity information of the user;
inputting the consumption record and/or the business transaction record of the user into a preset algorithm model, and outputting the service that the user is most likely to transact this time as a target service;
and outputting second response information through the virtual 3D character, wherein the second response information is used for greeting the user, and the second response information comprises a question for inquiring the user whether the target service needs to be transacted.
Specifically, before the user communicates with the system, the system needs to recognize the user; after recognizing the user it can greet the user proactively, which makes the system seem more enthusiastic and personable, like a real person.
Firstly, the system acquires a face image of a user, identifies the identity of the user according to the face image information of the user, and acquires identity information of the user, wherein the identity information of the user can comprise the name, the gender, the number of an identity card (passport), an address, a contact way and the like of the user.
Then, according to the user's identity information, the user's consumption record and/or business transaction record are obtained by query.
Next, the consumption record and/or the business transaction record of the user are input into a preset algorithm model, which outputs the service that the user is most likely to transact this time as the target service.
And finally, outputting second response information through the virtual 3D character in the output device of the system, wherein the second response information is used for greeting the user, and the second response information comprises a question for inquiring the user whether the target service needs to be transacted.
For example, with the system deployed in a telecom operator's business hall, when a user arrives at the entrance, the system collects the user's face image through the camera, processes it to extract image feature values, and matches them against the sample image information stored in the database to identify the user. If the match succeeds, the user is judged to be a returning user, and the user's identity information, such as name, gender, identity-card (passport) number, address, and telephone number, is obtained. If the match fails, the user is judged to be a new user, that is, one who has come to the business hall for the first time and has not yet transacted any business.
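The matching of extracted face feature values against database samples can be sketched as a nearest-neighbor comparison. The function name, the feature-vector representation, and the threshold value are illustrative assumptions; the patent does not specify the matching algorithm:

```python
import math
from typing import Dict, List, Optional

def match_user(face_features: List[float],
               database: Dict[str, List[float]],
               threshold: float = 0.6) -> Optional[str]:
    """Compare extracted face feature values against stored samples.
    Returns the matched user id (returning user) or None (new user)."""
    best_id: Optional[str] = None
    best_dist = threshold  # only distances below the threshold count as a match
    for user_id, sample in database.items():
        # Euclidean distance between the live features and the stored sample
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(face_features, sample)))
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    return best_id
```

A `None` result corresponds to the "match fails" branch above, after which the system treats the visitor as a new user.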
If the user is a returning user, the user's consumption record and/or business transaction record are retrieved according to the identity information, for example: the time and type of the last transaction, the time and subject of the last consultation or complaint, whether the user visits the business hall every month, whether there are call-fee query records, whether the user's mobile phone is currently powered on, the user's internet fee for the month, the user's internet habits (video, web pages, mobile internet, broadband internet), the duration of local calls, the duration of long-distance calls, and so on.
The returning user's consumption record and/or business transaction record are input into a preset algorithm model, which outputs the service that this user is most likely to transact this time as the target service. For example, a C4.5 decision-tree model can be used to predict the most likely service: if a returning user comes to the business hall every month to query the current balance of his mobile phone, the service most likely to be transacted this time is, again, a balance query.
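The patent names a C4.5 decision-tree model for this prediction; as a stand-in under that assumption, the sketch below predicts the target service simply as the most frequent entry in the user's transaction records. The function and service labels are hypothetical:

```python
from collections import Counter
from typing import List

def predict_target_service(transaction_records: List[str]) -> str:
    """Output the service the returning user is most likely to transact
    this time, based on past records. A production system would train a
    C4.5 (or similar) decision tree on many features; this frequency
    heuristic only illustrates the input/output contract."""
    if not transaction_records:
        return "general_enquiry"  # fallback for users with no history
    service, _count = Counter(transaction_records).most_common(1)[0]
    return service
```

For the example in the text, a record dominated by monthly balance queries yields a balance query as the predicted target service.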
After the user is identified as a returning user and the query of that user's consumption and/or transaction records determines that the most likely target service is a mobile-phone balance query, the virtual 3D character in the display performs a bowing welcome action, and while it bows, the voice 'Hello, welcome' is output through the loudspeaker to greet the returning user. In addition, the voice 'May I ask, would you like to query the current balance of your mobile phone?' can be output, making the user feel more warmly and personally received.
If the user is a new user, the virtual 3D character in the display performs a bowing action, and while it bows, the voice 'Hello, welcome' is output through the loudspeaker to greet the new user. In addition, the voice 'May I ask, what business would you like to transact?' can be output, making the user feel more warmly and personally received.
In the augmented-reality-based human-computer interaction method provided above, the virtual 3D character simulates a real person communicating with the user, making the user experience more realistic and thereby increasing the utilization rate of the human-computer interaction system.
On the basis of the foregoing embodiments, further, each of the first response information and the second response information includes at least one of voice information and body language information.
Specifically, the system acquires the user's voice information and/or body language information and outputs the first response information through the preset virtual 3D character, where the first response information is voice information, body language information, or a synchronized combination of the two.
For example, suppose the system is deployed in a telecom operator's business hall, with the preset virtual 3D character displayed on the system's display. A user in conversation with the system wants to query the account balance of his mobile phone and says 'I want to check my mobile-phone account balance' to the system. The system collects the sentence through the microphone; if it finds, based on the sentence, that the user's current balance is 60 yuan, it outputs the voice 'Your current balance is 60 yuan' through the loudspeaker. While the voice is output, the body language of the virtual 3D character in the display is synchronized with it, for example with the mouth shape matching the voice, so as to respond to the user's query.
When the user has finished transacting business and wants to leave the business hall, the user waves goodbye to the system. The system collects this body language through the camera, and in response the virtual 3D character in the display also waves goodbye. In addition, while the virtual 3D character waves goodbye, the voice 'Thank you, goodbye, welcome back next time' can be output through the loudspeaker.
In the augmented-reality-based human-computer interaction method provided above, the virtual 3D character simulates a real person communicating with the user, making the user experience more realistic and thereby increasing the utilization rate of the human-computer interaction system.
On the basis of the foregoing embodiments, further, the outputting, by a preset virtual 3D character, first response information according to the voice information and/or the body language information of the user specifically includes:
analyzing the voice information and/or the body language information of the user to acquire the voice semantics and/or the body language semantics of the user;
retrieving response information corresponding to the voice semantics and/or the body language semantics of the user from a preset knowledge base as the first response information;
outputting the first response information through the virtual 3D character.
Specifically, outputting the first response information through a preset virtual 3D character according to the user's voice information and/or body language information includes the following steps:
First, the user's voice information and/or body language information is analyzed to obtain the user's voice semantics and/or body language semantics. That is, through recognition processing, the voice information is converted into corresponding text, which expresses the semantics of the voice information; likewise, the body language information can be converted into corresponding text that expresses its semantics.
Then, according to the text corresponding to the voice information and/or the body language information, the corresponding response information is retrieved from a preset knowledge base as the response to the user. A sufficient set of one-to-one correspondences between text and response information is stored in the knowledge base in advance, so that a retrieval based on a given piece of text yields the response information corresponding to it.
Finally, the response information is output to the user through the virtual 3D character.
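The retrieval step can be sketched as a lookup table keyed by the recognized text. The knowledge-base contents and the fallback reply are hypothetical examples; the patent only requires that text-to-response correspondences be stored in advance:

```python
from typing import Tuple

# Hypothetical preset knowledge base: recognized text -> (reply text, character action)
KNOWLEDGE_BASE = {
    "I want to check my mobile-phone account balance":
        ("Your current balance is 60 yuan", "lip_sync"),
    "goodbye":
        ("Thank you, goodbye, welcome back next time", "wave_goodbye"),
}

def retrieve_response(recognized_text: str) -> Tuple[str, str]:
    """Retrieve the response information corresponding to the recognized
    voice or body-language text; fall back to a clarifying prompt."""
    return KNOWLEDGE_BASE.get(
        recognized_text,
        ("Sorry, I did not understand. Could you say that again?", "idle"))
```

The paired character action stands in for the mouth-shape or gesture control instruction that the knowledge base returns alongside the reply text.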
For example, with the system deployed in a telecom operator's business hall and the preset virtual 3D character shown on the display, a user in conversation with the system wants to query his mobile-phone account balance and says 'I want to check my mobile-phone account balance' to the system. The system collects the sentence through the microphone, recognizes it, and converts it into the text 'I want to check my mobile-phone account balance'.
Then, based on this text, the corresponding response information is retrieved from the preset knowledge base; the query returns the text 'Your current balance is 60 yuan' together with a mouth-shape control instruction synchronized with the voice.
Finally, the text 'Your current balance is 60 yuan' is converted into speech and output through the loudspeaker. While the voice is output, the mouth shape of the virtual 3D character in the display is controlled according to the synchronized mouth-shape control instruction, so that mouth shape and voice are output in sync. This makes the user's experience more lifelike and personable.
In the augmented-reality-based human-computer interaction method provided above, the virtual 3D character simulates a real person communicating with the user, making the user experience more realistic and thereby increasing the utilization rate of the human-computer interaction system.
On the basis of the above embodiments, further, the first response information includes a demonstration to the user of an action to be performed by the user.
Specifically, the response information of the virtual 3D character may include a demonstration, performed for the user, of the action the user needs to carry out, so that the user can transact business by following the demonstration. This improves the efficiency of business transactions and saves the user waiting time.
For example, if the user must go to a counter to transact a certain service, the virtual 3D character helps the customer make an appointment and indicates the direction with a gesture. At the same time, a virtual 3D map of the business hall is shown on the display screen, on which the virtual 3D character simulates the whole transaction process, so that the user learns from the screen which window handles the service, the route to that window, the operating procedure for the transaction, and so on, and can then smoothly reach the counter and complete the transaction.
In addition, through voice interaction the user can browse the 3D map shown on the large display screen and learn locations of interest, such as the restroom and the emergency exits, so that the user can reach them quickly in special situations.
In the augmented-reality-based human-computer interaction method provided above, the virtual 3D character simulates a real person communicating with the user, making the user experience more realistic and thereby increasing the utilization rate of the human-computer interaction system.
On the basis of the above embodiments, further, the background of the virtual 3D character is a real scene acquired in real time.
In particular, the output device of the system comprises a display for displaying the virtual 3D character. A first video collector is arranged in front of the display to collect the user's face image information and body language information; a second video collector is arranged behind the display to collect images of the real scene behind it. The virtual 3D character is superimposed onto the images of the real scene, combining the virtual character with the real environment so that the experience feels more lifelike and personable.
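The superposition of the virtual 3D character onto the rear camera's real-scene image amounts to per-pixel alpha blending. The sketch below simplifies frames to flat lists of grayscale values; a real implementation would blend RGB frames from the rendering engine and the camera stream:

```python
from typing import List

def composite(scene: List[int], character: List[int], alpha: List[float]) -> List[int]:
    """Blend the rendered virtual 3D character over the real-scene frame
    captured by the second (rear-facing) video collector. alpha is the
    character's per-pixel opacity: 1.0 shows the character, 0.0 the scene."""
    return [int(a * c + (1.0 - a) * s)
            for s, c, a in zip(scene, character, alpha)]
```

Pixels where the character's alpha is 0 pass the real scene through unchanged, which is what makes the character appear to stand inside the hall.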
In the augmented-reality-based human-computer interaction method provided above, the virtual 3D character simulates a real person communicating with the user, making the user experience more realistic and thereby increasing the utilization rate of the human-computer interaction system.
In addition to the above embodiments, the user's body language information is image information of the user's body language.
In particular, the output device of the system comprises a display for displaying the virtual 3D character. A first video collector arranged in front of the display collects the user's face image information and body language information; the body language information is the image information of the user's body language collected by the first video collector. Feature values are extracted by processing the images containing the user's body language, and from these the body language semantics are analyzed.
In the augmented-reality-based human-computer interaction method provided above, the virtual 3D character simulates a real person communicating with the user, making the user experience more realistic and thereby increasing the utilization rate of the human-computer interaction system.
Fig. 2 is a schematic view of a human-computer interaction system based on augmented reality according to an embodiment of the present invention, and as shown in fig. 2, an embodiment of the present invention provides a human-computer interaction system based on augmented reality, which is used for executing the method described in any of the above embodiments, and specifically includes an obtaining device 201 and an output device 202, where:
the acquiring device 201 is used for acquiring voice information and/or body language information of a user; the output device 202 is configured to output first response information through a preset virtual 3D character according to the voice information and/or the body language information of the user, where the first response information is used to respond to the voice information and/or the body language information of the user.
Specifically, the augmented-reality-based human-computer interaction system according to the embodiment of the present invention includes an obtaining device 201 and an output device 202. The obtaining device 201 comprises a voice collector and a video collector; the output device 202 comprises a display and a voice output device. The voice collector may be a microphone, the video collector a camera, the display screen a touch screen, and the voice output device a loudspeaker. The system can be applied in many scenarios, such as business halls, shopping malls, hotel lobbies, and company front desks.
First, the obtaining device 201 acquires the user's voice information and/or body language information. For example, a sentence spoken by the user is collected through the microphone, or the user's body language is collected through the camera. Body language includes motions of body parts such as the head, eyes, neck, hands, elbows, arms, torso, hips, and feet, and also includes the user's facial expressions.
Then, the output device 202 outputs the first response information through a preset virtual 3D character according to the voice information and/or the body language information of the user, where the first response information is used to respond to the voice information and/or the body language information of the user.
For example, suppose the augmented-reality-based human-computer interaction system is deployed in a business hall of a communication service provider, with a preset virtual 3D character displayed on the system's display. When a user wants to query the account balance of his or her mobile phone, the user says to the system, 'I want to check my mobile phone account balance'. The system collects this sentence through the microphone. If the system determines from the sentence that the user's current balance is 60 yuan, it outputs the voice 'Your current balance is 60 yuan' through the loudspeaker. While the voice is being output, the body language of the virtual 3D character on the display is synchronized with it; for example, the character's mouth shape corresponds to the speech, so as to respond to the user's query.
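This voice flow can be illustrated with a minimal sketch. The keyword-based intent matching, the `ACCOUNTS` store, and the `handle_utterance` helper are all hypothetical stand-ins for the real speech recognizer and billing backend, invented for this example.

```python
# Illustrative account store standing in for the real billing backend.
ACCOUNTS = {"user-001": 60}  # balance in yuan (made-up data)

def handle_utterance(text: str, account_id: str) -> dict:
    """Map a recognized sentence to first-response information: a spoken
    reply plus the text that drives the character's synchronized mouth shapes."""
    if "balance" in text.lower():
        reply = f"Your current balance is {ACCOUNTS[account_id]} yuan"
    else:
        reply = "Sorry, could you say that again?"
    # The same reply text drives both the loudspeaker output and the
    # character's lip animation, keeping the two synchronized.
    return {"speech": reply, "mouth_sync": reply}

resp = handle_utterance("I want to check my mobile phone account balance", "user-001")
```

Returning one reply string for both channels is one simple way to get the "body language synchronized with the voice" behavior the example describes.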
When the user has finished transacting business and wants to leave the business hall of the communication service provider, the user may wave goodbye to the system. The system collects this body language through the camera and, according to the gesture, the virtual 3D character on the display also waves goodbye, so as to respond to the user's farewell. In addition, while the virtual 3D character waves goodbye, a voice such as 'Thank you, goodbye, welcome back next time' can be output through the loudspeaker.
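The gesture path can be sketched with a hypothetical gesture-to-response table; the names `GESTURE_RESPONSES` and `respond_to_gesture` and the gesture labels are illustrative, not from the patent.

```python
# Hypothetical mapping from a recognized user gesture to the character's
# mirrored action and an optional spoken line.
GESTURE_RESPONSES = {
    "wave_goodbye": {
        "action": "wave_goodbye",  # the character mirrors the user's gesture
        "speech": "Thank you, goodbye, welcome back next time",
    },
    "nod": {"action": "nod", "speech": None},
}

def respond_to_gesture(gesture: str) -> dict:
    """First-response information for a body-language-only input;
    unknown gestures fall back to an idle animation."""
    return GESTURE_RESPONSES.get(gesture, {"action": "idle", "speech": None})

farewell = respond_to_gesture("wave_goodbye")
```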
Embodiments of the present invention provide an augmented-reality-based human-computer interaction system for executing the method described in any of the above embodiments. The specific steps of executing that method with the system provided in this embodiment are the same as those in the corresponding embodiment above and are not repeated here.
In the augmented-reality-based human-computer interaction system according to the embodiment of the present invention, the virtual 3D character simulates a real person communicating with the user, so that the user experience is more realistic and the utilization rate of the human-computer interaction system is improved.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 3, the electronic device includes: a processor 301, a memory 302, and a bus 303;
the processor 301 and the memory 302 communicate with each other through the bus 303; the processor 301 can invoke program instructions stored in the memory 302 to perform the following method:
acquiring voice information and/or body language information of a user;
and outputting first response information through a preset virtual 3D character according to the voice information and/or the body language information of the user, wherein the first response information is used for responding to the voice information and/or the body language information of the user.
Embodiments of the present invention provide a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium. The computer program comprises program instructions that, when executed by a computer, enable the computer to perform the methods provided by the above method embodiments, for example, including:
acquiring voice information and/or body language information of a user;
and outputting first response information through a preset virtual 3D character according to the voice information and/or the body language information of the user, wherein the first response information is used for responding to the voice information and/or the body language information of the user.
Embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the methods provided by the above method embodiments, for example, including:
acquiring voice information and/or body language information of a user;
and outputting first response information through a preset virtual 3D character according to the voice information and/or the body language information of the user, wherein the first response information is used for responding to the voice information and/or the body language information of the user.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments may be carried out by program instructions running on related hardware. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, and magnetic or optical disks.
The above-described embodiments of the apparatuses and devices are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A human-computer interaction method based on augmented reality is characterized by comprising the following steps:
acquiring voice information and/or body language information of a user;
and outputting first response information through a preset virtual 3D character according to the voice information and/or the body language information of the user, wherein the first response information is used for responding to the voice information and/or the body language information of the user.
2. The method according to claim 1, wherein, before the acquiring of the voice information and/or the body language information of the user, the method further comprises:
recognizing the identity of the user according to face image information of the user, to obtain identity information of the user;
acquiring a consumption record and/or a business handling record of the user according to the identity information of the user;
inputting the consumption record and/or the business handling record of the user into a preset algorithm model, and outputting the service the user is most likely to transact this time as a target service;
and outputting second response information through the virtual 3D character, wherein the second response information is used for greeting the user, and the second response information comprises a question for inquiring the user whether the target service needs to be transacted.
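As one illustration only, the "preset algorithm model" of claim 2 could be as simple as a most-frequent-service heuristic over the user's history. Everything below (the function names, service labels, and fallback service) is hypothetical, not part of the patent.

```python
from collections import Counter

def predict_target_service(records):
    """Toy stand-in for the preset algorithm model: the service that
    appears most often in the user's records is taken as the one the
    user is most likely to transact this time."""
    if not records:
        return "general_enquiry"  # hypothetical fallback for a new user
    return Counter(records).most_common(1)[0][0]

def second_response(user_name, records):
    """Second response information: a greeting that also asks whether
    the user wants to transact the predicted target service."""
    target = predict_target_service(records)
    return f"Hello {user_name}, would you like to handle '{target}' today?"

msg = second_response("Ms. Li", ["top_up", "top_up", "plan_change"])
```

A production system would presumably use a trained model rather than a frequency count, but the input/output contract of the claim is the same.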
3. The method according to claim 2, wherein each of the first response information and the second response information includes at least any one of speech information and body language information.
4. The method according to claim 1, wherein outputting the first response information through a preset virtual 3D character according to the voice information and/or the body language information of the user specifically comprises:
analyzing the voice information and/or the body language information of the user to acquire the voice semantics and/or the body language semantics of the user;
retrieving response information corresponding to the voice semantics and/or the body language semantics of the user from a preset knowledge base as the first response information;
outputting the first response information through the virtual 3D character.
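Claim 4's analyze-retrieve-output pipeline might be sketched as follows. The keyword-based `parse_semantics` and the tiny `KNOWLEDGE_BASE` dictionary are illustrative stand-ins for real semantic parsing and a real knowledge base; all names here are invented for the sketch.

```python
# Hypothetical knowledge base keyed by parsed semantics; a real one
# would be far larger and domain-specific.
KNOWLEDGE_BASE = {
    ("query", "balance"): "Your current balance is {balance} yuan",
    ("farewell", None): "Thank you, goodbye, welcome back next time",
}

def parse_semantics(voice_text=None, gesture=None):
    """Very rough semantic analysis: keyword spotting for speech,
    direct labels for gestures."""
    if voice_text and "balance" in voice_text.lower():
        return ("query", "balance")
    if gesture == "wave_goodbye":
        return ("farewell", None)
    return ("unknown", None)

def retrieve_response(voice_text=None, gesture=None, **slots):
    """Retrieve the first response information for the parsed semantics,
    filling in any slot values (e.g. the queried balance)."""
    key = parse_semantics(voice_text, gesture)
    template = KNOWLEDGE_BASE.get(key, "Sorry, I did not understand.")
    return template.format(**slots) if slots else template

reply = retrieve_response(voice_text="I want to check my balance", balance=60)
```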
5. The method of claim 1, wherein the first response information comprises a demonstration to the user of an action to be performed by the user.
6. The method of claim 1, wherein the background of the virtual 3D character is a real scene captured in real time.
7. The method according to claim 1, wherein the body language information of the user is image information of the body language of the user.
8. An augmented reality based human-computer interaction system, comprising:
the acquisition device is used for acquiring voice information and/or body language information of a user;
and the output device is used for outputting first response information through a preset virtual 3D character according to the voice information and/or the body language information of the user, and the first response information is used for responding to the voice information and/or the body language information of the user.
9. An electronic device, comprising:
the processor and the memory are communicated with each other through a bus; the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any of claims 1 to 7.
10. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811194949.4A CN111045510A (en) | 2018-10-15 | 2018-10-15 | Man-machine interaction method and system based on augmented reality |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111045510A true CN111045510A (en) | 2020-04-21 |
Family
ID=70230616
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811194949.4A Pending CN111045510A (en) | 2018-10-15 | 2018-10-15 | Man-machine interaction method and system based on augmented reality |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111045510A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111599359A (en) * | 2020-05-09 | 2020-08-28 | 标贝(北京)科技有限公司 | Man-machine interaction method, server, client and storage medium |
CN113538645A (en) * | 2021-07-19 | 2021-10-22 | 北京顺天立安科技有限公司 | Method and device for matching body movement and language factor of virtual image |
WO2022161289A1 (en) * | 2021-01-28 | 2022-08-04 | 腾讯科技(深圳)有限公司 | Identity information display method and apparatus, and terminal, server and storage medium |
CN116129083A (en) * | 2022-12-23 | 2023-05-16 | 中科计算技术西部研究院 | Park management system and method based on meta universe |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013139181A1 (en) * | 2012-03-19 | 2013-09-26 | 乾行讯科(北京)科技有限公司 | User interaction system and method |
CN106502424A (en) * | 2016-11-29 | 2017-03-15 | 上海小持智能科技有限公司 | Based on the interactive augmented reality system of speech gestures and limb action |
CN107146622A (en) * | 2017-06-16 | 2017-09-08 | 合肥美的智能科技有限公司 | Refrigerator, voice interactive system, method, computer equipment, readable storage medium storing program for executing |
CN107918913A (en) * | 2017-11-20 | 2018-04-17 | 中国银行股份有限公司 | Banking processing method, device and system |
CN108257600A (en) * | 2016-12-29 | 2018-07-06 | ***通信集团浙江有限公司 | Method of speech processing and device |
CN108257218A (en) * | 2018-01-17 | 2018-07-06 | 北京网信云服信息科技有限公司 | Information interactive control method, device and equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111045510A (en) | Man-machine interaction method and system based on augmented reality | |
CN107704169B (en) | Virtual human state management method and system | |
CN111460112A (en) | Online customer service consultation method, device, medium and electronic equipment | |
CN109194568A (en) | Robot customer service and intelligent customer service system | |
CN102394915B (en) | System and method for providing guide service of self-service terminal | |
WO2013018731A1 (en) | Counselling system, counselling device, and client terminal | |
CN111131005A (en) | Dialogue method, device, equipment and storage medium of customer service system | |
CN113536007A (en) | Virtual image generation method, device, equipment and storage medium | |
US20140214622A1 (en) | Product information providing system, product information providing device, and product information outputting device | |
CN109327614B (en) | Global simultaneous interpretation mobile phone and method | |
CN111291151A (en) | Interaction method and device and computer equipment | |
CN111260509A (en) | Intelligent ordering service system and method | |
CN113703585A (en) | Interaction method, interaction device, electronic equipment and storage medium | |
CN112929253A (en) | Virtual image interaction method and device | |
CN111176442A (en) | Interactive government affair service system and method based on VR virtual reality technology | |
CN112990043A (en) | Service interaction method and device, electronic equipment and storage medium | |
CN108415561A (en) | Gesture interaction method based on visual human and system | |
CN114186045A (en) | Artificial intelligence interactive exhibition system | |
CN113850898A (en) | Scene rendering method and device, storage medium and electronic equipment | |
CN114064943A (en) | Conference management method, conference management device, storage medium and electronic equipment | |
CN110992958B (en) | Content recording method, content recording apparatus, electronic device, and storage medium | |
CN107783650A (en) | A kind of man-machine interaction method and device based on virtual robot | |
CN117194625A (en) | Intelligent dialogue method and device for digital person, electronic equipment and storage medium | |
US10984229B2 (en) | Interactive sign language response system and method | |
JP2020160641A (en) | Virtual person selection device, virtual person selection system and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20200421 |