CN107831905A - Virtual avatar interaction method and system based on a holographic projection device - Google Patents
- Publication number
- CN107831905A (application CN201711231944.XA)
- Authority
- CN
- China
- Prior art keywords
- virtual avatar
- mobile device
- modal
- data
- imaging device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present application provides a virtual avatar interaction method and system based on a holographic projection device. In the method, the virtual avatar runs on a mobile device and has preset appearance features and preset attributes; multimodal input data are acquired through the mobile device and/or an imaging device; a cloud server parses the multimodal input data to obtain a parsing result; multimodal output data are determined according to the parsing result; the mobile device controls the output of the multimodal output data, and the imaging device displays the avatar. Because the avatar is presented through a holographic projection device, it appears strongly three-dimensional, enabling smooth multimodal human-computer interaction and greatly enhancing the user experience.
Description
Technical field
The present application relates to the field of artificial intelligence, and in particular to a virtual avatar interaction method and system based on a holographic projection device, a virtual avatar, and a computer-readable storage medium.
Background art
Holographic projection technology (front-projected holographic display), also known as virtual imaging technology, uses the principles of interference and diffraction to record and reconstruct a true three-dimensional image of an object. Holographic projection can not only produce a three-dimensional illusion in mid-air, but also let that illusion interact with a performer and complete a show together, producing a stunning stage effect.
With the continuous development of artificial intelligence, virtual robots have evolved from simple, repetitive mechanical actions into intelligent robots capable of human-like question answering, autonomy, and interaction with other robots.
Robots include physical robots that possess a tangible body and virtual robots installed on hardware devices. However, current virtual robots cannot carry out multimodal interaction while presenting a true three-dimensional image.
Therefore, how to combine holographic projection technology with artificial intelligence, so as to improve the interaction and presentation capabilities of virtual robots, is an important problem urgently in need of a solution.
Summary of the invention
In view of this, the present application provides a virtual avatar interaction method and system based on a holographic projection device, a virtual avatar, and a computer-readable storage medium, to address the technical deficiencies of the prior art.
In one aspect, the present application provides a virtual avatar interaction method based on a holographic projection device. The virtual avatar runs on a mobile device and has preset appearance features and preset attributes. The method includes:
acquiring multimodal input data through the mobile device and/or an imaging device;
parsing the multimodal input data on a cloud server to obtain a parsing result, the parsing covering natural language understanding, visual perception, touch perception, speech output, and affective computing;
determining multimodal output data according to the parsing result;
the mobile device controlling the output of the multimodal output data, and the imaging device displaying the virtual avatar.
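The claimed four-step flow can be viewed as a simple pipeline. The sketch below is purely illustrative: every function name and the stand-in "sing_song" intent are invented for this example and are not part of the application.

```python
# Illustrative pipeline for the claimed flow (all names are hypothetical).

def acquire_input(device_signals):
    """Step 1: collect multimodal input from the mobile and/or imaging device."""
    return {k: v for k, v in device_signals.items() if v is not None}

def cloud_parse(multimodal_input):
    """Step 2: cloud-side parsing (NLU, vision, touch, speech, affect)."""
    result = {}
    if "speech" in multimodal_input:
        result["intent"] = "sing_song"  # stand-in for a real NLU result
    return result

def decide_output(parsing_result):
    """Step 3: map the parsing result to multimodal output data."""
    if parsing_result.get("intent") == "sing_song":
        return {"audio": "song.wav", "animation": "singing"}
    return {"audio": None, "animation": "idle"}

def render(output):
    """Step 4: mobile device controls output; imaging device shows the avatar."""
    return f"avatar performs '{output['animation']}'"

signals = {"speech": "please sing", "touch": None}
print(render(decide_output(cloud_parse(acquire_input(signals)))))
```

The point of the sketch is only the division of labor: acquisition and rendering stay on the devices, while parsing and decision-making happen in the cloud.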
Optionally, the mobile device is aligned with the imaging device by physical position references, so that the mobile device and the imaging device can interconnect their signals.
Optionally, the mobile device controlling the output of the multimodal output data includes: the mobile device controlling the virtual avatar to present the multimodal output data through mouth shapes, facial expressions, and/or body movements.
Optionally, the imaging device displaying the virtual avatar includes: the imaging device displaying the virtual avatar based on the multimodal output data.
Optionally, the multimodal input data include: speech data, text data, touch data, visual data, and environmental data.
Optionally, determining multimodal output data according to the parsing result includes: when the parsing result is a skill demonstration, the avatar's multimodal output data include song-singing data and/or dance-performance data.
In another aspect, the present application also provides a virtual avatar interaction system based on a holographic projection device, including a mobile device, an imaging device, and a cloud server. The virtual avatar runs on the mobile device and has preset appearance features and preset attributes, wherein:
the mobile device and/or the imaging device acquire multimodal input data;
the cloud server parses the multimodal input data to obtain a parsing result, the parsing covering natural language understanding, visual perception, touch perception, speech output, and affective computing;
the mobile device determines multimodal output data according to the parsing result;
the mobile device controls the output of the multimodal output data, and the imaging device displays the virtual avatar.
Optionally, the mobile device can also control the virtual avatar to present the multimodal output data through mouth shapes, facial expressions, and/or body movements.
In another aspect, the present application also provides a virtual avatar. The virtual avatar runs on a mobile device and is displayed by an imaging device; the mobile device is aligned with the imaging device by physical position references, so that the two devices can interconnect their signals; and the virtual avatar performs the steps of the above virtual avatar interaction method based on a holographic projection device.
In another aspect, the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above virtual avatar interaction method based on a holographic projection device.
In the virtual avatar interaction method and system based on a holographic projection device, the virtual avatar, and the computer-readable storage medium provided by the present application, the method includes: running the virtual avatar on the mobile device, the avatar having preset appearance features and preset attributes; acquiring multimodal input data through the mobile device and/or the imaging device; parsing the multimodal input data on the cloud server to obtain a parsing result; determining multimodal output data according to the parsing result; and having the mobile device control the output of the multimodal output data while the imaging device displays the avatar. The avatar can thus be presented by a holographic projection device with a strong three-dimensional effect, enabling smooth multimodal human-computer interaction and greatly enhancing the user experience.
Brief description of the drawings
Fig. 1 is a schematic structural diagram of the imaging device according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a virtual avatar interaction system based on a holographic projection device according to an embodiment of the present application;
Fig. 3 is a flowchart of a virtual avatar interaction method based on a holographic projection device according to an embodiment of the present application;
Fig. 4 is a flowchart of a virtual avatar interaction method based on a holographic projection device according to an embodiment of the present application;
Fig. 5 is a flowchart of a virtual avatar interaction method based on a holographic projection device according to an embodiment of the present application;
Fig. 6 is a flowchart of a virtual avatar interaction method based on a holographic projection device according to an embodiment of the present application.
Detailed description of the embodiments
Many specific details are set forth in the following description to facilitate a full understanding of the present application. However, the application can be implemented in many ways other than those described here, and those skilled in the art can make similar generalizations without departing from its spirit; the application is therefore not limited to the specific implementations disclosed below.
This application provides a virtual avatar interaction method and system based on a holographic projection device, a virtual avatar, and a computer-readable storage medium, each of which is described in detail in the embodiments below.
In this application, referring to Fig. 1, the imaging device is a holographic projection device that includes a holographic film 102 and a support 103. The holographic film 102 is mounted on the support 103, which holds it in place, and the mobile device 101 is installed on top of the support 103 of the holographic projection device, so that the projected content is displayed on the holographic film 102.
The mobile device 101 is aligned with the imaging device by physical position references, so that the mobile device 101 and the imaging device can interconnect their signals.
The mobile device 101 can project the avatar running on it onto the imaging device for display, and can connect to a cloud server so that the avatar supports multimodal human-computer interaction and possesses artificial intelligence (AI) capabilities such as natural language understanding, visual perception, touch perception, speech output, and the output of emotional expressions and actions.
In this application, the avatar can be displayed as a 3D virtual figure through the holographic projection device, has specific appearance features, and can be configured with social attributes, personality attributes, and character skills.
Specifically, the social attributes may include fields such as appearance, name, clothing, accessories, gender, birthplace, age, family relations, occupation, position, religious belief, relationship status, and education; the personality attributes may include fields such as character and temperament; the character skills may include professional skills such as singing, dancing, storytelling, and tutoring, and skill demonstrations are not limited to those performed with the limbs, facial expressions, head, and/or mouth.
In this application, the avatar's social attributes, personality attributes, and character skills make the parsing and decision results of the multimodal interaction better suited, or more tailored, to that particular avatar.
The avatar can also be projected onto the imaging device in cooperation with the mobile device and perform within the scene the imaging device presents; that is, the avatar performs, for example by singing or dancing, in the displayed scene.
Referring to Fig. 2, a schematic structural diagram of the virtual avatar interaction system based on a holographic projection device according to an embodiment of the present application is shown.
The virtual avatar interaction system based on a holographic projection device includes a mobile device 215, an imaging device 219, and a cloud server 210.
The mobile device 215 is aligned with the imaging device 219 by physical position references, so that the mobile device 215 and the imaging device 219 can interconnect their signals.
The mobile device 215 can project the avatar running on it onto the imaging device 219 for display, and can connect to the cloud server 210 so that the avatar running on the mobile device 215 demonstrates the effect of multimodal human-computer interaction on the imaging device 219.
The mobile device 215 may include a communication module 216, a CPU 217, and a human-computer interaction input/output module 218.
The human-computer interaction input/output module 218 acquires multimodal data and outputs the avatar's performance parameters; the multimodal data include data from the surrounding environment and the multimodal input data of the interaction with the user.
The communication module 216 calls the capability interfaces of the cloud server 210 and receives the multimodal output data that those interfaces decide after parsing the multimodal input data.
The CPU 217 uses the multimodal output data to compute the reply data corresponding to them.
The cloud server 210 provides a multimodal data parsing module, which parses the multimodal data sent by the mobile device 215 and decides the multimodal output data.
The imaging device 219 displays, in a preset viewing area, the avatar with its specific appearance.
As shown in Fig. 2, each capability interface involved in parsing the multimodal data invokes its own logical processing. The interfaces are explained below:
Semantic understanding interface 211: it receives the speech information forwarded by the communication module 216 and performs speech recognition on it, as well as natural language processing based on a large corpus.
Visual recognition interface 212: using computer-vision and deep-learning algorithms, it performs video content detection, recognition, and tracking on human bodies, faces, scenes, and so on. An image is recognized according to a predetermined algorithm, and a quantitative detection result is returned. It provides image preprocessing, feature extraction, decision-making, and application-specific functions;
the image preprocessing function performs basic processing on the captured visual data, including color-space conversion, edge extraction, image transformation, and image thresholding;
the feature extraction function extracts feature information such as the skin color, color, texture, motion, and coordinates of targets in the image;
the decision-making function distributes the feature information, according to a given decision strategy, to the specific applications that need it;
the application-specific functions implement face detection, human limb recognition, motion detection, and the like.
Affective computing interface 214: it receives the multimodal data forwarded by the communication module 216 and uses affective computing logic (which may be emotion recognition technology) to calculate the user's current emotional state. Emotion recognition is an important part of affective computing; its research covers facial expressions, speech, behavior, text, and physiological signals, from which the user's emotional state can be judged. Emotion recognition may monitor the user's emotional state through visual emotion recognition alone, or through a combination of visual and acoustic emotion recognition, and is not limited to these. In this embodiment, the combination of the two is preferred for monitoring mood.
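The preferred combined monitoring could be sketched as a weighted fusion of the two recognizers' scores. The weighting scheme, weights, and emotion labels below are assumptions for illustration; the application does not specify a fusion method.

```python
def fuse_emotion(vision_scores, audio_scores, w_vision=0.6, w_audio=0.4):
    """Combine visual and vocal emotion scores into one estimated state.

    Both inputs map emotion labels to confidences in [0, 1]; the
    weights are illustrative defaults, not taken from the application.
    """
    labels = set(vision_scores) | set(audio_scores)
    fused = {
        label: w_vision * vision_scores.get(label, 0.0)
               + w_audio * audio_scores.get(label, 0.0)
        for label in labels
    }
    # report the label with the highest fused confidence
    return max(fused, key=fused.get)

state = fuse_emotion({"happy": 0.7, "sad": 0.1}, {"happy": 0.4, "sad": 0.5})
print(state)
```

A simple linear fusion like this lets the acoustic channel correct the visual one (and vice versa) when one modality is ambiguous.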
When performing visual emotion recognition, the affective computing interface 214 collects images of human facial expressions with an image-capture device, converts them into analyzable data, and then analyzes the expressed emotion using techniques such as image processing. Understanding a facial expression usually requires detecting its subtle changes, for example in the cheek muscles, the mouth, or raised eyebrows.
Cognitive computing interface 213: it receives the multimodal data forwarded by the communication module 216 and performs data acquisition, recognition, and learning on them to obtain user profiles, knowledge graphs, and the like, so as to make reasonable decisions about the multimodal output data.
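The routing of multimodal data to these capability interfaces could be sketched as follows. The interface names mirror the figure, but the dispatch table, modality keys, and stub return values are all invented for illustration:

```python
# Stub capability interfaces (numbers follow Fig. 2; bodies are placeholders).
def semantic_understanding(data):   # interface 211
    return {"intent": data.get("speech", "")}

def visual_recognition(data):       # interface 212
    return {"faces": data.get("frames", 0)}

def cognition(data):                # interface 213
    return {"profile": "user"}

def affective_computing(data):      # interface 214
    return {"emotion": "neutral"}

# Hypothetical mapping from input modality to the interface that handles it.
CAPABILITIES = {
    "speech": semantic_understanding,
    "frames": visual_recognition,
    "history": cognition,
    "expression": affective_computing,
}

def parse_multimodal(data):
    """Invoke only the interfaces whose modality is present in the input."""
    result = {}
    for modality, interface in CAPABILITIES.items():
        if modality in data:
            result.update(interface(data))
    return result

print(parse_multimodal({"speech": "hello", "frames": 2}))
```

Dispatching per modality reflects the division in the figure: each capability interface runs its own logic, and the cloud server merges the partial results into one parsing result.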
The above is an illustrative technical scheme of the virtual avatar interaction system based on a holographic projection device of this embodiment. To help those skilled in the art understand the technical scheme of the application, the following embodiments further describe the virtual avatar interaction method and system based on a holographic projection device, the virtual avatar, and the computer-readable storage medium of the present application.
Referring to Fig. 3, an embodiment of the present application provides a virtual avatar interaction method based on a holographic projection device. The avatar runs on a mobile device and has preset appearance features and preset attributes. The method includes steps 301 to 304.
Step 301: acquire multimodal input data through the mobile device and/or the imaging device.
In this embodiment, the mobile device may be a computing device such as a smartphone, laptop, tablet, or handheld computer, or another mobile terminal; the computing device may also be a portable or stationary server. Multimodal input data are acquired through such a computing device.
The imaging device, i.e. the holographic projection device, provides the carrier support for basic projection imaging and displays content such as the pictures or text on the mobile device's screen through the holographic film 102. The mobile device is the main medium through which the avatar interacts with the user and the environment, but the imaging device may also collect signals such as vision, infrared, and/or Bluetooth to assist the mobile device in the interaction.
The mobile device controls the display functions of the imaging device, including the display of scene accessories, such as the flowers, plants, and trees in a scene, and the display of lighting, special effects, particles, or rays, all of which can be shown by the imaging device.
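A display-control command from the mobile device to the imaging device might be structured as below; the element catalog and command format are assumptions for illustration, not a protocol defined by the application.

```python
# Hypothetical catalog of displayable scene elements, per the text above.
SCENE_ELEMENTS = {
    "scenery": ["flower", "plant", "tree"],
    "effects": ["light", "special_effect", "particle", "ray"],
}

def build_display_command(element):
    """Build a display command the imaging device could act on (assumed format)."""
    for category, items in SCENE_ELEMENTS.items():
        if element in items:
            return {"show": element, "category": category}
    raise ValueError(f"unknown scene element: {element}")

print(build_display_command("particle"))
```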
The mobile device is aligned with the imaging device by physical position references, so that the two devices can interconnect their signals.
The multimodal input data include: speech data, text data, touch data, visual data, and environmental data.
Step 302: parse the multimodal input data on the cloud server to obtain a parsing result; the parsing covers natural language understanding, visual perception, touch perception, speech output, and affective computing.
In this embodiment, the cloud server performs voiceprint analysis, text analysis, video analysis, or environmental analysis (such as temperature, humidity, air-pressure, or location parsing) on the speech data, text data, visual data, or environmental data (such as temperature, humidity, air-pressure, or position data), and obtains the analyzed and decided result.
In this embodiment, the binding relationship between the mobile device and the cloud server can be determined by checking whether the mobile device's product serial number matches the one stored on the cloud server and whether it has been activated.
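That binding check might look like the following sketch; the registry layout and serial-number format are assumptions made for illustration.

```python
# Hypothetical cloud-side registry of device serial numbers.
REGISTERED = {
    "SN-001": {"activated": True},
    "SN-002": {"activated": False},
}

def is_bound(serial_number, registry=REGISTERED):
    """A device is bound iff its serial number is registered and activated."""
    entry = registry.get(serial_number)
    return bool(entry and entry["activated"])

print(is_bound("SN-001"))
```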
Step 303: determine multimodal output data according to the parsing result.
In this embodiment, the cloud server decides, from the parsing result, the speech, actions, emotions, or multimedia data to be output in response to the multimodal input data; the multimodal output data are presented by the avatar, and the mobile device controls the imaging device to display the avatar. For example, when the parsing result is a song request, "Please sing the nursery rhyme Two Tigers", the avatar's multimodal output data are the avatar singing the nursery rhyme "Two Tigers".
Further, when the parsing result is a skill demonstration, the avatar's multimodal output data include song-singing data and/or dance-performance data.
The parsing result reflects the user's actual intention toward the avatar, such as "tell a story" or "dance". For example, when the user says by voice, "sing a song, OK", parsing combined with the avatar's lively personality may decide that the avatar's multimodal output data are the playing of the song "Dance Ma". Likewise, when the parsing result of the multimodal input data is a line or a passage of a song, the avatar's multimodal output data may be the avatar singing the next line or passage of that song, carrying on a song relay. This can be configured according to actual needs, and the application places no limitation on it.
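The decision step described above, covering both skill demonstrations and the song relay, could be sketched as follows. The result types, the sample song lines, and the output keys are all invented for illustration:

```python
# Hypothetical lyric store for the song-relay example.
SONG_LINES = ["twinkle twinkle little star", "how I wonder what you are"]

def decide_output(parsing_result):
    """Map a parsing result to the avatar's multimodal output (illustrative)."""
    if parsing_result["type"] == "skill_show":
        # skill demonstration: sing and/or dance
        return {"sing": True, "dance": True}
    if parsing_result["type"] == "song_relay":
        # continue from the line the user sang, if it is known
        try:
            i = SONG_LINES.index(parsing_result["line"])
            return {"sing_next": SONG_LINES[(i + 1) % len(SONG_LINES)]}
        except ValueError:
            return {"sing_next": None}
    return {}

print(decide_output({"type": "song_relay", "line": SONG_LINES[0]}))
```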
Step 304: the mobile device controls the output of the multimodal output data, and the imaging device displays the avatar.
In this embodiment, the mobile device can control the avatar to present the multimodal output data through mouth shapes, facial expressions, and/or body movements; for example, when the parsing result is "draw a picture", the avatar's multimodal output data are an animation of the avatar drawing.
In the virtual avatar interaction method based on a holographic projection device provided by this embodiment, multimodal input data are acquired through the mobile device and/or the imaging device; the cloud server parses them to obtain a parsing result, the parsing covering natural language understanding, visual perception, touch perception, speech output, and affective computing; multimodal output data are determined according to the parsing result; the mobile device controls the output of the multimodal output data; and the imaging device displays the multimodal output data. The output can thus be presented by a holographic projection device with a strong three-dimensional effect, enabling smooth multimodal human-computer interaction and greatly enhancing the user experience.
Referring to Fig. 4, an embodiment of the present application provides a virtual avatar interaction method based on a holographic projection device. The avatar runs on a mobile device and has preset appearance features and preset attributes. The method includes steps 401 to 405.
Step 401: acquire multimodal input data through the mobile device and/or the imaging device.
In this embodiment, the mobile device may be an intelligent computing device such as a smartphone, tablet, laptop, or handheld computer; multimodal input data are acquired through such a device.
The imaging device provides the carrier support for basic projection imaging and displays content such as the pictures or text on the mobile device's screen through the holographic film 102. The mobile device is the main medium through which the avatar interacts with the user and the environment, but the imaging device may also collect signals such as vision, infrared, and/or Bluetooth to assist the mobile device in the interaction.
The mobile device is aligned with the imaging device by physical position references, so that the two devices can interconnect their signals.
The multimodal input data include: speech data, text data, touch data, visual data, and environmental data.
Step 402: parse the multimodal input data on the cloud server to obtain a parsing result; the parsing covers natural language understanding, visual perception, touch perception, speech output, and affective computing.
In this embodiment, the cloud server performs voiceprint analysis, text analysis, video analysis, or environmental analysis (such as temperature, humidity, air-pressure, or location parsing) on the speech data, text data, visual data, or environmental data (such as temperature, humidity, air-pressure, or position data), and obtains the analyzed and decided result.
Step 403: determine the avatar's multimodal output data according to the parsing result.
In this embodiment, the parsing result may be a voice instruction, a piece of music, a video, or a passage of text, and the avatar's multimodal output data are the reply data corresponding to that parsing result. For example, when the parsing result is a voice instruction for an idiom-chain game, "continue from: xìng gāo cǎi liè (in high spirits)", the avatar's multimodal output data are a reply with an idiom whose first character is "liè", such as "liè huǒ rú gē" or "liè huǒ zhēn jīn".
Further, when the parsing result is a skill demonstration, the avatar's multimodal output data include song-singing data and/or dance-performance data.
The multimodal input data include speech data, text data, touch data, visual data, and environmental data. They are transmitted through the interconnection of the mobile device and the cloud server and parsed on the cloud server, i.e., the semantic understanding, visual recognition, affective computing, and cognitive computing capabilities are invoked to obtain the user's current interaction intention, for example wanting the avatar to dance, perform, or improvise, and the avatar's multimodal output data are decided accordingly, for instance deciding to output an emotive song together with a dance matching that song.
The multimodal input data may also be a command controlling the avatar's actions. For example, if the user says "please stop" through the mobile device while it is playing audio or speech, the device receives this interrupt instruction and stops the audio playback; at the same time, the avatar stops outputting audio, and the avatar shown by the imaging device stops its facial and head movements.
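The interrupt behavior just described can be sketched as a small playback controller; the class, its state flags, and the command strings are assumptions for illustration only.

```python
class AvatarPlayer:
    """Hypothetical playback controller honoring a 'please stop' interrupt."""

    def __init__(self):
        self.playing = False    # audio output in progress
        self.animating = False  # facial/head animation in progress

    def play(self, clip):
        self.playing = True
        self.animating = True
        return f"playing {clip}"

    def handle_command(self, command):
        if command == "please stop" and self.playing:
            # stop the audio and freeze facial/head animation together
            self.playing = False
            self.animating = False
            return "stopped"
        return "ignored"

player = AvatarPlayer()
player.play("song.wav")
print(player.handle_command("please stop"))
```

Coupling the audio and animation flags in one handler mirrors the text: when playback is interrupted, the projected avatar's face and head stop moving at the same time.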
Step 404: the mobile device controls the output of the multimodal output data.
In this embodiment, the mobile device controls the multimodal output data to be output as speech, actions, or special effects.
Step 405: the imaging device displays the avatar based on the multimodal output data.
In this embodiment, when the avatar performs the speech or actions corresponding to the multimodal output data, they are shown through the imaging device. For example, if the multimodal output data are a dance, the mobile device controls the avatar to present the dance through the holographic projection device.
In the virtual avatar interaction method based on a holographic projection device provided by this embodiment, multimodal input data are acquired through the mobile device and/or the imaging device; the cloud server parses them to obtain a parsing result, the parsing covering natural language understanding, visual perception, touch perception, speech output, and affective computing; multimodal output data are determined according to the parsing result; the mobile device controls the output of the multimodal output data; and the imaging device displays the avatar based on the multimodal output data. The avatar can thus be presented by a holographic projection device with a strong three-dimensional effect, enabling smooth multimodal human-computer interaction and greatly enhancing the user experience.
Referring to Fig. 5, taking the mobile device being a smartphone and the imaging device being a holographic projection device as an example, a virtual avatar interaction method based on a holographic projection device is provided. The avatar runs as an application on the smartphone. The method includes steps 501 to 504.
Step 501: acquire multimodal input data through the smartphone's microphone.
In this embodiment, voice interaction is taken as the example.
Step 502: parse the multimodal input data on the cloud server to obtain a parsing result; the parsing covers natural language understanding, visual perception, touch perception, speech output, and affective computing.
In this embodiment, the cloud server performs semantic understanding on the speech and concludes that the user's current interaction intention is: "recite the poem 'Min Nong' (Pity the Farmers)".
Step 503: determine multimodal output data according to the parsing result.
In this embodiment of the present application, according to the analysis result "recite the poem Min Nong", the multi-modal output data is determined to be the text of the poem ("In spring a single grain is sown; in autumn ten thousand seeds are reaped. No field in the land lies fallow, yet farmers still starve. Hoeing grain under the midday sun, sweat drips onto the soil below. Who knows that every grain on the plate is the fruit of toil?"), an accompanying image, and an animation of the virtual image reciting the poem.
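Step 503's mapping from analysis result to multi-modal output data can be sketched as below. The output structure and asset names (`poem_illustration.png`, `avatar_recite`) are assumptions for illustration; the patent only states that the output combines text, an image, and an animation of the virtual image.

```python
def decide_output(analysis: dict) -> dict:
    """Turn an analysis result into multi-modal output data (toy sketch)."""
    intent = analysis.get("intent")
    if intent == "recite_poem":
        return {
            "text": "In spring a single grain is sown; in autumn ten thousand seeds are reaped...",
            "image": "poem_illustration.png",  # hypothetical image asset
            "animation": "avatar_recite",      # lip-sync + gesture clip for the avatar
        }
    # fallback for unrecognized intents
    return {"text": "Sorry, I did not understand.", "image": None, "animation": "avatar_idle"}

out = decide_output({"intent": "recite_poem"})
print(out["animation"])  # avatar_recite
```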
Step 504: the smartphone controls the virtual image through the holographic projection device to present the multi-modal output data with mouth shapes, facial expressions and/or body movements.
In this embodiment of the present application, the smartphone controls the virtual image through the holographic projection device to present the multi-modal output data with mouth shapes, facial expressions and/or body movements; that is, the holographic projection device displays the text of the poem Min Nong, the accompanying image, and the animation of the virtual image reciting the poem.
In the virtual-image interaction method based on a holographic projection device provided by this embodiment of the present application, multi-modal input data is obtained through the smartphone, the multi-modal input data is parsed by the cloud server to obtain an analysis result, multi-modal output data is determined according to the analysis result, and the smartphone controls the holographic projection device to present the multi-modal output data as images, text, sound and/or special effects. The multi-modal output data can thus be presented by the holographic projection device with a strong sense of three-dimensionality, realizing smooth multi-modal human-computer interaction and greatly enhancing the user experience.
Referring to Fig. 6, taking the mobile device as a tablet computer and the imaging device as a holographic projection device as an example, a virtual-image interaction method based on a holographic projection device is provided, in which the virtual image runs in an application on the tablet computer. The method includes steps 601 to 605.
Step 601: obtain multi-modal input data through the tablet computer.
In this embodiment of the present application, voice interaction is taken as an example below.
Step 602: parse the multi-modal input data through the cloud server to obtain an analysis result, the parsing including: natural language understanding, visual perception, touch perception, speech output, and affective computing.
In this embodiment of the present application, the cloud server performs semantic understanding on the user's voice input "Can you dance?" and determines the user's current interaction intent: "have the virtual image dance".
Step 603: determine the multi-modal output data of the virtual image according to the analysis result.
In this embodiment of the present application, the virtual image may be the 3D image of Ah Q, a sweet-looking female dancer of about 18 years old who wears a peacock costume and has an artistic air. The virtual image may also be configured with other social attributes, personality attributes, character skills and the like according to actual requirements, which is not limited by the present application.
According to the analysis result "dance", the multi-modal output data of the virtual image is determined to be: have the virtual image Ah Q perform a classical-style dance.
Step 604: the tablet computer controls the output of the multi-modal output data.
In this embodiment of the present application, the virtual image Ah Q runs in an application on the tablet computer; therefore, upon receiving the multi-modal output data, the tablet computer controls it to be displayed: the virtual image Ah Q performs a classical-style dance through body movements, accompanied by a light-blue background frame and classical dance music.
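The presentation parameters that the tablet assembles in step 604 can be sketched as a simple structure. The identifiers (`light_blue_frame`, the music and motion clip names) are hypothetical; the patent names these elements only descriptively.

```python
def build_dance_presentation(style: str) -> dict:
    """Assemble the background, music, and motion cues for a dance skill (sketch)."""
    return {
        "background": "light_blue_frame",      # light-blue background frame
        "music": f"{style}_dance_music",       # accompanying dance music track
        "motion": f"{style}_dance_motion",     # body-movement animation clip
    }

p = build_dance_presentation("classical")
print(p["music"])  # classical_dance_music
```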
Step 605: the holographic projection device displays the virtual image based on the multi-modal output data.
In this embodiment of the present application, the holographic projection device displays the virtual image Ah Q based on the multi-modal output data; that is, the virtual image Ah Q is shown in the holographic projection device performing the classical-style dance.
In the virtual-image interaction method based on a holographic projection device provided by this embodiment of the present application, multi-modal input data is obtained through the tablet computer, the multi-modal input data is parsed by the cloud server to obtain an analysis result, the multi-modal output data of the virtual image is determined according to the analysis result, the tablet computer controls the output of the multi-modal output data, and the holographic projection device displays the virtual image based on the multi-modal output data. The virtual image can thus be presented by the holographic projection device with a strong sense of three-dimensionality, realizing smooth multi-modal human-computer interaction and greatly enhancing the user experience.
The present application also provides a virtual-image interaction system based on a holographic projection device, including a mobile device, an imaging device and a cloud server, wherein the virtual image runs on the mobile device and possesses preset image characteristics and preset attributes, and wherein:
the mobile device and/or the imaging device obtains multi-modal input data;
the cloud server parses the multi-modal input data to obtain an analysis result, the parsing including: natural language understanding, visual perception, touch perception, speech output, and affective computing;
the mobile device determines multi-modal output data according to the analysis result;
the mobile device controls the output of the multi-modal output data, and the virtual image is displayed through the imaging device.
Optionally, the mobile device may also control the virtual image to present the multi-modal output data with mouth shapes, facial expressions and/or body movements.
The virtual-image interaction system based on a holographic projection device provided by this embodiment of the present application can obtain multi-modal input data through the mobile device or the imaging device, parse the multi-modal input data through the cloud server to obtain an analysis result, determine multi-modal output data according to the analysis result, and control the holographic projection device through the mobile device to present the multi-modal output data, so that the virtual image can be presented by the holographic projection device with a strong sense of three-dimensionality, realizing smooth multi-modal human-computer interaction and greatly enhancing the user experience.
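The three-component division of labor described above (mobile device orchestrates, cloud server parses, imaging device displays) can be sketched as a minimal in-process model. Class and method names are invented for illustration; a real implementation would replace the direct calls with network I/O and an actual holographic renderer.

```python
class CloudServer:
    """Stand-in for the cloud-side NLU / vision / touch / emotion analysis."""
    def parse(self, multimodal_input: dict) -> dict:
        speech = multimodal_input.get("speech", "")
        return {"intent": "dance" if "dance" in speech else "chat"}

class ImagingDevice:
    """Stand-in for the holographic projection device."""
    def __init__(self):
        self.last_shown = None
    def show(self, output: dict):
        self.last_shown = output  # a real device would render the avatar here

class MobileDevice:
    """Runs the avatar app, drives parsing and display."""
    def __init__(self, cloud: CloudServer, imaging: ImagingDevice):
        self.cloud, self.imaging = cloud, imaging
    def interact(self, multimodal_input: dict) -> dict:
        analysis = self.cloud.parse(multimodal_input)          # cloud parses input
        output = {"animation": f"avatar_{analysis['intent']}"}  # decide output data
        self.imaging.show(output)                               # imaging device displays
        return output

device = MobileDevice(CloudServer(), ImagingDevice())
print(device.interact({"speech": "can you dance"})["animation"])  # avatar_dance
```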
The above is an exemplary scheme of the virtual-image interaction system based on a holographic projection device of this embodiment. It should be noted that the technical scheme of this system and the technical scheme of the virtual-image interaction method based on a holographic projection device belong to the same concept; for details not described in the system's technical scheme, refer to the description of the method's technical scheme.
An embodiment of the present application also provides a virtual image. The virtual image runs on a mobile device and is displayed by an imaging device; the mobile device and the imaging device are aligned by physical-position reference, so as to realize signal interconnection between the mobile device and the imaging device; and the virtual image performs the steps of the above virtual-image interaction method based on a holographic projection device.
The above is an exemplary scheme of the virtual image of this embodiment. It should be noted that the technical scheme of the virtual image and the technical scheme of the above virtual-image interaction method based on a holographic projection device belong to the same concept; for details not described in the virtual image's technical scheme, refer to the description of the method's technical scheme.
An embodiment of the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above virtual-image interaction method based on a holographic projection device.
The above is an exemplary scheme of the computer-readable storage medium of this embodiment. It should be noted that the technical scheme of the computer-readable storage medium and the technical scheme of the above virtual-image interaction method based on a holographic projection device belong to the same concept; for details not described in the storage medium's technical scheme, refer to the description of the method's technical scheme.
The computer instructions include computer program code, which may be in source-code form, object-code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the legislation and patent practice of the jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunication signals in accordance with legislation and patent practice.
It should be noted that, for each of the foregoing method embodiments, the description is presented as a series of action combinations for simplicity; however, those skilled in the art should understand that the present application is not limited by the described order of actions, because according to the present application some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily all required by the present application.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, refer to the relevant descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to help illustrate the present application. The alternative embodiments neither describe all details exhaustively nor limit the invention to the specific implementations described. Obviously, many modifications and variations can be made in light of the content of this specification. These embodiments were chosen and specifically described in order to better explain the principles and practical applications of the present application, so that those skilled in the art can well understand and utilize the present application. The present application is limited only by the claims and their full scope and equivalents.
Claims (10)
1. A virtual-image interaction method based on a holographic projection device, wherein the virtual image runs on a mobile device and possesses preset image characteristics and preset attributes, the method comprising:
obtaining multi-modal input data through the mobile device and/or an imaging device;
parsing the multi-modal input data through a cloud server to obtain an analysis result, the parsing comprising: natural language understanding, visual perception, touch perception, speech output, and affective computing;
determining multi-modal output data according to the analysis result; and
the mobile device controlling the output of the multi-modal output data, and displaying the virtual image through the imaging device.
2. The method according to claim 1, wherein the mobile device is aligned with the imaging device by physical-position reference, so as to realize signal interconnection between the mobile device and the imaging device.
3. The method according to claim 1, wherein the mobile device controlling the output of the multi-modal output data comprises:
the mobile device controlling the virtual image to present the multi-modal output data with mouth shapes, facial expressions and/or body movements.
4. The method according to any one of claims 1-3, wherein the imaging device displaying the virtual image comprises:
the imaging device displaying the virtual image based on the multi-modal output data.
5. The method according to claim 1, wherein the multi-modal input data comprises:
speech data, text data, touch data, vision data, and environment data.
6. The method according to claim 1, wherein determining multi-modal output data according to the analysis result comprises:
when the analysis result is a skill display, the multi-modal output data of the virtual image comprising singing data and/or dance data.
7. A virtual-image interaction system based on a holographic projection device, comprising a mobile device, an imaging device and a cloud server, wherein the virtual image runs on the mobile device and possesses preset image characteristics and preset attributes, and wherein:
the mobile device and/or the imaging device obtains multi-modal input data;
the cloud server parses the multi-modal input data to obtain an analysis result, the parsing comprising: natural language understanding, visual perception, touch perception, speech output, and affective computing;
the mobile device determines multi-modal output data according to the analysis result; and
the mobile device controls the output of the multi-modal output data and displays the virtual image through the imaging device.
8. The system according to claim 7, wherein the mobile device can further control the virtual image to present the multi-modal output data with mouth shapes, facial expressions and/or body movements.
9. A virtual image, wherein the virtual image runs on a mobile device and is displayed by an imaging device;
the mobile device is aligned with the imaging device by physical-position reference, so as to realize signal interconnection between the mobile device and the imaging device; and
the virtual image performs the steps of the method according to any one of claims 1-6.
10. A computer-readable storage medium storing a computer program, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711231944.XA CN107831905A (en) | 2017-11-30 | 2017-11-30 | A kind of virtual image exchange method and system based on line holographic projections equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107831905A true CN107831905A (en) | 2018-03-23 |
Family
ID=61646830
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711231944.XA Pending CN107831905A (en) | 2017-11-30 | 2017-11-30 | A kind of virtual image exchange method and system based on line holographic projections equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107831905A (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108665492A (en) * | 2018-03-27 | 2018-10-16 | 北京光年无限科技有限公司 | A kind of Dancing Teaching data processing method and system based on visual human |
CN108762486A (en) * | 2018-04-26 | 2018-11-06 | 上海蓝眸多媒体科技有限公司 | A kind of multimedia intelligent interactive device |
CN109410297A (en) * | 2018-09-14 | 2019-03-01 | 重庆爱奇艺智能科技有限公司 | It is a kind of for generating the method and apparatus of avatar image |
CN109447020A (en) * | 2018-11-08 | 2019-03-08 | 郭娜 | Exchange method and system based on panorama limb action |
CN110147196A (en) * | 2018-12-04 | 2019-08-20 | 腾讯科技(深圳)有限公司 | Interaction control method and device, storage medium and electronic device |
CN110673716A (en) * | 2018-07-03 | 2020-01-10 | 百度在线网络技术(北京)有限公司 | Method, device and equipment for interaction between intelligent terminal and user and storage medium |
CN110795017A (en) * | 2019-09-27 | 2020-02-14 | 深圳市大拿科技有限公司 | Virtual button control method and related product thereof |
CN111131913A (en) * | 2018-10-30 | 2020-05-08 | 王一涵 | Video generation method and device based on virtual reality technology and storage medium |
CN111179694A (en) * | 2019-12-02 | 2020-05-19 | 广东小天才科技有限公司 | Dance teaching interaction method, intelligent sound box and storage medium |
CN111309862A (en) * | 2020-02-10 | 2020-06-19 | 贝壳技术有限公司 | User interaction method and device with emotion, storage medium and equipment |
CN113238654A (en) * | 2021-05-19 | 2021-08-10 | 宋睿华 | Multi-modal based reactive response generation |
CN113687712A (en) * | 2020-05-18 | 2021-11-23 | 阿里巴巴集团控股有限公司 | Control method and device and electronic device |
CN113821104A (en) * | 2021-09-17 | 2021-12-21 | 武汉虹信技术服务有限责任公司 | Visual interactive system based on holographic projection |
CN116843805A (en) * | 2023-06-19 | 2023-10-03 | 上海奥玩士信息技术有限公司 | Method, device, equipment and medium for generating virtual image containing behaviors |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104898581A (en) * | 2014-03-05 | 2015-09-09 | 青岛海尔机器人有限公司 | Holographic intelligent center control system |
CN106843598A (en) * | 2015-12-03 | 2017-06-13 | 深圳市摩购科技有限公司 | Realize system, method and the projection control of product introduction projection control |
CN107180238A (en) * | 2017-07-27 | 2017-09-19 | 深圳市泰衡诺科技有限公司 | A kind of image preview device and method of intelligent terminal |
CN107340859A (en) * | 2017-06-14 | 2017-11-10 | 北京光年无限科技有限公司 | The multi-modal exchange method and system of multi-modal virtual robot |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108665492A (en) * | 2018-03-27 | 2018-10-16 | 北京光年无限科技有限公司 | A kind of Dancing Teaching data processing method and system based on visual human |
CN108665492B (en) * | 2018-03-27 | 2020-09-18 | 北京光年无限科技有限公司 | Dance teaching data processing method and system based on virtual human |
CN108762486A (en) * | 2018-04-26 | 2018-11-06 | 上海蓝眸多媒体科技有限公司 | A kind of multimedia intelligent interactive device |
CN110673716A (en) * | 2018-07-03 | 2020-01-10 | 百度在线网络技术(北京)有限公司 | Method, device and equipment for interaction between intelligent terminal and user and storage medium |
CN109410297A (en) * | 2018-09-14 | 2019-03-01 | 重庆爱奇艺智能科技有限公司 | It is a kind of for generating the method and apparatus of avatar image |
CN111131913A (en) * | 2018-10-30 | 2020-05-08 | 王一涵 | Video generation method and device based on virtual reality technology and storage medium |
CN109447020A (en) * | 2018-11-08 | 2019-03-08 | 郭娜 | Exchange method and system based on panorama limb action |
EP3893099A4 (en) * | 2018-12-04 | 2022-04-27 | Tencent Technology (Shenzhen) Company Limited | Interaction control method and apparatus, storage medium and electronic apparatus |
CN110147196A (en) * | 2018-12-04 | 2019-08-20 | 腾讯科技(深圳)有限公司 | Interaction control method and device, storage medium and electronic device |
US11947789B2 (en) | 2018-12-04 | 2024-04-02 | Tencent Technology (Shenzhen) Company Limited | Interactive control method and apparatus, storage medium, and electronic device |
CN110795017A (en) * | 2019-09-27 | 2020-02-14 | 深圳市大拿科技有限公司 | Virtual button control method and related product thereof |
CN111179694A (en) * | 2019-12-02 | 2020-05-19 | 广东小天才科技有限公司 | Dance teaching interaction method, intelligent sound box and storage medium |
CN111309862A (en) * | 2020-02-10 | 2020-06-19 | 贝壳技术有限公司 | User interaction method and device with emotion, storage medium and equipment |
CN113687712A (en) * | 2020-05-18 | 2021-11-23 | 阿里巴巴集团控股有限公司 | Control method and device and electronic device |
CN113238654A (en) * | 2021-05-19 | 2021-08-10 | 宋睿华 | Multi-modal based reactive response generation |
CN113821104A (en) * | 2021-09-17 | 2021-12-21 | 武汉虹信技术服务有限责任公司 | Visual interactive system based on holographic projection |
CN116843805A (en) * | 2023-06-19 | 2023-10-03 | 上海奥玩士信息技术有限公司 | Method, device, equipment and medium for generating virtual image containing behaviors |
CN116843805B (en) * | 2023-06-19 | 2024-03-19 | 上海奥玩士信息技术有限公司 | Method, device, equipment and medium for generating virtual image containing behaviors |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107831905A (en) | A kind of virtual image exchange method and system based on line holographic projections equipment | |
CN110163054B (en) | Method and device for generating human face three-dimensional image | |
CN111833418B (en) | Animation interaction method, device, equipment and storage medium | |
Nourbakhsh | Robot futures | |
CN107944542A (en) | A kind of multi-modal interactive output method and system based on visual human | |
Webb et al. | Beginning kinect programming with the microsoft kinect SDK | |
CN110163048A (en) | Identification model training method, recognition methods and the equipment of hand key point | |
CN108052250A (en) | Virtual idol deductive data processing method and system based on multi-modal interaction | |
CN106710590A (en) | Voice interaction system with emotional function based on virtual reality environment and method | |
CN108037825A (en) | The method and system that a kind of virtual idol technical ability is opened and deduced | |
CN109271018A (en) | Exchange method and system based on visual human's behavioral standard | |
CN108942919B (en) | Interaction method and system based on virtual human | |
CN108665492A (en) | A kind of Dancing Teaching data processing method and system based on visual human | |
CN107679519A (en) | A kind of multi-modal interaction processing method and system based on visual human | |
CN109434833A (en) | Control system, method and the storage medium of AI intelligence programming bio-robot | |
Szwoch et al. | Emotion recognition for affect aware video games | |
CN109871450A (en) | Based on the multi-modal exchange method and system for drawing this reading | |
CN109343695A (en) | Exchange method and system based on visual human's behavioral standard | |
CN109278051A (en) | Exchange method and system based on intelligent robot | |
CN109176535A (en) | Exchange method and system based on intelligent robot | |
CN109324688A (en) | Exchange method and system based on visual human's behavioral standard | |
CN206711600U (en) | The voice interactive system with emotive function based on reality environment | |
CN109086860A (en) | A kind of exchange method and system based on visual human | |
CN109584992A (en) | Exchange method, device, server, storage medium and sand play therapy system | |
CN108595012A (en) | Visual interactive method and system based on visual human |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180323 |