Detailed Description

To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings.
The application scenario of the method for generating a virtual scene provided by the embodiments of the present application is introduced first.

Referring to Fig. 1, a schematic diagram of the application scenario of the method for generating a virtual scene provided by the embodiments of the present application is shown. The scenario includes an interactive system 10, which can be applied to a remote session. The interactive system 10 includes multiple terminal devices 100 and a server 200, the terminal devices 100 being connected to the server 200.

In some embodiments, a terminal device 100 is communicatively connected to the server 200 through a network, so that data interaction can be carried out between the terminal device 100 and the server 200. The terminal device 100 may access the network where a router is located and communicate with the server 200 through that network; the terminal device 100 may also communicate with the server 200 through a data network.
In some embodiments, the terminal device 100 may be a head-mounted display device, or a mobile device such as a mobile phone or a tablet. When the terminal device 100 is a head-mounted display device, it may be an integrated head-mounted display device. The terminal device 100 may also be an intelligent terminal, such as a mobile phone, connected to an external or plug-in head-mounted display device; that is, the terminal device 100 may serve as the processing and storage unit of the head-mounted display device, be inserted into or connected to the external head-mounted display device, and display virtual content in the head-mounted display device. In a remote session, the terminal device 100 may display the virtual session scene of the remote session, presenting the scene picture of the virtual session scene through AR (Augmented Reality) or VR (Virtual Reality) display, which improves the display effect of the scene picture in the remote session. In other embodiments, the terminal device 100 may also be a display device such as a computer, a tablet computer, or a television, in which case the terminal device 100 may display a 2D (two-dimensional) picture corresponding to the virtual session scene.
In some embodiments, the terminal device 100 may collect information data in the remote session (for example, facial information and voice data of a user) and construct a three-dimensional model from the information data. In other embodiments, the terminal device 100 may also build models from pre-stored information data such as facial information, voice data, and body models, or by combining pre-stored information data with collected information data. For example, the terminal device 100 may collect facial information in real time to build a facial model, where the facial information may include expression information and motion information (such as head tilting or nodding), and then integrate the facial model with a preset body model. This saves modeling and rendering time while still capturing the user's expressions and motions in real time. In some embodiments, the terminal device 100 may transmit the collected information data to the server 200 or to other terminal devices 100.
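The face-plus-preset-body integration described above can be sketched as follows. The data structures and field names are illustrative assumptions, since the application does not prescribe a model format.

```python
from dataclasses import dataclass

@dataclass
class FacialModel:
    expression: str        # e.g. "smile"; captured in real time
    head_motion: str       # e.g. "nod", "head tilt"
    vertices: list         # facial mesh vertices

@dataclass
class BodyModel:
    vertices: list         # pre-stored body mesh, reused every frame

def integrate(face: FacialModel, body: BodyModel) -> dict:
    """Merge a freshly captured facial model with a preset body model,
    so only the face must be re-modeled each frame."""
    return {
        "expression": face.expression,
        "head_motion": face.head_motion,
        "vertices": body.vertices + face.vertices,
    }

avatar = integrate(FacialModel("smile", "nod", [(0.0, 1.7, 0.1)]),
                   BodyModel([(0.0, 0.0, 0.0)]))
```

Because only the facial mesh changes per frame, the body vertices can be cached for the lifetime of the session, which is where the modeling and rendering time is saved.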
In some embodiments, referring to Fig. 2, the interactive system 10 may also include an information collecting device 300, which collects the above information data (for example, the facial information and voice data of a user) and transmits the collected information data to the terminal device 100 or the server 200. In some embodiments, the information collecting device 300 may include a camera, an audio module, and the like, and may also include various sensors such as optical sensors and acoustic sensors. As a specific implementation, the information collecting device 300 may be a capture device with both a color camera (RGB) and a depth camera (Depth), such as an RGB-D camera, so as to obtain depth data of the user being captured and thereby obtain the user's three-dimensional structure. In a specific embodiment, the information collecting device 300 and the terminal device 100 may be in the same site environment, so that the device can collect information of the user corresponding to the terminal device 100; the information collecting device 300 and the terminal device 100 may or may not be connected to each other, which is not limited here.
In some embodiments, the server 200 may be a local server or a cloud server; the specific type of the server 200 is not limited in the embodiments of the present application. In a remote session, the server 200 may be used to realize data interaction among the multiple terminal devices 100 and/or information collecting devices 300, so as to guarantee data transmission and synchronization among them, for example the synchronization of the virtual session scene and of audio and video data in the remote session, and the data transmission between the terminal devices 100 and the information collecting devices 300.
In some embodiments, when at least two of the multiple terminal devices 100 in the remote session are in the same site environment (for example, in the same room), those terminal devices 100 may be connected through wireless means such as Bluetooth, WiFi (Wireless Fidelity), or ZigBee, or through wired means such as a data cable, so as to realize data interaction between the at least two terminal devices 100 in the same site environment. Of course, the connection method between at least two terminal devices 100 in the same site environment is not limited in the embodiments of the present application.
A specific processing method for a virtual scene is introduced below.

Referring to Fig. 3, an embodiment of the present application provides a processing method for a virtual scene. The processing method may include:

Step S110: generating a virtual session scene corresponding to the remote session, the virtual session scene including at least the virtual objects corresponding to one or more terminal devices in the remote session.
A remote session refers to a process in which multiple parties, connected through data communication, interact and communicate remotely. The virtual session scene is a 3D (three-dimensional) scene in a virtual space; it includes at least virtual objects, whose positions relative to the world-coordinate origin in the world coordinate system of the virtual session scene are fixed. A virtual object may include a virtual character model or virtual avatar corresponding to a terminal device in the remote session, for example a stereoscopic character image of the user corresponding to that terminal device; a virtual object may also include a virtual object image or virtual animal image associated with a user, which is not limited here. For example, a virtual object may also include virtual document content shared by a terminal device in the remote session. Of course, the specific content of the virtual session scene is not limited; for example, the virtual session scene may also include a virtual conference table, a virtual tablecloth, virtual furnishings, and the like.
In some embodiments, the server may generate the virtual session scene according to the participation data of the terminal devices participating in the remote session and the content data of the virtual objects. The participation data may include one or more of: the identity information of the user corresponding to a terminal device, the time at which the terminal device joined the remote session, the spatial position of the terminal device in the real scene, the posture of the terminal device, and the positioning data of the terminal device. The content data of a virtual object may be its three-dimensional modeling data, which may include the colors of the model used to construct the three-dimensional model, the coordinates of the model vertices, the model outline data, and so on. The participation data of a terminal device may be obtained from the terminal device itself or from an information collecting device in the same real scene as the terminal device; the content data of a virtual object may be stored locally or obtained from the terminal device, which is not limited here.
As a specific implementation, the server may arrange the positions of the virtual objects corresponding to the terminal devices according to the participation data, determine the positions of the virtual objects in the virtual space based on the arrangement result, and then, according to the content data of the virtual objects and their positions in the virtual space, render and generate the virtual session scene that includes the virtual objects corresponding to the terminal devices.
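One way the position arrangement could work is sketched below, assuming a simple circular seating policy. The application leaves the arrangement policy open, so the radius and ordering here are illustrative.

```python
import math

def arrange_positions(device_ids, radius=1.5):
    """Assign each terminal device's virtual object an evenly spaced
    position on a circle around the world-coordinate origin."""
    n = len(device_ids)
    return {
        dev: (radius * math.cos(2 * math.pi * i / n),
              0.0,
              radius * math.sin(2 * math.pi * i / n))
        for i, dev in enumerate(device_ids)
    }

layout = arrange_positions(["dev_1", "dev_2", "dev_3", "dev_4"])
```

The resulting positions would then be fed, together with each object's three-dimensional modeling data, to the renderer that generates the scene.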
In the embodiments of the present application, the generated virtual session scene can be used to generate the scene picture of the virtual session scene to be displayed by a terminal device, and the picture data of the scene picture is sent to the terminal device. The terminal device can display the scene picture, so that the user can observe the 3D virtual session scene and the virtual objects corresponding to the other terminal devices in the remote session, giving the user a stronger sense of reality. For example, referring to Fig. 4, which shows a scene diagram of a teleconference scenario, the terminal device 100 may be a head-mounted display device. A user 601 is positioned around a physical desk in the real scene; through the head-mounted display device, the user 601 can observe the scene picture of the virtual session scene, which may include virtual characters 701 of the other users participating in the teleconference.
Step S120: obtaining the user data corresponding to the one or more terminal devices.

In some embodiments, the server may obtain the user data corresponding to each terminal device in the remote session, so as to determine the user emotion information corresponding to the terminal device from the user data. The user data may include a face image of the user, voice data of the user, and so on, which is not limited here; for example, it may also include vital-sign data of the user such as heartbeat and blood pressure. The user data corresponding to a terminal device may be collected by the terminal device itself or by an information collecting device in the same site environment as the terminal device, which is not limited here.
Step S130: analyzing the user data to obtain the user emotion information corresponding to the one or more terminal devices.

In some embodiments, the server may analyze, from the user data corresponding to a terminal device, the user's expression, countenance, tone, and so on, and obtain the user emotion information corresponding to the terminal device based on the analysis results. The user emotion information is information characterizing the user's emotion, which may include joy, anger, sadness, surprise, fear, doubt, concentration, absent-mindedness, and so on, and is not limited here.
Step S140: when the user emotion information meets a set mood condition, obtaining adjustment content matching the user emotion information that meets the set mood condition, and adjusting the virtual session scene according to the adjustment content.

After obtaining the user emotion information corresponding to a terminal device through analysis, the server may determine whether that user emotion information meets the set mood condition. The set mood condition may include specified moods, for example anger, surprise, or doubt; it may also include the degree of a specified mood, for example the degree of anger or the degree of surprise. The specific set mood condition can be configured according to the actual scenario and requirements of the remote session and is not limited here.
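The mood-condition check can be sketched as follows, where a condition names the specified moods and an optional minimum degree. The dictionary format and the threshold value are illustrative assumptions.

```python
def meets_mood_condition(emotion: str, degree: float, condition: dict) -> bool:
    """Return True when the detected emotion is one of the specified moods
    and its degree reaches the condition's minimum (if any)."""
    if emotion not in condition["moods"]:
        return False
    return degree >= condition.get("min_degree", 0.0)

# example condition: anger or surprise, at a degree of at least 0.6
condition = {"moods": {"angry", "surprised"}, "min_degree": 0.6}
```

A condition without a `min_degree` entry matches on the mood alone, which corresponds to a set mood condition that specifies only the mood and not its degree.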
When the server determines that the user emotion information meets the set mood condition, the server may obtain the adjustment content matching that user emotion information. The adjustment content is used to adjust the virtual session scene, and may include specific adjustment operations on the displayed picture of the virtual session scene; for example, the adjustment content may include adjusting the clarity of content in the virtual session scene, adjusting its brightness, replacing content, occluding content, and so on. The specific adjustment content is not limited; for example, the adjustment content may also include marking at least part of the content, together with the content data of the mark.
In some embodiments, after the server obtains the adjustment content matching the user emotion information that meets the set mood condition, it may adjust the virtual session scene according to the adjustment content. As a specific implementation, the server may adjust a virtual object according to the adjustment content; the virtual object may be the one corresponding to a target terminal device, and the target terminal device may be the terminal device corresponding to the user emotion information that meets the set mood condition, or another terminal device, which is not limited here. For example, the server may adjust the clarity, brightness, and so on of the virtual object corresponding to the target terminal device.
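Applying an adjustment to a target virtual object might look like the sketch below; the scene and adjustment dictionaries are illustrative stand-ins for the rendered scene state.

```python
def apply_adjustment(scene: dict, target_id: str, adjustment: dict) -> dict:
    """Adjust one virtual object's display properties in place:
    clarity, brightness, replacement content, or an occluding overlay."""
    obj = scene[target_id]
    for key in ("clarity", "brightness"):
        if key in adjustment:
            obj[key] = adjustment[key]
    if "replace_with" in adjustment:
        obj["model"] = adjustment["replace_with"]
    if "occlude_with" in adjustment:
        obj["overlay"] = adjustment["occlude_with"]
    return scene

scene = {"avatar_2": {"model": "user2_avatar", "clarity": 1.0, "brightness": 0.5}}
scene = apply_adjustment(scene, "avatar_2", {"brightness": 0.9})
```

After such an adjustment the scene would be re-rendered and the updated picture data sent to the terminal devices.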
In the embodiments of the present application, the steps of generating the virtual session scene, obtaining the user data, analyzing the user emotion information, and adjusting the virtual session scene, that is, steps S110 to S140, may also be executed by a terminal device. When a terminal device acts as the executing subject and executes the above steps, the virtual session scene it generates may include the virtual objects corresponding to at least one terminal device, and the obtained user data may include the user data corresponding to at least one terminal device, where the at least one terminal device refers to the other terminal devices in the remote session besides the executing subject. For example, when the remote session includes two terminal devices (a first terminal device and a second terminal device) and the first terminal device executes the above steps, the generated virtual session scene may include only the virtual object corresponding to the second terminal device, and only the user data corresponding to the second terminal device is obtained.
With the processing method for a virtual scene provided by the embodiments of the present application, the virtual session scene in a remote session can be adjusted according to the adjustment content matched to the user emotion information that meets the set mood condition. The adjusted scene can thus reflect user emotions, help users in the remote session understand each other's moods, give users a realistic experience, and improve the effect of the remote session.
Referring to Fig. 5, another embodiment of the present application provides a processing method for a virtual scene. The processing method may include:

Step S210: generating a virtual session scene corresponding to the remote session, the virtual session scene including at least the virtual objects corresponding to one or more terminal devices in the remote session.

Step S220: obtaining the user data corresponding to the one or more terminal devices.

In the embodiments of the present application, for steps S210 and S220, reference may be made to the content of the above embodiment, which is not repeated here.
Step S230: analyzing the user data to obtain the user emotion information corresponding to the one or more terminal devices.

In some embodiments, the user data of a terminal device obtained by the server may include a face image of the user. The face image may be collected by a camera of the terminal device, or by an image collecting device in the same site environment as the terminal device, which is not limited here.

As an implementation, analyzing the user data to obtain the user emotion information corresponding to the one or more terminal devices may include:

obtaining, from the face image, at least one of the user expression and the user countenance corresponding to the one or more terminal devices; and obtaining, from at least one of the user expression and the user countenance, the user emotion information corresponding to the one or more terminal devices.
The server may perform expression recognition on the face image to obtain the user expression corresponding to the terminal device. The user expression characterizes the user's emotion as manifested on the face, for example smiling, pouting, staring blankly, crying, and so on, which is not limited here. As an implementation, the server may, after pretreatments such as marking, graying, and normalization of the face image, extract the facial features in the face image and determine the user expression from the extracted features; the specific way of analyzing the user expression is not limited.
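The graying and normalization pretreatments named above can be sketched on a plain nested-list image. The stand-in classifier at the end is purely illustrative, since the application does not fix a feature set or classification method.

```python
def preprocess(image_rgb):
    """Gray a face image (channel average) and normalize it to [0, 1]."""
    gray = [[(r + g + b) / 3.0 for (r, g, b) in row] for row in image_rgb]
    peak = max(max(row) for row in gray) or 1.0
    return [[v / peak for v in row] for row in gray]

def classify_expression(mouth_curvature: float) -> str:
    """Toy classifier: one scalar feature stands in for the extracted
    facial features; a real system would use many."""
    return "smile" if mouth_curvature > 0.2 else "neutral"

img = [[(255, 255, 255), (0, 0, 0)],
       [(128, 128, 128), (64, 64, 64)]]
normalized = preprocess(img)
```

In practice the normalized image would feed a feature extractor (for the eyes, mouth, and so on) rather than a single curvature value.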
The server may also perform countenance recognition on the face image to obtain the user countenance corresponding to the terminal device. The user countenance characterizes the inner activity expressed by what the user's face reveals, for example anxiety, fright, joy, and so on. As an implementation, the server may first extract the user expression from the face image and then derive the user countenance from the user expression; the specific way of analyzing the user countenance is not limited.

In some embodiments, after obtaining the user expression and the user countenance, the server obtains the user emotion information of the user from them.
As another implementation, the user data of a terminal device obtained by the server may also include voice data of the user. The voice data may be collected by an audio input module such as a microphone of the terminal device, or by an audio collecting device in the same site environment as the terminal device, which is not limited here.

In some embodiments, analyzing the user data to obtain the user emotion information corresponding to the one or more terminal devices includes:

obtaining the user tone corresponding to the one or more terminal devices from the voice data; and obtaining the user emotion information corresponding to the one or more terminal devices from the user tone.
The server may perform speech analysis on the voice data to obtain the user tone corresponding to the terminal device. The user tone characterizes the form that the sound of a specific sentence takes under the dominance of a concrete thought or emotion, for example interrogative, exclamatory, declarative, and so on, which is not limited here. As a specific implementation, the server may analyze the voice data to obtain speech parameters related to tone, such as speech volume, pitch, and speech content, and determine the user tone from the specific parameter values; the specific way of analyzing the user tone is not limited. The server may then further analyze the user tone to obtain the user emotion information of the user. Of course, the specific way of obtaining the user emotion information from the user tone is not limited.
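A heuristic mapping from speech parameters to a tone label, in the spirit of the paragraph above; all thresholds and the tone-to-emotion table are illustrative assumptions, not values from the application.

```python
def estimate_tone(volume_db: float, pitch_hz: float, text: str) -> str:
    """Map volume, pitch, and speech content to a tone label."""
    if text.rstrip().endswith("?") or pitch_hz > 250:
        return "interrogative"
    if volume_db > 75 or text.rstrip().endswith("!"):
        return "exclamatory"
    return "declarative"

TONE_TO_EMOTION = {  # second illustrative lookup: tone -> emotion
    "exclamatory": "angry",
    "interrogative": "doubtful",
    "declarative": "calm",
}
```

The second lookup corresponds to the further analysis step that turns the user tone into user emotion information.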
In some embodiments, the server may also analyze the user emotion information of the user from the face image and the voice data simultaneously.

In other embodiments, the server may also use a trained model: the face image, voice data, and so on are input to the trained model to obtain the user emotion information of the user. The trained model can be obtained by training an initial model with a large number of training samples; the initial model may be a neural network model, a decision-tree model, or the like, and the training set may consist of a large number of face images and voice data annotated with user emotions.
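As a minimal stand-in for the trained model described above, a 1-nearest-neighbour lookup over labeled feature vectors illustrates the same input-to-emotion mapping; a real system would, as the text says, train a neural network or decision tree on annotated face images and voice data.

```python
def train(samples):
    """'Training' a 1-NN model is just storing the labeled samples."""
    return list(samples)

def predict(model, features):
    """Return the label of the nearest stored feature vector."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda s: sq_dist(s[0], features))[1]

# feature vectors are illustrative (e.g. a smile score and a volume score)
model = train([((0.9, 0.2), "happy"),
               ((0.1, 0.9), "angry"),
               ((0.2, 0.1), "sad")])
```

The interface is the point here: features in, emotion label out, with the model fitted offline from annotated data.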
Step S240: when the user emotion information meets the set mood condition, obtaining the adjustment content matching the user emotion information that meets the set mood condition.

In some embodiments, the adjustment content may include instruction information indicating at least one of: adjusting the clarity of at least part of the content in the virtual session scene, adjusting its brightness, replacing content, and occluding content. The adjustment content may also include the adjustment parameters for clarity, the adjustment parameters for brightness, the content used as replacement, the content used for occlusion, and so on, which is not limited here.
Further, the adjustment content matches the user emotion information that meets the set mood condition, so that after the server adjusts the virtual session scene according to the adjustment content, the scene can reflect the user's emotion, or prevent other users from being affected by the user emotion information that meets the set mood condition. For example, when the user emotion information that meets the set mood condition is anger, the adjustment content may be occluding a virtual object, or turning down the clarity of a virtual object. As another example, when the user emotion information that meets the set mood condition is sadness, the adjustment content may be replacing a virtual object, or turning down the brightness of a virtual object. Of course, the specific adjustment content matching the user emotion information that meets the set mood condition is not limited.
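The emotion-to-adjustment matching in the examples above can be expressed as a simple lookup. The particular policy entries are drawn from the text's examples and are otherwise illustrative.

```python
ADJUSTMENTS = {
    # anger: occlude the virtual object or blur it
    "angry": {"occlude_with": "screen", "clarity": 0.3},
    # sadness: replace the virtual object or dim it
    "sad": {"replace_with": "neutral_avatar", "brightness": 0.4},
}

def adjustment_for(emotion: str) -> dict:
    """Return the adjustment content matched to an emotion, or an empty
    adjustment when no entry applies."""
    return ADJUSTMENTS.get(emotion, {})
```

Keeping the policy in data rather than code makes it easy to configure per session, which matches the text's statement that the specific adjustment content is not limited.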
Step S250: according to the adjustment content, performing at least one of: adjusting the clarity of at least part of the content in the virtual session scene, adjusting the brightness of at least part of the content, replacing at least part of the content, and occluding at least part of the content.

In some embodiments, the server may adjust at least part of the content in the virtual session scene according to the adjustment content. The at least part of the content may be a virtual object corresponding to a terminal device in the virtual session scene.
As an implementation, adjusting at least part of the content in the virtual session scene according to the adjustment content may include:

obtaining the terminal device of the first user corresponding to the user emotion information that meets the set mood condition; obtaining the first virtual object corresponding to the terminal device of the first user in the virtual session scene; and adjusting the first virtual object according to the adjustment content.
From the user emotion information that meets the set mood condition, the server may obtain the terminal device of the first user corresponding to that user emotion information. The server may obtain the first virtual object corresponding to the terminal device of the first user in the virtual session scene, treat the first virtual object as the at least part of the content to be adjusted, and perform on it at least one of clarity adjustment, brightness adjustment, replacement, and occlusion. For example, the brightness of the first virtual object may be turned up, or the first virtual object may be occluded by specified virtual content, which is not limited here.

Thus, after the first virtual object is adjusted, the users in the remote session scene other than the first user can see the adjustment of the virtual object corresponding to the first user's terminal device, which helps them recognize the first user's mood.
For example, referring to Fig. 6, in a teleconference scenario, the virtual session scene corresponding to the teleconference includes virtual characters A, B, C, and D, where virtual character A corresponds to the terminal device of user 1, virtual character B to that of user 2, virtual character C to that of user 3, and virtual character D to that of user 4. When the user emotion information corresponding to the terminal device of user 2 is sadness, the brightness of virtual character B can be turned up, so that virtual character B stands out in the scene picture of the virtual session scene seen by users 1, 3, and 4, helping them recognize the mood of user 2.
As another implementation, adjusting at least part of the content in the virtual session scene according to the adjustment content may include:

obtaining the first user corresponding to the user emotion information that meets the set mood condition; obtaining a second user associated with the first user in the virtual session scene, and obtaining the second virtual object corresponding to the terminal device of the second user; and adjusting the second virtual object according to the adjustment content.
In some embodiments, the user emotion information corresponding to the first user meets the set mood condition, and the second user in the virtual session scene is associated with the first user. As an implementation, the second user may be a user who affects the user emotion information of the first user; for example, the second user may be the user who caused the first user to generate the user emotion information that meets the set mood condition, which is not limited here. The server may determine the second user associated with the first user from the user data, the participation data of the terminal devices in the remote session, and so on. For example, the gaze direction of the first user can be determined from the first user's eye data, so as to determine the focus content the first user is gazing at and thereby identify the second user associated with the first user; as another example, keywords can be recognized from the first user's voice data, and the second user associated with the first user can be determined from the keywords, which is not limited here.
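The two cues mentioned for identifying the associated second user, the gaze target and spoken keywords, can be sketched as follows; the matching rules are illustrative.

```python
def find_second_user(first_user, participants, gaze_target=None, speech=""):
    """Prefer the participant the first user is gazing at; otherwise scan
    the first user's speech for another participant's name."""
    if gaze_target in participants and gaze_target != first_user:
        return gaze_target
    for name in participants:
        if name != first_user and name.lower() in speech.lower():
            return name
    return None

users = ["Alice", "Bob", "Carol"]
```

When neither cue yields a participant, no second user is identified and the adjustment falls back to the first-virtual-object case.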
The server may obtain the second virtual object corresponding to the terminal device of the second user in the virtual session scene, treat the second virtual object as the at least part of the content to be adjusted, and adjust it; for example, the clarity of the second virtual object may be turned down, or the second virtual object may be replaced by specified virtual content, which is not limited here. Thus, after the second virtual object is adjusted, the mood of the first user in the remote session scene can be protected from the influence of the second user.
For example, in a teleconference scenario, the virtual session scene corresponding to the teleconference includes virtual character A, corresponding to the terminal device of user 1, and virtual character B, corresponding to the terminal device of user 2. When the user emotion information corresponding to the terminal device of user 2 is anger, and user 2 is angry because of user 1, then the clarity of virtual character A can be turned down in the picture of the virtual session scene to be displayed by the terminal device of user 2, reducing the disturbance of user 1 to the mood of user 2.
In some embodiments, the user data of a terminal device in the remote session may include voice data. The processing method for the virtual scene may further include:

obtaining the voice data corresponding to the terminal device corresponding to the user emotion information that meets the set mood condition, and judging whether the decibel value of the obtained voice data is greater than a set threshold, the first user being the user corresponding to the user emotion information that meets the set mood condition; and when the decibel value is greater than the set threshold, reducing the decibel value of the obtained voice data.
When the server determines that the user emotion information meets the set mood condition, it may obtain the voice data corresponding to the terminal device corresponding to that user emotion information, that is, the voice data of the first user who generated the user emotion information, and determine whether the decibel value of that voice data is greater than the set threshold. When the decibel value of the first user's voice data is greater than the set threshold, it indicates that the first user is speaking loudly; the server may therefore reduce the decibel value of the voice data of the first user's terminal device in the remote session, so that the other users are not affected by the first user's mood.
In some embodiments, when determining that the user emotion information meets the set mood condition, the server may also determine whether the decibel value of the voice data of the second user associated with the first user who generated that user emotion information is greater than the set threshold. When the decibel value of the second user's voice data is greater than the set threshold, the server can reduce the decibel value of the voice data of the second user's terminal device in the remote session, so as to reduce the influence of the second user on the mood of the first user.
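The decibel-threshold handling for both cases above, the loud first user and the associated second user, reduces to the same attenuation step, sketched here. The threshold and reduction amount are illustrative.

```python
def limit_voice_db(db_value: float, threshold: float = 70.0,
                   reduction: float = 15.0) -> float:
    """Attenuate voice data whose level exceeds the set threshold,
    never dropping the result below the threshold itself."""
    if db_value > threshold:
        return max(db_value - reduction, threshold)
    return db_value
```

Clamping at the threshold rather than subtracting blindly keeps normal-volume speech untouched and avoids over-attenuating speech only slightly above the limit.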
In the embodiments of the present application, the above steps may also be executed by a terminal device.

With the processing method for a virtual scene provided by the embodiments of the present application, the virtual objects in the virtual session scene of a remote session can be adjusted according to the adjustment content matched to the user emotion information that meets the set mood condition. This can reflect user emotions, help users in the remote session understand each other's moods, reduce disturbance to users' moods, and improve the effect of the remote session.
Referring to Fig. 7, yet another embodiment of the present application provides a processing method for a virtual scene. The processing method may include:

Step S310: generating a virtual session scene corresponding to the remote session, the virtual session scene including at least the virtual objects corresponding to one or more terminal devices in the remote session.

Step S320: obtaining the user data corresponding to the one or more terminal devices.

Step S330: analyzing the user data to obtain the user emotion information corresponding to the one or more terminal devices.

In the embodiments of the present application, for steps S310 to S330, reference may be made to the content of the above embodiments, which is not repeated here.
Step S340: When the user emotion information meets the set emotion condition, obtain the adjustment content matched with the user emotion information meeting the set emotion condition, the adjustment content including first content data of virtual tag content, the virtual tag content corresponding to the user emotion information meeting the set emotion condition.
In some embodiments, the adjustment content may include the first content data of the virtual tag content used for marking; the adjustment content may also include indication information instructing that content in the virtual session scene be marked, so that the server can mark the content in the virtual scene according to the adjustment content. The virtual tag content is used to mark a mood, so that other users in the remote session can recognize the user's mood. In one implementation, the virtual tag content may be word content corresponding to the user emotion information meeting the set emotion condition; for example, when the user emotion information is anger, the virtual tag content may be words such as "angry" or "furious". In another implementation, the virtual tag content may be a virtual animated expression corresponding to the user emotion information meeting the set emotion condition, the virtual animated expression embodying the user emotion information; for example, when the user emotion information is sadness, the virtual tag content may be a sad virtual animated expression.
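One way to realize the mapping from emotion information to virtual tag content is a simple lookup table, as sketched below. The emotion labels, tag entries, and the name `virtual_tag_for` are illustrative assumptions; the embodiment does not prescribe a concrete data structure.

```python
# Hypothetical first content data: word content plus an identifier
# for a virtual animated expression, keyed by emotion label.
TAG_CONTENT = {
    "anger":   {"words": ["angry", "furious"], "expression": "anim_angry"},
    "sadness": {"words": ["sad"],              "expression": "anim_sad"},
}

def virtual_tag_for(emotion):
    """Return the virtual tag content matched to an emotion label."""
    entry = TAG_CONTENT.get(emotion)
    if entry is None:
        # Emotions without a dedicated entry fall back to plain word content.
        return {"words": [emotion], "expression": None}
    return entry
```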
Step S350: Obtain the terminal device of the first user corresponding to the user emotion information meeting the set emotion condition.
Step S360: Obtain the first virtual object corresponding to the terminal device of the first user in the virtual session scene.
Step S370: According to the first content data, generate the virtual tag content at the position of the first virtual object in the virtual session scene, the virtual tag content being used to mark the mood of the first user.
In some embodiments, after the server obtains the first virtual object corresponding to the terminal device of the first user, the first virtual object can be regarded as the content in the virtual session scene that needs to be marked. Therefore, the server can mark the first virtual object according to the first content data in the adjustment content, that is, generate the virtual tag content in the virtual session scene, and the position of the virtual tag content in the virtual session scene may be within a preset range of the first virtual object. Here, the position of the virtual tag content refers to its spatial position in the virtual space. In one implementation, the server can determine the spatial position of the virtual tag content in the virtual space according to the spatial position of the first virtual object (for example, a spatial position close to the first virtual object), and then generate the virtual tag content in the virtual session scene according to that spatial position and the first content data. After the virtual tag content corresponding to the first virtual object is generated in the virtual session scene, other users in the remote session besides the first user can conveniently recognize the mood of the first user.
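Placing the tag within a preset range of the first virtual object can be sketched as a small offset in the virtual space. This is a minimal sketch under assumed 3-D coordinates; the default offset, the clamping range, and the name `tag_position` are not taken from the embodiment.

```python
def tag_position(object_pos, offset=(0.0, 0.3, 0.0), max_range=0.5):
    """Place the virtual tag content near the first virtual object,
    e.g. slightly above its position, clamped to a preset range."""
    # The Euclidean length of the offset must stay within the preset range.
    length = sum(c * c for c in offset) ** 0.5
    if length > max_range:
        scale = max_range / length
        offset = tuple(c * scale for c in offset)
    return tuple(o + d for o, d in zip(object_pos, offset))
```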
For example, referring to Fig. 8, in a teleconference scene, the corresponding virtual session scene includes virtual character A, virtual character B, virtual character C and virtual character D. Virtual character A corresponds to the terminal device of user 1, virtual character B to the terminal device of user 2, virtual character C to the terminal device of user 3, and virtual character D to the terminal device of user 4. When the user emotion information corresponding to the terminal device of user 3 is sadness, a virtual expression indicating sadness can be generated at the position of virtual character C, making it easier for user 1, user 2 and user 4 to recognize the mood of user 3.
In the embodiments of the present application, steps S310 to S370 may also be executed by a terminal device.
The processing method of a virtual scene provided by the embodiments of the present application can generate virtual tag content at the position of a virtual object in the virtual session scene of a remote session according to the adjustment content matched with the user emotion information meeting the set emotion condition. This reflects user emotion, makes it easier for users in the remote session to understand one another's moods, and improves the effect of the remote session.
Referring to Fig. 9, yet another embodiment of the present application provides a processing method of a virtual scene, which may include:
Step S410: Generate a virtual session scene corresponding to the remote session, the virtual session scene including at least the virtual objects corresponding to one or more terminal devices in the remote session.
Step S420: Obtain the user data corresponding to the one or more terminal devices.
Step S430: Analyze the user data to obtain the user emotion information corresponding to the one or more terminal devices.
In the embodiments of the present application, steps S410 to S430 may refer to the content of the above embodiments, which is not repeated here.
Step S440: When the user emotion information meets the set emotion condition, obtain the adjustment content matched with the user emotion information meeting the set emotion condition, the adjustment content including second content data of virtual cue content.
In some embodiments, the adjustment content may include the second content data of the virtual cue content; the adjustment content may also include indication information instructing that virtual cue content be generated in the virtual session scene, so that the server can generate the virtual cue content in the virtual scene according to the adjustment content. The virtual cue content is used for display in the terminal device of the first user who generated the user emotion information meeting the set emotion condition, to prompt the first user about his or her mood and help the first user understand it. In one implementation, the virtual cue content may be text prompt content corresponding to the user emotion information meeting the set emotion condition; for example, when the user emotion information is anger, the virtual cue content may be words such as "You are angry, please restrain your mood". In another implementation, the virtual cue content may be a virtual animation corresponding to the user emotion information meeting the set emotion condition, the virtual animation embodying the user emotion information; for example, when the user emotion information is absent-mindedness, the virtual cue content may be a virtual animation indicating absent-mindedness.
Step S450: Obtain the terminal device of the first user corresponding to the user emotion information meeting the set emotion condition.
Step S460: Generate the virtual cue content according to the second content data, the virtual cue content being used for display in the terminal device of the first user to prompt the first user about his or her mood.
In some embodiments, after the server obtains the terminal device of the first user, it can generate the virtual cue content, according to the second content data, in the scene picture of the virtual session scene displayed for the terminal device of the first user. After generating the virtual cue content in that scene picture, the server can send the picture data of the scene picture to the terminal device of the first user, so that the first user views the virtual cue content through the terminal device. This helps the first user recognize and restrain his or her own mood, so as to communicate better with other users in the remote session.
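The per-user cue flow can be sketched as below: look up a cue matched to the triggering emotion and attach it to the frame sent to that user's device only. The cue table and the names `cue_for` and `render_frame` are illustrative assumptions, not part of the embodiment.

```python
# Hypothetical text prompt content keyed by emotion label.
CUES = {
    "anger": "You are angry, please restrain your mood",
    "absent-minded": "Please refocus on the session",
}

def cue_for(emotion):
    """Second content data: the virtual cue matched to the emotion, if any."""
    return CUES.get(emotion)

def render_frame(base_frame, emotion):
    """Overlay the cue on the scene picture sent to the first user's device.
    base_frame is a dict standing in for real picture data."""
    frame = dict(base_frame)
    cue = cue_for(emotion)
    if cue is not None:
        frame["overlay"] = cue  # shown only on this user's terminal device
    return frame
```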
In some embodiments, the processing method of the virtual scene may also include:
obtaining the user emotion information of the other terminal devices in the remote session besides a target device, the target device being a terminal device whose identity identifier is a preset identifier; generating feedback data according to the user emotion information of the other terminal devices, and generating an emotional feedback picture according to the feedback data, the emotional feedback picture being used for display in the target device.
The terminal device whose identity identifier is the preset identifier can be set in advance as the target device. According to the user emotion information of each terminal device in the remote session, the server can obtain the user emotion information of the other terminal devices besides the target device, that is, the user emotion information of the other users, and generate emotional feedback data according to it. The emotional feedback data can characterize the user emotion information of the other users, for example as word content or image content. The server can send the picture data of the emotional feedback picture generated from the emotional feedback data to the target device, so that the target device can display the emotional feedback picture according to that picture data, helping the user of the target device understand the mood of each of the other users.
For example, referring to Figure 10, in a remote-teaching session scene, the terminal device corresponding to the teacher can be the target device, and the emotional feedback picture generated from the user emotion information of student a1, student a2 and student a3 can be displayed by the target device, making it easier for the teacher to understand the mood and state of each student and the teaching atmosphere.
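The teacher-side aggregation can be sketched as filtering out the target device and collecting every other device's emotion information. A minimal sketch under the assumption that the information is held in a dict keyed by device id; `feedback_for_target` is an illustrative name.

```python
def feedback_for_target(emotions, target_id):
    """Build emotional feedback data for the target device (e.g. the
    teacher's terminal) from every other device's emotion information."""
    return {
        device_id: emotion
        for device_id, emotion in emotions.items()
        if device_id != target_id  # exclude the target device itself
    }
```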
In some embodiments, the other terminal devices in the remote session besides the target device can also transmit user-input information such as views and argument information to the server. The server can generate a virtual picture according to such information, and the virtual picture can be displayed by the target device, so that the user of the target device can learn the views and opinions of the other users.
Of course, the above embodiment in which the server generates the emotional feedback picture can also be carried out in the foregoing embodiments.
In the embodiments of the present application, the steps executed by the above server may also be executed by a terminal device.
The processing method of a virtual scene provided by the embodiments of the present application can generate virtual cue content in the virtual session scene of a remote session according to the adjustment content matched with the user emotion information meeting the set emotion condition, prompting the user about his or her mood. This helps the user recognize and restrain the mood, so as to communicate better with other users in the remote session and improve the effect of the remote session.
Referring to Figure 11, it illustrates a structural block diagram of a processing apparatus 400 of a virtual scene provided by the present application. The processing apparatus 400 of the virtual scene includes: a scene generating module 410, a data acquisition module 420, an emotion analysis module 430 and a scene adjustment module 440. The scene generating module 410 is configured to generate the virtual session scene corresponding to a remote session, the virtual session scene including at least the virtual objects corresponding to one or more terminal devices in the remote session; the data acquisition module 420 is configured to obtain the user data corresponding to the one or more terminal devices; the emotion analysis module 430 is configured to analyze the user data to obtain the user emotion information corresponding to the one or more terminal devices; and the scene adjustment module 440 is configured to, when the user emotion information meets the set emotion condition, obtain the adjustment content matched with the user emotion information meeting the set emotion condition and adjust the virtual session scene according to the adjustment content.
In some embodiments, the user data includes a face image of the user. The emotion analysis module 430 may be specifically configured to: obtain at least one of the user expression and user complexion corresponding to the one or more terminal devices according to the face image; and obtain the user emotion information corresponding to the one or more terminal devices according to the at least one of the user expression and user complexion.
In some embodiments, the user data includes voice data of the user. The emotion analysis module 430 may be specifically configured to: obtain the user tone corresponding to the one or more terminal devices according to the voice data; and obtain the user emotion information corresponding to the one or more terminal devices according to the user tone.
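A tone-based analysis of the kind the module describes could, as one toy illustration, classify emotion from loudness and speaking rate. The features, thresholds and labels below are invented for the sketch and are not part of the embodiment; a real module would use a trained model.

```python
def classify_tone(mean_volume, words_per_minute):
    """Toy heuristic mapping tone features to an emotion label.
    mean_volume is a normalized loudness in [0, 1]."""
    if mean_volume > 0.8 and words_per_minute > 160:
        return "anger"        # loud and fast speech
    if mean_volume < 0.2 and words_per_minute < 90:
        return "sadness"      # quiet and slow speech
    return "neutral"
```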
In some embodiments, the scene adjustment module 440 may be specifically configured to: obtain the first user corresponding to the user emotion information meeting the set emotion condition; obtain the second user associated with the first user in the virtual session scene, and obtain the second virtual object corresponding to the terminal device of the second user; and adjust the second virtual object according to the adjustment content.
In some embodiments, the scene adjustment module 440 adjusting the virtual session scene includes at least one of: adjusting the clarity of at least part of the content in the virtual session scene, adjusting the brightness of at least part of the content in the virtual session scene, replacing at least part of the content in the virtual session scene, and blocking at least part of the content in the virtual session scene.
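The four adjustment options enumerated above could be dispatched from the adjustment content as below. The option names and the rendering-parameter dict are assumptions made for the sketch; the embodiment does not fix a representation.

```python
def adjust_content(render_params, adjustment):
    """Apply one of the enumerated adjustments to a piece of scene content.
    render_params is a dict of hypothetical per-object rendering parameters."""
    params = dict(render_params)
    kind = adjustment["kind"]
    if kind == "clarity":
        params["clarity"] = adjustment["value"]     # e.g. blur level
    elif kind == "brightness":
        params["brightness"] = adjustment["value"]
    elif kind == "replace":
        params["model"] = adjustment["value"]       # substitute content
    elif kind == "block":
        params["visible"] = False                   # occlude the content
    else:
        raise ValueError(f"unknown adjustment: {kind}")
    return params
```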
In some embodiments, the adjustment content includes the first content data of virtual tag content, the virtual tag content corresponding to the user emotion information meeting the set emotion condition. The scene adjustment module 440 may be specifically configured to: obtain the terminal device of the first user corresponding to the user emotion information meeting the set emotion condition; obtain the first virtual object corresponding to the terminal device of the first user in the virtual session scene; and generate, according to the first content data, the virtual tag content at the position of the first virtual object in the virtual session scene, the virtual tag content being used to mark the mood of the first user.
In some embodiments, the adjustment content includes the second content data of virtual cue content. The scene adjustment module 440 may be specifically configured to: obtain the terminal device of the first user corresponding to the user emotion information meeting the set emotion condition; and generate the virtual cue content according to the second content data, the virtual cue content being used for display in the terminal device of the first user to prompt the first user about his or her mood.
In some embodiments, the processing apparatus 400 of the virtual scene may also include an emotion information obtaining module and a feedback picture generating module. The emotion information obtaining module is configured to obtain the user emotion information of the other terminal devices in the remote session besides a target device, the target device being a terminal device whose identity identifier is a preset identifier; the feedback picture generating module is configured to generate feedback data according to the user emotion information of the other terminal devices and generate an emotional feedback picture according to the feedback data, the emotional feedback picture being used for display in the target device.
In some embodiments, the user data includes voice data. The processing apparatus 400 of the virtual scene may also include: a voice extraction module, a voice judgment module and a voice adjustment module. The voice extraction module is configured to obtain the voice data of the terminal device corresponding to the user emotion information meeting the set emotion condition; the voice judgment module is configured to judge whether the decibel value of the obtained voice data is greater than a set threshold; and the voice adjustment module is configured to reduce the decibel value of the obtained voice data when the decibel value is greater than the set threshold.
To sum up, in the scheme provided by the present application, a virtual session scene corresponding to a remote session is generated, the virtual session scene including at least the virtual objects corresponding to one or more terminal devices in the remote session; the user data collected by the one or more terminal devices is obtained and analyzed to obtain the emotion information of the users corresponding to the one or more terminal devices; and when the user emotion information meets the set emotion condition, the adjustment content matched with that user emotion information is obtained and the virtual session scene is adjusted according to it. In this way, the moods of the users in the remote session can be fed back and genuinely conveyed to the users, improving the effect of the remote session.
In the embodiments of the present application, the electronic device that executes the processing method of the virtual scene provided by the above embodiments may be a server, or may be a terminal device.
Referring to Figure 12, it illustrates a structural block diagram of a terminal device provided by the embodiments of the present application. The terminal device 100 may be a terminal device capable of running application programs, such as a smart phone, a tablet computer or a head-mounted display device. The terminal device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, and one or more application programs, where the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more programs being configured to carry out the methods described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. The processor 110 connects various parts of the entire terminal device 100 using various interfaces and lines, and executes the various functions of the terminal device 100 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 120 and calling the data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware in at least one of the forms of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA) and programmable logic array (Programmable Logic Array, PLA). The processor 110 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the above modem may also not be integrated into the processor 110 and may instead be implemented separately through a communication chip.
The memory 120 may include a random access memory (Random Access Memory, RAM) or a read-only memory (Read-Only Memory, ROM). The memory 120 may be used to store instructions, programs, code, code sets or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function or an image playing function), instructions for implementing the following method embodiments, and the like. The data storage area may store data created by the terminal device 100 during use, and the like.
In some embodiments, the terminal device 100 may also include an image sensor 130 for collecting images of real objects and collecting scene images of a target scene. The image sensor 130 may be an infrared camera or a visible-light camera, and its specific type is not limited in the embodiments of the present application.
In one embodiment, the terminal device is a head-mounted display device, which, in addition to the above processor, memory and image sensor, may also include one or more of the following components: a display module, an optics module, a communication module and a power supply.
The display module may include a display control unit. The display control unit is configured to receive the display image of the virtual content rendered by the processor, and then display and project the display image onto the optics module, so that the user can view the virtual content through the optics module. The display device may be a display screen, a projection device or the like, and may be used to display images.
The optics module may adopt an off-axis optical system or a waveguide optical system, and the display image shown by the display device can be projected to the user's eyes after passing through the optics module. In some embodiments, while seeing the display image projected by the display device through the optics module, the user can also observe the real environment through the optics module and experience an augmented reality effect in which the virtual content is superimposed on the real environment.
The communication module may be a module such as Bluetooth, WiFi (Wireless Fidelity) or ZigBee, and the head-mounted display device can establish a communication connection with a terminal device through the communication module. The head-mounted display device in communication connection with the terminal device can exchange information and instructions with the terminal device. For example, the head-mounted display device can receive image data transmitted by the terminal device through the communication module, and generate and display virtual content of a virtual world according to the received image data.
The power supply can supply power to the entire head-mounted display device, ensuring the normal operation of all components of the head-mounted display device.
Referring to Figure 13, it illustrates a structural block diagram of a server provided by the embodiments of the present application. The server 200 may be a cloud server, a local server or the like. The server 200 may include one or more of the following components: a processor 210, a memory 220, and one or more application programs, where the one or more application programs may be stored in the memory 220 and configured to be executed by the one or more processors 210, the one or more programs being configured to carry out the methods described in the foregoing method embodiments.
Referring to Figure 14, it illustrates a structural block diagram of a computer-readable storage medium provided by the embodiments of the present application. Program code is stored in the computer-readable storage medium 800, and the program code can be called by a processor to execute the methods described in the above method embodiments.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk or a ROM. Optionally, the computer-readable storage medium 800 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 800 has storage space for the program code 810 that executes any of the method steps in the above methods. The program code can be read from, or written into, one or more computer program products. The program code 810 may, for example, be compressed in a suitable form.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features therein; and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.